Rocky Linux 8.5 Installation

Since Red Hat has decided to sunset CentOS, Rocky Linux has stepped up as its replacement. You might be surprised at how quickly you can get a server up and running. We’ll also install the nginx web server just to see how quickly we can get that going as well.

Let’s get right into it.

Download the installer

Rocky Linux Download Page

You can get the installer painlessly by downloading the minimal .iso image; it’s about 2GB. Once you have that, load it onto a USB drive (or a DVD, I guess. Does anybody still use those?!) or, in my case, QEMU in GNS3 for a VM.

Follow the prompts

We’ll immediately be taken to the installer.

Initial installer page

Just select “Install Rocky Linux 8”. There’s just one page where you select your locale, software to install, partitions, etc. You can get through it minimally by just setting the user name and clicking “Begin Installation”, but feel free to play with the settings.

Installation takes about 5 minutes, depending on what is being installed and how much juice your system has.

Installing Rocky Linux
Installation complete

Reboot, and we’re ready to log in!

Install nginx web server

My login prompt came right up:

Login prompt

If you neglected to connect the network on the installation settings page like I did, you’ll need to do that now using NetworkManager’s nmcli.

# set ens3 to get its address via DHCP, then bounce the connection
nmcli connection modify ens3 ipv4.method auto
nmcli connection down ens3
nmcli connection up ens3

And with that we’re able to get an IP address via DHCP as well as DNS configuration, so Internet is good to go.
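
If there’s no DHCP server in your lab, the static equivalent is worth knowing. A sketch with hypothetical addresses (the 192.0.2.x values are placeholders, not from this setup) – the block just prints the commands rather than running them, since every environment differs:

```shell
#!/bin/sh
# Static addressing for the ens3 connection; substitute your own
# addresses for the 192.0.2.x placeholders. Printed, not executed.
cmds='nmcli connection modify ens3 ipv4.method manual \
  ipv4.addresses 192.0.2.50/24 ipv4.gateway 192.0.2.1 ipv4.dns 192.0.2.1
nmcli connection up ens3'
echo "$cmds"
```

Drop the echo wrapper and run the nmcli lines directly once you’ve filled in real addresses.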

Let’s do something interesting – we’ll install the nginx web server. If you’re familiar with Red Hat commands, it uses the “yum” package manager (on version 8, a frontend for dnf) to allow for quick installation of pre-built binaries. We can install nginx with one command:

sudo yum install nginx

Just that command will get it installed.

Nginx web server installation

We’ll need to start it up with systemd (use “systemctl enable --now nginx” instead if you also want it to start at boot):

sudo systemctl start nginx

We’ll also need to allow traffic through the firewall. For me this is a test server in a test environment, so I had no issue with turning the firewall off completely. If you are using this in production or it has a public IP address, obviously don’t do that – configure firewalld responsibly to allow only permitted web traffic. To turn the firewall off, just issue this systemctl command:

sudo systemctl stop firewalld
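
For the responsible route, firewalld can open just web traffic instead of being switched off. A sketch – the commands are printed rather than executed here, since firewalld isn’t running everywhere:

```shell
#!/bin/sh
# Open only HTTP/HTTPS in firewalld and keep the firewall running.
cmds='sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload'
echo "$cmds"
```

I’m sticking with the lazy option for this lab, though.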

And now I can access the Rocky Linux Nginx default page!

Nginx web server default page customized for Rocky Linux

Enjoy your fresh installation of Rocky!

Dockerized phpipam in GNS3

If you’re keeping track of all the IP addresses in your environment in a really big and messy Excel file, you may want to consider switching to an IP address management tool. One such tool is phpipam, a web-based application that stores your IP addresses in a central database (a SQL database, to be specific). The advantages over an Excel file are pretty clear: for starters, no more emailing a million different copies of that file. But it has other benefits as well – for example, if your software development team wants to check the availability of, or reserve, an IP address, subnet or VLAN from code, they can do it via the phpipam API without ever clicking on anything.
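
To make that API point concrete, here’s a hedged sketch of asking phpipam for the first free address in a subnet (phpipam’s REST API exposes a “first_free” action on the addresses controller). The server address, app id (“myapp”), token, and subnet id below are all made-up placeholders, and the curl command is only printed, not sent:

```shell
#!/bin/sh
# All values are hypothetical placeholders - substitute your own
# phpipam server, API app id, token, and subnet id.
PHPIPAM="http://192.0.2.10/api/myapp"
SUBNET_ID=7
# Print the request instead of sending it.
echo curl -s -X POST -H "token: YOUR-API-TOKEN" \
  "$PHPIPAM/addresses/first_free/$SUBNET_ID/"
```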

A testing instance of phpipam can be brought into your GNS3 environment quickly using Docker! It requires a little hacking, but nothing too ambitious. If you haven’t got GNS3 or Docker installed or you don’t know how to add a Docker image to GNS3, check out my post on that topic.


We’re not doing anything fancy here, just the phpipam docker container connected to the “NAT” cloud node. By default, the NAT cloud node uses a virtual adapter with its own IP subnet; the NAT adapter sits at .1, and I’ll set my phpipam container to use .2. This setup will allow us to access the phpipam web server in the container via a web browser from the desktop computer that runs GNS3.

Build a custom docker image

We’re going to quickly build a custom docker image from the official phpipam image on Docker Hub. If you’re using a GNS3 VM, you can do this via a cli session on the VM. If you’re using Linux, just do this from any terminal. Make a directory for your Dockerfile:

mkdir jamesphpipam
cd jamesphpipam
vi Dockerfile

Now we’re going to write the docker commands for our custom image. MySQL server needs to be installed, and the directory “/run/mysqld” needs to be created so MySQL can create a Unix socket there:

FROM phpipam/phpipam-www

RUN apk add mysql mysql-client
RUN mkdir /run/mysqld
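
The Dockerfile still has to be built into an image. The tag name “jamesphpipam” is just my choice to match the directory; the commands are printed rather than executed here, in case Docker isn’t handy:

```shell
#!/bin/sh
# Build the image from the directory holding the Dockerfile, then
# confirm it exists. The tag name is arbitrary. Printed, not executed.
cmds='docker build -t jamesphpipam .
docker images jamesphpipam'
echo "$cmds"
```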

Now we have an image (it’s based on Alpine Linux) ready to fire up in GNS3. You’ll need to add it from GNS3 preferences -> docker containers -> new. Go through all the screens and use defaults, except you’ll want to set the “start command” to “/bin/sh” to give you command line access when you double click on it from the GNS3 canvas.

Configure MySQL and Apache

First we need to open up the cli on the container and set its IP address (ip addr add <address> dev eth0, with the address you picked). Then start up both mysqld and httpd (MySQL Server and Apache Web Server), like this:

mysqld --user=root &

Make sure you use the ampersand at the end of your mysqld command, so it runs in the background.

To set the MySQL root password, I had to log in to the MySQL cli in the phpipam docker container and run these commands:

mysql -u root
ALTER USER 'root'@'localhost' IDENTIFIED BY 'SomeSecret';

Now we should be able to access the phpipam page from any web browser!

Configure new phpipam installation

If you click on “New phpipam installation”, it will take you to a page to select the SQL database installation type:

Let’s select “Automatic database installation”. Then we just put in the user “root” and password “SomeSecret” that we entered in our mysql cli earlier:

And our database is installed! Now we just need to set the admin password on the next screen:

Click on “Proceed to login”, login with user “admin” and the password you just set. You’ll be taken to the main phpipam page!

Hit me up if you run into any snags!

Install GNS3 and Docker on Ubuntu 20.04 for Cisco and Linux Network Labs

Every major OS has its place, so I’m not hoping to get into that discussion, but I find that Ubuntu Linux works really well for creating network labs in GNS3. If you’re not familiar with GNS3, you’re missing out. It allows you to pull real VMs, and even Docker containers, into an emulated network environment for testing and experimentation. You can run Cisco routers and switches, other vendors’ network devices, Windows desktop and server editions, Linux, and any other OS supported by Linux’s QEMU/KVM hypervisor – which is pretty much anything. GNS3 has many features, but today we’ll just look at getting it installed, along with Docker.

Why does Ubuntu run GNS3 better? You may have noticed that on the Windows and Mac versions of GNS3, the server has to run in a VM to work properly. That server VM runs a Linux OS – specifically Ubuntu. So using Ubuntu as your desktop OS cuts out all of that server-VM complexity, not to mention the additional RAM it consumes. Simply put, GNS3 runs the way it’s supposed to on Ubuntu. Not to knock the Windows and Mac versions – the GNS3 team worked hard on those – but in my humble and honest opinion, Ubuntu just works better for GNS3.

Most folks stick to using VM’s in GNS3, but the Docker integration is pretty awesome and has some very real benefits over VM’s. Any docker container you have installed on the same system as the GNS3 server can be pulled into GNS3, although whether it will work properly depends somewhat on what the container has installed in it.

GNS3 Installation

The official GNS3 Ubuntu releases are distributed through their PPA (ppa:gns3/ppa).

The PPA can be added and GNS3 installed with just a few quick commands, although it’s a relatively big download:

sudo add-apt-repository ppa:gns3/ppa
sudo apt-get update
sudo apt-get install gns3-gui gns3-server

When you first run GNS3, you’ll notice that the default option is not a VM, it’s to run the server locally. No VM needed!

At this point, GNS3 is installed, although you may have to run this command to get wireshark captures working:

sudo chmod 755 /usr/bin/dumpcap

Docker Installation

I’ll just be following the official Docker instructions here, they work great:

These steps add Docker’s apt repository, which is probably the “best” option. There is a convenience one-liner install script, but we all know piping a script straight into a shell is not a good habit to get into, so we’ll avoid that.

First install dependencies:

sudo apt-get update
sudo apt-get install \
    ca-certificates \
    curl \
    gnupg
Add the Docker official GPG key:

curl -fsSL | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

Add the stable repository:

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

And install:

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

To avoid getting permissions errors in GNS3, you’ll need to add your user to the docker group. You’ll need to log out/log in or restart for this to take effect:

sudo usermod -aG docker ${USER}
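
A quick way to confirm the group change took effect after logging back in (this check is safe to run anywhere):

```shell
#!/bin/sh
# Report whether the current user is in the docker group yet.
if id -nG | grep -qw docker; then
  status="docker group: OK"
else
  status="docker group: missing - log out/in, or run: newgrp docker"
fi
echo "$status"
```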

Add a Docker container to your GNS3

Now that Docker is installed, pulling a Docker image from the Docker Hub is easy. A popular one is Alpine Linux because it’s so small, but packs lots of popular tools and libraries:

docker pull alpine

Now you should be able to add this image to GNS3. Go into GNS3, go to preferences, and all the way at the bottom where it says “Docker containers”. Click on “new”, and you should be able to select the Alpine Linux image from the drop-down menu:

Click through and leave the defaults, but you might want two network adapters instead of one, in case you want it to be a router. Now just drag a couple containers out onto the canvas:

At this point, you should be able to double click on these and get a busybox shell, which will let you configure IP settings and the like. You may have noticed that the startup of these containers is near-instantaneous, and they consume very little RAM. One of the many perks of the lightweight nature of Docker containers. Enjoy!
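
If you do give a container that second adapter to play router, the commands you’d type inside the Alpine container look something like this. The addresses are hypothetical, and the block prints the commands rather than executing them, since they need root inside the container:

```shell
#!/bin/sh
# Turn an Alpine container into a tiny two-interface router
# (placeholder addresses - substitute your own subnets).
cmds='ip addr add 10.0.0.1/24 dev eth0
ip addr add 10.0.1.1/24 dev eth1
sysctl -w net.ipv4.ip_forward=1'
echo "$cmds"
```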

Duo Security Lab – Multi-factor Authentication for RDP

Summary of Steps:

  1. Follow the doc for RDP
  2. Install Duo for Windows
  3. Set up user
  4. Set up MFA device (your phone)

It’s actually fairly quick and painless to get set up with Duo MFA for Windows, with the exception that you have to manually add a user and enroll your phone. With SSH on Linux there was some editing of text files, compiling code and command-line stuff, but with Windows it’s lots of clicking of those old familiar friends, “Next”, “Ok”, and “Finish”. Here is my topology:

First, I log into my Duo account dashboard (for which I’m MFA prompted on my phone, of course) and go to “Applications”. I click “Protect an Application” and select “Microsoft RDP”. Following the docs, I download the Duo Windows installer and away I go:

You can find the API hostname, integration key, and secret key by clicking “Protect this Application” for MS-RDP:

Ready to install!

And…. it’s done.

Then we need to manually add a user, as I mentioned. From the Duo dashboard, just click “Add user”. For some reason when I created this VM some time ago I named the user “solarwinds” – I think I was doing some network testing. I regret nothing.

Add my phone:

Send the activation link to your phone, and you can activate the Duo Mobile app for push notifications if you have it installed.

Then log in! I use Remmina on Linux, but of course any RDP client will work.

I’m prompted by Duo and a code is sent to my phone:

And I’m logged in!

There is much rejoicing.

Duo Security Lab – Multi-factor Authentication for SSH

Part of a series of posts related to the cloud security company Duo Security, Inc. I am not affiliated in any way with Duo Security (please read my more extensive disclaimer below), I’m just doing my best to understand their offering.

Following the very clear instructions in Duo’s documentation, I proceed to Duo my SSH. This is my topology:

I start by getting my Ubuntu 18.04 server ready. Duo says we’ll be building from source, so I need a compiler like GCC; I just installed the build-essential package, along with libssl-dev and libpam-dev.

sudo apt-get install build-essential libssl-dev libpam-dev

Then I need to get the pam_duo source, untar and compile. (The download URL below is the one Duo’s docs give; double-check it against the current documentation.)

wget https://dl.duosecurity.com/duo_unix-latest.tar.gz
tar zxf duo_unix-latest.tar.gz
cd duo_unix-1.11.2
./configure --with-pam --prefix=/usr && make && sudo make install

The make program then spits out a bunch of gibberish that I will never understand, but I know it compiled ok because it didn’t say “error” a bunch of times. I’m very precise like that.

Next up I copy the keys and API hostname (the magic link to my Duo account) found when I click “Protect this application” on my Duo account under “applications”:

Then I put them in the /etc/duo/pam_duo.conf file. According to the Duo documentation, it should look like this (with your own values in place of the placeholders):

[duo]
; Duo integration key
ikey = <your integration key>
; Duo secret key
skey = <your secret key>
; Duo API hostname
host = <your API hostname>

Then edit the pam sshd configuration file /etc/pam.d/sshd, find this line and comment it:

@include common-auth

Then add three lines so it looks like this:

auth  [success=1 default=ignore] /lib64/security/pam_duo.so
auth  requisite pam_deny.so
auth  required pam_permit.so

Lastly, restart the sshd daemon to apply the changes.

sudo systemctl restart sshd

Once I get that all in place, I try SSHing (is that a word?) from the client to the server:

james@client:~$ ssh james@
Please enroll at
james@'s password:

I need to enroll my MFA device (my Android phone), so I head on over to the enrollment link and follow the prompts:

Then I try SSHing from the client to the server again. This time I get an MFA prompt on the command line:

james@client:~$ ssh james@
Duo two-factor login for james

Enter a passcode or select one of the following options:

 1. Duo Push to XXX-XXX-7273
 2. Phone call to XXX-XXX-7273
 3. SMS passcodes to XXX-XXX-7273

Passcode or option (1-3): 1

Entering option 1 gets me a prompt from my Duo Push app on my phone to accept or deny the request (the app restricts taking a screenshot so I couldn’t include it, sorry), options 2 and 3 are pretty much what they look like.

Success. Logging you in...
Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-20-generic x86_64)
Last login: Mon Jun 24 20:35:10 2019

There is much rejoicing.

Non-Affiliation Disclaimer:
I am not affiliated, associated, authorized, endorsed by, or in any way officially connected with Duo Security, or any of its subsidiaries or its affiliates. The official Duo Security website can be found online. The name Duo Security as well as related names, marks, emblems and images are registered trademarks of their owners.

Duo Security – Overview and Target Market

Part of a series of posts related to the cloud security company Duo Security, Inc. I am not affiliated in any way with Duo Security (please read my more extensive disclaimer below), I’m just doing my best to understand their offering.

History and Products

Duo Security is a cyber-security company based out of Ann Arbor, Michigan, founded in 2009 by Dug Song and Jon Oberheide. In August of 2018 they were acquired by Cisco Systems. Duo’s LinkedIn profile makes a pretty clear and concise statement that they’re going to “democratize security” and that their mission is to “protect the mission of our customers by making security simple for everyone.”

Unaltered screenshot of Duo’s Product page as of 04/29/2019

Duo’s product page makes some pretty big claims about what they can do. Their product lineup targets securing apps and data, but what stood out to me is the claim that it works from any location, on any device, for organizations of all sizes. Duo offers a platform called “Trusted Access” that has multiple parts:

  • Multi-Factor Authentication
  • Endpoint Visibility
  • Adaptive Authentication & Policy Enforcement
  • Remote Access & Single Sign-On

I’ll take a good look at what these actually mean for their customers later, but for now it’s clear they aim to secure and authenticate their customers’ systems.

Duo’s Customers – IT Departments Big and Small

It’s also fairly clear you probably wouldn’t deploy the Trusted Access platform on your home WiFi network to enable trusted secure access to your Google Chromecast – they target enterprises. Their homepage has a really nice use cases section that shows some of the different verticals they’re after, including:

  • Education
  • Federal
  • Healthcare
  • Legal
  • Retail
  • Technology
  • Finance

I took a look at one use case in particular for their customer Etsy, an online retailer of handmade or “vintage” items.

Authentication: not as easy as it looks. Photo by Jason Blackeye on Unsplash

According to the case study, Etsy’s business problem centered around securing administrators’ access to the internal management systems of their site. They use a number of access tools including SSH and internally developed systems.

Etsy cited “single-factor” authentication as a security problem for their organization – that is, only a username and its associated password standing between the outside world and those management systems. Duo quotes Etsy’s Network Security Manager describing single-factor authentication as a “weak link” to illustrate the issue.

Etsy used Duo’s Multi-Factor Authentication feature to add another factor to the authentication process for administrators accessing the site’s internal management systems. There are multiple options for adding a second factor (which I’ll explore later), but Etsy says they used the Duo Mobile app. The app enables “pushing” – sending an authentication request (after the correct password is entered) from Duo’s Trusted Access platform to the app on the administrator’s phone. The administrator approves access from her phone and is allowed into the internal management system.

Next I’ll take a closer look at the different features the Trusted Access platform offers.

Non-Affiliation Disclaimer:
I am not affiliated, associated, authorized, endorsed by, or in any way officially connected with Duo Security, or any of its subsidiaries or its affiliates. The official Duo Security website can be found online. The name Duo Security as well as related names, marks, emblems and images are registered trademarks of their owners.