IPv6: IPsec Not Mandatory to Use or Exist

One thing is clear: there is a serious amount of misinformation about IPv6. Many blog posts and even official documents from respected sources contain blatantly incorrect information. Plenty of posts start out with an introduction just like this one, but end up spreading misinformation anyway. To avoid doing the same myself, I’m going to cite the only sources of truth in the networking world: the IETF RFCs. If you’re looking for a clear and simple one-liner to sum up this article, here it is:

Since the IETF issued RFC 6434 in 2011, the IPv6 standard does not require devices to implement (to be capable of) IPsec, nor does it require the enablement or use of IPsec for IPv6 nodes that do implement (are capable of) IPsec.

Yes, you read that right. You cannot count on IPsec even existing in nodes and devices running IPv6. For many years now, a great number of sources have been trumpeting that IPv6 will herald an era of not having to think about network security anymore. Most respectable sources reject this notion, saying instead that IPsec is required to be implemented (to exist) in all IPv6 nodes, but not required to be turned on. Unfortunately, that is also false.

RFC 1883 – where it all started

In December 1995, the IETF (Internet Engineering Task Force, a standards body accepted as the source of truth in the networking world) released RFC 1883, which defined IPv6 as a successor to IPv4.

Page 36 of that RFC briefly and quietly mandated that the Authentication Header (AH) and Encapsulating Security Payload (ESP) be used to secure IPv6 traffic:

This document specifies that the IP Authentication Header [RFC-1826] and the IP Encapsulating Security Payload [RFC-1827] be used with IPv6, in conformance with the Security Architecture for the Internet Protocol [RFC-1825].

https://datatracker.ietf.org/doc/html/rfc1883

This led to a lot of folks getting really excited about the ramifications of mandated secure connections in IPv6. While I can’t find any articles from 1995, this 2012 blog post from Bitdefender (a respected name in security) is a good example of an article making misleading statements about how IPsec works with IPv6:

IPv6 comes with built-in IPSec , a technology that ensures secure host-to-host communication. This means that two clients communicating over IPv6 can automatically do authentication, message integrity and encryption or any combination of those.

https://www.bitdefender.com/blog/hotforsecurity/ipv6-is-here-ready-to-embrace-it

While the implementation (the capability) of IPsec was indeed mandatory in IPv6 for the first 16 years after RFC 1883 was released, no one ever said that the use or enablement of it would be mandatory.

RFC 6434 – IPsec from MUST to SHOULD

In December 2011, the IETF updated the IPv6 Node Requirements RFC with RFC 6434. In section 11 (Security), they make this unmistakable statement:

Previously, IPv6 mandated implementation of IPsec and recommended the key management approach of IKE. This document updates that recommendation by making support of the IPsec Architecture [RFC4301] a SHOULD for all IPv6 nodes.

https://datatracker.ietf.org/doc/html/rfc6434

While IPv6 IPsec is implemented (the capability exists) in major desktop/laptop OSes such as Windows and macOS, the Internet is made up of much more than that. The Internet of Things comes to mind.

It seems the IETF realized that it doesn’t always make sense (and in many cases isn’t possible) to include the complexity of IPsec in an IPv6 implementation. They went on to say in RFC 6434:

This document recognizes that there exists a range of device types and environments where approaches to security other than IPsec can be justified.

https://datatracker.ietf.org/doc/html/rfc6434

Loads of misinformation

So yes, starting in 2011, IPsec even existing in a device running IPv6 is a big maybe. Despite this, some of the most respected companies in the world write documentation describing IPsec as a mandatory part of IPv6. Take this IOS configuration guide from Cisco, written in August 2012:

IPsec is a mandatory component of IPv6 specification.

https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipv6/configuration/15-2s/ipv6-15-2s-book/ip6-ipsec.html

Or this article from Microsoft written in 2020:

Internet Protocol Security (IPsec) is a set of security protocols used to transfer IP packets confidentially across the Internet. IPsec is mandatory for all IPv6 implementations and optional for IPv4.

https://docs.microsoft.com/en-us/windows/win32/fwp/ipsec-configuration

Or this article from Red Hat, written in 2019:

In What you need to know about IPv6, we mentioned that Internet Protocol Security (IPSec) is incorporated into IPv6. This statement simply means that communication between the two endpoints is either authenticated, encrypted, or both, via the extension headers. There is a long-running discussion on the internet regarding whether the interpretation of “IPSec being mandatory” in IPv6 is correct or not. If you need to know more about this topic, see RFC 6434.

https://www.redhat.com/sysadmin/ipv6-packets-and-ipsec

This last one from Red Hat is particularly confusing. It cites the very RFC where IPsec became optional and even acknowledges the ongoing debate, but still claims that communications in IPv6 are authenticated, encrypted, or both.

What’s actually happening? Not IPv6 IPsec.

All of this RFC and protocol stuff is a bunch of theoretical pie in the sky. What’s actually happening out in the wild? Well, I can’t speak for the whole Internet. Maybe folks are using IPsec like they do in IPv4 – for site-to-site tunnels. Or maybe something else. But I have native IPv6 capability at home, so to test typical web traffic, I took some Wireshark captures of visits to various websites.

Here’s google.com:

IPv6 capture of google.com

Here’s microsoft.com:

IPv6 capture of microsoft.com

Here’s wikipedia.org:

IPv6 capture of wikipedia.org

Where’s the IPsec? I don’t see any IKE, AH or ESP. Looks like typical TLS traffic to me. So… no change in security from IPv4, then.
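
If you want to check your own traffic, a capture filter along these lines should do it (a sketch – tcpdump and the interface name eth0 are assumptions; “protochain” walks the IPv6 extension header chain). A silent terminal while you browse means no ESP (protocol 50), AH (protocol 51), or IKE (UDP 500):

#Show only IPsec-related IPv6 packets
sudo tcpdump -i eth0 'ip6 protochain 50 or ip6 protochain 51 or (ip6 and udp port 500)'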

iperf2 vs iperf3: What’s the difference?

At first glance, you might be tempted to use iperf3 simply because it is one more than iperf2 (don’t worry, I’m guilty of this crime as well). Given the name, it’s not an unfair assumption that iperf3 is the more recent version of the software. It’s common for two versions of software to exist in parallel, so the new one can take hold while the older version slowly dies away – Python 2 and Python 3 come to mind. This is not the case with iperf, however.

I recently wrote a post on how to use iperf3 to test bandwidth. Shortly after, one of the authors of iperf2, Bob McMahon, reached out to me. He pointed out that iperf2 is very much actively developed, with some cool new features added recently. Under the surface, the two are very different projects, maintained by different teams with different goals.

Today we’ll take a look at some of the differences between the two.

Topology

Ubuntu 20.04 and Rocky Linux 8.5 VM’s in GNS3

We have a really basic topology here: Ubuntu 20.04 and Rocky Linux 8.5 connected by a single link in IP subnet 10.0.0.0/30. Both VMs have iperf2 and iperf3 installed.

Bandwidth Test

For a bandwidth test, the two are almost identical. You can perform a bandwidth test using either with the same commands. For this test, the Ubuntu VM will be the client, and Rocky the server. Start the server on Rocky like this:

iperf -s

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------

And from Ubuntu perform a test like this:

iperf -c 10.0.0.2
------------------------------------------------------------
Client connecting to 10.0.0.2, TCP port 5001
TCP window size:  238 KByte (default)
------------------------------------------------------------
[  3] local 10.0.0.1 port 36528 connected with 10.0.0.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.90 GBytes  1.63 Gbits/sec

These commands will work using iperf2 or iperf3; however, it should be noted that you can’t use an iperf2 client with an iperf3 server, or vice-versa. They also use different TCP ports by default (5001 for iperf2, 5201 for iperf3). Even if you use an iperf3 client with an iperf2 server and manually set the TCP port to match, you will get an error. They are not compatible:

iperf3 -c 10.0.0.2 -p 5001
iperf3: error - received an unknown control message

Supported Operating Systems

iperf2 is the clear winner here, primarily because it has up-to-date Windows packages available for easy download right on its SourceForge page. I avoid Windows when I can, but it has a tendency to be unavoidable due to its sheer installation base. iperf3 apparently had some unofficial builds a while back, but nothing officially supported. You’ll need to compile it yourself for Windows, which can be an inconvenience at best.

iperf2 downloads page

For Linux, many distributions come with iperf2 preinstalled; Ubuntu 20.04 is one such example. With a package manager, iperf3 is just a command away. For macOS, the Homebrew package manager can quickly get you iperf2 or iperf3.
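
For reference, installation typically looks something like this (package names are assumptions based on the usual repositories – the iperf2 package is plain “iperf” on most platforms):

#Ubuntu/Debian
sudo apt-get install iperf iperf3

#macOS with Homebrew
brew install iperf iperf3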

Feature: iperf3 authentication (not encryption)

Description of authentication features in iperf3

iperf3 supports authenticating clients to the server using a public/private key pair as well as a users file. I decided to try it out. To avoid hassle, I just used the exact commands provided in the man page. You first generate a public key and private key on the server:

openssl genrsa -des3 -out private.pem 2048
openssl rsa -in private.pem -outform PEM -pubout -out public.pem
openssl rsa -in private.pem -out private_not_protected.pem -outform PEM

The server also needs a “credentials.csv” file containing hashed passwords. The following commands will generate a hashed password for you:

S_USER=james S_PASSWD=james
echo -n "{$S_USER}$S_PASSWD" | sha256sum | awk '{ print $1 }'
----
0b0c98028105e9e4d3f100280eac29bba90af614d1c75612729228e4d160c601 #This is the hash of "{james}james"

Then create a “credentials.csv” file that looks like this:

username,sha256
james,0b0c98028105e9e4d3f100280eac29bba90af614d1c75612729228e4d160c601

Now start the server:

iperf3 -s --rsa-private-key-path ./private_not_protected.pem --authorized-users-path ./credentials.csv

Then from the client, copy the public key over:

scp james@10.0.0.1:public.pem .

Then run the client:

iperf3 -c 10.0.0.1 --rsa-public-key-path ./public.pem --username james

You’ll be asked for the password. If you get it right, the server will display a message that authentication succeeded:

-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Authentication successed for user 'james' ts 1639396545
Accepted connection from 10.0.0.2, port 32784
[  5] local 10.0.0.1 port 5201 connected to 10.0.0.2 port 32786
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   194 MBytes  1.63 Gbits/sec                  
[  5]   1.00-2.00   sec   204 MBytes  1.71 Gbits/sec

Feature: iperf2 isochronous mode

One of the coolest features of iperf2 is its “isochronous” option, which is designed to simulate video streaming network traffic. You can hear Bob McMahon explain it himself in his YouTube video on this feature.

Using the parameters and commands he describes in his video, we’ll run a test. The Ubuntu VM will be the iperf2 server:

iperf -s -e -i 1

Then on Rocky Linux we’ll run the client test:

[james@localhost ~]$ iperf -c 10.0.0.1 -i 1 --isochronous=60:40m,10m
------------------------------------------------------------
Client connecting to 10.0.0.1, TCP port 5001 with pid 1640
UDP isochronous: 60 frames/sec mean=40.0 Mbit/s, stddev=10.0 Mbit/s, Period/IPG=16.67/0.005 ms
TCP window size:  340 KByte (default)
------------------------------------------------------------
[  3] local 10.0.0.2 port 49150 connected with 10.0.0.1 port 5001 (ct=1.44 ms)
[ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry     Cwnd/RTT        NetPwr
[  3] 0.00-1.00 sec   214 MBytes  1.79 Gbits/sec  1708/0          0       67K/562 us  398346.93
[  3] 1.00-2.00 sec   217 MBytes  1.82 Gbits/sec  1738/0        230      145K/608 us  374676.21
[  3] 2.00-3.00 sec   205 MBytes  1.72 Gbits/sec  1640/0        427      142K/583 us  368710.26
[  3] 3.00-4.00 sec   212 MBytes  1.78 Gbits/sec  1697/0        575      118K/920 us  241770.85
[  3] 4.00-5.00 sec   200 MBytes  1.68 Gbits/sec  1599/0        371      134K/853 us  245702.38
[  3] 5.00-6.00 sec   200 MBytes  1.68 Gbits/sec  1598/0        423      117K/529 us  395941.50

On the server we get our output:

james@u20vm:~$ iperf -s -e -i 1
------------------------------------------------------------
Server listening on TCP port 5001 with pid 3045
Read buffer size:  128 KByte
TCP window size:  128 KByte (default)
------------------------------------------------------------
[  4] local 10.0.0.1 port 5001 connected with 10.0.0.2 port 49150
[ ID] Interval            Transfer    Bandwidth       Reads   Dist(bin=16.0K)
[  4] 0.0000-1.0000 sec   213 MBytes  1.79 Gbits/sec  4631    503:1500:1008:577:276:191:138:438
[  4] 1.0000-2.0000 sec   217 MBytes  1.82 Gbits/sec  4018    570:838:812:502:255:231:164:646
[  4] 2.0000-3.0000 sec   204 MBytes  1.71 Gbits/sec  5074    590:1537:1637:511:316:152:115:216
[  4] 3.0000-4.0000 sec   212 MBytes  1.78 Gbits/sec  3924    599:805:717:464:266:264:246:563
[  4] 4.0000-5.0000 sec   200 MBytes  1.68 Gbits/sec  3876    575:953:672:462:258:242:188:526
[  4] 5.0000-6.0000 sec   200 MBytes  1.68 Gbits/sec  4046    656:1040:687:476:258:242:238:449

Unfortunately, the version of iperf2 available in the Ubuntu 20.04 repositories (2.0.13) doesn’t support the isochronous TCP mode mentioned in the video. You would need to compile from source or use Windows for that. A newer version will be included in Ubuntu 22.04 LTS (and probably already has been by the time you’re reading this).
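
If you want the newer iperf2 features on Ubuntu 20.04 today, compiling from source is the usual autotools routine (a sketch – the version number in the tarball name is hypothetical; grab the current one from SourceForge):

sudo apt-get install build-essential
tar xf iperf-2.x.y.tar.gz   #hypothetical file name
cd iperf-2.x.y
./configure
make
sudo make install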

Various smaller differences

There are many other ways in which iperf2 and iperf3 differ.

  • iperf2 supports an “enhanced output mode” using -e that is totally revamped (I used it above in the isochronous section).
  • iperf3 supports JSON output using the -J option.
  • iperf2 supports a bidirectional test, which runs tests from the client and server simultaneously, using -d.
  • iperf2 uses a multi-threaded architecture, while iperf3 is single-threaded. To be honest, I haven’t seen any way this actually affects the performance of the application, but there’s a sketch of one way to probe it after this list. I’d be really curious if anyone has some input on this.
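
On that last point: both tools support parallel streams with -P, which is where a multi-threaded design could plausibly matter. A rough comparison (the stream count is arbitrary; watch CPU usage on each server while the tests run):

#iperf2, 8 parallel TCP streams
iperf -c 10.0.0.2 -P 8

#iperf3, same test but single-threaded
iperf3 -c 10.0.0.2 -P 8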

I hope this was helpful, and I hope I did both of these cool programs a small amount of justice. I’m really curious to see if anyone has any other input or differences they know about. Please feel free to comment or reach out directly.

How To Test Network Bandwidth With iperf3 in Linux

Testing network bandwidth in multiple flavors in Linux is simple with a tool called iperf. There are two main versions – iperf2 and iperf3. The project maintainers apparently rewrote iperf3 from scratch to make the tool simpler and to support some new features.

Update 12/12/2021: One of the authors of iperf2 reached out to me. iperf2 is currently very much actively developed. You can find the most recent code on its SourceForge page. iperf3 was indeed rewritten from scratch, as the Wikipedia page says, but mostly to meet the U.S. Department of Energy’s use cases. iperf3’s GitHub page clearly states that the DoE owns the project.

For testing bandwidth properly, you need to be running in server mode on one endpoint and client mode on the other. For this experiment, we will run the server on Rocky Linux 8.5 and the client on Ubuntu 20.04.

Topology

iperf3 test in GNS3

This is about as simple a topology as I can think of: two nodes on either end of a single link, Ubuntu at 10.0.0.1/30 running the iperf3 client and Rocky at 10.0.0.2/30 running the iperf3 server.

Iperf3 installation

On Ubuntu, iperf3 can be installed from distribution sources with apt-get:

apt-get install iperf3

Same on Rocky Linux but with yum:

yum install iperf3

Run iperf3 bandwidth test

First we need to start the server process on Rocky Linux with one command:

iperf3 -s

Then you should see the server listening for incoming tests:

iperf3 server listening on Rocky Linux 8.5

Then from the Ubuntu client, one command will run the test:

iperf3 -c 10.0.0.2

The output gives us our bandwidth test results, which can be seen on either the client or the server:

Connecting to host 10.0.0.2, port 5201
[  5] local 10.0.0.1 port 59628 connected to 10.0.0.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   176 MBytes  1.48 Gbits/sec  685    230 KBytes       
[  5]   1.00-2.00   sec   173 MBytes  1.45 Gbits/sec  738    113 KBytes       
[  5]   2.00-3.00   sec   170 MBytes  1.42 Gbits/sec  1004    191 KBytes       
[  5]   3.00-4.00   sec   175 MBytes  1.47 Gbits/sec  714    123 KBytes       
[  5]   4.00-5.00   sec   182 MBytes  1.52 Gbits/sec  458    163 KBytes       
[  5]   5.00-6.00   sec   204 MBytes  1.71 Gbits/sec  443    314 KBytes       
[  5]   6.00-7.00   sec   180 MBytes  1.51 Gbits/sec  910    130 KBytes       
[  5]   7.00-8.00   sec   191 MBytes  1.60 Gbits/sec  849    123 KBytes       
[  5]   8.00-9.00   sec   172 MBytes  1.44 Gbits/sec  564    170 KBytes       
[  5]   9.00-10.00  sec   184 MBytes  1.54 Gbits/sec  412    225 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.76 GBytes  1.52 Gbits/sec  6777             sender
[  5]   0.00-10.04  sec  1.76 GBytes  1.51 Gbits/sec                  receiver

iperf Done.

A Wireshark capture in GNS3 between the two hosts (or tcpdump on the links, if you’re not in GNS3) will show the packets flying while the test is running:

Wireshark capture from GNS3 of iperf3 test

Hope you liked it!

Wireguard VPN on Ubuntu 20.04

Wireguard is an attempt to improve VPN tunnels in a number of ways – simpler code, less compute, easier configuration, the list goes on. If we’re comparing it to IPsec, I would say that yes, it’s a bit easier to configure. One of the main differences is that it does not rely on the two classic IPsec options for keys – PSKs and X.509 certificates. Instead, it relies on public/private key pairs, similar to SSH.

Today, we’re going to configure a very simple policy-based site-to-site Wireguard VPN. By “policy-based” I mean that tunneled traffic is determined by a pre-written configuration in the Wireguard configuration file, not by static or dynamic routes. By “site-to-site” I mean this is not a remote-access (road warrior) VPN; it’s designed to connect the subnets that sit behind the two VPN peers.

Topology

The two Ubuntu20.04 machines are serving as routers and VPN peers. No routing is in place for PC1 and PC2 to talk to each other. While the two Ubuntu machines can connect to each other, there is no network-level encryption. That’s where Wireguard comes in.

Generate key pairs

If you haven’t installed it yet, wireguard can be installed with apt-get:

apt-get install wireguard

We’ll use the wg command to generate keys on Ubuntu20.04-1. These two commands will generate a private and public key pair:

wg genkey > private1.key
wg pubkey < private1.key > public1.key

We’ll do the same on Ubuntu20.04-2:

wg genkey > private2.key
wg pubkey < private2.key > public2.key

Tunnel configurations

Then we’ll write a tunnel config file for Ubuntu20.04-1 in /etc/wireguard/wg0.conf:

[Interface]
PrivateKey = uOnrdfW/ANcU+fh+RUjlb3TQlFWIdwbOpDpAA+NkonY=
Address = 10.0.0.1/30
ListenPort = 8888

[Peer]
PublicKey = JjSDqdzjPOX+iIAaWCjxxg1yZ76jAd6jfSfv/1AlojI=
Endpoint = 12.0.0.2:8888
AllowedIPs = 10.0.0.0/30, 172.16.0.0/24

You’ll notice I took the keys out of the key files and pasted them into the config file. Under “Interface” we have Ubuntu20.04-1’s own private key, while the public key of Ubuntu20.04-2 goes under “Peer”. The “Address” parameter is the “glue network” of the tunnel – the virtual subnet that exists inside the tunnel encryption. The “AllowedIPs” parameter is where tunneled (interesting) traffic is specified: you need to put the destination subnet here. Since this is Ubuntu20.04-1 and 172.16.0.0/24 is the destination on the other side of the tunnel, we put 172.16.0.0/24 in “AllowedIPs”. The glue network goes in here as well.

We can write a similar one for Ubuntu20.04-2 in /etc/wireguard/wg0.conf. For Ubuntu20.04-2, 192.168.0.0/24 is on the other side of the tunnel so 192.168.0.0/24 will be in “AllowedIPs”. Under “Interface”, we have its own private key, while the public key of Ubuntu20.04-1 is pasted under “Peer”.

[Interface]
PrivateKey = KJgjkPQVhOX5CyYYWr7B6v1AbI7H2kEtBi4wdhAES2g=
Address = 10.0.0.2/30
ListenPort = 8888

[Peer]
PublicKey = +GOlIMgLAnLZujraI8m4F6JyWZOpxWGRAPSUqkwrZyg=
Endpoint = 11.0.0.2:8888
AllowedIPs = 10.0.0.0/30, 192.168.0.0/24

To bring the tunnels up, on each Ubuntu machine run this command:

wg-quick up wg0
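
Note that wg-quick alone doesn’t persist across reboots. If you want the tunnel to come up at boot, the wireguard-tools package ships a systemd unit template you can enable (assuming a systemd-based system like these Ubuntu VMs):

systemctl enable wg-quick@wg0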

To verify the configuration is loaded, use “wg show”. I ran this one from Ubuntu20.04-2:

wg show
---
interface: wg0
  public key: JjSDqdzjPOX+iIAaWCjxxg1yZ76jAd6jfSfv/1AlojI=
  private key: (hidden)
  listening port: 8888

peer: +GOlIMgLAnLZujraI8m4F6JyWZOpxWGRAPSUqkwrZyg=
  endpoint: 11.0.0.2:8888
  allowed ips: 10.0.0.0/30, 192.168.0.0/24
  latest handshake: 28 minutes, 48 seconds ago
  transfer: 1.09 KiB received, 2.63 KiB sent

We should be able to ping the tunnel glue network IPs (here pinging Ubuntu20.04-1’s tunnel address from Ubuntu20.04-2):

root@u20vm:/home/james# ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=5.22 ms

And PC1 and PC2 can ping each other too:

PC1> ping 172.16.0.2

84 bytes from 172.16.0.2 icmp_seq=1 ttl=62 time=4.942 ms

You’ll notice we did not have to add any routes for 192.168.0.0/24 and 172.16.0.0/24 to be able to reach each other. The Wireguard configuration added routing automatically, which is why I am calling this type of tunnel “policy-based”.
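
You can see those automatic routes in the kernel routing table. On Ubuntu20.04-1 it should look something like this (trimmed; exact flags may differ on your system):

ip route | grep wg0
---
10.0.0.0/30 dev wg0 proto kernel scope link src 10.0.0.1
172.16.0.0/24 dev wg0 scope link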

And the most fun part, Wireshark. We can see the traffic going back and forth, and the protocol is labeled “Wireguard”. Pretty cool, right?

Wireshark capture of Wireguard traffic

Hope you liked this one.

Basic Firewall With Iptables on Ubuntu 20.04

Ubuntu comes with iptables, a configuration utility that allows you to manage rules for Netfilter, the Linux kernel firewall. Using iptables you can manipulate packets as they leave, enter, or are forwarded across network interfaces on a Linux operating system. Today we’ll look at how to block SSH traffic going through an Ubuntu 20.04 system acting as a router.

Topology

I have a basic configuration here with three IP subnets: 192.168.0.0/24, where the SSH client lives; 10.0.0.0/30, a transit network between the routers; and 172.16.0.0/24, where the SSH server lives. We will be configuring Ubuntu20.04-Firewall with iptables to block SSH traffic.

Installation

Iptables requires no external package installation with apt-get or otherwise; it comes stock-and-standard with a fresh Ubuntu 20.04 Server or Desktop OS. You can check the state of your iptables rules like so:

iptables -S

-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT

By default, iptables is configured to pretty much do nothing. No packets are filtered and no NAT (network address translation) is configured.

Iptables syntax

The syntax for iptables can be quite confusing, and since I’m not configuring it on a daily basis, I always need to reference documentation (or someone else’s blog) for how to configure something specific. It’s a good idea to take a quick look at the basic syntax, though. The command structure looks like this, taken straight from the iptables manual page:

iptables [-t table] [mode] [chain] [rulenum] [rule-specification] [options]
  • table is the type of table you want to use – usually filter for dropping disallowed packets, or nat for translating packets. filter is the default if none is specified.
  • mode is an action; -A (append), -I (insert), -D (delete), -R (replace), -L (list), and -P (set policy, a chain’s default action) are all valid modes.
  • chain is the part of the routing process to which your rule applies; there are five chains – PREROUTING, INPUT, FORWARD, OUTPUT, and POSTROUTING.
  • rulenum gives the rule a sequence. Rules are applied one-by-one, and when there is a match, the corresponding action is taken and all subsequent rules are ignored.
  • rule-specification is the actual rule itself. There are many parameters that can be specified here, like protocol, source, destination, etc.
  • options give you some customization – for example, you can add -v for verbose output.
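
As a quick aside, the nat table mentioned above is what you’d use for translation rather than filtering. A typical masquerade rule looks like this (a sketch – eth0 as the outside interface is an assumption):

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE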

Now that we have a basic idea of what iptables does, let’s drop some SSH packets.

Verifying what we’re blocking

Before adding any rules to Ubuntu20.04-Firewall in my topology, I can verify that Ubuntu20.04-SSH_Client at 192.168.0.2 has SSH access to Ubuntu20.04-SSH_Server at 172.16.0.2:

james@client$ ssh james@172.16.0.2
james@172.16.0.2's password:

james@server$

SSH is a network application protocol that usually uses TCP port 22 to establish an encrypted command-line session between two computers. If we take a quick Wireshark capture between Ubuntu20.04-Firewall and the CiscoIOSv15.6(1)T-1 router, we can see this SSH traffic traversing the link:

Wireshark capture of SSH traffic in GNS3

Starting at the top, you can see the TCP handshake on port 22, beginning with the SYN flag. A few packets later, SSHv2 packets begin. All packets use a randomized source TCP port (different for each session) and a destination TCP port of 22. So we can safely say that in this experiment, if we drop TCP destination port 22, we will effectively block SSH traffic between the client and server.

Configure the iptables rule

A rule to drop SSH packets on TCP 22 can be configured in one line on Ubuntu20.04-Firewall:

iptables -A FORWARD -p tcp --dport 22 -j DROP
  • -A specifies we are appending a rule.
  • FORWARD applies the rule to packets being forwarded from one interface to another.
  • -p tcp is a rule-specification to apply the rule to TCP packets.
  • --dport 22 applies the rule to destination TCP port 22.
  • -j DROP tells iptables to drop the packet if the previous conditions match.

Verify

Let’s see if the client can connect now:

james@client$ ssh james@172.16.0.2
ssh: connect to host 172.16.0.2 port 22: Connection timed out

It worked! Packets on TCP port 22 are being dropped at the Ubuntu firewall.
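
Keep in mind that iptables rules live in kernel memory and won’t survive a reboot on their own. One common approach (assuming the iptables-persistent package) is to save the running rules to the file it loads at boot:

apt-get install iptables-persistent
iptables-save > /etc/iptables/rules.v4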

Please keep in mind – this is a simple block on TCP destination port 22. TCP and UDP rely on the concept of “well-known” ports: servers run protocols on ports that everyone “knows”, so that someone connecting to a server for the first time, with no previous knowledge of its configuration, can still connect – both sides assume that TCP port 22 is used for SSH. However, if someone configures SSH on the client and server in this example to use any other port, the traffic will go right around the filter.

Reach out if you have issues or something to share!

Telnet to Ubuntu Server 20.04 in GNS3 Instead of VNC

If you’re using Ubuntu VMs inside of GNS3, you’re probably sick of using a VNC client to access their command lines.

The first big drawback to using VNC is that you can’t (or at least it’s not immediately obvious how to) paste text or commands you’ve found into the terminal. You have to retype everything, which is a real bummer.

The second big drawback is that a VNC session can’t be automated (or at least I don’t know of a good tool to do that). Since VNC is like RDP in that the session is visual, a human being or really advanced AI is required to interact with the session.

Having access to a VM in GNS3 via telnet to its terminal is a real benefit, and you can set it up pretty quickly on Ubuntu 20.04. Full disclosure – this method only gets you access after the device has booted and arrived at the login prompt. There is a way to allow access earlier than that so the boot process can be viewed; I just haven’t gotten to it yet.

Set your VM to not be “linked base”

One mistake I often make in GNS3 is forgetting to disable “linked base” on a VM when I want to make permanent changes. A linked base is basically a clone of your VM: any changes you make, files you download, or programs you install will be blown away when you delete the device from the GNS3 canvas. To disable this functionality temporarily while you make permanent changes, go to the device in the left pane and click “configure template”. On the advanced tab, uncheck “Use as a linked base VM”:

When you are done configuring the telnet capability, you can recheck this box. All linked base VMs you drag out afterwards will have the telnet capability.

Create the ttyS0.service

You first need to create a systemd service for serial access: a file called ttyS0.service in the /lib/systemd/system/ directory:

vi /lib/systemd/system/ttyS0.service

The file contents should look like this:

[Unit]
Description=Serial Console Service

[Service]
ExecStart=/sbin/getty -L 115200 ttyS0 vt102
Restart=always

[Install]
WantedBy=multi-user.target

getty is a program that manages tty sessions on physical or virtual terminals; it runs the login prompt when a connection is detected. 115200 is the baud rate, ttyS0 is the device file for the first serial port, and vt102 is the terminal type to emulate.

Load the service in systemd

Just a few commands will load the new service in systemd, and the script will run on boot to activate your serial device and allow telnet. Run these commands:

#Make file executable
chmod 755 ttyS0.service

#Reload systemd
systemctl daemon-reload

#Enable the service
systemctl enable ttyS0

#Start the service
systemctl start ttyS0
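
If everything loaded cleanly, a quick status check should show the service as active (output trimmed; exact wording may vary by systemd version):

systemctl status ttyS0
● ttyS0.service - Serial Console Service
     Active: active (running)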

Your service is good to go!

Change the console type to telnet

You need to first shut down your VM so you can change the console type. Once it’s shut down, you can configure the device on the canvas, the template in the left pane, or both – the template changes apply to all VMs dragged onto the canvas in the future. Either way, configure the node by right-clicking on it and choosing “configure” or “configure template”. At the very bottom, you should see a dropdown for “console type”. Change it to “telnet”:

Log in via telnet!

Just double-click on your VM. You won’t see any output in the telnet window while the VM is booting, because the service hasn’t fired yet. But when it does, you should see the login prompt:

Bonus tip – turn off dhcp in netplan

I had to turn off DHCP in netplan, Ubuntu’s network configuration tool, to stop the boot process from hanging. There should be a YAML file in /etc/netplan/ (the file name might differ per system) where you can turn it off. My netplan config looks like this:

network:
  ethernets:
    ens3:
      dhcp4: false
      optional: yes
  version: 2

Hope that helps!

SSH IP VPN Tunnel on Ubuntu 20.04

Today we will create a virtual interface to which you can assign an IP address and use like any other IP interface on Ubuntu. Its transmissions are encrypted by SSH. This is not SSH port-forwarding. I repeat, this is not layer-4 SSH port-forwarding, or what is commonly known as SSH tunneling. This is full layer-3 connectivity on top of SSH.

SSH is a common tool for network engineers and systems administrators to securely access the CLI (command-line interface) of various systems. OpenSSH is an open-source implementation of the protocol and is included or available to install on most Linux distributions. While it’s a great tool for CLI access, SSH has other, darker powers that some consider to be hacking tools or black magic.

One of OpenSSH’s somewhat well-known tools is the “SSH tunnel”, basically a port-forwarding technique that allows sending a single TCP port through an SSH connection. A much less known feature is OpenSSH’s ability to create a virtual Ethernet adapter on top of an SSH connection. This allows full layer-3 IP connectivity, not just a single layer-4 port. You can add routes that point through this virtual connection, just like you would with any other Ethernet interface. You can even run a routing protocol across it.

Topology

We are setting up an SSH IP tunnel from Ubuntu20.04Server-1, on the left side at private physical IP 192.168.0.2, to Ubuntu20.04Server-3, on the right with a public physical IP of 12.0.0.2. The tunnel will use the network 10.0.0.0/30. One fun fact: this tunnel traverses a NAT (PAT) that I set up on the Cisco router connecting Ubuntu20.04Server-1 to the Internet – SSH IP tunnels have no issues traversing NAT. Finally, we will add some static routes to allow Ubuntu20.04Server-4 to ping Ubuntu20.04Server-3 through the SSH tunnel.

Installation

Ubuntu20.04Server-3

You probably installed Ubuntu’s OpenSSH server when you installed the OS; if not, run this command on Ubuntu20.04Server-3:

apt-get install openssh-server

Now we’re going to make some changes to the SSH server configuration. Root login is required from the client in order to create a TUN adapter, so we’ll be enabling that. Edit the /etc/ssh/sshd_config file. You will make these changes:

  • Uncomment and change “PermitRootLogin prohibit-password” to “PermitRootLogin without-password”
  • Uncomment and change “PermitTunnel no” to “PermitTunnel yes”
vi /etc/ssh/sshd_config
PermitRootLogin without-password
PermitTunnel yes

The naming here is confusing – “without-password” actually means root can log in, but only with a key, never a password. It’s relatively secure. Then restart the OpenSSH server:

systemctl restart sshd

Ubuntu20.04Server-1

On Ubuntu20.04Server-1, you’ll need a tool called “autossh” that watches SSH sessions and restarts them if they die. Run this command:

apt-get install autossh

Let’s set up key authentication so we can log in as root on the server:

ssh-keygen -t rsa #create an RSA key
cat ~/.ssh/id_rsa.pub | ssh james@12.0.0.2 "mkdir -p ~/.ssh && cat >>  ~/.ssh/authorized_keys" #Copy key to server

Connecting the tunnel

We’re ready to build our tunnel! From Ubuntu 20.04Server-1 (the client at 192.168.0.2), run the following magical command:

autossh -M 0 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -NTC -o Tunnel=point-to-point -w 0:0 12.0.0.2 &

There’s a fair amount going on here; I’ll break it down:

  • '-M 0' disables autossh's monitoring TCP port; we don't need it
  • '-o "ServerAliveInterval 30"' sends a keepalive every 30 seconds
  • '-o "ServerAliveCountMax 3"' gives up after 3 failed keepalives. autossh's own options end here; native SSH options start with the next one.
  • '-N' instructs SSH not to execute a remote command
  • '-T' disables pseudo-tty allocation
  • '-C' enables compression, which may improve performance – or degrade it
  • '-o Tunnel=point-to-point' creates the virtual interface
  • '-w 0:0' gives the local and remote tun adapters a number, in this instance 0. The left side of ':' is local, the right side remote.
  • 12.0.0.2 is the tunnel destination
  • The final ampersand runs the command in the background so you get your shell back.

If you have done everything correctly, you now have a “tun0” device on both the server and client:

ip link

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 0c:88:6c:69:00:00 brd ff:ff:ff:ff:ff:ff
9: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 500
    link/none 

Now you can configure it with IP settings. You can use ip commands for testing, or netplan to survive a reboot. On the server, we’ll add a static route for 192.168.0.0/24 pointing through the newly created tunnel so it can reach that network. And on Ubuntu 20.04Server-4 (an innocent bystander in the 192.168.0.0/24 network at 192.168.0.3), we’ll add a route for the 10.0.0.0/30 network pointing to 192.168.0.2, so we can see that the tunnel routes all IP traffic, not just traffic between the client and server. Make sure the client has IP forwarding enabled, or that last part won’t work (there’s a one-liner for that after the commands below).

#On Ubuntu 20.04Server-1 (the client)

ip addr add 10.0.0.1/30 dev tun0
ip link set tun0 up

#On Ubuntu 20.04Server-3 (the server)
ip addr add 10.0.0.2/30 dev tun0
ip link set tun0 up
ip route add 192.168.0.0/24 via 10.0.0.1

#On Ubuntu 20.04Server-4 (innocent bystander at 192.168.0.3)
ip route add 10.0.0.0/30 via 192.168.0.2
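
As mentioned above, the client also needs IP forwarding enabled to route between tun0 and its LAN. A quick way to turn it on (not persistent – uncomment net.ipv4.ip_forward=1 in /etc/sysctl.conf to survive reboots):

#On Ubuntu 20.04Server-1 (the client)
sysctl -w net.ipv4.ip_forward=1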

Let’s try pinging from Ubuntu 20.04Server-4 to Ubuntu 20.04Server-3:

james@u20vm:~$ ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=63 time=3.87 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=63 time=3.73 ms

It works! Now let’s try to ping from the server Ubuntu 20.04Server-3 all the way through to Ubuntu 20.04Server-4, going right through that NAT at the Cisco router:

james@u20vm:~$ ping 192.168.0.3
PING 192.168.0.3 (192.168.0.3) 56(84) bytes of data.
64 bytes from 192.168.0.3: icmp_seq=1 ttl=63 time=3.30 ms
64 bytes from 192.168.0.3: icmp_seq=2 ttl=63 time=3.69 ms

It works!

Hope you enjoyed this one – SSH IP tunnels are one of my favorite Linux hacks.

Dockerized phpipam in GNS3

If you’re keeping track of all the IP addresses in your environment in a really big and messy Excel file, you may want to consider switching to an IP address management tool. One such tool is phpipam, a web-based tool that stores your IP addresses in a central database (a SQL database, to be specific). The reasons this approach is far superior to an Excel file are pretty clear – first of all, no more emailing a million different copies of that Excel file. But it has other advantages as well: for example, if your software development team wants to check the availability of, or reserve, an IP address, subnet, or VLAN from code, they can do it via the phpipam API without ever clicking on anything.

A testing instance of phpipam can be brought into your GNS3 environment quickly using Docker! It requires a little hacking, but nothing too ambitious. If you don’t have GNS3 or Docker installed, or you don’t know how to add a Docker image to GNS3, check out my post on that topic.

Topology

We’re not doing anything fancy here, just the phpipam docker container connected to the “NAT” cloud node. By default, the NAT cloud node uses a virtual adapter with IP subnet 192.168.122.0/24. The NAT adapter is at .1, and I’ll set my phpipam container to use .2. This setup will allow us to access the phpipam web server in the container at 192.168.122.2 via a web browser from our desktop computer that runs GNS3.

Build a custom docker image

We’re going to quickly build a custom docker image from the official phpipam image on Docker Hub. If you’re using a GNS3 VM, you can do this via a cli session on the VM. If you’re using Linux, just do this from any terminal. Make a directory for your Dockerfile:

mkdir jamesphpipam
vi Dockerfile

Now we’re going to write the Docker commands for our custom image. MySQL server needs to be installed, and the directory “/run/mysqld” needs to be created so MySQL can put a Unix socket there:

FROM phpipam/phpipam-www

RUN apk add mysql mysql-client
RUN mkdir /run/mysqld

Now we have an image (based on Alpine Linux) ready to fire up in GNS3. You’ll need to add it from GNS3 preferences -> Docker containers -> New. Go through all the screens and use the defaults, except set the “start command” to “/bin/sh”, which gives you command-line access when you double-click the container on the GNS3 canvas.

Configure MySQL and Apache

First we need to open up the CLI on the container and set its IP address to 192.168.122.2 (ip addr add 192.168.122.2/24 dev eth0). Then start up both mysqld and httpd (the MySQL server and the Apache web server), like this:

httpd
mysqld --user=root &

Make sure you use the ampersand at the end of your mysqld command, so it runs in the background.

To set the MySQL user and password, I had to log in to the MySQL CLI inside the phpipam container and run these commands:

mysql -u root
ALTER USER 'root'@'localhost' IDENTIFIED BY 'SomeSecret';

Now we should be able to access the phpipam page at 192.168.122.2 from any web browser!
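
Before opening a browser, you can sanity-check from the desktop that runs GNS3 that Apache is answering (assuming curl is installed there); any HTTP response back means the container’s web server is reachable:

curl -I http://192.168.122.2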

Configure new phpipam installation

If you click on “New phpipam installation”, it will take you to a page to select the SQL database installation type:

Let’s select “Automatic database installation”. Then we just put in the user “root” and password “SomeSecret” that we entered in our mysql cli earlier:

And our database is installed! Now we just need to set the admin password on the next screen:

Click on “Proceed to login”, login with user “admin” and the password you just set. You’ll be taken to the main phpipam page!

Hit me up if you run into any snags!

Add Dockerized Bind DNS Server to GNS3

I posted a while ago on how to install and configure the Bind DNS server on Ubuntu 18.04, and got a request from a reader with help on getting Dockerized Bind into GNS3. This post is the result of my tinkering with that lab.

The organization that oversees the Bind open source project also releases an official Docker image through Docker Hub that anyone can access. Docker container technology can be tricky at first for systems and network engineers to wrap their heads around. Docker containers are not an entire operating system – full operating systems are designed to run many processes at once, while Docker containers are designed to run one process and one process only. They contain only the software and libraries needed to run that process.

GNS3 has a very cool integration with Docker, however. It allows you to add full network adapters to your containers and copies in some handy tools to make the command line environment usable. But since many of the familiar OS tools are not included in most Docker containers like they would be with a standard OS, it can be challenging to get things working right.

If you are using Ubuntu Linux, feel free to check out my guide on installing GNS3 and Docker on Ubuntu 20.04. If you are using Windows or Mac with the GNS3 VM, Docker is already installed on the VM.

Topology

My topology is simple – a single VLAN and IP subnet of 10.0.0.0/24. My Bind DNS server will reside at 10.0.0.3, with two Alpine Linux containers at 10.0.0.1 and 10.0.0.2. I walk through getting Alpine Linux containers installed in the post linked above, if you need help.

Build your own image based off the official ISC Bind image

First, open up a shell or terminal on the GNS3 VM, or wherever the GNS3 server is located. If you don’t know how to open a shell, the official GNS3 docs walk you through it:

https://docs.gns3.com/docs/emulators/create-a-docker-container-for-gns3/

Create a directory where you can write your Dockerfile and build the image:

mkdir jamesbind
cd jamesbind

vi Dockerfile

Feel free to use whatever text editor you like; I’m a vi person. We’re going to write a Dockerfile that looks like this:

FROM internetsystemsconsortium/bind9:9.11
RUN apt-get update
RUN apt-get install vim -y

Basically all this does is pull the official Bind Docker image, update the package lists, and install vim. We have to do this because the image doesn’t ship with a text editor, and we need to edit the Bind configuration files.

Full disclosure: there is another, much better way than manually editing config files from inside the container. You can write the config files in the same folder as the Dockerfile, and add them to the Docker image when you build it. However, I think it’s best for learning and troubleshooting purposes to manually edit the text files, so that’s the route I’m going.

Build your image (the -t switch gives it a “tag”, which is basically a name):

docker build -t jamesbind .

Don’t forget the period at the end – that’s important. You should now have a fresh Docker image with Bind and vim installed.

Add your image to GNS3

From the GNS3 preferences window, you can now add your image to the list of devices available.

Click through and use the defaults except when you get to the “Start command” window. You’ll want to set that to /bin/bash:

Now you’re ready to use your image in GNS3!

Fire up Bind

Drag the containers out, connect them, and double-click on them to get a terminal. You should be able to configure IP settings normally using iproute2 commands (ip addr add 10.0.0.1/24 dev eth0, etc.). For the Bind container, let’s write our config files. As I mentioned many cycles ago in my Bind server post, there are three Bind config files:

/etc/bind/named.conf.options –> Configures BIND9 options
/etc/bind/named.conf.local –> Sets zone file name and gives its location
/etc/bind/zones/db.jamesmcclay.com –> The actual zone file with DNS records.

First let’s hop into our Bind container (just double click on it) and configure named.conf.options. Mine looks like this:

options {
        directory "/var/cache/bind";
        listen-on { any; };
};

Now on to named.conf.local. This is where you declare your zone. Mine is going to be jamesmcclay.com – I just made it up.

zone "jamesmcclay.com" {
    type master;
    file "/etc/bind/zones/db.jamesmcclay.com";
};

Now for the zone file we referenced above. It doesn’t exist yet, so let’s create both the zones folder and the db.jamesmcclay.com zone file:

cd /etc/bind
mkdir zones
cd zones
vi db.jamesmcclay.com
@               IN      SOA     ns.jamesmcclay.com.    root.jamesmcclay.com. (
                                2               ; Serial
                                604800	        ; Refresh
                                86400           ; Retry
                                2419200         ; Expire
                                604800 )        ; Negative Cache TTL
;
@               IN      NS      ns.jamesmcclay.com.
ns              IN      A       10.0.0.3
alpine1         IN      A       10.0.0.1
alpine2         IN      A       10.0.0.2

Finally, fire up Bind by running the “named -g” command. This runs it in the foreground with debug output, which is handy. Alternatively, you can just run “named” and it’ll go into the background. When you run it, look for a line saying your zone file was loaded. “all zones loaded” seems to be a lie – if there are errors in your zone, it’ll report them and then say all zones were loaded anyway. Make sure you read the output carefully:

named -g
<...removed for brevity...>
26-Oct-2021 23:49:14.231 zone jamesmcclay.com/IN: loaded serial 2
26-Oct-2021 23:31:49.828 all zones loaded
26-Oct-2021 23:31:49.829 running

In your Alpine containers, add “nameserver 10.0.0.3” to resolv.conf to tell them to use the Bind server for DNS resolution:

echo "nameserver 10.0.0.3" > /etc/resolv.conf

Testing your setup

First let’s ping ns.jamesmcclay.com (the Bind container) from alpine-1:

ping ns.jamesmcclay.com

PING ns.jamesmcclay.com (10.0.0.3): 56 data bytes
64 bytes from 10.0.0.3: seq=0 ttl=64 time=1.080 ms
64 bytes from 10.0.0.3: seq=1 ttl=64 time=1.073 ms

It works! In a Wireshark packet capture, we can see the DNS request from 10.0.0.1 and the response from 10.0.0.3:

Pinging alpine2.jamesmcclay.com also works:

ping alpine2.jamesmcclay.com

PING alpine2.jamesmcclay.com (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: seq=0 ttl=64 time=0.999 ms
64 bytes from 10.0.0.2: seq=1 ttl=64 time=1.087 ms
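
You can also query the Bind server directly rather than going through ping’s resolver. BusyBox’s nslookup applet (assuming your Alpine image includes it) takes the server as a second argument; the answer should come back as 10.0.0.2:

nslookup alpine2.jamesmcclay.com 10.0.0.3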

Troubleshooting

The Bind configuration files are really sensitive to anything that’s left out. Be sure to check whether you forgot a semicolon and that your zone file is properly formatted, with all required entries in place. And again, I highly recommend using “named -g” while testing; it’ll give you some big hints as to what is wrong with your configuration.

If your Bind server is running with no config errors and something still isn’t working, it could be a network issue. Make sure to take a packet capture to see if packets are actually flowing and are what you expect! Sometimes, after troubleshooting for a long time, I take a packet capture only to find that packets were never leaving the network interface due to something I forgot, like adding an IP address or a route somewhere.

Good luck! Feel free to reach out with questions about your lab, I’m always happy to help.

Install GNS3 and Docker on Ubuntu 20.04 for Cisco and Linux Network Labs

Every major OS has its place, so I’m not hoping to get into that discussion, but I find that Ubuntu Linux works really well for creating network labs in GNS3. If you’re not familiar with GNS3, you’re missing out. It allows you to pull real VMs, and even Docker containers, into an emulated network environment for testing and experimentation. You can run Cisco routers and switches, gear from other network vendors, Windows Desktop and Server, Linux, and any other OS supported by Linux’s QEMU/KVM hypervisor – which is pretty much anything. GNS3 has many features, but today we’ll just look at getting it installed, along with Docker.

Why does GNS3 run better on Ubuntu? You may have noticed that in the Windows and Mac versions of GNS3, the server has to run in a VM to work properly. That server VM runs a Linux OS – specifically Ubuntu. So using Ubuntu as your desktop OS means you’re cutting out all of the complexity of the server VM, not to mention the additional RAM it consumes. Simply put, GNS3 runs the way it’s supposed to on Ubuntu. Not to knock the Windows and Mac versions – the GNS3 team worked hard on those – but in my humble and honest opinion, Ubuntu just works better for GNS3.

Most folks stick to using VMs in GNS3, but the Docker integration is pretty awesome and has some very real benefits over VMs. Any Docker container installed on the same system as the GNS3 server can be pulled into GNS3, although whether it will work properly depends somewhat on what the container has installed in it.

GNS3 Installation

The official GNS3 Ubuntu releases can be found at their PPA at:

https://launchpad.net/~gns3/+archive/ubuntu/ppa

The PPA can be added and GNS3 installed with just a few quick commands, although it’s a relatively big download:

sudo add-apt-repository ppa:gns3/ppa
sudo apt-get update
sudo apt-get install gns3-gui

When you first run GNS3, you’ll notice that the default option is not a VM – it’s to run the server locally. No VM needed!

At this point, GNS3 is installed, although you may have to run this command to get Wireshark captures working:

sudo chmod 755 /usr/bin/dumpcap

Docker Installation

I’ll just be following the official Docker instructions here, they work great:

https://docs.docker.com/engine/install/ubuntu/

These steps install Docker from its apt repository, which is probably the “best” option. There is a convenience one-liner script, but we all know that’s not a good habit to get into, so we’ll avoid it.

First install dependencies:

 sudo apt-get update
 sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

Add the Docker official GPG key:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

Add the stable repository:

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

And install:

 sudo apt-get update
 sudo apt-get install docker-ce docker-ce-cli containerd.io

To avoid getting permissions errors in GNS3, you’ll need to add your user to the docker group. You’ll need to log out and back in, or restart, for this to take effect:

sudo usermod -aG docker ${USER}
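
Once you’ve logged back in, a quick way to verify that Docker works without sudo is the hello-world image (this pulls from Docker Hub, so Internet access is assumed):

docker run hello-world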

Add a Docker container to your GNS3

Now that Docker is installed, pulling a Docker image from Docker Hub is easy. A popular one is Alpine Linux because it’s so small, yet packs lots of popular tools and libraries:

docker pull alpine

Now you should be able to add this image to GNS3. Go into GNS3, go to preferences, and all the way at the bottom where it says “Docker containers”. Click on “new”, and you should be able to select the Alpine Linux image from the drop-down menu:

Click through and leave the defaults, but you might want two network adapters instead of one, in case you want it to be a router. Now just drag a couple containers out onto the canvas:

At this point, you should be able to double click on these and get a busybox shell, which will let you configure IP settings and the like. You may have noticed that the startup of these containers is near-instantaneous, and they consume very little RAM. One of the many perks of the lightweight nature of Docker containers. Enjoy!