As a roaming AWS trainer, I work on my AWS infrastructure from many different locations to give demos to the course attendees and prepare stuff in EC2 instances when necessary.

When I launch instances, I usually do so in private subnets, not opening them to the Internet unless absolutely necessary. To access these instances I use what is called a stepping stone, jump server or bastion host.

The idea of a bastion host is that it is the single point of entry into your (cloud) infrastructure. Therefore, you should harden and secure that host to the best extent possible. Read my blog about switching the bastion host on and off with the press of an IoT button to see how that can be implemented on AWS.

After the bastion host is started, just how do you get into the bastion and subsequently into the other instances ‘behind’ it? It all begins with keys. SSH keys, that is. Of course, you use a different key for the bastion host than for the other instances you want to access. Should one of the keys be compromised, not everything is lost!

For the bastion I use a key called bastion.labs.easytocloud, whereas for ‘normal’ instances I use a key called labs.easytocloud.
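If you want to set up a similar pair of keys, something like this would do (a minimal sketch; I'm assuming ed25519 keys here, and the key names are of course mine):

$ ssh-keygen -t ed25519 -f ~/.ssh/bastion.labs.easytocloud    # key for the bastion only
$ ssh-keygen -t ed25519 -f ~/.ssh/labs.easytocloud            # key for the instances behind it

$ aws ec2 import-key-pair --key-name bastion.labs.easytocloud \
        --public-key-material fileb://~/.ssh/bastion.labs.easytocloud.pub
$ aws ec2 import-key-pair --key-name labs.easytocloud \
        --public-key-material fileb://~/.ssh/labs.easytocloud.pub

The import-key-pair calls make the public halves available as EC2 key pairs, so new instances can be launched with them; the private halves never leave my laptop.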

Our bastion host has an entry in my .ssh/config file that looks like this:

Host bastion-labs
        User ec2-user
        IdentityFile ~/.ssh/bastion.labs.easytocloud
        Hostname bastion.labs.easytocloud.com
        ForwardAgent no

So when I type ssh bastion-labs, I am automagically logged in to the specified bastion, using the correct key.

Now I could log in to the bastion and then ‘hop’ to the target instance behind it. But where to store the key of the target instance? If I store that key ON the bastion host, it defeats the purpose: when the bastion is compromised, the intruder has the key to go to the target host. No extra security there!

In AWS courses, attendees are encouraged to use SSH agent forwarding instead. The SSH agent runs on your local machine (desktop/laptop) and serves keys over the very SSH connection that you make with your bastion host. Consider it a kind of ‘call back’ from the bastion to the originating device. Good from the perspective of not storing keys on the bastion, but when the bastion is compromised, the attacker only has to wait for you to log in to ‘call back’ to your device and use the key. As both the bastion and the final target keys are stored on my laptop, any intruder on the bastion can use the subsequent keys through my device whenever I am logged in. Read more about this in SSH agent forwarding considered harmful. Nothing really changed since goto was banned 😉
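For completeness, this is roughly what the agent-forwarding workflow looks like, and exactly what I avoid with the ForwardAgent no line in the config above:

$ ssh-add ~/.ssh/labs.easytocloud    # load the target key into the local agent
$ ssh -A bastion-labs                # -A forwards the agent to the bastion

# on the bastion, the hop to the target is signed by my laptop's agent:
[ec2-user@bastion ~]$ ssh ec2-user@foo.labs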

So, now what? Of course we could run some VPN server on our bastion. And in fact, we do. But as a roaming user, I too often end up in locations where VPNs are blocked. That is where port forwarding comes in.

When connecting to an instance in our labs VPC, I first create a tunnel to the bastion and then run an ssh command over the tunnel to the target machine. This is implemented with the following bit in .ssh/config:

Host *.labs
        User ec2-user
        IdentityFile ~/.ssh/labs.easytocloud
        ProxyCommand ssh -q -W %h:%p bastion-labs
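
The ProxyCommand line is where the tunnel is created: for every host matching *.labs, ssh first connects to bastion-labs and uses -W %h:%p to forward its standard input and output to the target host (%h) on the target port (%p). So a single command on my laptop performs both hops:

$ ssh foo.labs    # under the hood: ssh -q -W foo.labs:22 bastion-labs, with the real session on top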

As a bonus, my laptop connects to the target machine rather than to the bastion, so that when I copy files from and to a cloud instance, I don’t have to park them on a relatively slow/small bastion host. The security group on the target instance allows ssh traffic only from the bastion, as that is where it *really* comes from. In fact, we use the security group of the bastion host as the allowed source in the security group definition of the target instances.
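In CLI terms, that cross-reference looks something like this (the group ids are hypothetical placeholders):

$ aws ec2 authorize-security-group-ingress \
        --group-id sg-TARGETGROUP \
        --protocol tcp --port 22 \
        --source-group sg-BASTIONGROUP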

The bastion host has a security group attached to it called ‘ErikIsHere’. This security group opens the bastion for VPN and SSH from my present location. To change the security group to reflect my current location, I have a script on my laptop named ‘iamhere’. Through a series of AWS CLI commands, it reconfigures the security group ‘ErikIsHere’ to open the relevant ports with my laptop’s current (public) IP address as source.
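The actual script is not part of this post, but a minimal sketch could look like this (assuming default region and profile; I use checkip.amazonaws.com to learn my public IP, and only show the ssh port here):

#!/bin/bash
# iamhere - repoint the 'ErikIsHere' security group at my current location

MYIP="$(curl -s https://checkip.amazonaws.com)/32"

# look up the security group id by name
SG_ID=$(aws ec2 describe-security-groups \
        --filters Name=group-name,Values=ErikIsHere \
        --query 'SecurityGroups[0].GroupId' --output text)

# revoke the ingress rules of the previous location (complains if the group was already empty) ...
aws ec2 describe-security-groups --group-ids "$SG_ID" \
        --query 'SecurityGroups[0].IpPermissions' > /tmp/rules.json
aws ec2 revoke-security-group-ingress --group-id "$SG_ID" \
        --ip-permissions file:///tmp/rules.json

# ... and allow ssh from my present public IP only
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
        --protocol tcp --port 22 --cidr "$MYIP"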

When I arrive at a (course) location, I type iamhere on my laptop to configure the security group, start the bastion, and thereafter can ssh to instances ‘behind’ the bastion host.

To find the IP address of such instances, we use an AWS private hosted zone: a DNS service within the VPC. Whenever we launch an instance in the labs VPC, a DNS record for that instance is added to the private hosted zone. This is achieved with AWS CloudWatch Events rules. A Lambda function is called whenever an instance starts or stops, and registers that instance’s name (as set in the tag called Name) in DNS. All instances in the VPC use this private hosted zone for DNS information, so when we launch an instance named foo, foo.labs will be registered and any instance in the labs VPC can resolve foo.labs to the instance’s private IP address. From my laptop I can now connect to foo.labs by typing ssh foo.labs!
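The Lambda function itself is out of scope for this post, but the essence of what it does on an instance start event boils down to two calls, shown here as the equivalent CLI commands (the instance id and hosted zone id are placeholders; this sketch assumes the Name tag contains no spaces):

# look up the Name tag and private IP of the instance that just started
read NAME IP <<< "$(aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
        --query 'Reservations[0].Instances[0].[Tags[?Key==`Name`]|[0].Value,PrivateIpAddress]' \
        --output text)"

# UPSERT an A record <Name>.labs in the private hosted zone
aws route53 change-resource-record-sets --hosted-zone-id Z0PLACEHOLDER \
        --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":
        {"Name":"'"$NAME"'.labs","Type":"A","TTL":60,
        "ResourceRecords":[{"Value":"'"$IP"'"}]}}]}'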

Here is how that works. I added an entry in my ssh config for *.labs (see above) stating to use ec2-user as username and the labs.easytocloud key as ssh key. But before anything else, it sets up the port forwarding to the bastion host. A real cool bonus here is that the resolving of foo.labs is actually handled by the bastion host! My laptop wouldn’t be able to resolve foo.labs, but the bastion host can.

So, any new instance I launch in the labs VPC using the labs.easytocloud SSH key will be directly accessible from my laptop using no more than the name of the instance. Not only for ssh login, but also for copying files using scp:

$ scp large-local-file foo.labs:/tmp/remote-file

More often than not, ssh and port forwarding are sufficient for my needs. They have proven a good alternative in situations where VPN was blocked, with the added value of name resolution for instances in the VPC.