Securing applications with web auth proxy

A deep dive into how we secure private content and weaker web applications behind an SSO authentication proxy

4 min read
Securing applications with web auth proxy
Photo by Taylor Vick / Unsplash

In a previous blog post, I talked about how VPNs and Single Sign-on technology were critical to the way in which we protect some of our applications that we don't want to be exposed publicly, and that don't always support native and modern authentication technologies.

But why is this even a problem? You have a VPN, right?

Well, sort of... When we performed some major migration activities last year and upgraded the underlying hardware in our French region, the legacy OpenVZ virtualisation behaved differently on the new server than on the old one, which ultimately broke the VPN server we were using internally.

In addition to this, we also started to look at whether our previous approach would work well for some of our other use-cases. Our underlying corporate infrastructure has changed substantially since we decided that a VPN with AzureAD Single Sign-On was the right way forward, and we've since moved all of our static websites out of AWS and onto another provider.

We also wanted to be more granular about who actually has access to these resources, and to be able to grant access to individuals who work with ATLAS but are not on our staff team.

Why are you using this at all?

We have two primary use-cases for protecting resources behind a secure authentication proxy. The first, as discussed above, is to protect internal resources that should only be accessible to a smaller audience, where the application itself doesn't provide authentication.

The other use-case, and the one that gave us the urgency to deploy this solution, was protecting a web application that we had reason to believe had weak authentication and likely very little protection against basic attacks such as brute-forcing. In this case the application is COTS and closed-source, so we couldn't identify and fix these issues even if we wanted to. However, it supports one of our projects and needed to continue to exist, and we also needed to share it with AzureAD users that were both full users and guests.

What did we use and how does it work?

After doing quite a bit of research, and after our team stood up a few test applications protected by various options, we ended up going with an open-source project called oauth2-proxy for two main reasons: it is an actively maintained project in the open-source space, and it has native support for AzureAD, which is our corporate tool of choice for authentication and authorisation.

In our configuration we use NGINX as our web server, with two server blocks: one serving the non-secure application on localhost on a specified port, and another listening on the public HTTPS port (443) in front of oauth2-proxy. oauth2-proxy authenticates the user and, once authenticated, proxies the traffic to the internally listening application. This lets us put the secure identity provider on the internet-facing side, so the weaker auth (which we can't remove from the COTS product) is only prompted once you are already permitted access to the app in our AzureAD.

The Config

The bit you have all been waiting for... Our configuration!

For our internal app, it's a fairly default NGINX configuration: just a listener and a server name pointing to a folder on the local file system.

server {
    listen 127.0.0.1:80;
    server_name mydomain.com;

    proxy_busy_buffers_size   512k;
    proxy_buffers   4 512k;
    proxy_buffer_size   256k;

    root /var/www/test-site-1;

    index index.html index.htm index.nginx-debian.html;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }
}

For our auth proxy itself, NGINX acts as a proxy to the oauth2-proxy application, which in our configuration listens internally on port 4180, and the config looks like this:

server {
    server_name mydomain.com;

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    location / {
        proxy_pass http://127.0.0.1:4180;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 1;
        proxy_send_timeout 30;
        proxy_read_timeout 30;
        proxy_busy_buffers_size   512k;
        proxy_buffers   4 512k;
        proxy_buffer_size   256k;
    }

}

As we are using Certbot to provide our TLS certificates, there is a final server block containing the default HTTP to HTTPS redirect it provides.
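For completeness, that redirect block typically looks something like the following. This is a sketch of what Certbot's nginx plugin usually generates, not our verbatim config, and the exact contents vary by Certbot version:

```nginx
server {
    listen 80;
    listen [::]:80;
    server_name mydomain.com;

    if ($host = mydomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    return 404; # managed by Certbot
}
```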

In terms of the configuration file for oauth2-proxy itself, we've gone with the following configuration in our production environment:

## OAuth2 Proxy Config File

provider = "azure"
client_id = "<REDACTED>"
client_secret = "<REDACTED>"
oidc_issuer_url = "https://sts.windows.net/<REDACTED>"
cookie_secret = "<REDACTED>"
redirect_url = "https://mydomain.com/oauth2/callback"
cookie_domains = "mydomain.com"
cookie_secure = true
reverse_proxy = true

## the http url(s) of the upstream endpoint. If multiple, routing is based on path
upstreams = [
     "http://127.0.0.1:80/"
 ]

email_domains = [
     "*"
 ]
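A note on the cookie_secret value above: oauth2-proxy expects a random 32-byte secret. One common way to generate a suitable value is a short Python one-liner; the sketch below wraps it in a hypothetical helper function for clarity:

```python
import base64
import os

def generate_cookie_secret(num_bytes: int = 32) -> str:
    """Return a URL-safe base64-encoded random secret of num_bytes
    raw bytes, suitable for oauth2-proxy's cookie_secret option."""
    return base64.urlsafe_b64encode(os.urandom(num_bytes)).decode("ascii")

print(generate_cookie_secret())
```

Paste the printed value into the config file; never reuse a secret between environments.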

The two most critical aspects of this config are upstreams, which points at the internally hosted application NGINX is already serving (your weak-auth application in this example), and the email_domains field.

In our specific case we're using AzureAD, so it's a directory we control as a company. In this directory we have several non-ATLAS staff accounts (for those familiar with AzureAD, they're "Guest" accounts) with a mix of origin e-mail addresses, so we found it more sensible to offload the requirement to check that an e-mail is valid to Microsoft and AzureAD. If you're not using AzureAD, this is absolutely something you should review and validate.
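If you did want to restrict logins to a single domain rather than trusting the identity provider's whole directory, the equivalent setting would look like this (example.com is a placeholder, not our domain):

```toml
## Only allow e-mail addresses from this domain to authenticate
email_domains = [
     "example.com"
 ]
```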

What limitations are there?

For us the biggest limitation is that it acts as a fairly "dumb" reverse proxy, so users in our setup are required to authenticate twice: once against AzureAD, and then a second time against the application's internal directory (the weak one in this scenario). As a result, we have double the accounts to manage, and the user experience is less than ideal.

Otherwise, we've not noticed any major issues so far and are happy with how it operates.