Need to force HTTPS in NGINX behind a classic AWS Load Balancer? It’s important and easy to do. Here’s how:
First of all, TLS/SSL is a good thing for your website. Quite often HTTP over TLS will be used only on pages requiring extra security, such as login, signup and “my account” pages. In the past, this extra layer of security was seen as slower and therefore unnecessary for sections of a site that don’t expose or transmit private information and user data. However, while it is more CPU and resource intensive, modern hardware and systems can handle it just fine.
In fact, in 2014 Google began calling for “HTTPS Everywhere” and indicated in a blog post that entire-site HTTPS would begin to be a ranking factor for websites in their search algorithms. It was a lightweight factor at the time, but of increasing importance ever since.
So, yes HTTPS is a good thing for your website… and easy to implement, too. If you’re using an AWS ELB (Elastic Load Balancer), you can load the SSL cert directly on the ELB and it can handle the secure traffic for all server instances behind that balancer. This removes the need to manage the cert on each instance, which is nice. However, I found it to be a bit tricky when I attempted to redirect all traffic to HTTPS in NGINX. Typically when doing this on servers that are not behind an ELB, I would add two server blocks in my configuration file – one for port 80 and one for 443. The 80 block would simply redirect to HTTPS and then the 443 block would accept that traffic and handle the certificate. This just wouldn’t work for me behind the ELB. I would get infinite redirects or in some attempted configurations, it just wouldn’t serve up the HTTPS version.
So, here’s how to force HTTPS in NGINX behind an AWS Load Balancer:
First, attach your SSL cert to the load balancer. And then configure both port 80 and 443 to send traffic to each instance through port 80 like this:
In your NGINX site configuration file (typically in the /etc/nginx/sites-available/ folder), add a single server block listening on port 80, since all traffic is now flowing through that port. Now you only need to add a bit of configuration to check for HTTPS and redirect if it’s not being used. You can do that with:
server {
    # all traffic, secure or otherwise, comes in on 80 from the ELB
    listen 80;

    # ELB stores the protocol used between the client
    # and the load balancer in the X-Forwarded-Proto request header.
    # Check for 'https' and redirect if not
    if ($http_x_forwarded_proto != 'https') {
        return 301 https://$host$request_uri;
    }

    server_name your-secure-site.com www.your-secure-site.com;

    .... (the rest of your config)
}
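The decision the config makes can be sketched as a tiny Python function (the names here are illustrative, not part of nginx): the ELB terminates TLS and reports the client’s original scheme in the X-Forwarded-Proto header, and the server block redirects whenever that header isn’t 'https'.

```python
def redirect_target(x_forwarded_proto, host, request_uri):
    """Mirror of the nginx rule above: redirect to HTTPS unless the
    ELB says the client connection was already HTTPS."""
    if x_forwarded_proto != "https":
        return "https://" + host + request_uri  # the 301 Location target
    return None  # already secure; serve the request normally
```

So a plain-HTTP request for /account comes back as a 301 to the HTTPS URL, while an HTTPS request passes straight through to the rest of the config.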
And that’s it! All traffic on your site should now run through the secure protocol. Your visitors will be happy and Google will like your site more 🙂
You can now just create a redirect in the AWS Load Balancer Settings.
Just create a new listener for the Application Load Balancer which redirects HTTP on port 80 to HTTPS on port 443, and it works.
You don’t have to edit the nginx config anymore.
Ok, great – thanks! Yeah, this was posted quite a while back, so things have changed.
Finally, I could fix this even with a non-Classic Load Balancer.
Non-Classic LBs have an option to listen on port 80 and redirect it to HTTPS, so I did not have to tinker with any nginx settings.
CAUTION when doing this. You need to ensure that your instance does not allow public access to port 80 when doing something like this. If port 80 is public, then there is very little security involved in your site anymore. If someone’s DNS is spoofed, the hijacking destination can send that header itself with no difficulty, and nothing prevents it from skipping the redirect to HTTPS.
In cases where people are migrating to use a load balancer, these ports would have originally been open to the public, and now need to be closed off. I recommend removing port 443 entirely from the firewall access to the instances, and allowing port 80 only from the VPC, or, even better, only from the load balancer directly if that’s possible.
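To make the caution above concrete, here is a minimal Python sketch of the same header check (illustrative names, not nginx itself). The nginx `if` trusts whatever X-Forwarded-Proto value arrives, so whoever can reach port 80 directly controls the outcome:

```python
def should_redirect(headers):
    # Same test as the nginx 'if': trust whatever header arrives.
    # Traffic forwarded by the ELB over plain HTTP carries 'http'
    # and gets redirected; a client hitting port 80 directly can
    # forge 'https' and be served over plain HTTP with no redirect.
    return headers.get("X-Forwarded-Proto") != "https"
```

This is why the instance firewall should accept port 80 only from the load balancer, never from the open internet.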
I should note that this particular configuration can cause your ELB health checks to fail if you use HTTP checks on that port. I believe what happens is that they aren’t sent with X-Forwarded-Proto and therefore redirect to HTTPS. Since the EC2 instance isn’t actually listening on HTTPS, the health check is unable to connect and will mark your instances down.
This is the configuration I use:
location / {
    if ($http_x_forwarded_proto != 'https') {
        return 301 https://$host$request_uri;
    }
}

location /health-check {
    access_log off;
    default_type text/plain;
    return 200 'OK';
}
This way, you can point your health checks at /health-check, and they’ll succeed without the redirection to HTTPS.
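The two location blocks amount to the following routing logic, sketched in Python (hypothetical names): the health-check path answers 200 before the protocol check ever runs, so an ELB probe that carries no X-Forwarded-Proto header still passes.

```python
def handle(path, host, headers):
    """Sketch of the config above: exempt the health-check path,
    apply the HTTPS redirect to everything else."""
    if path == "/health-check":
        return 200, "OK"                      # plain 200 for the ELB probe
    if headers.get("X-Forwarded-Proto") != "https":
        return 301, "https://" + host + path  # redirect insecure traffic
    return 200, "serving content"             # placeholder for the real site
```

A probe to /health-check with no headers gets a 200, while an ordinary plain-HTTP page request is redirected as before.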
Thanks, great info!
if ($http_x_forwarded_proto != 'https') {
    return 301 https://$host$request_uri;
}
Would be a better option, great article though very insightful.
Thanks! You’re right, I see that NGINX recommends the tighter “return” syntax according to their “pitfalls” page. I’ve updated the post.
Unfortunately your configuration does not apply to current Application Load Balancers, only to Classic Load Balancers. And even when choosing those, my health check does not pass (when checking /health on port 80 I get a failure with a 301 code, and when checking /health on port 443 I get “health check failed”).
Yes, you’re right – the load balancers we are using, in this case, are the “classic” type. However, I don’t see any issues with health checks when configured as mentioned in the post. I updated the blog text to indicate that it is for the classic type. Thanks!
Thank you for such detailed explanation + with the screenshot. Was scratching my head for 2 days to make it work. Your article has saved my job man. Thanks 🙂
Great – glad it helped! Thanks for letting me know.
Where does one find the NGINX site configuration file?
Oh – sorry, I am referring to the individual file for each site you have configured. This would typically be in a spot like:
/etc/nginx/sites-available/your-site-config-file