Practical Advice for Securing Cloud Applications with Firewalls

In our three-part series on setting up a minimal cluster for a LAMP web application on a cloud provider such as DigitalOcean, we relied on the private networking feature for communication between the nodes.

Many customers evaluating enterprise-level cloud providers like AWS, Azure, and GCP are unfamiliar with how their approach to private networking differs from that of the independent cloud providers.

Default Firewall Configuration on AWS, Azure, GCP vs. DigitalOcean or Linode

With the big 3 cloud providers, the first step in deploying services is setting up a VPC (AWS and GCP) or Virtual Network (Azure), which represents a VLAN where your instances reside. Out of the box, all of the resources deployed within a VPC are isolated from resources belonging to other VPCs, whether owned by you or by other customers.

During the deployment process, you must set up a Security Group, which represents a set of rules defining which ports should be opened up (and to which sources) at the network firewall level. Without associating the VM with a Security Group and defining some incoming rules, such as allowing SSH (port 22) access, you won’t even be able to access your instance. On AWS, you must even manually provision an Internet Gateway and set up a routing table to enable your EC2 instance to access the Internet.
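
As a rough sketch of what this looks like with the AWS CLI (the VPC and security group IDs below are placeholders, and 203.0.113.0/24 is an example admin network):

# Create a security group within your VPC (vpc-xxxxxxxx is a placeholder)
aws ec2 create-security-group --group-name admin-ssh --description "Allow SSH from the admin network" --vpc-id vpc-xxxxxxxx
# Allow inbound SSH (port 22) from a specific source range (sg-xxxxxxxx is a placeholder)
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 22 --cidr 203.0.113.0/24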

This reflects the design philosophy of the major cloud providers, where security reigns above all other considerations to assuage the concerns of IT executives who were initially reluctant to migrate to the public cloud. Dev-oriented clouds such as DigitalOcean, Linode, and Vultr, on the other hand, were created with simplicity and speed of deployment in mind.

By default, DigitalOcean and Linode do not restrict any ports on your virtual machine instances at the network level, on either the public or private interfaces. Even for an inexperienced hobbyist, setting up a server is very quick, but security can be left as an afterthought. Fortunately, many of the machine images (especially DigitalOcean’s One-Click Apps) are set up to restrict traffic to only ports 22 (SSH), 80 (HTTP), and 443 (HTTPS) with a software firewall such as ufw, firewalld, or iptables. If available, it is also wise to set up a network-level firewall (such as DigitalOcean Cloud Firewalls) in conjunction with the software firewall, to minimize the performance impact of port scanning or brute-force attacks against your instances. Without a network firewall, a malicious actor can potentially overwhelm your server by flooding it with more packets than the software firewall rules can process, resulting in a denial of service.
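
If you are configuring such a baseline yourself, a minimal sketch with ufw (assuming an Ubuntu/Debian image where ufw is installed) looks like this:

# Deny all incoming traffic by default, then open only the needed ports
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable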

Security of Private Networking on DigitalOcean and Linode

Private networking is another aspect where the independent cloud providers differ significantly from the default behavior of the major players. When DigitalOcean and Linode launched instance-to-instance communication, every instance within a datacenter could communicate with the open ports on the internal interface of any other instance, whether it belonged to your account or not. Malicious users could easily spin up a VM within a datacenter, scan for machines listening on common service ports such as MySQL (3306) or Redis (6379), and compromise any with blank or weak root passwords. When using the internal network, it was absolutely crucial to secure any services supporting authentication with strong access keys and/or passwords.
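
For services that do support authentication, a minimal hardening sketch for Redis (assuming the configuration lives at /etc/redis.conf, with 10.132.xxx.xxx as the instance’s private address) would be:

# Listen only on loopback and the private interface, never on 0.0.0.0
bind 127.0.0.1 10.132.xxx.xxx
# Require a password for every client connection
requirepass use-a-long-randomly-generated-secret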

This behavior changed on DigitalOcean as of July 2018: private network access is now limited to the droplets within your account or team, greatly improving security for inexperienced users and for services such as NFS, which don’t support authentication by default. With Linode, the private network is still one shared network among all customers in the same datacenter. Based on an interview with Christopher Aker, the founder of Linode, this is set to change in the near future, with Linodes getting their own VLANs, conceptually similar to Amazon’s VPCs or Azure’s Virtual Networks.

As an additional line of defense, the software and cloud firewall rules should be set up to allow access to internal services only from the nodes that require it, for example, opening port 3306 on the DB server only to the application server(s) in the cluster. This is in keeping with the “principle of least privilege” drilled into the minds of sysadmins and cloud administrators since time immemorial.

In the examples below, 10.132.xxx.xxx/32 is the IP address of the cluster node requiring access to the service. Repeat the --add-source command as many times as needed to create rules for all applicable nodes. Technically, it is also possible to specify a wider CIDR range such as 10.132.0.0/16, but on providers such as DigitalOcean or Linode, there is no guarantee that newly created instances will fall within a particular IP range.

Example Firewall Rules for MySQL/MariaDB

firewall-cmd --permanent --new-zone=database
firewall-cmd --reload
firewall-cmd --permanent --zone=database --add-source=10.132.xxx.xxx/32
firewall-cmd --permanent --zone=database --add-port=3306/tcp
firewall-cmd --reload
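
To confirm a zone is configured as intended (the same check applies to the zones created in the examples that follow), list its rules after the reload:

firewall-cmd --zone=database --list-all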

Example Firewall Rules for Redis

firewall-cmd --permanent --new-zone=redis
firewall-cmd --reload
firewall-cmd --permanent --zone=redis --add-source=10.132.xxx.xxx/32
firewall-cmd --permanent --zone=redis --add-port=6379/tcp
firewall-cmd --reload

Example Firewall Rules for NFS

On the NFSv4 server, uncomment and edit the following lines in /etc/sysconfig/nfs:

MOUNTD_PORT=892
STATD_PORT=662
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769

mountd and statd listen on both TCP and UDP, while lockd uses distinct TCP and UDP ports.

Restart the NFS services.
systemctl restart nfs-config
systemctl restart nfs-server

Configure the firewalld rules.
firewall-cmd --permanent --new-zone=storage
firewall-cmd --reload
firewall-cmd --permanent --zone=storage --add-source=10.132.xxx.xxx/32
firewall-cmd --permanent --zone=storage --add-port=2049/tcp
firewall-cmd --permanent --zone=storage --add-port=2049/udp
firewall-cmd --permanent --zone=storage --add-port=111/tcp
firewall-cmd --permanent --zone=storage --add-port=111/udp
firewall-cmd --permanent --zone=storage --add-port=892/tcp
firewall-cmd --permanent --zone=storage --add-port=892/udp
firewall-cmd --permanent --zone=storage --add-port=662/tcp
firewall-cmd --permanent --zone=storage --add-port=662/udp
firewall-cmd --permanent --zone=storage --add-port=32803/tcp
firewall-cmd --permanent --zone=storage --add-port=32769/udp
firewall-cmd --reload

To prevent packet sniffing and interception of LAN traffic by unrelated customers sharing the same subnets on the DigitalOcean and Linode private networks, both providers enforce measures forbidding machines from entering “promiscuous” mode. In theory, it shouldn’t be necessary to encrypt data over the LAN to maintain confidentiality, but it is always good practice to use transport security such as a VPN if the data is particularly sensitive.

Additional security with third-party services such as CloudFlare

CloudFlare is a DNS provider, CDN, and security service rolled into one. Its generous free tier with unlimited bandwidth, compared to conventional CDNs that charge per GB transferred, has made it wildly popular with website owners. Premium upgrades are available, such as additional Page Rules (to exclude certain URL patterns from caching) or the ability to upload a custom SSL certificate.

Most VPS consumers don’t realize that CloudFlare is not a panacea against DDoS attacks or common exploits of popular open-source scripts such as WordPress. However, if properly configured, using CloudFlare as a reverse proxy in front of your origin server (or load balancer) can provide an additional measure of security.

Since CloudFlare functions at the DNS level, it relies on its reverse proxy servers at global edge locations to obfuscate the origin IP of your load balancer or web server. When a DNS request for your domain reaches CloudFlare’s authoritative nameservers, they respond with the IP address of a CloudFlare proxy server instead of the underlying IP.
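
You can verify this with a simple DNS lookup, where example.com stands in for a domain proxied through CloudFlare:

# Returns CloudFlare edge addresses, not the public IP of your origin
dig +short example.com A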

From a security standpoint, using CloudFlare is preferable to creating an A record pointing directly at the load balancer or web server, as CloudFlare blocks DDoS attacks, suspicious user agents, and malicious requests with its Web Application Firewall (WAF) before they can hit your origin. However, if attackers know your origin IP, they can still attack it directly, unless you undertake some or all of these mitigations:

  • Configure a firewall rule so that backend servers respond only to requests from the load balancer’s internal IP address. If you are using a managed load balancer on DigitalOcean or Linode, rather than a self-managed load balancer such as HAProxy or Nginx, the LB does not have a fixed IP address, so this will not be possible. Instead, you can limit incoming HTTP and HTTPS traffic to the eth1 interface on the private network.

firewall-cmd --permanent --new-zone=loadbalancer
firewall-cmd --reload
firewall-cmd --permanent --zone=loadbalancer --add-interface=eth1
firewall-cmd --permanent --zone=loadbalancer --add-service=http
firewall-cmd --permanent --zone=loadbalancer --add-service=https
firewall-cmd --reload

A company reached out to our web security team for assistance after encountering a DDoS attack in which the attacker repeatedly hit xmlrpc.php on their WordPress instance, driving PHP to 100% CPU usage. Short of completely disabling XML-RPC in WordPress, we configured their backend web servers to accept traffic only from the private network. Even though the attacker kept automating attacks against the public IP address, the firewall dropped the traffic immediately, sparing legitimate visitors from lengthy load times or “503 Service Unavailable” errors.

  • Create firewall rules on your load balancer or web server to accept HTTP and HTTPS traffic only from CloudFlare’s IP ranges, which are published on their website (a firewalld sketch appears at the end of this article).
  • Avoid revealing your origin IP whenever possible. If you implemented CloudFlare after exposing the server’s real IP address directly to the Internet, consider rotating your public IP address. All domains/subdomains served through your load balancer and/or web servers should be proxied through CloudFlare, indicated by the orange cloud in the DNS editor. If some of your services need to bypass CloudFlare, consider hosting them at a separate endpoint.
    • SPF records can reveal your origin IP if you send mail directly from your web servers. Using a third-party email gateway such as Amazon SES (for transactional email) and/or Google Apps (for user inboxes) eliminates this issue while keeping your sender reputation high.
    • Also, if some functionality on your website involves fetching a resource from an external server based on user input, use a separate server for that purpose to keep your web server’s IP address out of the external server’s logs.
  • Configure Apache or Nginx on your origin servers to verify the client certificate that CloudFlare’s reverse proxy presents on each origin pull. This feature is known as Authenticated Origin Pulls. Save the origin-pull-ca.pem certificate to the web server, then add the following configuration.
    • Using Authenticated Origin Pulls requires selecting the Full or Full (Strict) SSL mode under CloudFlare’s “Crypto” settings, which in turn requires either a publicly signed certificate or a CloudFlare Origin CA certificate terminated at your load balancer or web server. We recommend using only the Full (Strict) mode, which validates the origin certificate and protects against man-in-the-middle attacks.
    • This guarantees that all requests have passed through CloudFlare’s reverse proxies, and therefore its Web Application Firewall, before reaching your website.

Apache

# Place inside the HTTPS <VirtualHost> (requires mod_ssl)
SSLVerifyClient require
SSLVerifyDepth 1
SSLCACertificateFile /path/to/origin-pull-ca.pem

Nginx

# Place inside the server block listening on 443 with ssl
ssl_client_certificate /etc/nginx/certs/cloudflare.crt;
ssl_verify_client on;
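
For reference, here is a sketch of the CloudFlare-only firewalld rules mentioned earlier. The two source ranges below are examples from CloudFlare’s published list; repeat --add-source for every IPv4 and IPv6 range in the current, complete list on their website.

firewall-cmd --permanent --new-zone=cloudflare
firewall-cmd --reload
firewall-cmd --permanent --zone=cloudflare --add-source=103.21.244.0/22
firewall-cmd --permanent --zone=cloudflare --add-source=104.16.0.0/13
firewall-cmd --permanent --zone=cloudflare --add-service=http
firewall-cmd --permanent --zone=cloudflare --add-service=https
firewall-cmd --reload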