Blue-Green deployment with Apache Web Server

Welcome to the final episode of this series about web servers and how to achieve the deployment step of Continuous Delivery (CD).
In this episode I’m going to describe how to do it with Apache Web Server.

Last but not least, I’ve chosen to close this web server series (after the previous episodes, Blue-Green deployment with OpenLiteSpeed and Blue-Green/Canary deployment with NGINX) with Apache Web Server.

It’s so popular that it hardly needs an introduction. Since its first version in 1995 it has served billions of web pages around the world, across millions of web sites. It’s built and maintained by the Apache Software Foundation, which celebrated its 20th birthday last year.

As I wrote in the previous articles, the purpose of the solution is to switch from one application version to the new one with zero web server downtime.

As in my previous articles, I’m using the REST application from the Spring Boot tutorial (https://spring.io/guides/gs/rest-service/).

Both application nodes are created from a Docker image already available in my local Docker repository, built from this Dockerfile:

# Base image with JDK 8 on Alpine Linux
FROM openjdk:8-jdk-alpine
VOLUME /tmp
# Copy the Spring Boot fat jar produced by the build
COPY target/hello-world-service-1.0.0.jar hello-world-service.jar
ENV JAVA_OPTS=""
ENTRYPOINT exec java -jar /hello-world-service.jar --debug
# The application listens on port 8081
EXPOSE 8081
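
If you need to rebuild that image yourself, a minimal sketch could look like this (assuming a standard Maven layout for the Spring Boot project and the hello-world-service tag referenced by the compose file below):

# Build the Spring Boot fat jar, then the Docker image used by both nodes
mvn clean package
docker build -t hello-world-service .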

Let’s create the Docker containers by running docker-compose against this file:

version: '3'

services:
  hello-world-service-node1:
    image: hello-world-service
    restart: always
    expose:
      - "8081"
    depends_on:
      - apache-hello-world-service  
    networks:
      abrelease:
        ipv4_address: 172.21.1.2
      
  hello-world-service-node2:
    image: hello-world-service
    restart: always
    expose:
      - "8081"
    depends_on:
      - apache-hello-world-service     
    networks:
      abrelease:
        ipv4_address: 172.21.1.3
        
  apache-hello-world-service:
    image: 'bitnami/apache:latest'
    ports:
      - '80:8080'
      - '443:8443' 
    networks:
      abrelease:
        ipv4_address: 172.21.1.4

networks:
  abrelease:
    ipam:
      config:
        - subnet: 172.21.0.0/16

Check that the IP range used by the abrelease network is actually free; it depends on your local environment.
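
One quick way to see which subnets are already taken (a small sketch, assuming a reasonably recent Docker CLI) is to inspect the existing networks:

# Print the name and subnet(s) of every existing Docker network
docker network inspect $(docker network ls -q) --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'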

Let’s run the command and look at the result:

PS C:\progetti\ABRelease> docker-compose -f .\apache-docker-compose.yml up -d
Creating network "abrelease_abrelease" with the default driver
Creating abrelease_apache-hello-world-service_1 ... done
Creating abrelease_hello-world-service-node2_1  ... done
Creating abrelease_hello-world-service-node1_1  ... done

Apache Configuration

Once this step is complete, let’s update the Apache configuration to add the cluster nodes.
We need to enable a few modules and add the node definitions, which require very little configuration.

Update the httpd.conf:

...
# Remove comment from the next line
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so
LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
# Remove comment from the next line
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so
LoadModule proxy_scgi_module modules/mod_proxy_scgi.so
LoadModule proxy_uwsgi_module modules/mod_proxy_uwsgi.so
LoadModule proxy_fdpass_module modules/mod_proxy_fdpass.so
LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
# Remove comment from the next line
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
...
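
Depending on your Apache build, mod_proxy_balancer may also need mod_slotmem_shm and mod_lbmethod_byrequests to be enabled; if the restart complains about the balancer, uncomment those lines as well. Once the edited httpd.conf is in place inside the container, you can double-check which proxy modules are actually loaded (a quick sketch, reusing the container name created by docker-compose):

# List the loaded modules from inside the Apache container and keep only the proxy ones
docker exec -it abrelease_apache-hello-world-service_1 sh -c "apachectl -M | grep proxy"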

Add the cluster nodes at the bottom of the httpd.conf file:

...
#Load Balancer Configuration
<IfModule mod_proxy_balancer.c>
<Location "/balancer-manager">
SetHandler balancer-manager
Order deny,allow
Deny from all
# For this demo we allow everyone; in production restrict this to your admin subnet
Allow from all
</Location>

<Proxy balancer://mybalancer>
BalancerMember http://172.21.1.2:8081 loadfactor=1
BalancerMember http://172.21.1.3:8081 loadfactor=1
ProxySet lbmethod=byrequests
</Proxy>

ProxyPass /welcome balancer://mybalancer/
</IfModule>
...

Now requests will be distributed across the two nodes configured inside the proxy balancer, using the byrequests policy.
Different policies are available for routing requests to the nodes; check out the official documentation to find out more.
Another attribute, which I’ll illustrate later in the Canary Deployment section, is loadfactor, which assigns a "weight" to every node in the cluster (in a range between 1 and 100, but it’s not a percentage!). This weight lets you define how much traffic each node receives.

In the example above, I’ve configured every node to receive the same amount of traffic (loadfactor=1 on both members, with lbmethod=byrequests).

Looking closely at our configuration, we can notice a location called balancer-manager. This location exposes, at the URL http://localhost/balancer-manager, a cluster dashboard where we can change configuration parameters, such as including and/or excluding nodes in the cluster.
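
A quick way to confirm the dashboard is reachable (a small sketch, assuming the port mappings from the compose file above) is simply:

# The returned HTML should list balancer://mybalancer and its two members
curl -s http://127.0.0.1/balancer-manager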

Let’s try excluding one node, so that we have one node running the new version (green) and the other running the current one (blue).
To achieve this, it’s enough to add the status attribute "D" (disabled) to the configuration of the node we want to exclude.

...
<Proxy balancer://mybalancer>
BalancerMember http://172.21.1.2:8081 loadfactor=1
BalancerMember http://172.21.1.3:8081 status=+D
ProxySet lbmethod=byrequests
</Proxy>
...

Apply a graceful restart to Apache:

PS C:\progetti\ABRelease> docker exec -it abrelease_apache-hello-world-service_1 sh
$ apachectl -k graceful

Browse the test URL by running this little batch file (I’m a Windows user) and notice that the id numbers in the JSON responses now increase without the duplicates we would see with both nodes active (each node keeps its own counter).

@echo off
setlocal enableDelayedExpansion
set "last=%time:~9,1%"
for /l %%N in (1 1 30) do (
  rem Wait for the clock to tick so the requests are spaced out
  call :wait
  curl http://127.0.0.1/welcome/greeting?name=!random!
  @echo ""
)
exit /b

:wait
if %time:~9,1% equ %last% goto :wait
set "last=%time:~9,1%"
exit /b
PS C:\progetti\ABRelease> .\callurl.bat
{"id":5,"content":"Hello, 18790!"}""
{"id":6,"content":"Hello, 25636!"}""
{"id":7,"content":"Hello, 15053!"}""
{"id":8,"content":"Hello, 18329!"}""
{"id":9,"content":"Hello, 7820!"}""
{"id":10,"content":"Hello, 21148!"}""
{"id":11,"content":"Hello, 14560!"}""
{"id":12,"content":"Hello, 14123!"}""
{"id":13,"content":"Hello, 16312!"}""

You can do the same with the other node and obtain the same kind of number progression.
We’ve made these changes manually, but you could put these commands inside a procedure callable by any Continuous Integration/Delivery tool such as Jenkins, Travis, etc., as sketched below.
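
For example, the step that re-enables the green node and reloads Apache could be reduced to a single command; this is only a rough sketch, and the httpd.conf path is an assumption that depends on the image and on how you manage the configuration:

# Hypothetical CI/CD step: drop the status=+D flag from the green node and reload Apache gracefully.
# Adjust the httpd.conf path to wherever your configuration actually lives.
docker exec abrelease_apache-hello-world-service_1 sh -c "sed -i 's|172.21.1.3:8081 status=+D|172.21.1.3:8081 loadfactor=1|' /opt/bitnami/apache/conf/httpd.conf && apachectl -k graceful"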

Canary Deployment

As I described in Blue-Green/Canary deployment with NGINX, it can sometimes be helpful to release the new application version to only a small part of the traffic before switching over completely.

Using the loadfactor attribute on the balancer members in the configuration file, we can shift traffic from one node to the other in order to expose the new application version to only a small share of the site’s users.

For example, let’s have a look at the configuration below:

...
<Proxy balancer://mybalancer>
BalancerMember http://172.21.1.2:8081 
BalancerMember http://172.21.1.3:8081 loadfactor=3 timeout=1
ProxySet lbmethod=byrequests
</Proxy>
...

This means that the node 172.21.1.3 will receive three times as many requests as the other node (roughly 75% of the site’s traffic, not 3%!) and, when it takes more than 1 second to serve a request, the request will be moved to the other node.
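
To check that the split behaves as expected, one rough approach (assuming the balancer-manager location from the previous section is still enabled) is to send a batch of requests and then look at the per-member request counters reported by the balancer-manager page, where roughly three out of four requests should land on 172.21.1.3:

@echo off
rem Send some traffic through the balancer, discarding the responses...
for /l %%N in (1 1 20) do @curl -s http://127.0.0.1/welcome/greeting?name=canary > NUL
rem ...then eyeball the per-member counters on the balancer-manager page
curl -s http://127.0.0.1/balancer-manager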

References

Apache Reverse proxy
https://httpd.apache.org/docs/2.4/howto/reverse_proxy.html

Apache Proxy module
https://httpd.apache.org/docs/2.4/mod/mod_proxy.html
