Continuous Deployment: Using AWS CodeCommit, AWS CodeDeploy, and Jenkins (Part 5)

Create a source bundle, which includes the deployment scripts, and upload it to the AWS CodeCommit repository
Now you can download the following sample bundle to your local repository and push the change to the central repository hosted on AWS CodeCommit, along with your application code. The sample bundle includes everything you need to work with AWS CodeDeploy: the Application Specification (AppSpec) file and the deployment scripts.
This sample bundle contains the deployment artifacts and a set of scripts that call the Auto Scaling EnterStandby and ExitStandby APIs to register and deregister an Amazon EC2 instance with the load balancer.
The installation scripts and deployment artifacts are bundled together with a CodeDeploy AppSpec file. The AppSpec file must be placed in the root of your source repository; it describes where to copy the application and how to execute the installation scripts.
Simply set the name (or names) of the Elastic Load Balancer(s) your instances are a part of, attach the scripts to the appropriate lifecycle events, and the scripts will take care of deregistering the instance, waiting for connection draining, and re-registering after the deployment finishes.
Requirements
The register and deregister scripts have a couple of dependencies in order to interact properly with Elastic Load Balancing and Auto Scaling:
  • The AWS CLI. In order to take advantage of Auto Scaling’s Standby feature, the CLI must be at least version 1.3.25. If you already have Python and pip installed, the CLI can simply be installed with pip install awscli
  • An instance profile with a policy that allows, at minimum, the following actions:
elasticloadbalancing:Describe*
elasticloadbalancing:DeregisterInstancesFromLoadBalancer
elasticloadbalancing:RegisterInstancesWithLoadBalancer
autoscaling:Describe*
autoscaling:EnterStandby
autoscaling:ExitStandby
autoscaling:UpdateAutoScalingGroup
autoscaling:SuspendProcesses
autoscaling:ResumeProcesses
  • All instances are assumed to already have the AWS CodeDeploy Agent installed.
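Assembled into an IAM policy document, the actions above might look like the following sketch (attach it to the role behind the instance profile; scope the Resource down where your environment allows):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticloadbalancing:Describe*",
        "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
        "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
        "autoscaling:Describe*",
        "autoscaling:EnterStandby",
        "autoscaling:ExitStandby",
        "autoscaling:UpdateAutoScalingGroup",
        "autoscaling:SuspendProcesses",
        "autoscaling:ResumeProcesses"
      ],
      "Resource": "*"
    }
  ]
}
```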
Installing the Scripts
To use these scripts in your application, follow these steps:
  1. Install the AWS CLI on all your instances.
  2. Update the policies on the EC2 instance profile to allow the above actions.
  3. Copy the .sh files in this directory into your application source.
  4. Edit your application’s appspec.yml to run deregister_from_elb.sh on the ApplicationStop event, and register_with_elb.sh on the ApplicationStart event.
  5. If your instance is not in an Auto Scaling group, edit common_functions.sh to set ELB_LIST to contain the name(s) of the Elastic Load Balancer(s) your deployment group is a part of. Make sure the entries in ELB_LIST are separated by spaces. Alternatively, you can set ELB_LIST to _all_ to automatically use all load balancers the instance is registered to, or _any_ to get the same behaviour as _all_ but without failing your deployments if the instance is not part of any ASG or ELB. This is more flexible for heterogeneous, tag-based deployment groups.
  6. Optionally, set HANDLE_PROCS=true in common_functions.sh
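For steps 5 and 6, the relevant variables in common_functions.sh would be set roughly like this (the load balancer names below are placeholders, not values from the sample bundle):

```shell
# Hypothetical excerpt from common_functions.sh -- adapt to your environment.
# Space-separated Elastic Load Balancer name(s), or "_all_" / "_any_" (step 5).
ELB_LIST="my-load-balancer-1 my-load-balancer-2"

# Optionally suspend/resume interfering Auto Scaling processes (step 6).
HANDLE_PROCS=true
```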
Here is the appspec.yml file from the sample artifact bundle:
version: 0.0
os: linux
files:
  - source: /html
    destination: /var/www/html
hooks:
  BeforeInstall:
    - location: scripts/remove.sh
  ApplicationStop:
    - location: scripts/deregister_from_elb.sh
      timeout: 400
    - location: scripts/stop_server.sh
      timeout: 120
      runas: root
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 120
      runas: root
    - location: scripts/register_with_elb.sh
      timeout: 120
The commands defined in the AppSpec file are executed in the following order (see the AWS CodeDeploy AppSpec File Reference for more details):
BeforeInstall deployment lifecycle event: First, the remove.sh script removes all content from the /var/www/html folder.
ApplicationStop deployment lifecycle event: Deregisters the instance from the load balancer (deregister_from_elb.sh). You need to increase the timeout for the deregistration script above the 300 seconds that the load balancer waits until all connections are closed, which is the default value if connection draining is enabled. After that, it stops the Apache web server (stop_server.sh).
Install deployment lifecycle event: Next, the host agent copies the HTML pages defined in the ‘files’ section from the ‘/html’ folder in the archive to ‘/var/www/html’ on the server.
ApplicationStart deployment lifecycle event: Starts the Apache web server (start_server.sh), then registers the instance with the load balancer (register_with_elb.sh).
If you run the deployment for the first time with AWS CodeDeploy, the instance does not get deregistered from the load balancer, because the ApplicationStop event always executes the scripts from the previous deployment bundle, and on a first deployment there is no previous bundle. In this case you need to skip the BeforeInstall deployment lifecycle event; this is also why we use BeforeInstall instead of the ApplicationStop deployment lifecycle event.
Step by step process of the scripts
  1. The script gets the instance ID (and AWS region) from the Amazon EC2 metadata service.
  2. It checks if the instance is part of an Auto Scaling group.
  3. After that the script deregisters the instance from the load balancer by putting the instance into standby mode in the Auto Scaling group.
  4. The script keeps polling the Auto Scaling API every second until the instance is in standby mode, which means it has been deregistered from the load balancer.
  5. The deregistration might take a while if connection draining is enabled. The server has to finish processing the ongoing requests first before we can continue with the deployment.
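The steps above can be sketched in shell, assuming the AWS CLI is installed; the real deregister_from_elb.sh in the sample bundle does considerably more error handling (and also covers instances outside an Auto Scaling group):

```shell
# Sketch of the deregistration flow described above; defined but not invoked.
# Instance ID, region, and group name are all resolved at runtime.
deregister_via_standby() {
    # 1. Get the instance ID and region from the EC2 metadata service
    instance_id=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    region=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone \
        | sed 's/[a-z]$//')

    # 2. Check whether the instance belongs to an Auto Scaling group
    asg=$(aws autoscaling describe-auto-scaling-instances --region "$region" \
        --instance-ids "$instance_id" \
        --query 'AutoScalingInstances[0].AutoScalingGroupName' --output text)
    [ "$asg" = "None" ] && return 1

    # 3. Entering Standby deregisters the instance from the load balancer
    aws autoscaling enter-standby --region "$region" \
        --instance-ids "$instance_id" \
        --auto-scaling-group-name "$asg" \
        --should-decrement-desired-capacity

    # 4./5. Poll once per second until the instance reaches Standby; with
    # connection draining enabled this waits out the in-flight requests
    while [ "$(aws autoscaling describe-auto-scaling-instances --region "$region" \
            --instance-ids "$instance_id" \
            --query 'AutoScalingInstances[0].LifecycleState' --output text)" != "Standby" ]; do
        sleep 1
    done
}
```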
Update the Local Git Repository
Now that your development environment is configured and the Jenkins server is set up, modify the source in your local repository with the AWS CodeDeploy scripts and push the change to the central repository hosted on AWS CodeCommit.
Monitor Build
Within two minutes of pushing updates, a new build with a Build ID (for example, #2 or #3) should appear in the build history.
Choose Git Polling Log to see the results of polling Git for updates. There may be a few failed polls from earlier, when the repository was empty.
Jenkins Git Polling Log
Choose the most recent build. On the build details page, choose Console Output to view output from the build.
At the bottom of the output, check that the status of the build is SUCCESS.
Let’s move on. In the AWS CodeDeploy console, choose AWS CodeDeploy, and then choose Deployments.
CodeDeploy Dashboard
You have now successfully set up the CodeDeploy Jenkins plugin and used it to automatically deploy a revision to CodeDeploy when code updates are pushed to AWS CodeCommit. You can experiment by committing more changes to the code and then pushing them to deploy the updates automatically.
Important notice about handling AutoScaling processes
When using Auto Scaling with CodeDeploy, you have to consider some edge cases during the deployment time window:
  1. If you have a scale-up event, the new instance(s) will get the latest successful revision, not the one you are currently deploying. You will end up with a fleet of mixed revisions.
  2. If you have a scale-down event, instances are going to be terminated, and your deployment will (probably) fail.
  3. If your instances are not balanced across Availability Zones and you are using these scripts, Auto Scaling may terminate some instances or create new ones to maintain balance (see this doc), interfering with your deployment.
  4. If the health checks of your Auto Scaling group are based on the ELB’s and you are not using these scripts, instances will be marked as unhealthy and terminated, interfering with your deployment.
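To mitigate edge cases like 3 and 4, the scripts (when HANDLE_PROCS=true) suspend the Auto Scaling processes that can interfere during the deployment window and resume them afterwards. A rough sketch of that idea with the AWS CLI follows; the exact set of processes to suspend is an assumption and may differ from the actual scripts:

```shell
# Suspend Auto Scaling processes that can terminate or replace instances
# mid-deployment; call the matching resume function once the deployment ends.
suspend_asg_processes() {
    aws autoscaling suspend-processes \
        --auto-scaling-group-name "$1" \
        --scaling-processes AZRebalance AlarmNotification ScheduledActions ReplaceUnhealthy
}

resume_asg_processes() {
    aws autoscaling resume-processes \
        --auto-scaling-group-name "$1" \
        --scaling-processes AZRebalance AlarmNotification ScheduledActions ReplaceUnhealthy
}
```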
There are a few other points to consider in order to achieve zero-downtime deployments:
Graceful shutdown of your application: You do not want to kill a process with running executions. Make sure that the running threads have enough time to finish their work before shutting down your application.
Connection draining: The AWS CloudFormation template sets up an Elastic Load Balancing load balancer with connection draining enabled. The load balancer does not send any new requests to the instance when the instance is deregistering, and it waits until any in-flight requests have finished executing. (For more information, see Enable or Disable Connection Draining for Your Load Balancer.)
Sanity test: It is important to check that the instance is healthy and the application is running before the instance is added back to the load balancer after the deployment.
Backward-compatible changes (for example, database changes): Both application versions must work side by side until the deployment finishes, because only part of the fleet is updated at a time.
Warming of the caches and service: This is so that no request suffers a degraded performance after the deployment.
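For the sanity test, a minimal health probe run before re-registering the instance could look like this; the endpoint, port, and expected status are assumptions, so adapt them to your application:

```shell
# Hypothetical health check to run before register_with_elb.sh re-registers
# the instance with the load balancer; fails the deployment on a bad status.
sanity_check() {
    status=$(curl -s -o /dev/null -w '%{http_code}' "http://localhost:80/")
    if [ "$status" != "200" ]; then
        echo "health check failed (HTTP $status)" >&2
        return 1
    fi
}
```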
WARNING: If you are using this functionality, you should use only the CodeDeployDefault.OneAtATime deployment configuration to ensure serial execution of the scripts. Concurrent runs are not supported.
This example should help you get started on improving your deployment process. I hope that this post makes it easier to reach zero-downtime deployments with AWS CodeDeploy and lets you ship your changes continuously in order to provide a great experience.