Author: Stoyan Gramatikov, AWS Cloud Infrastructure Engineer
In this blog post, I’ll outline how to implement a custom AWS ECS solution, one that allows you to overcome a common challenge businesses often face on AWS: the lack of a built-in synchronisation mechanism for implementing dependencies between AWS ECS services.
Amazon Elastic Container Service (ECS) is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run and scale containerized applications on AWS.
As an AWS Advanced Consulting Partner, HeleCloud works with its customers to overcome traditional business barriers through the adoption of cloud services and solutions. Recently, we helped a large electronic financial markets firm to deliver the key components of its secure and robust trade automation platform.
To do this, we [the HeleCloud architecture team] selected AWS ECS, as it allowed us to manage containers and let developers run applications in the Cloud without having to configure an environment for the code to run in. Even with a microservices architecture implemented using Docker containers running on AWS ECS, the synchronisation of dependencies between services remained a challenge.
In (and out of) sync
Most of the applications developed for the customer required interactions between different services that depended on each other. These dependencies required starting services in a well-defined order, specific to each application. Whilst AWS ECS provides a mechanism to synchronise dependencies between containers within a service, using the “dependsOn” keyword within task definitions, synchronisation between different services remained a serious challenge.
To create a successful synchronisation solution, the answer must centre on modifying the dependent image. This can be achieved by plugging in a piece of script code that is executed before any other application code in the container. Once the dependencies are fulfilled, the container executes its application code and behaves exactly as if it had been started from the original image.
Two bash scripts do the job:
sync-waiter-make-config.sh – a bash script that creates the image-specific plug-in script and builds the new, modified image;
sync-waiter-this_YYYY-MM-DD_hh:mm:ss.nanosecs.sh – the plug-in bash code, autogenerated by sync-waiter-make-config.sh.
The plug-in takes two parameters, provided as environment variables:
Variable name: CHECK_URLs – expects a list of URLs separated by spaces;
Variable name: CHECK_INTERVAL_SECONDS – the number of seconds between retries to access a URL.
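For instance, a container that depends on two upstream services might be started with values like these (the service names and URLs are purely illustrative):

```shell
# Hypothetical example values for the two plug-in parameters.
# The plug-in polls each URL in order, waiting CHECK_INTERVAL_SECONDS
# seconds between failed attempts.
export CHECK_URLs="http://orders-service:8080/health http://rates-service:9090/health"
export CHECK_INTERVAL_SECONDS=10
```

In an ECS task definition, the same values would be supplied through the container’s environment variable settings rather than exported by hand.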
The algorithm for creating a new image with the plugged-in bash code:
If the docker registry hosting the original image requires authentication, make sure you are logged in to it before proceeding to the next step;
Run sync-waiter-make-config.sh, passing the URL of the original image as its single parameter.
The script installs the “curl” tool. This part of the script might need to be modified, depending on the OS of your original image;
The script reads the ENTRYPOINT and CMD of the original image and saves them;
The script generates a new image whose name is the original one with a capital “S” appended at the end. For example, an original image named my-app would produce a modified image named my-appS;
The new image’s ENTRYPOINT and CMD are modified so that the plug-in is executed first, and once the dependencies are fulfilled, the original application code is executed.
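The generation step could be sketched along these lines. This is only an illustration of the idea, not the actual output of sync-waiter-make-config.sh: the image name, plug-in filename, and apt-get package manager are assumptions, and the ENTRYPOINT/CMD values stand in for whatever the script reads from the original image.

```shell
#!/usr/bin/env bash
# Sketch of how a make-config script might assemble the Dockerfile for the
# modified image (hypothetical names; apt-get assumed for the base OS).
ORIGINAL_IMAGE="my-app"          # would normally come from the first parameter
NEW_IMAGE="${ORIGINAL_IMAGE}S"   # original name with a capital "S" appended

cat > Dockerfile.sync-waiter <<EOF
FROM ${ORIGINAL_IMAGE}

# Install curl; adjust this line for the OS of the original image
# (e.g. "apk add curl" on Alpine).
RUN apt-get update && apt-get install -y curl

# The plug-in becomes the new ENTRYPOINT; the saved ENTRYPOINT/CMD of the
# original image are passed to it as arguments and executed once all
# CHECK_URLs respond.
COPY sync-waiter-plugin.sh /sync-waiter-plugin.sh
ENTRYPOINT ["/sync-waiter-plugin.sh"]
CMD ["/original-entrypoint", "original-args"]
EOF

# The real script would then build the new image, e.g.:
# docker build -t "$NEW_IMAGE" -f Dockerfile.sync-waiter .
```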
The algorithm for executing the modified container:
The bash plug-in is executed first when the container starts from the modified image;
The plug-in reads the values of the CHECK_URLs and CHECK_INTERVAL_SECONDS environment variables;
The plug-in checks connectivity to the first URL;
If the check succeeds, the plug-in moves on to the next URL;
If the check fails, the plug-in waits CHECK_INTERVAL_SECONDS seconds and retries;
After all the URLs have been checked successfully, the plug-in executes the ENTRYPOINT and CMD code of the original image.
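The loop above can be sketched in a few lines of bash. This is an assumed shape of the generated plug-in, not the actual autogenerated script: the function name and the particular curl options are mine.

```shell
#!/usr/bin/env bash
# Minimal sketch of the generated plug-in's wait loop (assumed shape).
# CHECK_URLs             - space-separated list of dependency URLs
# CHECK_INTERVAL_SECONDS - seconds to wait between retries (default 5 here)

wait_for_dependencies() {
  local url
  for url in $CHECK_URLs; do
    # Retry until the dependency answers, sleeping between attempts.
    until curl --silent --fail --output /dev/null "$url"; do
      echo "Dependency $url not ready; retrying in ${CHECK_INTERVAL_SECONDS}s"
      sleep "${CHECK_INTERVAL_SECONDS:-5}"
    done
  done
}

wait_for_dependencies
# All dependencies are up: hand over to the original ENTRYPOINT/CMD, which
# the make-config step arranges to be passed to this script as arguments.
exec "$@"
```

Using exec means the original application replaces the plug-in process, so the container’s main process is the application itself, exactly as it would be when starting from the original image.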
When implementing this solution, it is important to remember that it covers only direct network-connectivity dependencies. For any other type of dependency, such as shared storage, the bash scripts must be updated. Secondly, the solution hasn’t been tested on all docker image types; it only demonstrates the idea of overcoming the given problem. For some images, you may need to modify the part of the bash scripts that creates the “Dockerfile”, especially the “curl” installation piece.
Whilst Bash scripting may seem daunting, it is an extremely useful and powerful part of system administration and development. Utilizing its capabilities in this way can help achieve synchronization between AWS ECS services.
If you have any questions, please feel free to get in touch through my LinkedIn, or take a look at how HeleCloud is helping businesses embrace Cloud technologies.