Daniel Rankov, 2018
Building a secure AWS environment has many layers - AWS account access and resource privileges, keeping an inventory of instances, and managing application configuration. This is of course not a one-time effort but a continuous process - the ability to review AWS resources and access, to check for installed software and unpatched instances, and to check who had access to configuration properties.
A separate whitepaper has been released that addresses all these topics.
Taking an Infrastructure as Code (IaC) approach, where the whole infrastructure (AWS resources and access) is treated as code under version control provides full visibility and makes every change traceable and auditable.
To address visibility at the OS level, an immutable infrastructure approach is taken: working with images (AMIs), automating the baking process, and running regular security scans on those images in a continuous delivery pipeline - the AMI Factory.
For handling configuration, multiple solutions are considered - AWS-managed options such as SSM Parameter Store and AWS Secrets Manager, as well as HashiCorp Vault.
Below are some highlights:
Infrastructure as code
Access to AWS infrastructure happens through API calls. By using the APIs, one can build reliable, predictable and fully automated service management at scale. The approach is known as Infrastructure as Code (IaC) - programmable infrastructure that helps you create, manage and configure it. Developing infrastructure then follows standard development practices, such as version control, testing, and deployment.
Using an Infrastructure as Code approach, one can create a fully auditable, repeatable and consistent AWS infrastructure. Some of the benefits are:
It's auditable - anyone can check the code, peer reviews can be carried out, and every change is trackable, because it all lives in the version control system
It's repeatable - the same code can be executed multiple times over different environments and AWS accounts
It's consistent - executing the code guarantees the same result over and over again. If someone makes a manual change to an AWS resource, it will be overwritten on the next run.
Through repeatability, it is easy to isolate different environments. In a standard setup there are typically Development, UAT, and Production environments, and they can all be kept identical by reusing the same code.
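The repeatability argument can be seen in a short sketch: the same template-generating code, run once per environment, yields identical infrastructure definitions that differ only in their parameters. The resource names and tag keys below are illustrative, not part of any real deployment.

```python
import json

def security_group_template(env: str) -> dict:
    """Build a minimal CloudFormation template for one environment.

    Hypothetical example: the resource names and tag keys here are
    illustrative, not part of any real deployment.
    """
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": f"Example security group for the {env} environment",
        "Resources": {
            "AppSecurityGroup": {
                "Type": "AWS::EC2::SecurityGroup",
                "Properties": {
                    "GroupDescription": f"app access ({env})",
                    "Tags": [{"Key": "Environment", "Value": env}],
                },
            }
        },
    }

# The same code produces structurally identical templates for every
# environment - only the environment parameter changes.
for env in ("dev", "uat", "prod"):
    template_json = json.dumps(security_group_template(env), indent=2)
```

Because the generator itself sits in version control, a diff of the code is a diff of the infrastructure.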
In the mutable infrastructure approach, servers are configured, updated, and modified in place. Administrators can either log in to these servers and implement changes or use automation frameworks like Chef, Ansible, Puppet, or SaltStack to manage them. Configuration files can be changed, packages can be upgraded or downgraded, users can be added or removed, and software can be deployed directly to the servers. These servers are mutable - they can be changed after they are created.
Immutable infrastructure takes a different approach - servers, once deployed, cannot be modified. All the needed software is built into the server image. Configuration is applied at server boot time - that is how the same image can be deployed into multiple environments. The process of building an image is called baking. When a new version has to be deployed, or any change made, a new image is baked and deployed. Once the new image is verified, the old ones can be decommissioned.
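Boot-time configuration is what lets one image serve every environment. A minimal sketch of the lookup, assuming a `/app/env/key` naming convention in SSM Parameter Store (a common convention, not an AWS requirement) and a simulated store in place of the real API:

```python
def parameter_path(app: str, env: str, key: str) -> str:
    """Build the Parameter Store path for one setting.

    The /app/env/key layout is an assumed convention; adjust it to
    your own naming scheme.
    """
    return f"/{app}/{env}/{key}"

# A simulated parameter store: the same AMI, booted into different
# environments, resolves different values for the same key. A real
# boot script would call `aws ssm get-parameter` instead.
store = {
    "/shop/uat/db_host":  "db.uat.internal",
    "/shop/prod/db_host": "db.prod.internal",
}

def boot_config(env: str) -> dict:
    """What a boot-time script would fetch for this environment."""
    return {"db_host": store[parameter_path("shop", env, "db_host")]}
```

The image stays identical bit by bit; only the values fetched at boot differ per environment.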
Leveraging the AWS APIs, the baking process is easy to automate in a continuous pipeline.
Building AMIs is a repeatable process, and in order to be auditable it has to be fully automated, with no manual intervention.
Here is the solution we've designed and built at HeleCloud. The pipeline is based on AWS CodePipeline and AWS CodeBuild, with the code stored in AWS CodeCommit. These AWS-managed services integrate well with AWS IAM, AWS Lambda, and Amazon CloudWatch to achieve a fine-grained permissions model, automation, and traceability.
The AMI Factory process is orchestrated by AWS CodePipeline.
1) CodePipeline starts AWS CodeBuild, which pulls the code from the repository and runs HashiCorp Packer
2) The Packer process:
a. Starts a new EC2 instance
b. Connects to that EC2 instance and executes predefined scripts:
i. OS update
ii. Apply OS configuration and tuning
iii. Apply CIS hardening on OS
iv. Install application
v. Install antivirus, IDS, IPS, file integrity check software
vi. Install CloudWatch agent
vii. Install Inspector agent
c. Registers a new AMI
3) AWS CodePipeline executes an AWS Lambda function to start an EC2 instance from the newly created AMI. A tag is applied to the instance so that in the next step Amazon Inspector scans only this instance.
4) An Amazon Inspector vulnerability scan (checking for CVEs) is started on the EC2 instance
5) Inspector sends a notification to SNS when finished
6) An AWS Lambda function that analyzes the Inspector findings is triggered. It:
a. Terminates the EC2 instance
b. Saves the Inspector report to an S3 bucket
c. Applies a tag to the AMI if vulnerabilities are found
d. Sends the findings to the security team
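The decision logic inside the analysis function of step 6 can be sketched separately from the API calls that terminate the instance, save the report, and apply the tag. The finding fields below are simplified assumptions, not the exact Inspector schema:

```python
def analyze_findings(findings):
    """Decide what to do with a scan result.

    Returns (tag_ami, to_report): whether the scanned AMI should be
    tagged as vulnerable, and which findings go to the security team.
    The severity threshold is an assumed policy, not an AWS default.
    """
    actionable = [f for f in findings
                  if f.get("severity") in ("High", "Medium")]
    return (len(actionable) > 0, actionable)

# Example input shaped loosely like Inspector findings:
findings = [
    {"id": "CVE-2017-0001", "severity": "High"},
    {"id": "CVE-2017-0002", "severity": "Informational"},
]
tag_ami, to_report = analyze_findings(findings)
# tag_ami is True here: one High-severity finding was detected,
# so the pipeline would tag the AMI and notify the security team.
```

Keeping this logic pure makes it trivially unit-testable, while the surrounding Lambda handler wires it to the EC2, S3, and SNS calls.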
Learn more about the full AMI Factory pipeline, scheduled security scans, configuration management tools, and automated incident response in the whitepaper.
Working with immutable infrastructure and pre-built images guarantees that instances deployed in different accounts are the same bit by bit. The operating system, its specific configuration, and the installed software with its dependencies are all identical. The only thing that differs is the service configuration. This makes every deployment easily reproducible. Automating the AMI build and keeping the code in a repository makes the whole infrastructure auditable and changes traceable.
Building the AMI Factory as a pipeline makes the process auditable and replicable, and delivers both security scanning and continuous delivery.