
Docker and Amazon Web Services (AWS)

Docker and hosting in the cloud via Amazon Web Services (AWS) go well together. This article looks at some of the basic AWS services needed to deliver an application with Docker. Continuous delivery, rights management and data protection are also covered.

AWS Identity and Access Management (IAM)

Access to AWS services and resources must always be controlled. This is exactly where AWS Identity and Access Management (IAM) comes in. Users and groups can be created and managed in the AWS Management Console, and the permissions attached to them allow or deny access to AWS resources. Another plus in terms of security is multi-factor authentication via smartphone apps or external devices. IAM is provided free of charge and is available in every AWS account.
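As a minimal sketch of how such permissions look, the following IAM policy grants read-only access to a single S3 bucket (the bucket name `example-bucket` is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadOnlyBucketAccess",
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```

Attached to a user or group, this policy allows listing and downloading objects; every other action is denied by default.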

Amazon EC2 Container Registry (ECR)

The Amazon EC2 Container Registry (ECR) is a fully managed Docker container registry where private Docker images can be stored. Users pay only for the amount of data they store and for the data transferred out to the Internet. Another advantage is that ECR is integrated with AWS Identity and Access Management (IAM), so it can be configured who may read the Docker images and who is allowed to update them. There are, of course, alternatives outside of AWS, such as Docker Hub, which bills by the number of private Docker repositories per month. The appropriate service should be selected based on the build strategy.
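Getting started with ECR takes two commands, sketched below with placeholder names; `my-app` and the region `eu-central-1` are assumptions, not values from the article:

```shell
# Create a private repository for the application's images (one-time setup).
aws ecr create-repository --repository-name my-app --region eu-central-1

# Log the local Docker client in to ECR; get-login prints a docker login
# command with a temporary token, which the $( ... ) executes directly.
$(aws ecr get-login --no-include-email --region eu-central-1)
```

The IAM integration mentioned above then comes down to which ECR actions a user is granted: for example `ecr:BatchGetImage` for pulling versus `ecr:PutImage` for pushing updates.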

Continuous delivery

The basis for continuous delivery is automated deployment. Automating the build and deployment steps drastically reduces the susceptibility to errors, and not only developers but also project owners can publish new versions at the push of a button.

A widely used combination is Atlassian Bitbucket as the version control system and Atlassian Bamboo as the build and deployment system. The cloud-based Atlassian products all integrate easily with one another. For example, the Bamboo build for production deployment can be configured to start only when a new version tag has been created for the application. To integrate the Atlassian products with AWS, it is advisable to use a Bamboo Remote Agent for Amazon EC2: the builds become scalable, and costs are incurred only for the runtime and resources actually used. Bamboo builds the complete application with the corresponding Docker images and pushes them to the Amazon EC2 Container Registry (ECR). It is advisable to express the build process as Bash scripts, so that it can also be executed locally, provided the appropriate rights are available.
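Such a Bash script could look roughly like the following hypothetical sketch; the script name, image name, and the account ID/region in the registry URL are all placeholders. Because it only needs AWS credentials with ECR push rights, the same script runs from a Bamboo agent or a developer machine:

```shell
#!/usr/bin/env bash
# build.sh - build the application image, tag it with the release version,
# and push it to ECR. Callable from CI or locally.
set -euo pipefail

VERSION="${1:?usage: build.sh <version-tag>}"
REGISTRY="123456789012.dkr.ecr.eu-central-1.amazonaws.com"  # placeholder account/region

# Build and tag the image for the ECR repository
docker build -t "my-app:${VERSION}" .
docker tag "my-app:${VERSION}" "${REGISTRY}/my-app:${VERSION}"

# Authenticate against ECR with a temporary token, then push
$(aws ecr get-login --no-include-email --region eu-central-1)
docker push "${REGISTRY}/my-app:${VERSION}"
```

A CI plan then reduces to `./build.sh 1.2.0`, which keeps the build logic out of the CI tool's configuration and makes it portable between Bamboo, GitLab, or Jenkins.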

Free on-premises alternatives to the Atlassian tools are GitLab and Jenkins. Both tools can also prepare the application and its Docker images for deployment on AWS, and the Amazon EC2 Container Registry can be used with either one. So everyone can use the build and deployment tool of their choice.

Infrastructure as Code (IaC) with CloudFormation

It would be laborious and error-prone to set up a complete server landscape manually via the AWS Management Console. In addition, the configuration could not be reused, and there would be no record of when which stack was deployed. With CloudFormation templates, everyone has insight into the current configuration of the server infrastructure, and a complete server stack can be set up at any time. This is of course also an advantage if, for example, three environments (integration, staging, production) are used for the application. The Docker images should be structured in such a way that the same Docker image version runs on all three environments; configuration such as access credentials is passed in via environment variables.
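A minimal sketch of this idea, with illustrative resource and parameter names: one template serves all three environments, and only the `Environment` parameter differs per stack.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  Environment:
    Type: String
    AllowedValues: [integration, staging, production]
Resources:
  AppSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: !Sub "HTTP access for the app (${Environment})"
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
```

The staging stack would then be created with `aws cloudformation create-stack --stack-name app-staging --template-body file://template.yml --parameters ParameterKey=Environment,ParameterValue=staging`, and equivalently for the other two environments.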

Monitor all the things with Amazon CloudWatch

With the monitoring service Amazon CloudWatch, all AWS cloud resources and applications can be monitored. Various metrics such as CPU usage, memory usage and network throughput are available for configuring alarms. You can also publish your own metrics, for example whether PHP exceptions have occurred, and in the event of an error, notifications are sent by e-mail or similar channels. A dashboard displays selected metrics so that the user does not lose track of their own microservices and immediately sees where problems have occurred. Logging is also very extensive: entire log groups with multiple log streams can be searched, and if JSON is used as the log format, extensive filter options are available.
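The custom-metric-plus-alarm pattern described above can be sketched with two CLI calls; the namespace `MyApp`, the metric name, and the SNS topic ARN are illustrative placeholders:

```shell
# Publish a custom metric data point, e.g. one counted PHP exception.
aws cloudwatch put-metric-data \
  --namespace "MyApp" \
  --metric-name PhpExceptions \
  --value 1 \
  --region eu-central-1

# Alarm when any exception is counted within a 5-minute window;
# the SNS topic behind --alarm-actions delivers the e-mail notification.
aws cloudwatch put-metric-alarm \
  --alarm-name myapp-php-exceptions \
  --namespace "MyApp" \
  --metric-name PhpExceptions \
  --statistic Sum \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 0 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:eu-central-1:123456789012:ops-alerts
```

With JSON-formatted logs, the filter syntax mentioned in the text looks like `aws logs filter-log-events --log-group-name my-app --filter-pattern '{ $.level = "error" }'`, which matches only log events whose `level` field equals `error`.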

3 ways to use Docker on Amazon AWS

Of course, Amazon wouldn't be Amazon if there weren't several ways to use Docker on AWS. Each variant has its place.

Elastic Beanstalk Docker applications

The first variant is Elastic Beanstalk, which lets DevOps teams get a simple Docker application up and running relatively quickly. Since everything required is available as Docker images in Amazon ECR, only an Elastic Beanstalk application version needs to be created from a single file that describes how a set of Docker containers should be deployed. Auto-scaling, load balancing, health checks and the deployment process, to name just a few parameters, can be configured using a CloudFormation template. Interestingly, Elastic Beanstalk uses the Amazon EC2 Container Service under the hood and manages it on its own. The advantage for the user is ease of use; the disadvantage is the lack of flexibility.
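For multi-container Docker environments, that deployment description is the `Dockerrun.aws.json` file. A minimal single-container sketch, with a placeholder ECR image:

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.eu-central-1.amazonaws.com/my-app:1.2.0",
      "essential": true,
      "memory": 256,
      "portMappings": [
        { "hostPort": 80, "containerPort": 80 }
      ]
    }
  ]
}
```

Uploading this file as a new application version is all Elastic Beanstalk needs to roll out the listed containers.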

Amazon EC2 Container Service (ECS)

Amazon EC2 Container Service (ECS) is a container management service that manages a cluster of EC2 instances on which Docker applications run. An Application Load Balancer can also be configured for the ECS cluster in order to route URLs to specific Docker containers. An ECS cluster runs so-called ECS tasks, either permanently or only for a certain period, as is the case with cron jobs. These ECS tasks are defined in the CloudFormation template and, in the case of cron jobs, started via Lambda.
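Inside the CloudFormation template, such a task definition might look like the following fragment; the logical name, container name and image are placeholders:

```yaml
AppTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    ContainerDefinitions:
      - Name: web
        Image: 123456789012.dkr.ecr.eu-central-1.amazonaws.com/my-app:1.2.0
        Memory: 256
        Essential: true
        PortMappings:
          - ContainerPort: 80
```

A long-running service would reference this task definition from an `AWS::ECS::Service` resource, while a cron-style task is simply run on demand against the cluster.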

Docker for AWS

Docker for AWS is quite new; the beta phase officially ended on January 19, 2017. It sets up a native Docker Swarm cluster, and the advantage here is clearly the native Docker behavior. There is no need to create an Elastic Beanstalk version or an ECS task; all that is needed are the Docker images. A Docker Swarm cluster can of course also be set up locally on a development machine in order to stay as close as possible to the production system. If Docker Compose is used as well, multi-container applications can be deployed with just one command.
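A minimal Compose file for such a swarm stack could look like this (service name, image and replica count are illustrative; the `deploy` key requires Compose file format version 3):

```yaml
version: "3"
services:
  web:
    image: 123456789012.dkr.ecr.eu-central-1.amazonaws.com/my-app:1.2.0
    ports:
      - "80:80"
    deploy:
      replicas: 2
```

The single deployment command is then `docker stack deploy -c docker-compose.yml my-app`, which works identically against a local swarm and the Docker for AWS cluster.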

Cron jobs with Amazon Cloudwatch Events and Lambda

Since a crontab cannot be used in a server cluster, and certainly not with Docker, cron jobs on a Docker cluster can be implemented with CloudWatch Events and AWS Lambda. A CloudWatch event that fires at a defined time calls a Lambda function, which then, for example, starts a defined ECS task on an ECS cluster. Lambda offers many more options that could fill a blog article of their own.
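The scheduling half of this setup can be sketched with two CLI calls; the rule name, schedule, and the Lambda function ARN are hypothetical:

```shell
# Scheduled rule that fires every night at 03:00 UTC
# (CloudWatch Events uses a six-field cron syntax).
aws events put-rule \
  --name nightly-cleanup \
  --schedule-expression "cron(0 3 * * ? *)"

# Wire the rule to the Lambda function that starts the ECS task.
aws events put-targets \
  --rule nightly-cleanup \
  --targets '[{"Id": "1", "Arn": "arn:aws:lambda:eu-central-1:123456789012:function:run-cleanup-task"}]'
```

The Lambda function itself then only needs to call the ECS `RunTask` API with the task definition to execute.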

DNS handling with Amazon Route 53

If the servers are already hosted on AWS, why not also manage the domain, and thus the DNS entries, with Amazon Route 53? Its API access makes a fully automatic disaster recovery setup possible, e.g. by changing DNS entries programmatically. DNS routing features such as Weighted Round Robin (WRR), Latency-Based Routing (LBR) and Geo DNS are also available. For international projects, Route 53 Traffic Flow can be used to connect users to the best endpoint in terms of latency, geography and endpoint health.
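As a hypothetical failover sketch, the following call repoints a record at a standby load balancer via the Route 53 API; the hosted zone ID, domain and target hostname are placeholders:

```shell
# UPSERT creates the record if missing, or overwrites it if present.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com.",
        "Type": "CNAME",
        "TTL": 60,
        "ResourceRecords": [{"Value": "standby-lb.eu-central-1.elb.amazonaws.com"}]
      }
    }]
  }'
```

A disaster recovery script would issue exactly this call when health checks on the primary endpoint fail; the short TTL keeps the switchover delay low.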

Amazon AWS and data protection

Information about data protection is listed on the Amazon AWS data protection page. According to AWS, customers retain full control over their content. Provided it is set up accordingly, volumes and the data on them can also be encrypted. For Germany, a server region, more precisely Frankfurt, is available, an important criterion for customers who require hosting in Germany for the processing and storage of personal data.


The Amazon Web Services are very extensive and offer many features and possibilities. In this post, we looked at a few Amazon services to get started with Docker on AWS. Thanks to an automated build process, new releases can be deployed in the cloud at the push of a button and can also be scaled as required using AWS. Thanks to infrastructure as code with CloudFormation, new server landscapes can be set up in minutes.

Web applications in the AWS cloud

From developing your web application to setting up a modern cloud infrastructure on AWS, prooph software GmbH offers everything from a single source. We would be happy to work with you to develop a suitable continuous delivery and hosting strategy.

You are looking for a DevOps service provider for your project? Make a project request.

published 2017/04/03

Article author


is a Zend Certified PHP Engineer and ZF2 Certified Architect. With his know-how he supports prooph software GmbH as a senior PHP developer and IT consultant.

For feedback and questions switch to the prooph chat on gitter (Github account required).