
Laravel on AWS

How to deploy your application for maximum security,


scalability and availability

Lionel Martin
This book is for sale at http://leanpub.com/laravel-aws

This version was published on 2017-12-19

This is a Leanpub book. Leanpub empowers authors and publishers with the Lean Publishing
process. Lean Publishing is the act of publishing an in-progress ebook using lightweight tools and
many iterations to get reader feedback, pivot until you have the right book and build traction once
you do.

© 2017 Lionel Martin


Tweet This Book!
Please help Lionel Martin by spreading the word about this book on Twitter!
The suggested hashtag for this book is #laravel.
Find out what other people are saying about the book by clicking on this link to search for this
hashtag on Twitter:
#laravel
Contents

A guide to networking, security, autoscaling and high-availability

1. Set up your AWS credentials
2. Order SSL certificates
3. Create a key pair to be used by your EC2 instances
4. Launch our CloudFormation stacks
5. Build and push your Laravel Docker image
6. Launch a bastion & run database migrations
7. Migrate DNS service to AWS Route53
8. Speed up your application by using CloudFront
9. (Optional) Publish your Laravel workers and crons
10. (Optional) Add an ElasticSearch domain
11. (Optional) High availability for the storage tier
12. CloudWatch alarms
13. (Optional) Updating your stack manually: vertical / horizontal scaling
14. (Optional) Auto scaling
15. (Optional) Set up Continuous Deployment with CodePipeline
16. (Optional) Setup SES and a mail server
17. Cost containment
18. (Optional) Deleting your stack and free resources

A guide to networking, security,
autoscaling and high-availability
It’s not an easy task to set up a durable architecture for your web application. And if you try to
build it as you go, you’ll soon get tired of clicking around the AWS console. What if you had one
go-to architecture and a repeatable process for all your projects, while ensuring maximum security,
performance and availability? Here is how you should deploy your Laravel application on AWS.
How we will enforce security:
- Create VPC subnets to deploy our application into. A VPC is your own virtual network within
AWS and lets you design private subnets where instances can’t be accessed directly from outside
your VPC. This is where we will deploy our web and database instances.
- Use temporary bastions (also called jump boxes), deployed in our public subnets only when
we need to connect to web and database instances, reducing the attack surface.
- Enforce firewall rules by whitelisting which servers can talk to each other, using VPC security
groups (SGs). SGs are default-deny stateful firewalls applied at the instance level.
- Simplify secret management by avoiding passwords where possible and instead specifying IAM
roles to control access to our resources. Using IAM roles for EC2 removes the need to store AWS
credentials in a configuration file. Roles use temporary security tokens under the hood, which AWS
takes care of rotating so we don’t have to worry about updating passwords.
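You can see these temporary tokens at work by querying the EC2 instance metadata service from any instance launched with an IAM role (this only works from inside the instance, and the role name below is a hypothetical example):

# Run from an EC2 instance that has an IAM role attached.
# The metadata service returns auto-rotated temporary credentials;
# the AWS SDKs and CLI pick these up automatically.
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
# then fetch the credentials document for the role name returned above, e.g.:
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/my-ecs-role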
How we will enforce high availability:
- Span our application instances across Availability Zones (AZs). An AZ is one or more data
centers within a region that are designed to be isolated from failures in other AZs. By placing
resources in separate AZs, organisations can protect their application from a service disruption
impacting a single location.
- Serve our application from an Elastic Load Balancer. ELB is a highly available (distributed) service
that distributes traffic across a group of EC2 instances in one or more AZs. ELB supports health
checks to ensure traffic is not routed to unhealthy or failing instances.
- Host our application on ECS, describing through ECS services what minimum number of healthy
application containers should be running at any given time. ECS services will start new containers
if one ever crashes.
- Distribute our database as a cluster across multiple AZs. RDS allows you to place a secondary copy
of your database in another AZ for disaster recovery purposes. You are assigned a database endpoint
in the form of a DNS name that AWS takes responsibility for resolving to a specific IP address. RDS
will automatically fail over to the standby instance without user intervention.
Preferably, we will be using Amazon Aurora, which maintains a read replica of our database in
a separate AZ and which Amazon will promote to primary should our main instance (or
its AZ) fail.
- Finally, we rely on as many distributed services as possible to delegate failure management to AWS:
services like S3, SQS, ELB/ALB, ECR and CloudWatch are designed for maximum resiliency without
us having to care for the instances they run on.

Laravel, made highly available with almost a one-click deploy

How we will build ourselves a repeatable process:


We will be deploying an empty Laravel application on a fresh domain name using Docker,
CloudFormation and the AWS CLI. CloudFormation defines a templating language that can be used
to describe all the AWS resources that are necessary for a workload. Templates are submitted to
CloudFormation and the service will provision and configure those resources in appropriate order.
Docker container images are stand-alone, executable packages of a piece of software that include
everything needed to run it.
With the AWS CLI, you can control all services from the command line and automate them through
scripts.
By combining all three, both our infrastructure and our application configuration can be written as
code and, as such, versioned, branched and documented.
This is the procedure I use to deploy my clients’ Laravel applications on AWS. I hope this can be
helpful to deploy yours. If your use case is more complex, I provide on-going support packages
ranging from mentoring your developers up to hands-on building your application on AWS. Ping
me at [email protected]
1. Set up your AWS credentials
Start by authenticating your command line: download the API key and secret for a new user
in the IAM section of your AWS console. This user will need permissions to create
resources for all the services we will use below. Follow the prompts from:

aws configure

Use the --profile option to save different credentials for different projects.
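For example (the profile name is illustrative):

# Store credentials under a named profile...
aws configure --profile laravelaws

# ...then pass that profile to any subsequent command
aws s3 ls --profile laravelaws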
2. Order SSL certificates
We need two certificates: one for our web application itself and another for our custom domain
on CloudFront. The certificate for your web application needs to be created in the AWS region you
want to deploy your application into, whereas CloudFront will only accept certificates generated in
region us-east-1.
AWS SSL/TLS certificates are free, automatically provisioned and renewed, even if you did not
buy your domain in Route53. They seamlessly integrate with AWS load balancers, CloudFront
distributions and API Gateway endpoints so you can just set them and forget them.

# a certificate in your default region for your web application
aws acm request-certificate \
    --domain-name laravelaws.com \
    --idempotency-token=random_string_here \
    --subject-alternative-names "*.laravelaws.com"

# a certificate from us-east-1 specifically for our CloudFront custom domain
aws --region us-east-1 acm request-certificate \
    --domain-name laravelaws.com \
    --idempotency-token=random_string_here \
    --subject-alternative-names "*.laravelaws.com"
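Certificates must be validated before they can be used. You can poll the status from the CLI; the ARN below is a placeholder for the one returned by request-certificate:

# List certificate ARNs in the current region
aws acm list-certificates

# Check a certificate's status (it reads ISSUED once validation completes)
aws acm describe-certificate \
    --certificate-arn arn:aws:acm:us-east-1:123456789012:certificate/EXAMPLE \
    --query 'Certificate.Status'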
3. Create a key pair to be used by
your EC2 instances
It is recommended to create a new SSH key pair for all EC2 instances of this new project, still using
the CLI:

# Create the key pair and extract the private key from the JSON response
aws ec2 create-key-pair \
    --key-name=laravelaws \
    --query 'KeyMaterial' \
    --output text > laravelaws.pem

# Assign appropriate permissions to the key file for it to be usable
chmod 400 laravelaws.pem

Remember that AWS won’t store SSH keys for you and you are responsible for storing and sharing
them securely.
4. Launch our CloudFormation stacks
Here comes the infrastructure as code! Our whole deployment will be described in one master
YAML template, itself referencing nested stack templates to make it more readable and
reusable.
This is the directory structure of our templates:

|--- master.yaml           # the root template
|--- infrastructure
     |--- vpc.yaml         # our VPC and security groups
     |--- storage.yaml     # our database cluster and S3 bucket
     |--- web.yaml         # our ECS cluster
     |--- services.yaml    # our ECS Task Definitions & Services

And the complete code can be downloaded from the GitHub repository at https://github.com/li0nel/laravelaws.
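The whole stack can then be launched with a single CLI call. A minimal sketch, assuming master.yaml has been uploaded to an S3 bucket of yours and that it accepts an EnvironmentName parameter as the templates above suggest (check the repository for the exact parameter list):

aws cloudformation create-stack \
    --stack-name laravelaws \
    --template-url https://s3.amazonaws.com/YOUR_BUCKET/master.yaml \
    --capabilities CAPABILITY_NAMED_IAM \
    --parameters ParameterKey=EnvironmentName,ParameterValue=laravelaws

# Block until the stack reaches CREATE_COMPLETE
aws cloudformation wait stack-create-complete --stack-name laravelaws

The CAPABILITY_NAMED_IAM flag is required because the templates create IAM roles with explicit names.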
The vpc.yaml template defines our VPC subnets and route tables:

# This template creates a VPC and a pair of public and private subnets spanning
# the first two AZs of your current region.
# Each instance in the public subnets can access the internet and be
# accessed from the internet thanks to a route table routing traffic through
# the Internet Gateway.
# Private subnets feature a NAT Gateway located in the public subnet of the
# same AZ, so their instances can reach the internet while only receiving
# traffic from within the VPC.
VPC:
  Type: AWS::EC2::VPC
  Properties:
    CidrBlock: !Ref VpcCIDR
    Tags:
      - Key: Name
        Value: !Ref EnvironmentName

InternetGateway:
  Type: AWS::EC2::InternetGateway
  Properties:
    Tags:
      - Key: Name
        Value: !Ref EnvironmentName

InternetGatewayAttachment:
  Type: AWS::EC2::VPCGatewayAttachment
  Properties:
    InternetGatewayId: !Ref InternetGateway
    VpcId: !Ref VPC

PublicSubnet1:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref VPC
    AvailabilityZone: !Select [ 0, !GetAZs ]
    CidrBlock: !Ref PublicSubnet1CIDR
    MapPublicIpOnLaunch: true
    Tags:
      - Key: Name
        Value: !Sub ${EnvironmentName} Public Subnet (AZ1)

PublicSubnet2:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref VPC
    AvailabilityZone: !Select [ 1, !GetAZs ]
    CidrBlock: !Ref PublicSubnet2CIDR
    MapPublicIpOnLaunch: true
    Tags:
      - Key: Name
        Value: !Sub ${EnvironmentName} Public Subnet (AZ2)

PrivateSubnet1:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref VPC
    AvailabilityZone: !Select [ 0, !GetAZs ]
    CidrBlock: !Ref PrivateSubnet1CIDR
    MapPublicIpOnLaunch: false
    Tags:
      - Key: Name
        Value: !Sub ${EnvironmentName} Private Subnet (AZ1)

PrivateSubnet2:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref VPC
    AvailabilityZone: !Select [ 1, !GetAZs ]
    CidrBlock: !Ref PrivateSubnet2CIDR
    MapPublicIpOnLaunch: false
    Tags:
      - Key: Name
        Value: !Sub ${EnvironmentName} Private Subnet (AZ2)

NatGateway1EIP:
  Type: AWS::EC2::EIP
  DependsOn: InternetGatewayAttachment
  Properties:
    Domain: vpc

NatGateway2EIP:
  Type: AWS::EC2::EIP
  DependsOn: InternetGatewayAttachment
  Properties:
    Domain: vpc

NatGateway1:
  Type: AWS::EC2::NatGateway
  Properties:
    AllocationId: !GetAtt NatGateway1EIP.AllocationId
    SubnetId: !Ref PublicSubnet1

NatGateway2:
  Type: AWS::EC2::NatGateway
  Properties:
    AllocationId: !GetAtt NatGateway2EIP.AllocationId
    SubnetId: !Ref PublicSubnet2

PublicRouteTable:
  Type: AWS::EC2::RouteTable
  Properties:
    VpcId: !Ref VPC
    Tags:
      - Key: Name
        Value: !Sub ${EnvironmentName} Public Routes

DefaultPublicRoute:
  Type: AWS::EC2::Route
  DependsOn: InternetGatewayAttachment
  Properties:
    RouteTableId: !Ref PublicRouteTable
    DestinationCidrBlock: 0.0.0.0/0
    GatewayId: !Ref InternetGateway

PublicSubnet1RouteTableAssociation:
  Type: AWS::EC2::SubnetRouteTableAssociation
  Properties:
    RouteTableId: !Ref PublicRouteTable
    SubnetId: !Ref PublicSubnet1

PublicSubnet2RouteTableAssociation:
  Type: AWS::EC2::SubnetRouteTableAssociation
  Properties:
    RouteTableId: !Ref PublicRouteTable
    SubnetId: !Ref PublicSubnet2

PrivateRouteTable1:
  Type: AWS::EC2::RouteTable
  Properties:
    VpcId: !Ref VPC
    Tags:
      - Key: Name
        Value: !Sub ${EnvironmentName} Private Routes (AZ1)

DefaultPrivateRoute1:
  Type: AWS::EC2::Route
  Properties:
    RouteTableId: !Ref PrivateRouteTable1
    DestinationCidrBlock: 0.0.0.0/0
    NatGatewayId: !Ref NatGateway1

PrivateSubnet1RouteTableAssociation:
  Type: AWS::EC2::SubnetRouteTableAssociation
  Properties:
    RouteTableId: !Ref PrivateRouteTable1
    SubnetId: !Ref PrivateSubnet1

PrivateRouteTable2:
  Type: AWS::EC2::RouteTable
  Properties:
    VpcId: !Ref VPC
    Tags:
      - Key: Name
        Value: !Sub ${EnvironmentName} Private Routes (AZ2)

DefaultPrivateRoute2:
  Type: AWS::EC2::Route
  Properties:
    RouteTableId: !Ref PrivateRouteTable2
    DestinationCidrBlock: 0.0.0.0/0
    NatGatewayId: !Ref NatGateway2

PrivateSubnet2RouteTableAssociation:
  Type: AWS::EC2::SubnetRouteTableAssociation
  Properties:
    RouteTableId: !Ref PrivateRouteTable2
    SubnetId: !Ref PrivateSubnet2

This is quite verbose and is everything it takes to set up public and private subnets spanning two
AZs. You can see why you wouldn’t want to implement this in the AWS console!
We also need three SGs. The first one secures our EC2 instances, allowing only inbound traffic
coming from the load balancer plus SSH traffic (remember our instances will be in a
private subnet and won’t be able to receive traffic from the internet anyway):

# This security group defines who/where is allowed to access the ECS hosts
# directly.
# By default we're just allowing access from the load balancer. If you want
# to SSH into the hosts, or expose non-load balanced services, you can open
# their ports here.
ECSSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    VpcId: !Ref VPC
    GroupDescription: Access to the ECS hosts and the tasks/containers that run on them
    SecurityGroupIngress:
      # Only allow inbound access to ECS from the ELB
      - SourceSecurityGroupId: !Ref LoadBalancerSecurityGroup
        IpProtocol: -1
      - IpProtocol: tcp
        CidrIp: 0.0.0.0/0
        FromPort: '22'
        ToPort: '22'
    Tags:
      - Key: Name
        Value: !Sub ${EnvironmentName}-ECS-Hosts

The load balancer’s SG will allow any traffic from the internet (while only responding to HTTP and
HTTPS):

# This security group defines who/where is allowed to access the Application
# Load Balancer.
# By default, we've opened this up to the public internet (0.0.0.0/0) but you
# can restrict it further if you want.
LoadBalancerSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    VpcId: !Ref VPC
    GroupDescription: Access to the load balancer that sits in front of ECS
    SecurityGroupIngress:
      # Allow access from anywhere to our ECS services
      - CidrIp: 0.0.0.0/0
        IpProtocol: -1
    Tags:
      - Key: Name
        Value: !Sub ${EnvironmentName}-LoadBalancers

Finally, the database SG only allows ingress traffic on the MySQL port coming from our EC2
instances, and nothing from the internet. Our database will also be hosted inside our private subnets
so it can’t receive any traffic from outside the VPC.

# This security group defines who/where is allowed to access the RDS instance.
# Only instances associated with our ECS security group can reach the
# database endpoint.
DBSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Open database for access
    VpcId: !Ref VPC
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: '3306'
        ToPort: '3306'
        SourceSecurityGroupId: !Ref ECSSecurityGroup
    Tags:
      - Key: Name
        Value: !Sub ${EnvironmentName}-DB-Host

Let’s now launch our storage.yaml stack:

# I recommend encrypting your database to make sure your snapshots and logs are
# encrypted too.
# Automatic snapshots are stored by AWS itself, however manual snapshots will be
# stored in your S3 account.
# You don't want to accidentally open access to an unencrypted version of your
# data! It is also preferable not to use your default AWS master key if you
# ever need to transfer a snapshot to another AWS account later, as you can't
# give cross-account access to your master key.
# Note that we only create one primary DB instance for now, no read replica.
KmsKey:
  Type: AWS::KMS::Key
  Properties:
    Description: !Sub KMS Key for our ${AWS::StackName} DB
    KeyPolicy:
      Id: !Ref AWS::StackName
      Version: "2012-10-17"
      Statement:
        -
          Sid: "Allow administration of the key"
          Effect: "Allow"
          Action:
            - kms:Create*
            - kms:Describe*
            - kms:Enable*
            - kms:List*
            - kms:Put*
            - kms:Update*
            - kms:Revoke*
            - kms:Disable*
            - kms:Get*
            - kms:Delete*
            - kms:ScheduleKeyDeletion
            - kms:CancelKeyDeletion
          Principal:
            AWS: !Ref AWS::AccountId
          Resource: '*'
        -
          Sid: "Allow use of the key"
          Effect: "Allow"
          Principal:
            AWS: !Ref AWS::AccountId
          Action:
            - "kms:Encrypt"
            - "kms:Decrypt"
            - "kms:ReEncrypt*"
            - "kms:GenerateDataKey*"
            - "kms:DescribeKey"
          Resource: "*"

DatabaseSubnetGroup:
  Type: AWS::RDS::DBSubnetGroup
  Properties:
    DBSubnetGroupDescription: CloudFormation managed DB subnet group.
    SubnetIds: !Ref DatabaseSubnets

DatabaseCluster:
  Type: AWS::RDS::DBCluster
  Properties:
    Engine: aurora
    DatabaseName: !Ref DatabaseName
    MasterUsername: !Ref DatabaseUsername
    MasterUserPassword: !Ref DatabasePassword
    BackupRetentionPeriod: 7
    PreferredBackupWindow: 01:00-02:30
    PreferredMaintenanceWindow: mon:03:00-mon:04:00
    DBSubnetGroupName: !Ref DatabaseSubnetGroup
    KmsKeyId: !GetAtt KmsKey.Arn
    StorageEncrypted: true
    VpcSecurityGroupIds:
      - !Ref DatabaseSecurityGroup

DatabasePrimaryInstance:
  Type: AWS::RDS::DBInstance
  Properties:
    Engine: aurora
    DBClusterIdentifier: !Ref DatabaseCluster
    DBInstanceClass: !Ref DatabaseInstanceType
    DBSubnetGroupName: !Ref DatabaseSubnetGroup

Plus one public-read S3 bucket:

# CloudFormation will generate one unique bucket name for us
# Nothing else to do!
Bucket:
  Type: AWS::S3::Bucket
  Properties:
    AccessControl: PublicRead

The web.yaml stack is composed of one ECS cluster and a Launch Configuration for our instances.
The LC defines the bootstrap code to execute on each new instance at launch; this is called the User
Data. Here we use a third-party Docker credential helper that authenticates the Docker client to our
ECR registry by turning the instance’s IAM role into security tokens.

# This template defines our ECS cluster and its desired size.
# The Launch Configuration defines how each new instance in our cluster should
# be bootstrapped through its User Data.
# The Metadata object gets EC2 instances to register in the ECS cluster
ECSCluster:
  Type: AWS::ECS::Cluster
  Properties:
    ClusterName: !Ref EnvironmentName

ECSAutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    VPCZoneIdentifier: !Ref PrivateSubnets
    LaunchConfigurationName: !Ref ECSLaunchConfiguration
    MinSize: !Ref ClusterSize
    MaxSize: !Ref ClusterSize
    DesiredCapacity: !Ref ClusterSize
    Tags:
      - Key: Name
        Value: !Sub ${EnvironmentName} ECS host
        PropagateAtLaunch: true
  CreationPolicy:
    ResourceSignal:
      Timeout: PT15M
  UpdatePolicy:
    AutoScalingReplacingUpdate:
      WillReplace: true
    AutoScalingRollingUpdate:
      MinInstancesInService: 1
      MaxBatchSize: 1
      PauseTime: PT15M
      SuspendProcesses:
        - HealthCheck
        - ReplaceUnhealthy
        - AZRebalance
        - AlarmNotification
        - ScheduledActions
      WaitOnResourceSignals: true

ECSLaunchConfiguration:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    ImageId: !FindInMap [AWSRegionToAMI, !Ref "AWS::Region", AMI]
    InstanceType: !Ref InstanceType
    SecurityGroups:
      - !Ref ECSSecurityGroup
    IamInstanceProfile: !Ref ECSInstanceProfile
    KeyName: laravelaws
    UserData:
      "Fn::Base64": !Sub |
        #!/bin/bash
        yum update -y
        yum install -y aws-cfn-bootstrap aws-cli go
        echo '{ "credsStore": "ecr-login" }' > ~/.docker/config.json
        go get -u github.com/awslabs/amazon-ecr-credential-helper/ecr-login/cli/docker-credential-ecr-login
        cd /home/ec2-user/go/src/github.com/awslabs/amazon-ecr-credential-helper/ecr-login/cli/docker-credential-ecr-login
        go build
        export PATH=$PATH:/home/ec2-user/go/bin
        /opt/aws/bin/cfn-init -v --region ${AWS::Region} --stack ${AWS::StackName} --resource ECSLaunchConfiguration
        /opt/aws/bin/cfn-signal -e $? --region ${AWS::Region} --stack ${AWS::StackName} --resource ECSAutoScalingGroup
  Metadata:
    AWS::CloudFormation::Init:
      config:
        commands:
          01_add_instance_to_cluster:
            command: !Sub echo ECS_CLUSTER=${ECSCluster} >> /etc/ecs/ecs.config
        files:
          "/etc/cfn/cfn-hup.conf":
            mode: 000400
            owner: root
            group: root
            content: !Sub |
              [main]
              stack=${AWS::StackId}
              region=${AWS::Region}
          "/etc/cfn/hooks.d/cfn-auto-reloader.conf":
            content: !Sub |
              [cfn-auto-reloader-hook]
              triggers=post.update
              path=Resources.ECSLaunchConfiguration.Metadata.AWS::CloudFormation::Init
              action=/opt/aws/bin/cfn-init -v --region ${AWS::Region} --stack ${AWS::StackName} --resource ECSLaunchConfiguration
        services:
          sysvinit:
            cfn-hup:
              enabled: true
              ensureRunning: true
              files:
                - /etc/cfn/cfn-hup.conf
                - /etc/cfn/hooks.d/cfn-auto-reloader.conf

# This IAM Role is attached to all of the ECS hosts.
#
# You can add other IAM policy statements here to allow access from your ECS
# hosts to other AWS services. Please note that this role will be used by ALL
# containers running on the ECS host.
ECSRole:
  Type: AWS::IAM::Role
  Properties:
    Path: /
    RoleName: !Sub ${EnvironmentName}-ECSRole-${AWS::Region}
    AssumeRolePolicyDocument: |
      {
        "Statement": [{
          "Action": "sts:AssumeRole",
          "Effect": "Allow",
          "Principal": {
            "Service": "ec2.amazonaws.com"
          }
        }]
      }
    ManagedPolicyArns:
      - "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
    Policies:
      - PolicyName: ecs-service
        PolicyDocument: |
          {
            "Statement": [{
              "Effect": "Allow",
              "Action": [
                "ecs:CreateCluster",
                "ecs:DeregisterContainerInstance",
                "ecs:DiscoverPollEndpoint",
                "ecs:Poll",
                "ecs:RegisterContainerInstance",
                "ecs:StartTelemetrySession",
                "ecs:Submit*",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "ecr:BatchCheckLayerAvailability",
                "ecr:BatchGetImage",
                "ecr:GetDownloadUrlForLayer",
                "ecr:GetAuthorizationToken"
              ],
              "Resource": "*"
            }]
          }
      - PolicyName: ec2-s3-write-access
        PolicyDocument:
          Statement:
            - Effect: Allow
              Action:
                - s3:PutObject
                - s3:GetBucketAcl
                - s3:PutObjectTagging
                - s3:ListBucket
                - s3:PutObjectAcl
              Resource: !Sub arn:aws:s3:::${S3BucketName}/*
      - PolicyName: ec2-cloudwatch-write-access
        PolicyDocument:
          Statement:
            - Effect: Allow
              Action:
                - logs:CreateLogStream
                - logs:PutLogEvents
                - logs:CreateLogGroup
              Resource: "*"

ECSInstanceProfile:
  Type: AWS::IAM::InstanceProfile
  Properties:
    Path: /
    Roles:
      - !Ref ECSRole

# One Docker registry that we will use both for the Laravel application
# image and our Nginx image.
# Note that if you give a name to the repository, CloudFormation can't
# update it without a full replacement.
ECR:
  Type: AWS::ECR::Repository
  Properties:
    RepositoryPolicyText:
      Version: "2012-10-17"
      Statement:
        -
          Sid: AllowPushPull
          Effect: Allow
          Principal:
            AWS:
              - !Sub arn:aws:iam::${AWS::AccountId}:role/${ECSRole}
          Action:
            - "ecr:GetDownloadUrlForLayer"
            - "ecr:BatchGetImage"
            - "ecr:BatchCheckLayerAvailability"
            - "ecr:PutImage"
            - "ecr:InitiateLayerUpload"
            - "ecr:UploadLayerPart"
            - "ecr:CompleteLayerUpload"

# One ALB with two listeners for HTTP and HTTPS.
# The HTTP listener will point to a specific Nginx container redirecting
# traffic to HTTPS, because neither ALB nor ELB let you handle this through
# their configuration.
LoadBalancer:
  Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  Properties:
    Name: !Ref EnvironmentName
    Subnets: !Ref PublicSubnets
    SecurityGroups:
      - !Ref LBSecurityGroup
    Tags:
      - Key: Name
        Value: !Ref EnvironmentName

LoadBalancerListenerHTTP:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref LoadBalancer
    Port: 80
    Protocol: HTTP
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref DefaultTargetGroup

LoadBalancerListenerHTTPS:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref LoadBalancer
    Port: 443
    Protocol: HTTPS
    Certificates:
      - CertificateArn: !Ref LBCertificateArn
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref DefaultTargetGroup

# We define a default target group here, as this is a mandatory parameter
# when creating an Application Load Balancer Listener. It is not used; instead
# a target group is created per service in each service template (../services/*)
DefaultTargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    Name: !Sub ${EnvironmentName}-default
    VpcId: !Ref VPC
    Port: 80
    Protocol: HTTP

In more complex setups, we could have our freshly created load balancer register itself with Route53
so that your service is always available at the same DNS address. This design pattern is called service
discovery and is not possible out of the box in CloudFormation. Instead, we will manually point our
domain name to our load balancer in Route53 in step 7 below.
In the meantime, our load balancer responds with an HTTP 503 error since it can’t find a single
healthy instance returning a correct HTTP status code in our cluster pool. Of course, this will change
as soon as we deploy our application in our cluster.

Our load balancer responding but with no healthy container instances behind it
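You can reproduce this check from the command line. The DNS name below is a placeholder; yours is shown in the EC2 console or the CloudFormation stack outputs:

# Expect 503 until a healthy service is registered behind the ALB
curl -s -o /dev/null -w "%{http_code}\n" http://YOUR_ALB_DNS_NAME.eu-west-1.elb.amazonaws.com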
5. Build and push your Laravel Docker
image
In the previous step, we created one ECR registry to store both the Docker image of our Laravel
application and that of our Nginx server. ECRs are standard Docker registries to which you
authenticate using tokens, which the AWS CLI can generate for us:

# The get-login command outputs the "docker login" command you need to execute,
# with a temporary token. You can generate and run it in one go using eval.
# The --no-include-email flag tells get-login not to return the -e option,
# which does not work with all Docker versions.
eval $(aws ecr get-login --no-include-email)

Below are the two Dockerfiles we use to build our Docker images:

FROM php:7.1-fpm

# Update packages and install composer and PHP dependencies.
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y \
        postgresql-client \
        libpq-dev \
        libfreetype6-dev \
        libjpeg62-turbo-dev \
        libmcrypt-dev \
        libpng12-dev \
        libbz2-dev \
        php-pear \
        cron \
    && pecl channel-update pecl.php.net \
    && pecl install apcu

# PHP Extensions
RUN docker-php-ext-install mcrypt zip bz2 mbstring pdo pdo_pgsql pdo_mysql pcntl \
    && docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
    && docker-php-ext-install gd

# Memory Limit
RUN echo "memory_limit=2048M" > $PHP_INI_DIR/conf.d/memory-limit.ini
RUN echo "max_execution_time=900" >> $PHP_INI_DIR/conf.d/memory-limit.ini
RUN echo "extension=apcu.so" > $PHP_INI_DIR/conf.d/apcu.ini
RUN echo "post_max_size=20M" >> $PHP_INI_DIR/conf.d/memory-limit.ini
RUN echo "upload_max_filesize=20M" >> $PHP_INI_DIR/conf.d/memory-limit.ini

# Time Zone
RUN echo "date.timezone=${PHP_TIMEZONE:-UTC}" > $PHP_INI_DIR/conf.d/date_timezone.ini

# Display errors in stderr
RUN echo "display_errors=stderr" > $PHP_INI_DIR/conf.d/display-errors.ini

# Disable PathInfo
RUN echo "cgi.fix_pathinfo=0" > $PHP_INI_DIR/conf.d/path-info.ini

# Disable expose PHP (append, so we don't overwrite the line above)
RUN echo "expose_php=0" >> $PHP_INI_DIR/conf.d/path-info.ini

# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

ADD . /var/www/html
WORKDIR /var/www/html

RUN mkdir -p storage/logs
RUN touch storage/logs/laravel.log
RUN chmod 777 storage/logs/laravel.log

RUN composer install
RUN php artisan optimize --force
# RUN php artisan route:cache

RUN chmod -R 777 /var/www/html/storage

RUN touch /var/log/cron.log

ADD deploy/cron/artisan-schedule-run /etc/cron.d/artisan-schedule-run
RUN chmod 0644 /etc/cron.d/artisan-schedule-run
RUN chmod +x /etc/cron.d/artisan-schedule-run

# CMD ["php-fpm"]

CMD ["/bin/sh", "-c", "php-fpm -D | tail -f storage/logs/laravel.log"]

We install cron here so we can reuse the same image for our Laravel scheduled tasks and our Laravel
workers.
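The content of deploy/cron/artisan-schedule-run is not shown here; a minimal sketch of what such a cron entry typically looks like, assuming the paths used in the Dockerfile above:

# /etc/cron.d/artisan-schedule-run (illustrative content)
# Run the Laravel scheduler every minute as root, logging to the file
# the container tails
* * * * * root php /var/www/html/artisan schedule:run >> /var/log/cron.log 2>&1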

FROM nginx

ADD deploy/nginx/nginx.conf /etc/nginx/
ADD deploy/nginx/default.conf /etc/nginx/conf.d/

ADD public /usr/share/nginx/html

WORKDIR /usr/share/nginx/html

Here we simply add our custom Nginx config and the public assets from the Laravel public directory
into the Docker image. Each time you rebuild your front-end assets, you will need to rebuild both
the Laravel and Nginx images.
And the commands to build them:

1 # Building our Nginx Docker image and tagging it with the ECR URL
2 docker build -f Dockerfile-nginx -t YOUR_ECR_REGISTRY_URL_HERE:nginx .
3 docker push YOUR_ECR_REGISTRY_URL_HERE:nginx
4
5 # Building our Laravel Docker image and tagging it with the ECR URL
6 docker build -t YOUR_ECR_REGISTRY_URL_HERE:laravel .
7 docker push YOUR_ECR_REGISTRY_URL_HERE:laravel

Finally, we launch our web service with ECS.


At the core level, task definitions describe which Docker images should be used to create containers,
how containers should be linked together and which environment variables to run them with.
At a higher level, an ECS service maintains a specified number of instances of a task definition
simultaneously in an ECS cluster. The cluster is the pool of EC2 instances, i.e. the infrastructure on
which the tasks are hosted.
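The nested `!Join` intrinsic functions in the task definition below only assemble the standard ECR image URL. A quick shell sketch (the account ID, region and repository name are placeholder values) shows the format they resolve to:

```shell
# Placeholders standing in for !Ref AWS::AccountId, !Ref AWS::Region and !Ref ECR
ACCOUNT_ID="123456789012"
REGION="ap-southeast-2"
ECR_REPO="laravelaws"

# The !Join chain resolves to: ACCOUNT.dkr.ecr.REGION.amazonaws.com/REPO:TAG
NGINX_IMAGE="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${ECR_REPO}:nginx"
LARAVEL_IMAGE="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${ECR_REPO}:laravel"

echo "$NGINX_IMAGE"
# → 123456789012.dkr.ecr.ap-southeast-2.amazonaws.com/laravelaws:nginx
```

These are exactly the tags we pushed with the docker push commands earlier.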

1 Service:
2 Type: AWS::ECS::Service
3 DependsOn:
4 - ListenerRuleHTTPS
5 Properties:
6 Cluster: !Ref Cluster
7 Role: !Ref ServiceRole
8 DesiredCount: !Ref DesiredCount
9 TaskDefinition: !Ref TaskDefinition
10 LoadBalancers:
11 - ContainerName: nginx
12 ContainerPort: 80
13 TargetGroupArn: !Ref TargetGroup
14
15 ServiceRedirect:
16 Type: AWS::ECS::Service
17 DependsOn:
18 - ListenerRuleHTTP
19 Properties:
20 Cluster: !Ref Cluster
21 Role: !Ref ServiceRole
22 DesiredCount: 1
23 TaskDefinition: !Ref TaskDefinitionRedirectHTTPtoHTTPS
24 LoadBalancers:
25 - ContainerName: nginx-to-https
26 ContainerPort: 80
27 TargetGroupArn: !Ref TargetGroupRedirectHTTPSToHTTP
28
29 TaskDefinitionRedirectHTTPtoHTTPS:
30 Type: AWS::ECS::TaskDefinition
31 Properties:
32 Family: nginx-to-https
33 ContainerDefinitions:
34 - Name: nginx-to-https
35 Essential: true
36 Image: getlionel/nginx-to-https
37 Memory: 128
38 PortMappings:
39 - ContainerPort: 80
40
41 TaskDefinition:
42 Type: AWS::ECS::TaskDefinition
43 Properties:
44 Family: laravel-nginx
45 ContainerDefinitions:
46 - Name: nginx
47 Essential: true
48 Image: !Join [ ".", [ !Ref "AWS::AccountId", "dkr.ecr", !Ref "AWS::R\
49 egion", !Join [ ":", [ !Join [ "/", [ "amazonaws.com", !Ref ECR ] ], "nginx" ] ]\
50 ] ]
51 Memory: 128
52 PortMappings:
53 - ContainerPort: 80
54 Links:
55 - app
56 LogConfiguration:
57 LogDriver: awslogs
58 Options:
59 awslogs-group: !Ref AWS::StackName
60 awslogs-region: !Ref AWS::Region
61 - Name: app
62 Essential: true
63 Image: !Join [ ".", [ !Ref "AWS::AccountId", "dkr.ecr", !Ref "AWS::R\
64 egion", !Join [ ":", [ !Join [ "/", [ "amazonaws.com", !Ref ECR ] ], "laravel" ]\
65 ] ] ]
66 Memory: 256
67 LogConfiguration:
68 LogDriver: awslogs
69 Options:
70 awslogs-group: !Ref AWS::StackName
71 awslogs-region: !Ref AWS::Region
72 Environment:
73 - Name: APP_NAME
74 Value: Laravel
75 - Name: APP_ENV
76 Value: production
77 - Name: APP_DEBUG
78 Value: false
79 - Name: APP_LOG_LEVEL
80 Value: error
81 - Name: APP_KEY
82 Value: base64:h2ASblVGbCXbC1buJ8KToZkKIEY69GSiutkAeGo77B0=
83 - Name: APP_URL
84 Value: !Ref APPURL
85 - Name: DB_CONNECTION
86 Value: !Ref DBCONNECTION
87 - Name: DB_HOST
88 Value: !Ref DBHOST
89 - Name: DB_PORT
90 Value: !Ref DBPORT
91 - Name: DB_DATABASE
92 Value: !Ref DBDATABASE
93 - Name: DB_USERNAME
94 Value: !Ref DBUSERNAME
95 - Name: DB_PASSWORD
96 Value: !Ref DBPASSWORD
97 - Name: CACHE_DRIVER
98 Value: file
99 - Name: SESSION_DRIVER
100 Value: database
101 - Name: MAIL_DRIVER
102 Value: !Ref MAILDRIVER
103 - Name: MAIL_HOST
104 Value: !Ref MAILHOST
105 - Name: MAIL_PORT
106 Value: !Ref MAILPORT
107 - Name: MAIL_USERNAME
108 Value: !Ref MAILUSERNAME
109 - Name: MAIL_PASSWORD
110 Value: !Ref MAILPASSWORD
111 - Name: MAIL_FROM_ADDRESS
112 Value: !Ref MAILFROMADDRESS
113 - Name: MAIL_FROM_NAME
114 Value: !Ref MAILFROMNAME
115 # - Name: ELASTICSEARCH_HOST
116 # Value: !Ref ELASTICSEARCHHOST
117 # - Name: ELASTICSEARCH_PORT
118 # Value: !Ref ELASTICSEARCHPORT
119 - Name: FILESYSTEM_DRIVER
120 Value: !Ref FILESYSTEMDRIVER
121 - Name: AWS_REGION
122 Value: !Sub ${AWS::Region}
123 - Name: AWS_BUCKET
124 Value: !Ref AWSBUCKET
125
126 CloudWatchLogsGroup:
127 Type: AWS::Logs::LogGroup
128 Properties:
129 LogGroupName: !Ref AWS::StackName
130 RetentionInDays: 365
131
132 TargetGroup:
133 Type: AWS::ElasticLoadBalancingV2::TargetGroup
134 Properties:
135 VpcId: !Ref VPC
136 Port: 80
137 Protocol: HTTP
138 Matcher:
139 HttpCode: 200-301
140 HealthCheckIntervalSeconds: 10
141 HealthCheckPath: /
142 HealthCheckProtocol: HTTP
143 HealthCheckTimeoutSeconds: 5
144 HealthyThresholdCount: 2
145
146 TargetGroupRedirectHTTPSToHTTP:
147 Type: AWS::ElasticLoadBalancingV2::TargetGroup
148 Properties:
149 VpcId: !Ref VPC
150 Port: 80
151 Protocol: HTTP
152 Matcher:
153 HttpCode: 200-301
154 HealthCheckIntervalSeconds: 10
155 HealthCheckPath: /
156 HealthCheckProtocol: HTTP
157 HealthCheckTimeoutSeconds: 5
158 HealthyThresholdCount: 2
159
160 ListenerRuleHTTP:
161 Type: AWS::ElasticLoadBalancingV2::ListenerRule
162 Properties:
163 ListenerArn: !Ref ListenerHTTP
164 Priority: 1
165 Conditions:
166 - Field: path-pattern
167 Values:
168 - !Ref Path
169 Actions:
170 - TargetGroupArn: !Ref TargetGroupRedirectHTTPSToHTTP
171 Type: forward
172
173 ListenerRuleHTTPS:
174 Type: AWS::ElasticLoadBalancingV2::ListenerRule
175 Properties:
176 ListenerArn: !Ref ListenerHTTPS
177 Priority: 1
178 Conditions:
179 - Field: path-pattern
180 Values:
181 - !Ref Path
182 Actions:
183 - TargetGroupArn: !Ref TargetGroup
184 Type: forward
185
186 # This IAM Role grants the service access to register/unregister with the
187 # Application Load Balancer (ALB)
188 ServiceRole:
189 Type: AWS::IAM::Role
190 Properties:
191 RoleName: !Sub ecs-service-${AWS::StackName}
192 Path: /
193 AssumeRolePolicyDocument: |
194 {
195 "Statement": [{
196 "Effect": "Allow",
197 "Principal": { "Service": [ "ecs.amazonaws.com" ]},
198 "Action": [ "sts:AssumeRole" ]
199 }]
200 }
201 Policies:
202 - PolicyName: !Sub ecs-service-${AWS::StackName}
203 PolicyDocument:
204 {
205 "Version": "2012-10-17",
206 "Statement": [{
207 "Effect": "Allow",
208 "Action": [
209 "ec2:AuthorizeSecurityGroupIngress",
210 "ec2:Describe*",
211 "elasticloadbalancing:DeregisterInstancesFromLoadBa\
212 lancer",
213 "elasticloadbalancing:Describe*",
214 "elasticloadbalancing:RegisterInstancesWithLoadBala\
215 ncer",
216 "elasticloadbalancing:DeregisterTargets",
217 "elasticloadbalancing:DescribeTargetGroups",
218 "elasticloadbalancing:DescribeTargetHealth",
219 "elasticloadbalancing:RegisterTargets"
220 ],
221 "Resource": "*"
222 }]
223 }

It will take a few seconds for our instances to be considered healthy by the ELB before it starts
directing traffic to them. Here is what we see then:

At least this is a Laravel page, though it displays the default HTTP 500 error message. By checking
the Laravel logs, which are streamed to CloudWatch, we see that we are missing the session table in the
DB. So how can we now connect to one of our instances in the private subnets, across the internet,
to run our database migrations?
6. Launch a bastion & run database migrations
A bastion (also called a jump box) is a temporary EC2 instance that we will place in a public subnet
of our VPC. It will enable us to SSH into it from outside the VPC and, from there, still access
our instances (including database instances) in the private subnets.
When creating the bastion, make sure to associate with it the SG allowing access to the database.

1 aws ec2 run-instances
2 --image-id ami-c1a6bda2
3 --key-name laravelaws # the SSH key pair we created earlier
4 --security-group-ids sg-xxxxxxxx # our previous SG allowing access to the DB
5 --subnet-id subnet-xxxxxxxx # one of our public subnets
6 --count 1
7 --instance-type t2.micro # the smallest instance type allowed
8 --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=bastion}]'

Launch one bastion, to be deleted once we’re done.

1 # Add your key to your SSH agent
2 ssh-add -K laravelaws.pem
3
4 # Verify that your private key is successfully loaded in your local SSH agent
5 ssh-add -L
6
7 # Use the -A option to enable forwarding of the authentication agent connection
8 ssh -A ec2-user@<bastion-public-IP-address>
9
10 # Once you are connected to the bastion, you can SSH into a private subnet insta\
11 nce
12 # without copying any SSH key on the bastion
13 ssh ec2-user@<instance-private-IP-address>

You’re now connected to an instance inside your VPC private subnets without copying keys around

1 # Use the Docker exec command to execute the Artisan commands inside the applica\
2 tion container
3 docker exec -it CONTAINER_ID php artisan session:table
4 docker exec -it CONTAINER_ID php artisan migrate --force

The bastion can also be a host for a SSH tunnel between our machine and our public subnet so we
can connect a local mysql/pgsql client to our remote database. Below is an example for PostgreSQL:

1 # create an SSH tunnel to RDS through your bastion:
2 ssh -L 54320:your_rds_database_endpoint_here.your_region_here.rds.amazonaws.com:\
3 5432
4 ec2-user@<bastion_public_ip>
5 -i ./laravelaws.pem
6
7 # Your remote database is now accessible from port 54320 on your local machine
8 # I strongly recommend to create first thing a read-only user in your database
9 psql -h localhost -p 54320 -U postgres -W db_name_here
10 > CREATE ROLE lionel LOGIN PASSWORD 'a_unique_password_here';
11 > GRANT CONNECT ON DATABASE db_name_here TO lionel;
12 > GRANT USAGE ON SCHEMA public TO lionel;
13 > GRANT SELECT ON ALL TABLES IN SCHEMA public TO lionel;
14 > GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO lionel;
15
16 # You can then use pg_dump, pg_restore, or pgsql command line tools to create/re\
17 store a DB dump
18 pg_dump -h localhost -U lionel -W -p 54320 db_name_here > dump_db_name_here_$(da\
19 te +"%m_%d_%Y").sql
20
21 # Import it into a local database using:
22 psql -U lionel -w db_name_here -f dump_db_name_here_11_23_2017.sql

Back to our database migrations that we just ran. Here’s how it looks now when connecting to the
load balancer:

Laravel served through our load balancer URL

Yay! Our application is now served through our load balancer and our EC2 and database instances
are running from the safety of a private subnet. The next step is to point our domain name to our
load balancer.
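Since the bastion is only temporary, remember to terminate it once your migrations and database work are done. The commands below are a sketch — the instance ID is a placeholder, which the describe-instances query retrieves by the Name tag we set earlier:

```
# Find the bastion instance ID by its Name tag, then terminate it
aws ec2 describe-instances
--filters "Name=tag:Name,Values=bastion"
--query "Reservations[].Instances[].InstanceId"

aws ec2 terminate-instances --instance-ids i-xxxxxxxxxxxxxxxxx
```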
7. Migrate DNS service to AWS Route53
If you have bought your domain name outside of AWS, you usually don't need to migrate either the
registration or the DNS service to your AWS account.
There is an edge case, though, if you want your root domain (also known as the APEX) to point to your
load balancer. This would need a CNAME record, which is not allowed for APEXs, but AWS Route53 offers
a special ALIAS record type that lets you do just that.
First we will migrate our DNS service to AWS:

1 # create a hosted zone for AWS to select NS servers for your domain
2 aws route53 create-hosted-zone
3 --name laravelaws.com
4 --caller-reference random_string_here
5
6 # wait for the hosted zone to be created
7
8 # retrieve NS records
9 aws route53 get-hosted-zone
10 --id /hostedzone/YOUR_HOSTED_ZONE_ID
11
12 # the NS addresses in the response are the one to upload to your current domain \
13 name registrar
14 {
15 "HostedZone": {
16 "Id": "/hostedzone/YOUR_HOSTED_ZONE_ID",
17 "Name": "laravelaws.com.",
18 "CallerReference": "RISWorkflow-RD:824653d6-3f9d-415a-a2e8-8d6fa63fb4c8",
19 "Config": {
20 "Comment": "HostedZone created by Route53 Registrar",
21 "PrivateZone": false
22 },
23 "ResourceRecordSetCount": 6
24 },
25 "DelegationSet": {
26 "NameServers": [
27 "ns-1308.awsdns-03.org",
28 "ns-265.awsdns-32.com",
29 "ns-583.awsdns-08.net",
30 "ns-1562.awsdns-03.co.uk"
31 ]
32 }
33 }
34
35 # retrieve the TTL for your NS records.
36 # This is the maximum time it will take for all clients to point to Route53
37 # after you uploaded them to your current domain name registrar
38 dig laravelaws.com

Once Route53 serves the DNS for our domain, we can create an ALIAS record pointing to our ELB URL.

1 # Add an ALIAS record to the ELB URL
2 aws route53 change-resource-record-sets
3 --hosted-zone-id /hostedzone/YOUR_HOSTED_ZONE_ID
4 --change-batch '{
5 "Changes":[
6 {
7 "Action":"CREATE",
8 "ResourceRecordSet":{
9 "Name":"laravelaws.com.",
10 "Type":"A",
11 "AliasTarget":{
12 "DNSName":"laravelaws2-1297867430.ap-southeast-2.elb.amazonaws.com",
13 "EvaluateTargetHealth":true,
14 "HostedZoneId":"ELB_CANONICAL_HOSTED_ZONE_ID"
15 }
16 }
17 }
18 ]
19 }'
20
21 # Track the propagation of the record
22 aws route53 get-change --id /change/YOUR_CHANGE_ID
23
24 # Test your record even before it is propagated
25 aws route53 test-dns-answer
26 --hosted-zone-id /hostedzone/YOUR_HOSTED_ZONE_ID
27 --record-name laravelaws.com
28 --record-type A

All done!

Domain name pointing to the load balancer, SSL certificate working

You are potentially done at this point. You can also improve your stack and deployment systems by
following the steps below.
8. Speed up your application by using CloudFront
Add a CloudFront distribution in your CloudFormation template and update your stack:

1 CloudFrontDistribution:
2 Type: AWS::CloudFront::Distribution
3 Properties:
4 DistributionConfig:
5 Origins:
6 - DomainName: !Ref S3BucketDNSName
7 Id: myS3Origin
8 S3OriginConfig:
9 OriginAccessIdentity: !Ref CloudFrontOAI
10 Enabled: 'true'
11 Aliases:
12 - !Ref CDNAlias
13 DefaultCacheBehavior:
14 Compress: 'true'
15 AllowedMethods:
16 - GET
17 - HEAD
18 - OPTIONS
19 TargetOriginId: myS3Origin
20 ForwardedValues:
21 QueryString: 'false'
22 Cookies:
23 Forward: none
24 ViewerProtocolPolicy: redirect-to-https
25 ViewerCertificate:
26 AcmCertificateArn: !Ref CertificateArn

You will need to create beforehand a CloudFront Origin Access Identity, which is a special
CloudFront user who will be able to query objects in your S3 bucket:

1 aws cloudfront create-cloud-front-origin-access-identity
2 --cloud-front-origin-access-identity-config CallerReference=random_string_her\
3 e,Comment=

Create an ALIAS record to point files.yourdomain.com to your CF distribution:

1 # Add an ALIAS record to the CloudFront distribution URL
2 aws route53 change-resource-record-sets
3 --hosted-zone-id /hostedzone/YOUR_HOSTED_ZONE_ID
4 --change-batch '{
5 "Changes":[
6 {
7 "Action":"CREATE",
8 "ResourceRecordSet":{
9 "Name":"files.laravelaws.com.",
10 "Type":"A",
11 "AliasTarget":{
12 "DNSName":"d165d2lrm1x3fz.cloudfront.net",
13 "EvaluateTargetHealth":false,
14 "HostedZoneId":"Z2FDTNDATAQYW2"
15 }
16 }
17 }
18 ]
19 }'

Add a sub_filter Nginx directive to rewrite all URLs to your S3 buckets as links to your CF
distribution instead.

1 location ~ \.php$ {
2 root /var/www/html/public;
3 fastcgi_cache cache_key;
4 fastcgi_cache_valid 200 204 1m;
5 fastcgi_ignore_headers Cache-Control;
6 fastcgi_no_cache $http_authorization $cookie_laravel_session;
7 fastcgi_cache_lock on;
8 fastcgi_cache_lock_timeout 10s;
9
10 add_header X-Proxy-Cache $upstream_cache_status;
11
12 sub_filter_types *;
13 sub_filter_once off;
14 sub_filter 'laravelaws-bucket-jjua0wgxhi7i.s3-ap-southeast-2.amazonaws.com' '\
15 files.laravelaws.com';
16
17 fastcgi_pass app:9000;
18 fastcgi_index index.php;
19 fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
20 fastcgi_read_timeout 900s;
21 include fastcgi_params;
22 }
9. (Optional) Publish your Laravel workers and crons
Well done! Our Laravel application is now highly available in the cloud. This step will show how we
can reuse the exact same Laravel Docker image to deploy our scheduled tasks and workers. They
will run in their own containers and be managed by another ECS service, so we can scale them
independently of the php-fpm containers. We also make sure we have only a single instance of cron
running, even if we have multiple front-end containers.
For the worker jobs, we create an SQS queue using CloudFormation, for the front-end to dispatch
jobs to our workers in the background:

1 # That's all it takes to create a queue in CloudFormation
2 # CloudFormation will assign a unique name to it, that we
3 # will pass to our Laravel containers
4 Queue:
5 Type: AWS::SQS::Queue
6
7 # Then in the web.yaml stack, we update our ECSRole to grant
8 # our ECS instances access to this one queue we just created
9 - PolicyName: sqs-read-write-access
10 PolicyDocument:
11 Statement:
12 - Effect: Allow
13 Action:
14 - sqs:*
15 Resource: !GetAtt Queue.Arn
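
On the Laravel side, the containers then need a matching queue connection. Below is a sketch of the relevant config/queue.php entry — the environment variable names are the stock Laravel ones, and the prefix/queue values are placeholders to be filled from the CloudFormation outputs. With the instance role above granting SQS access, key and secret can stay empty:

```php
// config/queue.php — 'connections' array (illustrative values)
'sqs' => [
    'driver' => 'sqs',
    'key' => env('SQS_KEY', ''),       // empty: the ECS instance role is used
    'secret' => env('SQS_SECRET', ''),
    // CloudFormation assigns the queue a unique name; pass its URL prefix
    // and name to the containers as environment variables
    'prefix' => env('SQS_PREFIX', 'https://sqs.ap-southeast-2.amazonaws.com/your-account-id'),
    'queue' => env('SQS_QUEUE', 'your-queue-name'),
    'region' => env('AWS_REGION', 'ap-southeast-2'),
],
```

Then set QUEUE_DRIVER=sqs in the task definition environment so both the web containers (dispatching jobs) and the workers (consuming them) use this connection.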

Finally we create two more tasks definitions in CloudFormation by starting from the same Laravel
Docker image, same environment variables, but just overriding the Docker CMD (i.e. the command
executed by Docker when the container starts):

1 # The worker containers simply execute the Laravel artisan queue:work
2 # command instead of php-fpm
3 TaskDefinitionWorker:
4 Type: AWS::ECS::TaskDefinition
5 Properties:
6 Family: laravel-workers
7 ContainerDefinitions:
8 - Name: app
9 Essential: true
10 Image: !Join [ ".", [ !Ref "AWS::AccountId", "dkr.ecr", !Ref "AWS::R\
11 egion", !Join [ ":", [ !Join [ "/", [ "amazonaws.com", !Ref ECR ] ], "laravel" ]\
12 ] ] ]
13 Command:
14 - "/bin/sh"
15 - "-c"
16 - "php artisan queue:work"
17 Memory: 128
18 LogConfiguration:
19 LogDriver: awslogs
20 Options:
21 awslogs-group: !Ref AWS::StackName
22 awslogs-region: !Ref AWS::Region
23 Environment:
24 - Name: APP_NAME
25 Value: Laravel
26 ......
27
28 # The cron container command is a bit more intricate
29 # since we need to load the container's environment
30 # variables in the same console session context than cron
31 # for Laravel to use them
32 TaskDefinitionCron:
33 Type: AWS::ECS::TaskDefinition
34 Properties:
35 Family: laravel-cron
36 ContainerDefinitions:
37 - Name: app
38 Essential: true
39 Image: !Join [ ".", [ !Ref "AWS::AccountId", "dkr.ecr", !Ref "AWS::R\
40 egion", !Join [ ":", [ !Join [ "/", [ "amazonaws.com", !Ref ECR ] ], "laravel" ]\
41 ] ] ]
42 EntryPoint:
43 - /bin/bash
44 - -c
45 Command:
46 - env /bin/bash -o posix -c 'export -p' > /etc/cron.d/project_env.\
47 sh && chmod +x /etc/cron.d/project_env.sh && crontab /etc/cron.d/artisan-schedul\
48 e-run && cron && tail -f /var/log/cron.log
49 Memory: 128
50 LogConfiguration:
51 LogDriver: awslogs
52 Options:
53 awslogs-group: !Ref AWS::StackName
54 awslogs-region: !Ref AWS::Region
55 Environment:
56 - Name: APP_NAME
57 Value: Laravel
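
These task definitions still need ECS services to keep them running. Here is a sketch of what they could look like — the resource and parameter names are illustrative, reusing the Cluster reference from the web stack, and no load balancer is attached since these containers serve no HTTP traffic. Hard-coding DesiredCount to 1 on the cron service is what guarantees a single running cron container:

```yaml
ServiceWorker:
  Type: AWS::ECS::Service
  Properties:
    Cluster: !Ref Cluster
    DesiredCount: !Ref WorkerDesiredCount   # scale workers independently
    TaskDefinition: !Ref TaskDefinitionWorker

ServiceCron:
  Type: AWS::ECS::Service
  Properties:
    Cluster: !Ref Cluster
    DesiredCount: 1                         # exactly one cron container
    TaskDefinition: !Ref TaskDefinitionCron
```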

The crontab file we use to call the artisan scheduler loads the container's environment variables in
the cron console session. If it didn't, Laravel would not see your container's env vars when called from
the cron.

1 * * * * * root . /etc/cron.d/project_env.sh ; /usr/local/bin/php /var/www/html/a\
2 rtisan schedule:run &> /var/log/cron.log
3 # An empty line is required at the end of this file for a valid cron file.
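
You can reproduce this environment-capture trick locally (the /tmp/project_env.sh path here is just for illustration): dump the exported variables to a sourceable file, then start a shell with an empty environment — the way cron does — and source the dump first.

```shell
# Export a variable, then capture the whole environment as a sourceable script
export APP_ENV=production
env /bin/bash -o posix -c 'export -p' > /tmp/project_env.sh
chmod +x /tmp/project_env.sh

# env -i simulates cron's empty environment; sourcing the dump restores it
RESULT="$(env -i /bin/bash -c '. /tmp/project_env.sh; echo "$APP_ENV"')"
echo "$RESULT"   # production
```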

That’s it! We now have in our cluster a mix of Laravel front-end containers (php-fpm with Nginx as
a reverse proxy), Laravel workers and one cron.
10. (Optional) Add an ElasticSearch domain
Most web applications would need a search engine like ElasticSearch. This is how you can create a
managed ES cluster with CloudFormation.

1 Elasticsearch:
2 Type: AWS::Elasticsearch::Domain
3 Properties:
4 DomainName: !Sub ${AWS::StackName}-es
5 ElasticsearchVersion: 5.5
6 ElasticsearchClusterConfig:
7 InstanceType: t2.small.elasticsearch
8 ZoneAwarenessEnabled: false
9 InstanceCount: 1
10 EBSOptions:
11 EBSEnabled: true
12 VolumeSize: 10
13 AccessPolicies:
14 Version: 2012-10-17
15 Statement:
16 - Effect: Allow
17 Principal:
18 AWS: "*"
19 Action:
20 - es:ESHttpDelete
21 - es:ESHttpGet
22 - es:ESHttpHead
23 - es:ESHttpPost
24 - es:ESHttpPut
25 Resource: !Sub arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/${\
26 AWS::StackName}-es/*
27 Condition:
28 IpAddress:
29 aws:SourceIp:
30 - !GetAtt VPC.Outputs.NatGateway1EIP
31 - !GetAtt VPC.Outputs.NatGateway2EIP

Note that we only allow ingress traffic from our two NAT gateway IPs, i.e. only from instances in our
private subnets.
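Once the domain is up, you can verify connectivity from one of the ECS instances (reached through the bastion); the endpoint below is a placeholder for the one shown in the AWS console:

```
# From an instance in a private subnet (traffic egresses via a NAT gateway)
curl -s https://search-yourstack-es-xxxxxxxxxxxx.ap-southeast-2.es.amazonaws.com/_cluster/health
```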
11. (Optional) High availability for the storage tier
As we discussed previously, we only have one database instance and no read replica in a separate
AZ. You can add a replica in CloudFormation with the template below:

1 DatabaseReplicaInstance:
2 Type: AWS::RDS::DBInstance
3 DependsOn: DatabasePrimaryInstance
4 Properties:
5 Engine: aurora
6 DBClusterIdentifier: !Ref DatabaseCluster
7 DBInstanceClass: !Ref DatabaseInstanceType
8 DBSubnetGroupName: !Ref DatabaseSubnetGroup

Use the DependsOn directive to prevent your replica from being instantiated first and promoted to
the primary instance by Aurora.
Note that Aurora PostgreSQL only supports instances starting at the db.r4.large size, whereas Aurora
MySQL does start at db.t2.small instances.
12. CloudWatch alarms
Below we set up CPU, memory and replication alarms for our database:

1 StackAlarmTopic:
2 Type: AWS::SNS::Topic
3 Properties:
4 DisplayName: Stack Alarm Topic
5
6 DatabasePrimaryCPUAlarm:
7 Type: AWS::CloudWatch::Alarm
8 Properties:
9 AlarmDescription: Primary database CPU utilization is over 80%.
10 Namespace: AWS/RDS
11 MetricName: CPUUtilization
12 Unit: Percent
13 Statistic: Average
14 Period: 300
15 EvaluationPeriods: 2
16 Threshold: 80
17 ComparisonOperator: GreaterThanOrEqualToThreshold
18 Dimensions:
19 - Name: DBInstanceIdentifier
20 Value:
21 Ref: DatabasePrimaryInstance
22 AlarmActions:
23 - Ref: StackAlarmTopic
24 InsufficientDataActions:
25 - Ref: StackAlarmTopic
26
27 DatabaseReplicaCPUAlarm:
28 Type: AWS::CloudWatch::Alarm
29 Properties:
30 AlarmDescription: Replica database CPU utilization is over 80%.
31 Namespace: AWS/RDS
32 MetricName: CPUUtilization
33 Unit: Percent
34 Statistic: Average
35 Period: 300
36 EvaluationPeriods: 2
37 Threshold: 80
38 ComparisonOperator: GreaterThanOrEqualToThreshold
39 Dimensions:
40 - Name: DBInstanceIdentifier
41 Value:
42 Ref: DatabaseReplicaInstance
43 AlarmActions:
44 - Ref: StackAlarmTopic
45 InsufficientDataActions:
46 - Ref: StackAlarmTopic
47
48 DatabasePrimaryMemoryAlarm:
49 Type: AWS::CloudWatch::Alarm
50 Properties:
51 AlarmDescription: Primary database freeable memory is under 700MB.
52 Namespace: AWS/RDS
53 MetricName: FreeableMemory
54 Unit: Bytes
55 Statistic: Average
56 Period: 300
57 EvaluationPeriods: 2
58 Threshold: 700000000
59 ComparisonOperator: LessThanOrEqualToThreshold
60 Dimensions:
61 - Name: DBInstanceIdentifier
62 Value:
63 Ref: DatabasePrimaryInstance
64 AlarmActions:
65 - Ref: StackAlarmTopic
66 InsufficientDataActions:
67 - Ref: StackAlarmTopic
68
69 DatabasePrimaryReplicationAlarm:
70 Type: AWS::CloudWatch::Alarm
71 Properties:
72 AlarmDescription: Database replication latency is over 200ms.
73 Namespace: AWS/RDS
74 MetricName: AuroraReplicaLag
75 Unit: Milliseconds
76 Statistic: Average
77 Period: 300
78 EvaluationPeriods: 2
79 Threshold: 200
80 ComparisonOperator: GreaterThanOrEqualToThreshold
81 Dimensions:
82 - Name: DBInstanceIdentifier
83 Value:
84 Ref: DatabaseReplicaInstance
85 AlarmActions:
86 - Ref: StackAlarmTopic
87
88 DatabaseReplicaReplicationAlarm:
89 Type: AWS::CloudWatch::Alarm
90 Properties:
91 AlarmDescription: Database replication latency is over 200ms.
92 Namespace: AWS/RDS
93 MetricName: AuroraReplicaLag
94 Unit: Milliseconds
95 Statistic: Average
96 Period: 300
97 EvaluationPeriods: 2
98 Threshold: 200
99 ComparisonOperator: GreaterThanOrEqualToThreshold
100 Dimensions:
101 - Name: DBInstanceIdentifier
102 Value:
103 Ref: DatabaseReplicaInstance
104 AlarmActions:
105 - Ref: StackAlarmTopic

And for the ECS instances:

1 AlarmTopic:
2 Type: AWS::SNS::Topic
3 Properties:
4 Subscription:
5 - Endpoint: [email protected]
6 Protocol: email
7
8 CPUAlarmHigh:
9 Type: AWS::CloudWatch::Alarm
10 Properties:
11 EvaluationPeriods: '1'
12 Statistic: Average
13 Threshold: '50'
14 AlarmDescription: Alarm if CPU too high or metric disappears indicating in\
15 stance is down
16 Period: '60'
17 # AlarmActions:
18 # - Ref: ScaleUpPolicy
19 AlarmActions:
20 - Ref: AlarmTopic
21 Namespace: AWS/EC2
22 Dimensions:
23 - Name: AutoScalingGroupName
24 Value: !Ref ECSAutoScalingGroup
25 ComparisonOperator: GreaterThanThreshold
26 MetricName: CPUUtilization
13. (Optional) Updating your stack manually: vertical / horizontal scaling
To create your CloudFormation stack the first time, use the below command:

1 # Create your CloudFormation stack from scratch using the create-stack command
2 aws cloudformation create-stack
3 --stack-name=laravel
4 --template-body=file://master.yaml
5 --capabilities CAPABILITY_NAMED_IAM
6 --parameters
7 ParameterKey=CloudFrontOAI,ParameterValue=origin-access-identity/cloudfro\
8 nt/YOUR_CF_OAI_HERE
9 ParameterKey=CertificateArnCF,ParameterValue=arn:aws:acm:us-east-1:your_c\
10 loudfront_certificate_arn_here
11 ParameterKey=CertificateArn,ParameterValue=arn:aws:acm:us-east-1:your_cer\
12 tificate_arn_here
13 ParameterKey=BaseUrl,ParameterValue=laravelaws.com
14 ParameterKey=DBMasterPwd,ParameterValue=your_master_db_pwd_here
15 ParameterKey=ECSInstanceType,ParameterValue=t2.micro
16 ParameterKey=ECSDesiredCount,ParameterValue=1

If you later want to modify the number or size of the instances in your cluster, update the parameters
ECSInstanceType and ECSDesiredCount in your command line and call the update-stack command
instead. CloudFormation will un-provision your previous instances and launch the new ones
with no further intervention from you.
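Such an update could look like the sketch below, assuming the same parameter names as the create-stack command above; the new instance type and count are example values, and unchanged parameters keep their previous values with UsePreviousValue:

```
aws cloudformation update-stack
--stack-name=laravel
--template-body=file://master.yaml
--capabilities CAPABILITY_NAMED_IAM
--parameters
ParameterKey=CloudFrontOAI,UsePreviousValue=true
ParameterKey=CertificateArnCF,UsePreviousValue=true
ParameterKey=CertificateArn,UsePreviousValue=true
ParameterKey=BaseUrl,UsePreviousValue=true
ParameterKey=DBMasterPwd,UsePreviousValue=true
ParameterKey=ECSInstanceType,ParameterValue=t2.medium
ParameterKey=ECSDesiredCount,ParameterValue=3
```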
14. (Optional) Auto scaling
Here we will use a combination of CloudWatch alarms, ScalableTargets and ScalingPolicies to trigger
scaling of both our ECS cluster size and the desired number of container instances in our ECS. Scaling
will happen both ways, so our infrastructure will typically be as light as possible at night and then
scale up for peak times!
Coming soon
15. (Optional) Set up Continuous Deployment with CodePipeline
This is where we’ll automate the building of our images from our GitHub repository. Once images
are built and tested (using built-in Laravel unit and integration tests), they will be deployed to
production without further clicking.
Containers will be replaced in sequence using a deployment pattern called Blue-Green deployment,
so we get absolutely no downtime.
Coming soon
16. (Optional) Set up SES and a mail server
If you’ve bought your domain name from Route53 instead of another domain name registrar, you
don’t have a mail service, i.e. you can’t receive emails on your new domain name. AWS has no other
solution for you than hosting a mail server on an EC2 instance and pointing your MX records
at it, or setting up a custom Lambda function to redirect your incoming emails to GMail, for
example.
Coming soon
17. Cost containment
If you are running this architecture at scale, there are a couple of ways to contain your AWS bill. First,
you could point your application to the Aurora read replicas for read-only queries, to offload your
primary instance and avoid scaling it vertically too much.
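Laravel supports this natively through split read/write hosts on a single connection. Below is a sketch of the config/database.php entry — the endpoints are placeholders for your Aurora cluster's reader and writer endpoints:

```php
// config/database.php — 'mysql' connection (illustrative endpoints)
'mysql' => [
    'driver' => 'mysql',
    'read' => [
        // Aurora reader endpoint: load-balances across the read replicas
        'host' => 'your-cluster.cluster-ro-xxxxxxxx.ap-southeast-2.rds.amazonaws.com',
    ],
    'write' => [
        // Aurora cluster (writer) endpoint: always points at the primary
        'host' => 'your-cluster.cluster-xxxxxxxx.ap-southeast-2.rds.amazonaws.com',
    ],
    'database' => env('DB_DATABASE'),
    'username' => env('DB_USERNAME'),
    'password' => env('DB_PASSWORD'),
],
```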
Then you could commit to EC2 Reserved instances and pay for some of your instances cost upfront.
Doing so can reduce your EC2 bill by as much as 75%. If your traffic fluctuates a lot throughout
the day, you could have reserved instances running continuously and scale up with On-Demand
instances during peak times.
Finally, a more sophisticated approach would be to scale using EC2 Spot instances but it is only
recommended for your background workload as Spot instances can be terminated by AWS at a
short notice.
18. (Optional) Deleting your stack and freeing resources
Once you’re done experimenting, you can wind down all the resources you created through
CloudFormation with a single command. Now you can be sure you did not forget an instance
or a NAT gateway somewhere silently adding to your AWS bill.

1 aws cloudformation delete-stack --stack-name=laravelaws

I hope this was helpful and got you to adopt infrastructure-as-code. If it was, please
send feedback or share!
