WebServer with EFS, S3 and CloudFront Using Terraform

Azeemushan Ali
9 min read · Sep 19, 2020

Hello and welcome. In this article we will get some hands-on experience with one of the leading technologies, Cloud Computing, automated using Terraform. Before moving on further with the task, let us first understand the agenda.

This is a hands-on task which will do the following things -

1. Create a security group which allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use the existing/provided key and the security group which we created in step 1.

4. Launch one volume using the EFS service, attach it to your VPC, then mount that volume into /var/www/html.

5. The developer has uploaded the code into a GitHub repo, which also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket and change the permission to public readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

What Is AWS?

AWS (Amazon Web Services) is a comprehensive, evolving cloud computing platform provided by Amazon that includes a mixture of infrastructure as a service (IaaS), platform as a service (PaaS) and packaged software as a service (SaaS) offerings. AWS services can offer an organization tools such as compute power, database storage and content delivery services. More than 100 services make up Amazon Web Services, including -

  • Compute
  • Storage
  • Databases
  • Data management
  • Hybrid cloud
  • Networking
  • Security
  • Big data management
  • Artificial intelligence (AI)

What Is Terraform?

Terraform (https://www.terraform.io/) is an open source project by HashiCorp written in Go (https://golang.org/). It lets you define cloud resources (servers, S3 buckets, Lambda functions, IAM policies, etc.) in code and check them into source control. You can then “execute” the configuration and create/modify/delete all the cloud resources with a single command.

If you have any resources in AWS/Google Cloud/Azure, etc., it is highly likely that Terraform can improve your workflow and make management of your cloud resources a breeze! I have used it with AWS, so most of this post will discuss Terraform in the context of AWS, but it works just as well with Google Cloud, Azure, Alibaba Cloud, etc.

Using Terraform:

  • terraform init : Initializes the working directory, creating a .terraform directory and downloading the provider plugins
  • terraform plan : Outputs how Terraform interprets the configuration and what resources it will create/modify/delete. It is a dry run, which is very important because you want to know exactly what changes it will make to your cloud resources. Surprises are bad!
  • terraform apply : Reads the configuration and makes all the changes in the cloud. This step writes a .tfstate file that contains identifiers of the cloud resources. This generated file is very important and you should never edit it manually.

A best practice is to set up a Terraform role in IAM on AWS, use that role to manage which resources Terraform can access, and then execute Terraform on a machine with that role.

Steps toward our work -

Before proceeding further with Terraform, use your command prompt to configure the IAM credentials as a profile in your AWS CLI.

  1. Plugins Initialization :- Now create a file with the extension .tf in a separate folder and run terraform init to initialize the Terraform environment and download and install the Terraform plugins.
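
That .tf file can begin with a provider block. The snippet below is a minimal sketch: the profile name myprofile is an assumption (use whichever profile you configured with the AWS CLI), and ap-south-1 is the region used throughout this article.

provider "aws" {
  # Region and named CLI profile used for all the resources in this file
  region  = "ap-south-1"
  profile = "myprofile"
}

After saving the file, terraform init downloads the AWS provider plugin into the .terraform directory.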

2. Creating Key-Pairs :- This is the snippet of code which will create a key pair and save the private key in the current working directory, to be used later when launching the instance.

resource "tls_private_key" "this" {
algorithm = "RSA"
}


resource "local_file" "private_key" {
content = tls_private_key.this.private_key_pem
filename = "mykey.pem"
}


resource "aws_key_pair" "mykey" {
key_name = "mykey_new"
public_key = tls_private_key.this.public_key_openssh
}

It works with two different keys, i.e. a public key and a private key, generated using the tls_private_key resource.

Firstly, tls_private_key generates a private key; after that, local_file saves that key to disk; and then aws_key_pair creates a new key pair in your AWS account.

resource "aws_vpc" "prod_vpc" {
cidr_block = "192.168.0.0/16"
enable_dns_support = "true"
enable_dns_hostnames = "true"
instance_tenancy = "default"



tags = {
Name = "myfirstVPC"
}
}
resource "aws_subnet" "mysubnet" {
vpc_id = "${aws_vpc.prod_vpc.id}"
cidr_block = "192.168.1.0/24"
map_public_ip_on_launch = "true"
availability_zone = "ap-south-1b"



tags = {
Name = "myfirstsubnet"

}
}


resource "aws_internet_gateway" "igw" {
vpc_id = "${aws_vpc.prod_vpc.id}"



tags = {
Name = "mygw"
}

}


resource "aws_route_table" "mypublicRT" {
vpc_id = "${aws_vpc.prod_vpc.id}"



route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.igw.id}"
}
tags = {
Name = "myRT1"
}

}


resource "aws_route_table_association" "public_association" {
subnet_id = aws_subnet.mysubnet.id
route_table_id = aws_route_table.mypublicRT.id
}

Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you’ve defined. This virtual network closely resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

You can launch AWS resources into a specified subnet. A user cannot launch an instance directly into the data center; a subnet is required, and while launching an instance the VPC's DHCP service internally assigns it an IP address.

A public subnet is used for resources that must be reachable from the internet.

A network gateway joins two networks so that the devices on one network can communicate with the devices on the other. A route table contains a set of rules, called routes, that are used to determine where network traffic from your VPC is directed. You can explicitly associate a subnet with a particular route table; otherwise, the subnet is implicitly associated with the main route table.

resource "aws_security_group" "allow_traffic" {
name = "allow_nfs"
description = "NFS "
vpc_id = "${aws_vpc.prod_vpc.id}"
ingress {
description = "HTTP from VPC"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "NFS"
from_port = 2049
to_port = 2049
protocol = "tcp"
cidr_blocks = [ "0.0.0.0/0" ]
}
ingress {
description = "SSH from VPC"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "myfirewall"
}
}

Now we have created a security group in which we allow HTTP, SSH and NFS, which use ports 80, 22 and 2049 respectively, inside the VPC we created earlier (you could also use the default VPC).

By default, AWS creates an ALLOW ALL egress rule when creating a new security group inside a VPC. Terraform, however, removes that default rule and requires you to re-create it if you want it, which is why the egress block is declared explicitly above.

resource "aws_efs_file_system" "myefs" {
creation_token = "EFS"
tags = {
Name = "MyEFS"
}
}
resource "aws_efs_mount_target" "mytarget" {
file_system_id = aws_efs_file_system.myefs.id
subnet_id = aws_subnet.mysubnet.id
security_groups = [aws_security_group.allow_traffic.id]
}

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources.

Amazon EFS is a regional service storing data within and across multiple Availability Zones (AZs) for high availability and durability. Amazon EC2 instances can access your file system across AZs, regions, and VPCs, while on-premises servers can access using AWS Direct Connect or AWS VPN.

Here we create the EFS file system and then its mount target, for which we provide the file system ID, the subnet ID and the security group ID.
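
As a quick sanity check, an optional output like the sketch below (simply reusing the resource names above) prints the DNS name of the file system, which is what the instance mounts in the next step.

output "efs_dns_name" {
  # Regional DNS name of the EFS file system, e.g. <file-system-id>.efs.ap-south-1.amazonaws.com
  value = aws_efs_file_system.myefs.dns_name
}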

resource "aws_instance" "myefsOS" {
depends_on = [ aws_efs_mount_target.mytarget ]
ami = "ami-0ebc1ac48dfd14136"
instance_type = "t2.micro"
key_name = aws_key_pair.mykey.key_name
subnet_id = aws_subnet.mysubnet.id
vpc_security_group_ids = [aws_security_group.allow_traffic.id]
user_data = <<-EOF
#! /bin/bash

sudo yum install httpd -y
sudo systemctl start httpd
sudo systemctl enable httpd
sudo rm -rf /var/www/html/*
sudo yum install -y amazon-efs-utils
sudo apt-get -y install amazon-efs-utils
sudo yum install -y nfs-utils
sudo apt-get -y install nfs-common
sudo file_system_id_1="${aws_efs_file_system.myefs.id}
sudo efs_mount_point_1="/var/www/html"
sudo mkdir -p "$efs_mount_point_1"
sudo test -f "/sbin/mount.efs" && echo "$file_system_id_1:/ $efs_mount_point_1 efs tls,_netdev" >> /etc/fstab || echo "$file_system_id_1.efs.ap-south-1.amazonaws.com:/$efs_mount_point_1 nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,_netdev 0 0" >> /etc/fstab
sudo test -f "/sbin/mount.efs" && echo -e "\n[client-info]\nsource=liw" >> /etc/amazon/efs/efs-utils.conf
sudo mount -a -t efs,nfs4 defaults
cd /var/www/html
sudo yum insatll git -y
sudo mkfs.ext4 /dev/xvdf1
sudo rm -rf /var/www/html/*
sudo yum install git -y
sudo git clone https://github.com/azeemushanali/Hybrid-Task-2.git /var/www/html

EOF
tags = {
Name = "myOS"
}
}
}

Now we are creating the EC2 instance; here we have used the key pair and the security group that we created earlier.

When you launch an instance in Amazon EC2, you have the option of passing user data to the instance, which can be used to perform common automated configuration tasks and even run scripts after the instance starts. Here I have installed the httpd server, along with amazon-efs-utils and nfs-utils for mounting the EFS volume.

resource "aws_s3_bucket" "mybucket" {
bucket = "azeem2210"
acl = "public-read"
force_destroy = true
policy = <<EOF
{
"Id": "MakePublic",
"Version": "2012-10-17",
"Statement": [
{
"Action": "*",
"Effect": "Allow",
"Resource": "arn:aws:s3:::azeem2210/*",
"Principal": "*"
}
]
}
EOF
provisioner "local-exec" {
command = "git clone https://github.com/azeemushanali/Hybrid-Task-2.git AWS_task2"

}
provisioner "local-exec" {
when = destroy
command = "echo Y | rmdir /s AWS_task2"
}
tags = {
Name = "azeem2210"
}
}
resource "aws_s3_bucket_object" "Upload_image" {
depends_on = [
aws_s3_bucket.mybucket
]
bucket = aws_s3_bucket.mybucket.bucket
key = "mypic.jpeg"
source = "AWS_task2/myimage.jpeg"
acl = "public-read"
}

After that we create the S3 bucket, set its access control to public read and upload the image into the bucket. Amazon S3 provides APIs for creating and managing buckets.

Public access is granted to buckets and objects through access control lists (ACLs), bucket policies, or both. To help you manage public access to Amazon S3 resources, Amazon S3 provides block public access settings.
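
One caveat: if S3 Block Public Access is turned on for the account or the bucket, the public-read ACL and bucket policy above will be rejected. The snippet below is a minimal sketch of how those settings could be relaxed for this demo bucket using the aws_s3_bucket_public_access_block resource; treat it as an illustration rather than a general recommendation, since public buckets should be the exception.

resource "aws_s3_bucket_public_access_block" "mybucket_public_access" {
  bucket = aws_s3_bucket.mybucket.id

  # Allow the public-read ACL and the public bucket policy used above
  block_public_acls       = false
  block_public_policy     = false
  ignore_public_acls      = false
  restrict_public_buckets = false
}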

resource "aws_s3_bucket_object" "Upload_image" {
depends_on = [
aws_s3_bucket.mybucket
]
bucket = aws_s3_bucket.mybucket.bucket
key = "mypic.jpeg"
source = "AWS_task2/myimage.jpeg"
acl = "public-read"
}
locals {
  s3_origin_id = "S3-${aws_s3_bucket.mybucket.bucket}"
  image_url    = "${aws_cloudfront_distribution.s3_distribution.domain_name}/${aws_s3_bucket_object.Upload_image.key}"
}

resource "aws_cloudfront_distribution" "s3_distribution" {
  depends_on = [aws_instance.myefsOS]

  enabled = true

  origin {
    domain_name = aws_s3_bucket.mybucket.bucket_domain_name
    origin_id   = local.s3_origin_id
  }

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }

  # SSH into the web server and append an <img> tag pointing at the CloudFront URL
  connection {
    type        = "ssh"
    user        = "ec2-user"
    host        = aws_instance.myefsOS.public_ip
    port        = 22
    private_key = tls_private_key.this.private_key_pem
  }

  provisioner "remote-exec" {
    inline = [
      "sudo su << EOF",
      "echo \"<img src='http://${self.domain_name}/${aws_s3_bucket_object.Upload_image.key}'>\" >> /var/www/html/test.html",
      "EOF"
    ]
  }
}

output "myoutput" {
  value = "http://${aws_cloudfront_distribution.s3_distribution.domain_name}/${aws_s3_bucket_object.Upload_image.key}"
}

Now we have created the bucket and the CloudFront distribution. We can upload data such as videos and images to the S3 bucket, and that content can then be accessed through CloudFront, i.e. we need to use the URL given by CloudFront to access it.

CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you’re serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.

Using the output, we can verify the CDN URL for the image that has been added to the webpage.

# To validate the configuration files in the directory
terraform validate
# To initialize the plugins
terraform init
# To create the infrastructure
terraform apply
# To destroy the infrastructure
terraform destroy

OUTPUT :

Test page of the HTTP server is working

Displays my image, which shows the CDN is working fine

All the code discussed earlier can be found on my GitHub repo, and you can connect with me on LinkedIn!

Thank you everyone for reading! Bella Ciao.
