EKS-First-Project

Azeemushan Ali
12 min read · Aug 13, 2023

Welcome to my Medium blog, where technology and innovation collide! Join me on a journey as we delve into the ever-evolving world of tech, exploring ideas, insights, and trends that shape our digital landscape.

Kubernetes and K8s in AWS —

Kubernetes, often referred to as K8s, is the crown jewel of container orchestration. Imagine a conductor leading an orchestra of containers, coordinating their movements seamlessly. Kubernetes does just that for containers, simplifying the management and deployment of complex applications across clusters of machines. It offers automatic scaling, self-healing, and easy updates, enabling developers to focus on their code rather than infrastructure intricacies. With Kubernetes, the orchestration symphony becomes harmonious, enhancing agility and efficiency in modern software development.

Amazon EKS: Managed Kubernetes Service of AWS

Amazon Elastic Kubernetes Service (Amazon EKS) takes the power of Kubernetes and marries it with the ease of Amazon Web Services (AWS). EKS eliminates the challenges of setting up and maintaining Kubernetes clusters by providing a fully managed environment. From patching and scaling to security, EKS handles the heavy lifting, allowing you to concentrate on crafting exceptional applications. With EKS, you inherit the benefits of Kubernetes while enjoying the seamless integration, scalability, and reliability that AWS is known for. It’s like having a backstage crew that ensures your orchestration performance shines brilliantly.

Setting up prerequisites for EKS —

Welcome to the hands-on journey of setting up your own Amazon Elastic Kubernetes Service (EKS) environment. Whether you’re a seasoned developer or just dipping your toes into container orchestration, this guide will walk you through the essential steps to get your EKS cluster up and running.

Step 1: AWS Account Setup
Before diving in, ensure you have an AWS account. If not, sign up and configure necessary permissions to create and manage EKS clusters.

Step 2: Install and Configure AWS CLI
Install the AWS Command Line Interface (CLI) to interact with your AWS services seamlessly. Configure it with your AWS credentials for authentication.

Go to “IAM” → “Users” → “Add user” → enter a username and access type → set permissions → retrieve the access keys (Access Key ID and Secret Access Key).

Use the aws configure command to set up the AWS CLI on your host OS.
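A typical configure session looks like the following (the key values shown are AWS's documented placeholders, not real credentials):

```shell
aws configure
# AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
# AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# Default region name [None]: us-west-2
# Default output format [None]: json

# Confirm the CLI can authenticate with the stored credentials:
aws sts get-caller-identity
```

If get-caller-identity returns your account ID and user ARN, the CLI is ready for the EKS steps below.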

Step 3: Install kubectl & eksctl
kubectl — It is your command-line interface for managing Kubernetes clusters. Install it to communicate with your EKS cluster effectively. Follow this link to set up the kubectl command.

eksctl — A command-line tool for working with EKS clusters that automates many individual tasks. For more information, see Installing or updating. Follow this link to set up the eksctl command.
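As a reference, here is one way to install both on a Linux x86_64 host (commands current as of this writing; check the official installation pages for other platforms or newer instructions):

```shell
# kubectl: download the latest stable release binary and put it on PATH
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl && sudo mv kubectl /usr/local/bin/
kubectl version --client

# eksctl: download and unpack the latest release tarball
curl --silent --location "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin/
eksctl version
```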

Creating EKS-Cluster

Leverage the simplicity of eksctl command to swiftly establish Amazon EKS clusters. This command streamlines the process by automatically provisioning resources, defining networking components, and ensuring compatibility with Kubernetes, allowing you to focus on deploying and managing your containerised applications.

eksctl create cluster --name azeem-demo --region us-west-2 --fargate

This command, with the specified options --name azeem-demo --region us-west-2 --fargate, initiates the creation of an Amazon EKS cluster named “azeem-demo” in the “us-west-2” region. The --fargate flag signifies that the cluster will utilize AWS Fargate, a serverless compute engine for containers, eliminating the need to manage underlying infrastructure. This streamlined process simplifies cluster setup and management, allowing developers to focus on deploying containerized applications without the overhead of infrastructure maintenance.

Configure kubectl to Access the Cluster -
Once your cluster is up, configure kubectl to communicate with it. Fetch the necessary credentials and set up the kubeconfig file.

aws eks update-kubeconfig --name azeem-demo --region us-west-2

The command aws eks update-kubeconfig is used to update the local Kubernetes configuration file (usually located at ~/.kube/config) with the necessary configuration to access an Amazon EKS cluster. In this specific case, the command aws eks update-kubeconfig --name azeem-demo --region us-west-2 updates the Kubernetes configuration for the EKS cluster named “azeem-demo” located in the “us-west-2” AWS region.

After executing this command, the local Kubernetes configuration file will be updated with the appropriate context and authentication details, allowing you to interact with the specified EKS cluster using the kubectl command-line tool without the need to manually configure authentication. This streamlines the process of accessing and managing your EKS clusters from your local machine.
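For illustration, here is a trimmed sketch of the kind of entry that gets written into ~/.kube/config: a context plus an exec-based user that mints a fresh token on every kubectl call (the account ID 111122223333 and exact layout are illustrative, not copied from a real cluster):

```shell
# Illustrative only: a simplified version of the kubeconfig entry that
# `aws eks update-kubeconfig` produces for the azeem-demo cluster.
cat > /tmp/kubeconfig-sketch <<'EOF'
apiVersion: v1
kind: Config
current-context: arn:aws:eks:us-west-2:111122223333:cluster/azeem-demo
contexts:
- name: arn:aws:eks:us-west-2:111122223333:cluster/azeem-demo
  context:
    cluster: arn:aws:eks:us-west-2:111122223333:cluster/azeem-demo
    user: arn:aws:eks:us-west-2:111122223333:cluster/azeem-demo
users:
- name: arn:aws:eks:us-west-2:111122223333:cluster/azeem-demo
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args: ["eks", "get-token", "--cluster-name", "azeem-demo"]
EOF

# kubectl authenticates by running the `exec` command above to fetch a token:
grep -A4 'exec:' /tmp/kubeconfig-sketch
```

The exec block is why no manual token handling is needed: kubectl shells out to the AWS CLI, which signs a request with the credentials from aws configure.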

Fargate Profile —

A Fargate profile, within the context of Amazon EKS (Elastic Kubernetes Service), is a configuration that defines how pods should be scheduled and executed on AWS Fargate. AWS Fargate is a serverless compute engine designed to run containers without the need to manage the underlying infrastructure. Fargate profiles enable developers to specify which pods within a Kubernetes namespace should utilize Fargate, while allowing others to run on traditional EC2 instances.

When deploying applications on EKS, Fargate profiles offer a simplified way to optimize resource utilization and manage containerized workloads. By using Fargate, you can eliminate the burden of provisioning, scaling, and maintaining the compute infrastructure, focusing solely on deploying and managing your applications. This abstraction helps streamline the development process and enables you to focus on the core functionalities of your applications without getting caught up in the intricacies of infrastructure management.

eksctl create fargateprofile --cluster azeem-demo-1 --region us-west-2 --name sample-app --namespace game-2048

Now, let’s look at the provided command in this context. The command eksctl create fargateprofile is used to create a Fargate profile for an Amazon EKS cluster. A Fargate profile defines how pods should run on AWS Fargate, a serverless compute engine for containers, within the specified EKS cluster.

In the provided command:
* --cluster azeem-demo-1 specifies the name of the EKS cluster with which the Fargate profile is being associated.
* --region us-west-2 indicates the AWS region in which the cluster is located.
* --name sample-app assigns the name “sample-app” to the Fargate profile being created.
* --namespace game-2048 specifies the Kubernetes namespace (“game-2048”) to which the Fargate profile applies. This means that pods in the specified namespace will be scheduled on Fargate according to this profile.

Essentially, this command streamlines the process of configuring Fargate to manage the execution of pods within the specified namespace of the given EKS cluster. It automates the provisioning and management of Fargate resources, allowing developers to focus on deploying and managing containerized applications without concerning themselves with underlying infrastructure.

Deploying 2048 Game

Deploying the popular 2048 game on Amazon EKS involves creating Kubernetes manifests that define the deployment, service, and ingress resources. Using tools like kubectl, apply these manifests to your EKS cluster. This process orchestrates the deployment of the 2048 game as pods, makes it accessible through a service, and potentially exposes it to the internet via an ingress controller for an immersive gaming experience.

kubectl apply -f https://raw.githubusercontent.com/azeemushanali/DevOps_Projects/main/EKS-demo/game-2048.yml

This Kubernetes manifest defines the deployment of the 2048 game on Amazon EKS:

- The first section creates a Kubernetes namespace named “game-2048”.
- The Deployment resource specifies that the “deployment-2048” should run 5 replicas of the containerized game application using the public image public.ecr.aws/l6m2t8p7/docker-2048:latest.
- The Service resource defines “service-2048” to expose the game application within the namespace on port 80 using NodePort.
- The Ingress resource named “ingress-2048” configures an AWS ALB Ingress Controller for external access to the game. It routes traffic to the service using annotations for internet-facing load balancing.

In summary, this manifest orchestrates the deployment of the 2048 game within a specified namespace, makes it accessible through a Kubernetes service, and uses an Ingress controller to expose the game to the internet via an Application Load Balancer (ALB). You can get this manifest file from this link.

Check for the deployments —

The command kubectl get ns retrieves and lists all the namespaces in your Kubernetes cluster, providing a quick overview of the logical isolation environments for organizing and managing your applications and resources.

The command kubectl get pods -n game-2048 displays the status of all pods within the “game-2048” namespace, offering a concise overview of the deployment’s running instances and their current conditions.

The command kubectl get svc -n game-2048 provides a summary of the services within the “game-2048” namespace, offering insights into the exposed network endpoints for accessing the deployed application.

The command kubectl get ingress -n game-2048 retrieves and displays the ingress resources within the “game-2048” namespace, showcasing the routing rules and configuration for accessing the application from external sources.
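The four checks above can be run back-to-back as a quick health sweep once kubectl is pointed at the cluster:

```shell
kubectl get ns                      # list namespaces; game-2048 should appear
kubectl get pods -n game-2048       # pod status for the game replicas
kubectl get svc -n game-2048        # the NodePort service exposing port 80
kubectl get ingress -n game-2048    # ingress rules and, later, the ALB address
```

At this point the ingress ADDRESS column will be empty; it is populated only after the ALB Ingress Controller (installed below) reconciles the Ingress resource.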

Exposing the application to the internet —

Ingress: In Kubernetes, an Ingress is an API object that manages external access to services within a cluster. It acts as a traffic manager, directing incoming requests to specific services based on hostnames, paths, or other criteria. Ingress simplifies the exposure of services to the outside world, offering an efficient way to manage routing and load balancing without altering application code. It’s a crucial component for enabling external access to applications hosted on Kubernetes.

ALB Ingress Controller: The AWS ALB (Application Load Balancer) Ingress Controller is a Kubernetes-native solution designed to work specifically with AWS Application Load Balancers. It automates the creation and management of ALBs, leveraging their advanced features for routing and traffic distribution. The ALB Ingress Controller streamlines the process of exposing applications to the internet by automatically configuring ALBs based on Ingress resources. This integration enhances the scalability and reliability of applications hosted on Amazon EKS, providing an optimal solution for load balancing and external access.

Associating an OIDC provider with an EKS cluster is necessary for secure interaction between Kubernetes and AWS IAM. This trust relationship enables Kubernetes service accounts to assume IAM roles, which are crucial for managing Ingress resources effectively. By linking Kubernetes RBAC with AWS IAM through OIDC, only authenticated and authorized entities can manage Ingress-related AWS resources, enhancing security and control. Ingress controllers often require permissions to manage AWS resources, like ALBs (Application Load Balancers), on behalf of your Kubernetes services. Associating an OIDC provider allows Kubernetes service accounts to assume IAM roles, enabling secure and controlled access to AWS resources.

eksctl utils associate-iam-oidc-provider --cluster $cluster_name --approve

Here’s a detailed breakdown of the command:

  • eksctl utils associate-iam-oidc-provider: This is the main command that instructs eksctl to associate the IAM OIDC provider.
  • --cluster $cluster_name: Specifies the name of the Amazon EKS cluster with which you want to associate the IAM OIDC provider. You need to replace $cluster_name with the actual name of your EKS cluster.
  • --approve: This flag indicates that you approve the association. It's important because associating an IAM OIDC provider creates a new OIDC identity provider in your AWS account, and this step requires your confirmation.

## Check whether an IAM OIDC provider is already configured (optional)

export cluster_name=demo-cluster
oidc_id=$(aws eks describe-cluster --name $cluster_name --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
aws iam list-open-id-connect-providers | grep $oidc_id | cut -d "/" -f4
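To see what the cut -d '/' -f 5 step extracts, you can run it against a sample issuer URL offline (the trailing ID below is made up for illustration):

```shell
# The issuer URL returned by `aws eks describe-cluster` has this shape
# (the trailing ID here is a fabricated example, not a real provider):
issuer="https://oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"

# Splitting on '/', field 5 is the OIDC provider ID at the end of the URL:
oidc_id=$(echo "$issuer" | cut -d '/' -f 5)
echo "$oidc_id"   # EXAMPLED539D4633E53DE1B71EXAMPLE
```

That extracted ID is what the grep against aws iam list-open-id-connect-providers looks for; a match means the provider is already associated and the associate-iam-oidc-provider step can be skipped.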

Service Account in EKS — A service account functions as an identity for pods running within the cluster. It enables pods to authenticate with the EKS API server and access AWS resources using AWS IAM roles. By associating IAM roles with service accounts, EKS ensures that pods have the appropriate permissions to interact with AWS services securely. Service accounts play a crucial role in implementing IAM roles for service accounts (IRSA), which facilitates fine-tuned access controls and enhances the security of applications deployed on EKS clusters.
This can be best understood from this link — AWS Documentation.

curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy.json
aws iam create-policy \
--policy-name AWSLoadBalancerControllerIAMPolicy \
--policy-document file://iam_policy.json
eksctl create iamserviceaccount \
--cluster=azeem-demo-1 \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--role-name AmazonEKSLoadBalancerControllerRole \
--attach-policy-arn=arn:aws:iam::xyz:policy/AWSLoadBalancerControllerIAMPolicy \
--approve

The eksctl create iamserviceaccount command creates an IAM (Identity and Access Management) service account within an Amazon EKS cluster to manage AWS resources. Here’s a breakdown of the provided command:

  • --cluster=azeem-demo-1: Specifies the EKS cluster (“azeem-demo-1”) in which to create the service account.
  • --namespace=kube-system: Identifies the Kubernetes namespace (“kube-system”) in which the service account will reside.
  • --name=aws-load-balancer-controller: Sets the name of the created service account as “aws-load-balancer-controller”.
  • --role-name AmazonEKSLoadBalancerControllerRole: Assigns the IAM role name to be used by the service account.
  • --attach-policy-arn=arn:aws:iam::xyz:policy/AWSLoadBalancerControllerIAMPolicy: Specifies the IAM policy ARN to attach to the role associated with the service account (replace xyz with your AWS account ID).
  • --approve: Approves the creation of the IAM service account, confirming the changes.

In summary, this command facilitates the creation of an IAM service account for the AWS Load Balancer Controller, enabling it to manage load balancer resources in the specified EKS cluster and namespace, while also attaching a specific IAM policy to grant the required permissions.

We have created the Ingress and the service account, but who acts on the Ingress we created? The answer is the Ingress controller. Creating an Ingress controller for ALB (Application Load Balancer) in Kubernetes is essential for managing external traffic and routing it to services within the cluster. The Ingress controller acts as a bridge, configuring ALBs based on the rules in Ingress resources, which simplifies load balancing, SSL termination, and path-based routing. It is crucial for efficiently exposing applications to the internet, streamlining access management, and enhancing scalability while providing a seamless and controlled way to handle incoming requests.

In our case we need to install the AWS ALB Ingress Controller; here’s how we can do this —

helm repo add eks https://aws.github.io/eks-charts
helm repo update eks

helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system \
--set clusterName=azeem-demo-1 \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller \
--set region=us-west-2 \
--set vpcId=vpc-someID

The helm install command installs the AWS Load Balancer Controller in the “kube-system” namespace using Helm charts. Here’s what each parameter does:

  • eks/aws-load-balancer-controller: Specifies the Helm chart repository and chart name for the AWS Load Balancer Controller.
  • -n kube-system: Sets the namespace where the controller will be installed.
  • --set clusterName=azeem-demo-1: Defines the name of the EKS cluster.
  • --set serviceAccount.create=false: Specifies not to create a new service account.
  • --set serviceAccount.name=aws-load-balancer-controller: Associates the existing service account “aws-load-balancer-controller” with the controller.
  • --set region=us-west-2: Specifies the AWS region.
  • --set vpcId=vpc-someID: Sets the ID of the VPC where the Load Balancer Controller will operate (replace with your cluster’s VPC ID).

In summary, this Helm command deploys the AWS Load Balancer Controller to manage ALBs in the EKS cluster, utilizing an existing service account, defining the AWS region, and connecting to the specified VPC.

kubectl get deployment -n kube-system aws-load-balancer-controller
kubectl get pods -n kube-system

Check that the pods for the ingress controller are running. Also check the state of the ALB; once it is available, hit the DNS name of the ALB, and on port 80 our game-2048 is running.
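One way to fetch the ALB's DNS name without opening the console is to read it off the Ingress status (the ingress name comes from the game-2048 manifest described earlier):

```shell
# Hostname the ALB Ingress Controller attached to our Ingress resource:
ALB=$(kubectl get ingress ingress-2048 -n game-2048 \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "$ALB"

# Once the ALB reports "active", the game should answer on port 80:
curl -I "http://$ALB"
```

A 200 response from curl means the full path — Fargate pods, NodePort service, Ingress, ALB — is wired up end to end.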

Links to refer → Github Repo, AWS Controller Documentation

Summary

In this article, we explored how Amazon EKS simplifies cluster management and how to deploy a basic app on it: we created an EKS cluster with ease using eksctl, optimized resource utilization with Fargate profiles, deployed the 2048 game, and managed the deployment efficiently, with the ALB Ingress Controller handling load balancing and associated IAM roles ensuring secure interaction. With insights on IAM service accounts and ALB Ingress Controller installation, this guide empowers you to navigate the dynamic landscape of EKS and Kubernetes on AWS.

I hope you will like my efforts to make the EKS journey easy. If you find this article helpful, show your love with a clap. Keep Learning!!

Connect with me on -

https://www.linkedin.com/in/azeemushan-ali/
