Setup Amazon EKS Cluster

This article shows you how to create and manage an Amazon EKS cluster. EKS (Elastic Kubernetes Service) is Amazon's managed Kubernetes service, used to orchestrate container-based infrastructure. The steps below explain the easiest way to create an EKS cluster and connect to it. Please note that you will incur charges for the AWS resources used, so pick the lowest-priced resources for this experiment.

User Setup for EKS Cluster

  • Create a separate IAM user with the following privileges. Set up the user with these IAM policies:
    • AmazonEC2FullAccess
    • IAMFullAccess
    • AmazonVPCFullAccess
    • CloudFormation-Admin-policy
    • EKS-Admin-policy
  • Create an IAM role for EKS with the following permissions. Just go to IAM -> Roles and create a role for the EKS service (a CLI sketch is given after this list):
    • AmazonEKSClusterPolicy
    • AmazonEKSServicePolicy
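If you prefer the AWS CLI, the same role can be created from the command line. This is a minimal sketch, assuming the role name EKSManageRole (the name used later in this article) and the standard EKS trust policy; adjust the names for your environment.

    # Create the role with a trust policy that lets the EKS service assume it
    aws iam create-role --role-name EKSManageRole \
      --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"eks.amazonaws.com"},"Action":"sts:AssumeRole"}]}'

    # Attach the two managed policies listed above
    aws iam attach-role-policy --role-name EKSManageRole \
      --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
    aws iam attach-role-policy --role-name EKSManageRole \
      --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy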

Client Setup for EKS Cluster

  1. Install kubectl
    curl -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.13.8/2019-08-14/bin/linux/amd64/kubectl
    chmod +x kubectl
    mkdir -p $HOME/bin
    mv kubectl $HOME/bin
    echo 'export PATH=$HOME/bin:$PATH' >> ~/.bashrc
    source ~/.bashrc
  2. Install IAM Authenticator
    curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.13.7/2019-06-11/bin/linux/amd64/aws-iam-authenticator
    chmod +x ./aws-iam-authenticator
    mkdir -p $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$HOME/bin:$PATH
    echo 'export PATH=$HOME/bin:$PATH' >> ~/.bashrc
    aws-iam-authenticator help
    aws-iam-authenticator token -i <cluster-name>
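As a quick check that both binaries are on the PATH, the following should print version information (the output will vary with the versions you downloaded):

    kubectl version --client --short
    aws-iam-authenticator version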

Setup AWS Credentials

Set up the AWS CLI and configure AWS credentials so that it can communicate with Amazon EKS.
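As a minimal sketch (assuming a Linux host with pip available), install the CLI, configure it with the access keys of the IAM user created above, and verify the identity:

    pip install awscli --upgrade --user
    aws configure                  # prompts for access key, secret key, default region and output format
    aws sts get-caller-identity    # should print the account and ARN of the EKS user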

Amazon EKS Cluster Setup

  1. Create a separate VPC configuration to be used by the EKS cluster and its underlying worker nodes. Please refer to this CloudFormation script for creating the VPC and the related subnets.

  2. Create the EKS cluster control plane from the Amazon EKS console. The control plane is provisioned as a failover cluster across multiple Availability Zones and contains the Kubernetes etcd store and API server.
    1. Go to the EKS page in the AWS web console and give the cluster a meaningful name; this name is used for the worker nodes as well.
    2. Add the general Kubernetes information, including the version and the EKS role ARN.
    3. Next, add the networking information, including the VPC and the related subnets. You can pick the VPC details created above.
    4. Click Next and finish the setup; creating the EKS control plane takes around 5-10 minutes.
    5. If you are familiar with the AWS CLI, you can create the EKS control plane with the following command instead (a status check is given after this list).
      aws eks create-cluster --name <cluster-name> --role-arn arn:aws:iam::xxxxxxxxxxx:role/EKSManageRole --resources-vpc-config subnetIds=subnet-xxxxxxxxxxxxxx,subnet-xxxxxxxxxxxxxx,subnet-xxxxxxxxxxxxxx,securityGroupIds=sg-xxxxxxxxxxxxxx
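Whichever way you create it, you can watch the control plane come up from the CLI. This sketch assumes the AWS CLI is configured as above; the endpoint and certificate data it prints are the values you will need later for the kube-config file.

    aws eks describe-cluster --name <cluster-name> --query cluster.status --output text                        # wait until this prints ACTIVE
    aws eks describe-cluster --name <cluster-name> --query cluster.endpoint --output text                      # API server endpoint (the "server" value)
    aws eks describe-cluster --name <cluster-name> --query cluster.certificateAuthority.data --output text     # certificate-authority-data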

Create Worker Nodes

  1. Creating worker nodes is a little more complex, since they must be connected to the existing EKS cluster control plane. Here we use an AWS CloudFormation template to create the EKS worker nodes; the CloudFormation script is located under this file location.
    Once you have the CloudFormation script, fill in the EKS cluster name and the VPC with its related subnets. In addition, add a proper SSH key to get access to the worker nodes.
  2. As a special note, you need to specify the following parameters in the CloudFormation stack (a CLI sketch using them is given after this list).
    • NodeAutoScalingGroupMinSize : the minimum number of nodes for autoscaling. For example, when there is no traffic the group automatically scales down to this number of instances.
    • NodeAutoScalingGroupMaxSize : the maximum number of instances when the worker node cluster is busy.
    • NodeImageId : the AMI ID, picked from the EKS-optimized AMI page provided by Amazon.
  3. Once the worker node cluster is created, log in to one of the nodes and check the kubelet logs for verification.
    sudo journalctl -f -u kubelet
    At this point the log shows an Unauthorized error (please refer to the image given below). To resolve this, let's configure the kube-config and aws-auth.yaml files in the steps below.
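If you drive CloudFormation from the CLI, creating the stack looks roughly like the sketch below. The template file name and the parameter keys other than the three described above (ClusterName, KeyName and so on) are assumptions based on the standard Amazon EKS node group template; check the parameter names in the template you actually use.

    # Sketch only: create the worker node stack (CAPABILITY_IAM is needed because
    # the template creates the node instance role)
    aws cloudformation create-stack \
      --stack-name eks-worker-nodes \
      --template-body file://amazon-eks-nodegroup.yaml \
      --capabilities CAPABILITY_IAM \
      --parameters \
        ParameterKey=ClusterName,ParameterValue=<cluster-name> \
        ParameterKey=KeyName,ParameterValue=<ssh-key-name> \
        ParameterKey=NodeImageId,ParameterValue=<ami-id> \
        ParameterKey=NodeAutoScalingGroupMinSize,ParameterValue=1 \
        ParameterKey=NodeAutoScalingGroupMaxSize,ParameterValue=3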

Integrate EKS Cluster & Worker Nodes

  1. Next, we configure the kube-config file, which contains all the configuration needed to access the Kubernetes cluster with the kubectl client. Create a kube-config-eks file on the host where kubectl is installed, using the following commands.
    1. Download the sample kube-config-eks file here and add it to your kubectl client host (a sketch of this file is given after this list). Use the following commands and export it as an environment variable.
      vim ~/.kube/kube-config-eks
      export KUBECONFIG=~/.kube/kube-config-eks
    2. Change the values of the following parameters, replacing the placeholders with the exact values.
      server: the URL endpoint of the EKS control plane created above; get the value from the EKS dashboard as shown below.
      certificate-authority-data: the certificate authority data of the EKS cluster control plane.
      <> : the name of the EKS cluster control plane.
    3. Run the following kubectl command; you should get a successful output as shown below.
      kubectl get svc

  2. Create aws-auth.yaml and apply it. The auth file template is available in this location; please download it and change the necessary parameters (a sketch is given after this list).
    Add the role ARN of the Kubernetes worker node cluster in the auth YAML file. The screenshot below shows how to copy the role ARN from the CloudFormation outputs, so go to the CloudFormation outputs and copy it.

    vim aws-auth.yaml
    kubectl apply -f aws-auth.yaml
    kubectl get nodes
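For reference, the two files used above typically look like the sketches below. These are assumptions based on the standard EKS templates for aws-iam-authenticator, not the exact files linked in this article; replace the angle-bracket placeholders with your own values.

    # kube-config-eks (sketch): points kubectl at the EKS endpoint and uses
    # aws-iam-authenticator to obtain tokens
    apiVersion: v1
    kind: Config
    clusters:
    - name: kubernetes
      cluster:
        server: <EKS control plane endpoint URL>
        certificate-authority-data: <certificate authority data from the EKS dashboard>
    contexts:
    - name: aws
      context:
        cluster: kubernetes
        user: aws
    current-context: aws
    users:
    - name: aws
      user:
        exec:
          apiVersion: client.authentication.k8s.io/v1alpha1
          command: aws-iam-authenticator
          args:
            - "token"
            - "-i"
            - "<cluster-name>"

    # aws-auth.yaml (sketch): maps the worker node instance role so the nodes can join
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: aws-auth
      namespace: kube-system
    data:
      mapRoles: |
        - rolearn: <NodeInstanceRole ARN from the CloudFormation outputs>
          username: system:node:{{EC2PrivateDNSName}}
          groups:
            - system:bootstrappers
            - system:nodes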

You are done and should be able to see the successful output 🙂
