Binary Data Upload to S3 Using API Gateway + Lambda

In an earlier tutorial I showed you how to upload an image to an S3 bucket using API Gateway directly; here the focus is on uploading images to an S3 bucket using an API Gateway and Lambda integration. This integration can be helpful for many software/system deployments because a Lambda function gives more control over the code and makes it easier to add further integrations later.

Let’s look at how we can configure API Gateway and Lambda together for uploading binary data. This tutorial shows the easy steps to set it up.

Log in to the AWS console and create a new API Gateway API and the S3 bucket where you want to upload binary files.
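If you prefer the CLI, a minimal sketch of the bucket creation and a test upload could look like the following; the bucket name, API ID, stage, and resource path are placeholders for whatever you configure in the later steps, not values from this post.

    # Create the target bucket (name and region are examples)
    aws s3api create-bucket --bucket <bucket_name> --region us-east-2 --create-bucket-configuration LocationConstraint=us-east-2

    # Once the API Gateway + Lambda integration is in place, a binary upload can be tested like this
    curl -X POST "https://<api_id>.execute-api.us-east-2.amazonaws.com/<stage>/upload" \
         --header "Content-Type: image/png" \
         --data-binary "@sample.png"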

Read More »

Upload Images to S3 Using API Gateway

This tutorial describes how API Gateway can be used to upload images into Amazon S3 buckets. Modern software often needs to upload and retrieve information stored in shared locations rapidly, and Amazon S3 is one of the most widely used shared storage services among software developers.

Create IAM Role

  1. Log in to the Amazon Console and navigate to the IAM management console.
  2. Click on Create role, select AWS Service as the trusted entity, and choose API Gateway as the use case.
  3. Review the permissions and click Next, then add a meaningful name for the role.
  4. Once all is done, click Create role. A CLI equivalent is sketched below.
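For scripting the same role creation, a rough CLI equivalent could look like this; the role name is a placeholder and the attached S3 policy is an assumption, since the post itself uses the console.

    # Create a role that API Gateway can assume, then give it S3 access
    aws iam create-role --role-name <role_name> \
      --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"apigateway.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
    aws iam attach-role-policy --role-name <role_name> \
      --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess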
Read More »

Ansible Setup with Terraform

Ansible is an open-source tool used to automate infrastructure setup, configuration, updates, and maintenance from a managed platform, based on customer requirements. This tutorial describes how the tool is used in an AWS infrastructure to install a web server on a target EC2 machine. The Ansible manager server is also set up on a separate EC2 machine, and a Terraform script is used to provision it.

The following diagram shows a sample Ansible architecture with three target nodes.
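The full post walks through the actual Terraform and Ansible files, but the overall flow is roughly the following sketch; the playbook and inventory file names are placeholders used only for illustration.

    # Provision the Ansible manager EC2 instance (and any target nodes) with Terraform
    terraform init
    terraform apply

    # From the Ansible manager, run the playbook that installs the web server on the target EC2 machine
    ansible-playbook -i inventory.ini webserver.yml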

Read More »

Bitbucket Webhook Integration with Argo-Workflow

This document describes the steps to integrate Bitbucket and Argo-Workflow using Bitbucket webhooks. AWS EKS is used as the Kubernetes cluster environment to set up this workflow. Please review the detailed prerequisites below before starting.

  1. EKS access in the AWS Management Console; if you do not have an AWS account, you can use the AWS free tier.
  2. Docker installed on your local computer.
  3. kubectl installed on your local computer.
  4. awscli set up with the latest version.

Argo-Workflow officially supports GitHub webhooks but not Bitbucket webhooks, and there has been no release adding this so far, even though it has been requested in some official threads. Therefore the task can be done with a workaround: a webhook integration through a Golang server.

The diagram above shows the proposed solution. Bitbucket Server webhooks are sent to a Golang server acting as a middle layer, which forwards the webhook request to Argo-Workflow. Argo-Workflow then executes all the workflow YAML files.
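To make the request path concrete, here is a hedged sketch of how the webhook arrives at the middle layer; the endpoint path, port, and payload file are assumptions, since they depend on how the Golang server and Argo are exposed in your cluster.

    # Bitbucket (or a manual test) posts its webhook payload to the Golang middle layer;
    # the Go server then re-posts the payload to the Argo endpoint, which kicks off the workflows
    curl -X POST "http://<golang_server>:8080/bitbucket-webhook" \
         --header "Content-Type: application/json" \
         --data @sample-webhook-payload.json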

Install Argo-Workflow Server in EKS

Log in to the AWS console, navigate to the EKS cluster console, and start creating an EKS cluster by providing all the required information. If you need help creating the cluster, you can follow the AWS official documentation linked here. If you want to create it quickly using the AWS CLI, you can use the following command.

aws eks create-cluster --region us-east-2 --name argo-wf --kubernetes-version 1.19 --role-arn arn:aws:iam::416199234731:role/<role_name> --resources-vpc-config subnetIds=subnet-<ID>,subnet-<ID>,subnet-<ID>,securityGroupIds=sg-<ID> --profile <profile_name>

Once the cluster is created, the EKS context should be added to your kubeconfig; please go through the steps below to configure it.
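In short, the kubeconfig update can be done with the AWS CLI; the region, cluster name, and profile below simply reuse the values from the create-cluster command above.

    # Add the new cluster's context to ~/.kube/config
    aws eks update-kubeconfig --region us-east-2 --name argo-wf --profile <profile_name>

    # Verify that kubectl can reach the cluster
    kubectl get nodes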

Read More »

List GitHub Branches Dynamically In Jenkins Jobs

When you want to build your code from one of the several GitHub branches you or your team are working on, selecting the right branch can be a challenge. Here I’m going to talk about how you can set up your Jenkins job to dynamically fetch all the branches available in GitHub.

There are some plugins available to do the same thing, but they are not supported on every version of Jenkins. Therefore you can use the method below inside the Jenkins job configuration to set it up easily without depending on extra plugins.

Pre-Requisites

  1. A Jenkins server set up.
  2. Necessary permissions to create and configure jobs.
  3. The Active Choices parameter plug-in installed. If you have not installed the plugin yet, you can do it easily from the plugins option.

Log in to the Jenkins server and create a new sample project of the “Freestyle” type.
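As a rough idea of what the job will do, the Active Choices parameter can be backed by a small script that lists the remote branches; the shell equivalent of that lookup is below, with the repository URL as a placeholder.

    # List all branch names of the repository without cloning it
    git ls-remote --heads https://github.com/<org>/<repo>.git | sed 's|.*refs/heads/||'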

Read More »

Push AWS WAF Logs into Kibana in Amazon Elasticsearch Service

Here I’m going to show you how we can push our WAF logs into Kibana in the AWS Elasticsearch service.

Prerequisites:

  • An up-and-running, configured WAF.
  • An Elasticsearch Service domain in the same AWS account.
  • Admin permission to access Amazon Kinesis.

Configure Amazon Kinesis for the New Data Stream

Kinesis is an easy way to collect, process, and analyze real-time streaming data in data streams. It is a fully managed service offered by AWS. You can use it with any kind of data source and push the data into the desired data streams. Let’s follow the steps given below.

Log into the AWS console and navigate to the Amazon Kinesis dashboard. Click on Kinesis Data Firehose, then Create delivery stream.

There are 5 steps to be completed to get it done.

Step 1: Name and source

Name: Add a meaningful name in the name field.
Source: Select Direct PUT or other source and click Next.

Step 2: Process records

Data transformation: Keep the default value.
Record format conversion: Keep the default value and click Next.
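Once the delivery stream exists, you can verify it and later point WAF at it from the CLI as well (shown here with the wafv2 API); the stream name and ARNs are placeholders. Note that, at the time of writing, WAF requires the delivery stream name to start with aws-waf-logs-.

    # Check that the delivery stream is active
    aws firehose describe-delivery-stream --delivery-stream-name aws-waf-logs-<name>

    # Associate the WAF web ACL with the delivery stream so logs start flowing
    aws wafv2 put-logging-configuration --logging-configuration \
        '{"ResourceArn":"<web_acl_arn>","LogDestinationConfigs":["<delivery_stream_arn>"]}'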

Read More »

Spring Boot App Deploy on Amazon EKS

This document guides you through the process of deploying your first Spring Boot module on Amazon EKS. Here we use a sample project hosted in a GitHub account. The sample project contains a basic Java Spring Boot Hello World application, which should be built using Gradle. For your reference, all the required basic Kubernetes files have been uploaded to the Git repo.

There are two files in the Kubernetes deployment plan: the deployment and the service YAML files.

Deployment.yaml is the file where you specify all the required details about the application; as in the file below, you can see the different sections described. This deployment file is basically divided into sections including metadata, spec, and containers.

  • metadata : Includes information about the application.
  • spec : Specifies the number of replicas, the pod template, and labels.
    • replicas : The number of pods you want to deploy on the Kubernetes cluster.
  • containers : Describes the container details, including the Docker image and application startup arguments.
    • name : Name of the container you wish to use.
    • image : The Docker image location of what you want to deploy; this may be your own Docker Hub repository, Amazon ECR, etc.
    • command : Application startup commands.
    • args : Other arguments required when the application runs.
  • ports : Ports on which the deployment runs.
    • containerPort : The container port number.

The service.yaml file is used to expose the application as a service. It too has several sections to be described.
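After both files are filled in, applying them comes down to a couple of kubectl commands; the file names here assume the layout described above.

    # Create the deployment and the service on the EKS cluster
    kubectl apply -f deployment.yaml
    kubectl apply -f service.yaml

    # Confirm the pods are running and the service has been exposed
    kubectl get pods
    kubectl get svc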
Read More »

Setup Nagios Alerts for a Specific Time Period

Setting up monitoring and alerting is one of the major priorities for a production deployment, and Nagios plays a vast role in that. Nagios has several features. In this tutorial, we discuss how to monitor an application within a specific time period.

  1. SSH in to the Nagios server and open the file called “timeperiods.cfg”. On my server it is located at the following path.

    /usr/local/nagios/etc/objects/timeperiods.cfg

  2. Add the required time periods to the file, as in the sample given below. If you wish to add more time periods together, you can separate them with commas.
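As an illustration only (the post’s own sample is in the full article), a timeperiod definition in timeperiods.cfg generally looks like this; the name and hours below are placeholders.

    define timeperiod {
        timeperiod_name  business-hours
        alias            Business Hours
        monday           09:00-17:00
        tuesday          09:00-17:00
        wednesday        09:00-17:00
        thursday         09:00-17:00
        friday           09:00-17:00
    }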
    Read More »

Setup Amazon EKS Cluster

This article shows you how to create and manage an Amazon EKS cluster. EKS is Amazon’s managed Kubernetes service, used to orchestrate container-based infrastructure. The steps below explain the easiest way to create an EKS cluster and connect to it. Please note you will incur charges for AWS resources, so make sure you pick the lowest-priced resources for this experiment.

User Setup for EKS Cluster

  • Create a separate user with the following privileges. Set up the user with these IAM policies.
    • AmazonEC2FullAccess
    • IAMFullAccess
    • AmazonVPCFullAccess
    • CloudFormation-Admin-policy
    • EKS-Admin-policy
  • Create an IAM role; the IAM role should have the following permissions. Just go to IAM -> Roles and create a role for EKS.
    • AmazonEKSClusterPolicy
    • AmazonEKSServicePolicy

Client Setup for EKS Cluster

  1. Install  kubectl
    curl -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.13.8/2019-08-14/bin/linux/amd64/kubectl
    
    chmod +x kubectl
    
    mkdir $HOME/bin
    
    mv kubectl $HOME/bin
    
    echo 'export PATH=$HOME/bin:$PATH' >> ~/.bashrc
    
    source ~/.bashrc
  2. Install IAM Authenticator
    curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.13.7/2019-06-11/bin/linux/amd64/aws-iam-authenticator
    chmod +x ./aws-iam-authenticator
    mkdir -p $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$HOME/bin:$PATH
    echo 'export PATH=$HOME/bin:$PATH' >> ~/.bashrc
    aws-iam-authenticator help
    aws-iam-authenticator token -i <cluster_name>

Setup AWS Credentials

Set up the AWS CLI and configure AWS credentials that can communicate with Amazon EKS.
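A minimal way to do that is with aws configure; the profile name is a placeholder for whatever you later pass to the AWS commands.

    # Store the access key, secret key, and default region under a named profile
    aws configure --profile <profile_name>

    # Verify the credentials work
    aws sts get-caller-identity --profile <profile_name>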

Amazon EKS Cluster Setup

    1. Create separate VPC configurations to use for the EKS cluster and the underlying worker nodes. Please refer to this CloudFormation script for creating the VPC and related subnets.
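If you prefer the CLI over the console, the referenced CloudFormation template can be launched roughly as follows; the template URL and stack name are placeholders, not values from this post.

    # Create the dedicated VPC, subnets, and security group for the EKS cluster
    aws cloudformation create-stack --stack-name eks-vpc --region us-east-2 --template-url <template_url>

    # Wait until the stack is ready and note its outputs (subnet IDs, security group)
    aws cloudformation wait stack-create-complete --stack-name eks-vpc --region us-east-2
    aws cloudformation describe-stacks --stack-name eks-vpc --region us-east-2 --query "Stacks[0].Outputs"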

Read More »