In an earlier tutorial I showed you how to upload an image to an S3 bucket using API Gateway directly, but here my focus is on uploading images to an S3 bucket using an API Gateway and Lambda integration. This integration can be helpful for most software/system deployments, because a Lambda function gives you more control over the code and makes it easier to configure further integrations.
Let’s look at how we can configure API Gateway and Lambda together for uploading binary data. This tutorial shows the easy steps to set it up.
Log in to the AWS console and create a new API Gateway and the S3 bucket to which you want to upload binary files.
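One detail worth knowing before wiring the pieces together: with a Lambda proxy integration, API Gateway delivers a binary request body to the function base64-encoded, so the function has to decode it before writing the object to S3. Here is a quick local sketch of that round trip; the curl endpoint shown in the comment is hypothetical.

```shell
# With a Lambda proxy integration, API Gateway hands the binary body
# to the function base64-encoded. Simulate the round trip locally:
printf 'fake-image-bytes' | base64 > body.b64   # what API Gateway hands to Lambda
base64 -d body.b64                              # what the function must write to S3

# The client-side call would look roughly like this (hypothetical endpoint):
# curl -X POST -H "Content-Type: image/png" \
#      --data-binary @photo.png \
#      https://<api-id>.execute-api.<region>.amazonaws.com/prod/upload
```

Inside the Lambda function you would perform the same decode on `event["body"]` before the S3 put.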
This tutorial describes how API Gateway can be used to upload images into Amazon S3 buckets. Modern software needs to upload and retrieve information stored in shared locations rapidly, and Amazon S3 is one of the most widely used shared storage services among software developers.
Create IAM Role
Log in to the AWS console and navigate to the IAM management console.
Click on Create role, select AWS Service as the trusted entity type, and select API Gateway as the use case.
Review the permissions, click Next, then add a meaningful name for the role.
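For reference, picking API Gateway as the use case attaches a trust relationship equivalent to the following, allowing the API Gateway service to assume the role:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "apigateway.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```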
Ansible is an open-source tool used to automate infrastructure configuration from a managed platform; it is mostly used to set up, update, and maintain infrastructure according to customer requirements. This tutorial describes how the tool is used in an AWS infrastructure to install a web server on a target EC2 machine. The Ansible manager server is also set up on a separate EC2 machine, and a Terraform script will be used to provision it.
The following diagram shows a sample Ansible architecture with three target nodes.
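As a sketch of what the eventual playbook might look like, here is a minimal example that installs a web server on target machines. The host group, module, and package names are assumptions for illustration, since the actual playbook is not shown here.

```yaml
# install-webserver.yml -- hypothetical host group and package names
- name: Install a web server on the target machines
  hosts: webservers
  become: true
  tasks:
    - name: Install Apache
      ansible.builtin.yum:
        name: httpd
        state: present
    - name: Ensure Apache is running and enabled at boot
      ansible.builtin.service:
        name: httpd
        state: started
        enabled: true
```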
This document describes the steps to integrate Bitbucket and Argo Workflows using Bitbucket webhooks. AWS EKS is used as the Kubernetes cluster environment to set up this workflow. Please review the detailed prerequisites before starting.
EKS access in the AWS Management Console; if you do not have an AWS account, you can use an AWS free account.
Docker installed on your local computer.
kubectl installed on your local computer.
awscli set up with the latest version.
Argo Workflows officially supports GitHub webhooks but does not support Bitbucket webhooks, and there has been no release adding that support so far, even though it has been requested in official threads. Therefore this task can be done with a workaround: a webhook integration using a Golang server.
The diagram above shows the proposed solution. Bitbucket server webhooks are sent to a Golang server acting as a middle layer, which forwards the webhook request to Argo Workflows. Argo then executes all the workflow YAML files.
Install the Argo Workflows Server in EKS
Log in to the AWS console and navigate to the EKS console, then you can start creating the EKS cluster, providing all the required information. If you need any help creating the cluster, you can follow the AWS official documentation linked here. If you want to create it quickly using the AWS CLI, you can use the following command.
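A sketch of that command; the account ID, role ARN, subnets, security group, and release version below are placeholders you would replace with your own values.

```shell
aws eks create-cluster \
  --name argo-demo \
  --role-arn arn:aws:iam::<account-id>:role/<eks-cluster-role> \
  --resources-vpc-config subnetIds=<subnet-1>,<subnet-2>,securityGroupIds=<sg-id>

# Once the cluster is ACTIVE, install Argo Workflows into its own namespace:
kubectl create namespace argo
kubectl apply -n argo -f \
  https://github.com/argoproj/argo-workflows/releases/download/<version>/install.yaml
```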
When you get to a point where you want to build your code from one of several GitHub branches that you or your team are working on, selecting those branches can pose some challenges. Here I’m going to talk about how you can set up your Jenkins job to dynamically fetch all the branches available in GitHub.
There are plugins available to do the same thing, but plugins are not supported on every version of Jenkins; therefore you can use the method below inside the Jenkins job configuration to set it up easily without depending on extra plugins.
Pre-Requisites
A Jenkins server set up.
The necessary permissions to create and configure jobs.
Active Choices parameter plug-in installed. If you have not installed the plugin, you can do it easily using the plugins option.
Log in to the Jenkins server and create a new sample project of the “Freestyle” type.
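The branch list itself comes from git; the Active Choices parameter script only needs to return it. Here is a self-contained sketch of the listing command, demonstrated against a throwaway local repo; in the Jenkins parameter script you would point `git ls-remote` at your GitHub repository URL instead. The repo and branch names below are made up.

```shell
# Demo against a throwaway local repo; in the Jenkins parameter script,
# replace "." with your repository URL, e.g. https://github.com/<org>/<repo>.git
git init -q demo-repo && cd demo-repo
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m init
git branch -q feature-login
# List every branch name, stripping the refs/heads/ prefix:
git ls-remote --heads . | awk -F'refs/heads/' '{print $2}'
```

The output of the last command (one branch name per line) is what you feed into the parameter’s choice list.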
Here I’m going to show you how we can push our WAF logs into Kibana on the AWS Elasticsearch service.
Prerequisites:
An up-and-running, configured WAF.
An Elasticsearch service in the same AWS account.
Admin permission to access Amazon Kinesis.
Configure Amazon Kinesis for the New Data Stream
Kinesis is an easy way to collect, process, and analyze real-time streaming data in data streams. It is a fully managed service offered by AWS. You can use it with any kind of data source and push the records into the desired data streams. Let’s follow the steps given below.
Log in to the AWS console and navigate to the Amazon Kinesis dashboard. Click Kinesis Data Firehose – Create delivery stream.
There are five steps to complete to get it done.
Step 1: Name and source
Name: add a meaningful name in the name field.
Source: select “Direct PUT or other sources” and click Next.
Step 2: Process records
Data transformation: keep the default value.
Record format conversion: keep the default value and click Next.
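For reference, the same delivery stream can also be created from the AWS CLI. The names and ARNs below are placeholders, and an S3 destination is shown for brevity where the wizard would let you choose your Elasticsearch domain. Note that delivery streams used for WAF logging must have names beginning with `aws-waf-logs-`.

```shell
aws firehose create-delivery-stream \
  --delivery-stream-name aws-waf-logs-demo \
  --delivery-stream-type DirectPut \
  --s3-destination-configuration \
      RoleARN=arn:aws:iam::<account-id>:role/<firehose-role>,BucketARN=arn:aws:s3:::<backup-bucket>
```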
This document guides you through the process of deploying your first Spring Boot module on Amazon EKS. Here we use a sample project hosted in a GitHub account. The sample project contains a basic Java Spring Boot Hello World application, which should be built using Gradle. For your reference, all the required basic Kubernetes files have been uploaded to the Git repo.
There are two files in the Kubernetes deployment plan: the deployment and service YAML files.
deployment.yaml is the file where you specify all the required details about the application; as in the file below, you can see the different sections described. The deployment file is basically divided into sections including metadata, spec, and containers.
Metadata: includes information about the application.
Spec: specifies the number of replicas, the pod template, and labels.
replicas: the number of pods you want to deploy on the Kubernetes cluster.
containers: describes the container details, including the Docker image and application startup arguments.
name: the name of the container you wish to use.
image: the location of the Docker image you wish to deploy; this may be your own Docker Hub repository, Amazon ECR, etc.
command: the application startup command.
args: other arguments required when the application runs.
ports: the ports on which the deployment runs.
containerPort: the port number.
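Putting the sections above together, a minimal deployment.yaml might look like this; the application name, image location, startup command, and port are placeholders, not taken from the sample repo.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app              # metadata: information about the application
  labels:
    app: hello-app
spec:
  replicas: 2                  # number of pods to deploy on the cluster
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello-app      # container name
          # placeholder image location -- your Docker Hub repo, Amazon ECR, etc.
          image: <account-id>.dkr.ecr.<region>.amazonaws.com/hello-app:latest
          command: ["java"]    # application startup command
          args: ["-jar", "app.jar"]
          ports:
            - containerPort: 8080
```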
service.yaml is the file used where the application is being exposed as a service. The service.yaml file has several sections to be described.
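A minimal service.yaml sketch along the same lines; the service name, label selector, and ports are placeholder assumptions.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-app-service
spec:
  type: LoadBalancer      # exposes the app outside the cluster; use ClusterIP for internal only
  selector:
    app: hello-app        # must match the labels on the deployment's pods
  ports:
    - port: 80            # port the service listens on
      targetPort: 8080    # containerPort of the pods
```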
Setting up monitoring and alerting is one of the major priorities for a production deployment, and Nagios plays a big role there. Nagios has several features. In this tutorial, we are going to discuss how we can monitor an application during a specific time period.
SSH in to the Nagios server and open the file called “timeperiods.cfg”. On my server it is located at the following path.
/usr/local/nagios/etc/objects/timeperiods.cfg
Add the required time periods to the file, as in the sample given below. If you wish to add several time ranges together, you can separate them with commas.
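A sample definition; the timeperiod name and the specific hours are just illustrative values.

```cfg
define timeperiod {
    timeperiod_name business-hours
    alias           Weekday Business Hours
    monday          09:00-12:00,14:00-17:30   ; two ranges, comma separated
    tuesday         09:00-17:30
    wednesday       09:00-17:30
    thursday        09:00-17:30
    friday          09:00-17:00
}
```

Services can then reference `business-hours` in their `check_period` or `notification_period` directives.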
This article shows you how to create and manage an Amazon EKS cluster. EKS is Amazon’s managed Kubernetes service, used to orchestrate container-based infrastructure. The steps below clearly explain the easiest way to create an EKS cluster and connect to it. Please note that you will incur charges for AWS resources, so make sure you pick the lowest-priced resources for this experiment.
User Setup for EKS Cluster
Create a separate user with the following privileges. Set up the user with these IAM policies:
AmazonEC2FullAccess
IAMFullAccess
AmazonVPCFullAccess
CloudFormation-Admin-policy
EKS-Admin-policy
Create an IAM role; the IAM role should have the following permissions. Just go to IAM -> Roles and create a role for EKS.
Set up the AWS CLI and configure AWS credentials that can communicate with Amazon EKS.
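A quick sketch of that setup from the terminal; the cluster name and region are placeholders.

```shell
aws configure                  # prompts for the access key, secret key, and region
aws sts get-caller-identity    # confirm the credentials work
# After the cluster exists, wire kubectl to it:
aws eks update-kubeconfig --name <cluster-name> --region <region>
kubectl get nodes              # verify the worker nodes are reachable
```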
Amazon EKS Cluster Setup
Create separate VPC configurations to use for the EKS cluster and its underlying worker nodes. Please refer to this CloudFormation script for creating the VPC and related subnets.
The Jenkins server comes with a traditional user interface, which is not attractive in some ways; it can therefore be restyled with a modern, more attractive appearance. Moving to the new Jenkins look does not take too long.
Beautifying Jenkins can be done through built-in CSS scripts, and those can be configured via several methods; those methods are described here.