Tag Archive

Below you'll find a list of all posts that have been tagged as "docker"

2 Approaches to Ensure Success in Your DevOps Journey

DevOps makes continuous software delivery simple for both development and operations teams through a set of tools and best practices. To understand the power of DevOps, we chose a standard development environment with a suite of applications: Git, Gerrit, Jenkins, JIRA, and Nagios. We studied setting up such a traditional environment and compared it with a more modern approach based on Docker containers.

Introduction

In this article we discuss DevOps and two ways of approaching it: traditional and container based. For this purpose, we use a fictitious software company (the client) that wants to streamline its development and delivery process.

What is DevOps?

DevOps means many things to many people. The definition closest to our view is: "DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support." It is the practice of continuously delivering good-quality code, helped by tools that make this easy. Many tools work in tandem to ensure that only good-quality code reaches production. Our client wants to use the following tools.

DevOps Tools

- Git: the most popular distributed version control system.
- Gerrit: code review tool.
- Jenkins: continuous integration tool.
- JIRA: bug tracking tool.

Development Workflow

We came up with the following workflow, which captures a typical development life cycle:

1. A developer commits their changes to the staging area of a branch.
2. Gerrit watches for commits in the staging area.
3. Jenkins watches Gerrit for new change sets to review. It then triggers a set of jobs to run on the patch set, and the results are shared with both Gerrit and JIRA.
4. Based on the commit message, the appropriate JIRA issue is updated.
5. When reviewers accept the change, it is ready to commit.

To let Jenkins update JIRA issues automatically, a commit-message pattern was enforced for all commits.
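As an illustration, such a commit-message pattern can be parsed with a short script. The exact pattern the team enforced is not shown in the article, so the JIRA key format below ("PROJ-123" style) is an assumption:

```python
import re

# Assumed key format: an uppercase project code, a dash, and an issue
# number, e.g. "PROJ-42: fix login timeout". The pattern the article's
# team actually enforced is not shown, so this regex is illustrative.
JIRA_KEY = re.compile(r"\b([A-Z][A-Z0-9]+-\d+)\b")

def jira_issue(commit_message):
    """Return the first JIRA issue key in a commit message, or None."""
    match = JIRA_KEY.search(commit_message)
    return match.group(1) if match else None
```

A Jenkins job can run such a lookup on each change set and pass the extracted key to the JIRA issue updater plugin or to JIRA's REST API.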
This allowed us to automate Jenkins to find and update a specific JIRA issue.

DevOps Operations Workflow

Operations teams were more concerned with provisioning machines (physical or virtual), installing the suite of applications, and eventually monitoring those machines and applications. Alert notifications are important too, so that any anomalies can be addressed at the earliest. For monitoring and alert notification we used Nagios.

Two Types of DevOps Approach

Traditional Approach

The traditional approach is to manually install these tools on bare-metal boxes or virtual machines and configure them to talk to each other. In brief, the steps for a traditional DevOps infrastructure are:

1. Git/Gerrit, Jenkins, and JIRA are installed on one or more machines.
2. Gerrit projects and accounts are created, and access is granted as required.
3. The required plugins (Gerrit Trigger, Git Client, JIRA Issue Updater, Git, etc.) are installed on Jenkins.
4. The installed plugins are configured.
5. The Jenkins SSH key is added to the Gerrit account.
6. Multiple accounts, each with a few issues, are created in JIRA.

The whole DevOps infrastructure is now ready to be used.

DevOps Automation Services via Python Script

We automated installation, configuration, and monitoring using a Python script. The actual Python code for downloading, installing, and configuring Git, Gerrit, Jenkins, JIRA, and Nagios can be found in the following GitHub repository: https://github.com/sujauddinmullick/dev_ops_traditional

Container Approach

Automating installation and configuration relieves some of the pain of setting up infrastructure. But think of a situation where the client's environment dependencies conflict with our DevOps infrastructure dependencies. To solve this problem, we isolated the DevOps environment from the existing one. We used Docker Engine to set up these tools; Docker Engine builds on Linux containers.
A Linux container is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a single control host [2]. It differs from virtual machines in many ways; one striking difference is that containers share the host's kernel and library files while virtual machines do not, which makes containers more lightweight. Getting a Linux container up and running is, once again, a difficult task; Docker makes it simple. Docker is built on top of Linux containers and makes it easy to create, deploy, and run applications using containers.

A Dockerfile is used to create a container image; it contains the instructions Docker follows to assemble the image. For example, the following Dockerfile builds a Jenkins image.

Sample Dockerfile for Jenkins:

    FROM jenkins
    MAINTAINER sujauddin

    # Install plugins
    COPY plugins.txt /usr/local/etc/plugins.txt
    RUN /usr/local/bin/plugins.sh /usr/local/etc/plugins.txt

    # Add gerrit-trigger plugin config file
    COPY gerrit-trigger.xml /usr/local/etc/gerrit-trigger.xml
    COPY gerrit-trigger.xml /var/jenkins_home/gerrit-trigger.xml

    # Add Jenkins URL and system admin e-mail config file
    COPY jenkins.model.JenkinsLocationConfiguration.xml /usr/local/etc/jenkins.model.JenkinsLocationConfiguration.xml
    COPY hudson.plugins.jira.JiraProjectProperty.xml /var/jenkins_home/hudson.plugins.jira.JiraProjectProperty.xml
    COPY jenkins.model.JenkinsLocationConfiguration.xml /var/jenkins_home/jenkins.model.JenkinsLocationConfiguration.xml
    #COPY jenkins.model.ArtifactManagerConfiguration.xml /var/jenkins_home/jenkins.model.ArtifactManagerConfiguration.xml

    # Add setup script.
    COPY jenkins-setup.sh /usr/local/bin/jenkins-setup.sh

    # Add cloud setting in config file.
    COPY config.xml /usr/local/etc/config.xml
    COPY jenkins-cli.jar /usr/local/etc/jenkins-cli.jar
    COPY jenkins_job.xml /usr/local/etc/jenkins_job.xml

We can run the image built above inside a Docker container. To set up and run a set of containers together, the docker-compose tool is used.
The docker-compose up command takes a docker-compose.yml file and builds and runs all the containers defined in it. The actual compose file we used to build Git, Gerrit, Jenkins, and JIRA is given below.

docker-compose.yml:

    final_gerrit:
      image: sujauddin/docker_gerrit_final
      restart: always
      ports:
        - 8020:8080
        - 29418:29418

    final_jira:
      build: ./docker-jira
      ports:
        - 8025:8080
      restart: always

    final_jenkins:
      build: ./docker-jenkins
      restart: always
      ports:
        - 8023:8080
        - 8024:50000
      links:
        - final_jira
        - final_gerrit

    final_devopsnagios:
      image: tpires/nagios
      ports:
        - 8036:80
        - 8037:5666
        - 8038:2266
      restart: always

With one command we got all the containers up and running, with all the necessary configuration done, so that the whole DevOps workflow runs smoothly. Isn't that cool?

Conclusion

The Docker-based approach is clearly easier to set up and more efficient. Containers can be deployed quickly (usually in a few seconds), can be ported along with the application and its dependencies, and have a minimal memory footprint. The result: a happy client!

Aziro Marketing


An Introduction to Serverless and FaaS (Functions as a Service)

Evolution of Serverless Computing

We started with building monolithic applications, installing and configuring the OS and the application code on every machine to meet user demand. Virtual machines followed and simplified the deployment and management of servers: datacenter providers started offering VMs, but these still required a lot of configuration and setup before application code could be deployed.

After a few years, containers came to the rescue. Docker made its mark in the era of containers and made deploying applications easier: containers provided a simpler interface for shipping code directly into production. They also made it possible for platform providers to get creative, improving the scalability of users' applications. But what if developers could focus on even less? That is possible with serverless computing.

What exactly is "Serverless"?

Serverless computing is a cloud computing model that aims to abstract server management and low-level infrastructure decisions away from developers. In this model, the allocation of resources is managed by the cloud provider instead of the application architect, which brings some serious benefits. In other words, serverless aims to do exactly what it sounds like: allow applications to be developed without concern for implementing, tweaking, or scaling a server.

In the diagram below, you can see that you wrap your business logic inside functions. These functions execute on the cloud in response to events, while all the heavy lifting (authentication, database, file storage, reporting, scaling) is handled by your serverless platform, for example AWS Lambda or Apache OpenWhisk (originally from IBM).

When we say "serverless computing," does it mean no servers are involved?

The answer is no. Let's switch our mindset completely: think about using only functions, no more managing servers.
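The "only functions" model can be sketched in a few lines of Python: handlers register for event types, and a dispatcher (standing in for the cloud platform) invokes them. All names here are illustrative, not any real provider's API:

```python
# Minimal sketch of the event -> function model: the platform (here, the
# `dispatch` helper) routes each incoming event to the registered function.
HANDLERS = {}

def on(event_type):
    """Register a function to run in response to one event type."""
    def register(fn):
        HANDLERS[event_type] = fn
        return fn
    return register

@on("file.uploaded")
def make_thumbnail(event):
    # Stand-in for real business logic (e.g. multimedia processing).
    return "processed " + event["name"]

def dispatch(event):
    """What the serverless platform does on your behalf for each event."""
    return HANDLERS[event["type"]](event)
```

In a real platform, registration happens at deployment time and dispatch is driven by triggers such as file uploads or queue messages.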
You (the developer) care only about the business logic and leave the rest for Ops to handle.

Functions as a Service (FaaS)

FaaS is a concept based on serverless computing. It provides a means to achieve the serverless dream, allowing developers to execute code in response to events without building out or maintaining a complex infrastructure. What this means is that you can simply upload modular chunks of functionality into the cloud and have them executed independently. Sounds simple, right? Well, it is. If you have ever written a REST API, you will feel right at home: all the services and endpoints you would usually keep in one place are now sliced up into a bunch of tiny snippets, microservices. The goal is to completely abstract servers away from the developer and bill only for the number of times the functions are invoked.

Key components of FaaS:

- Function: the independent unit of deployment, e.g. file processing or performing a scheduled task.
- Events: anything that triggers the execution of the function, e.g. message publishing or a file upload.
- Resources: the infrastructure or components used by the function, e.g. database services or file system services.

Qualities of FaaS / Functions as a Service

- Execute logic in response to events.
In this context, all logic (including multiple functions or methods) is grouped into a single deployable unit, known as a "Function."

- Handle packaging, deployment, and scaling transparently.
- Scale your functions automatically and independently with usage.
- Spend more time writing code and app-specific logic, which means higher developer velocity.
- Built-in availability and fault tolerance.
- Pay only for the resources you use.

Use cases for FaaS

- Web/mobile applications.
- Multimedia processing: functions that execute a transformation in response to a file upload.
- Database changes or change data capture: auditing, or ensuring changes meet quality standards.
- IoT sensor input messages: the ability to respond to messages and scale in response.
- Stream processing at scale: processing data within a potentially infinite stream of messages.
- Chatbots: scaling automatically for peak demands.
- Batch jobs and scheduled tasks: jobs that require intense parallel computation, IO, or network access.

Some of the platforms for serverless

Introduction to AWS Lambda (an event-driven, serverless computing platform)

Introduced in November 2014, AWS Lambda is provided by Amazon as part of Amazon Web Services. It is a computing service that runs code in response to events and automatically manages the computing resources required by that code. Some of its features:

- Runs stateless, request-driven code called Lambda functions, written in Java, NodeJS, or Python.
- Triggered by events (state transitions) in other AWS services.
- Pay only for the requests served and the compute time.
- Lets you focus on business logic, not infrastructure.
- Handles capacity, scaling, monitoring and logging, fault tolerance, and security patching for your code.

Sample code for writing your first Lambda function: the sample demonstrates a simple cron job, written in NodeJS, which makes an HTTP POST request every minute to an external service. For a detailed tutorial, see https://parall.ax/blog/view/3202/tutorial-serverless-scheduled-tasks

Output: makes a POST call every minute.
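The original NodeJS sample is not reproduced here; as a hedged stand-in, a Python Lambda handler doing the same job (an HTTP POST fired by a one-minute schedule rule) might look like the sketch below. TARGET_URL is a placeholder for the external service:

```python
import json
import urllib.request

TARGET_URL = "https://example.com/endpoint"  # placeholder external service

def build_request(url, payload):
    """Build the HTTP POST request the function fires on each trigger."""
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def handler(event, context):
    """Entry point AWS Lambda invokes on each scheduled (cron) trigger."""
    req = build_request(TARGET_URL, {"source": "lambda-cron"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return {"statusCode": resp.status}
```

On AWS, the one-minute cadence would come from a CloudWatch Events / EventBridge schedule rule (rate(1 minute)) targeting this function.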
The function firing the POST request actually runs on AWS Lambda (a serverless platform).

Conclusion

Serverless platforms today are useful for tasks that require high throughput rather than very low latency, and that complete individual requests in a relatively short time window. The road to serverless can get challenging depending on the use case, but like any new technology, serverless architectures will continue to evolve toward a well-established standard.

References:

https://blog.cloudability.com/serverless-computing-101/
https://www.doc.ic.ac.uk/~rbc/papers/fse-serverless-17.pdf
https://blog.g2crowd.com/blog/trends/digital-platforms/2018-dp/serverless-computing/
https://www.manning.com/books/serverless-applications-with-node-js



Make Your Docker Setup a Success with these 4 Key Components

Building a web application to deploy on an infrastructure that needs to run in HA mode while remaining consistent across all zones is a key challenge. Thanks to the efforts of enthusiasts and technologists, we now have an answer to this challenge in the form of Docker Swarm. A Docker container architecture allows web applications to be deployed on the required infrastructure.

In this write-up I will run you through our Docker setup, emphasizing the challenges and key concerns in deploying web applications on such an infrastructure so that it is highly available, load balanced, and quickly deployable every time a change or release takes place. This may not sound easy, but we gave it a shot, and we were not disappointed.

Background

The Docker family is hardly restrained by environments. When we started analyzing container and cluster technologies, the main considerations were ease of use and simplicity of implementation. With the latest version of Docker Swarm, that became possible. Though Swarm seemed to lack potential in its initial phase, it has matured over time and dispels any doubts that may have been expressed in the past.

Docker Swarm

Docker Swarm is a great cluster technology from Docker. Unlike competitors such as Kubernetes, Mesos, and CoreOS Fleet, Swarm is relatively easy to work with. Swarm holds clusters of similar functions and communicates between them.

After much POC and analysis we decided to go ahead with Docker: we got our web application up and running and introduced it to the Docker family.
We realized that the web application might take some time to adjust to container deployment, so we considered revisiting the design and testing compatibility; but thanks to the dev community, the required precautions had already been taken during development. The web application is a typical 3-tier application: client, server, and database.

Key Challenges of Web Application Deployment

- Slow deployment
- HA
- Load balancing

Now let's move on to the Docker implementation steps:

1. Create a package using continuous integration.
2. Once the web application is built and packaged, modify the Dockerfile and append the latest version of the web app built using Jenkins. This was automated end to end.
3. Create the image from the Dockerfile and deploy it to a container.
4. Start the container and verify that the application is up and running.

The UI cluster exclusively held the UI containers, and the DB cluster held all the DB containers. Docker Swarm made the clustering very easy, and communication between containers occurred without any hurdle.

Docker Setup

Components: Docker containers, Docker Swarm, UCP, load balancer (nginx)

In total there are 10 containers deployed that communicate with the DB nodes and fetch data as required; across the whole setup, the containers we deployed were slightly short of 50. Docker UCP is an amazing UI for managing container orchestration end to end. UCP is responsible not only for on-premise container management but is also a solution for VPC (virtual private cloud): it manages all containers regardless of the infrastructure and of the application running on any instance. UCP comes in two flavors: an open-source and an enterprise solution.

Port mappings

The application is configured to listen on port 8080, which is redirected from the load balancer.
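The behavior described here (one common URL, requests farmed out to whichever container is available) can be illustrated with a tiny round-robin picker; the backend addresses are hypothetical:

```python
from itertools import cycle

class RoundRobin:
    """Toy model of the load balancer's container selection."""

    def __init__(self, backends):
        self._pool = cycle(backends)

    def pick(self):
        # Each request goes to the next container in the rotation.
        return next(self._pool)

# Hypothetical UI containers behind the nginx front end on port 8080.
lb = RoundRobin(["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"])
```

Real nginx does this with an upstream block; the sketch only mirrors the idea.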
The URL remains the same and common, but it eventually gets mapped to whichever container is available at that time, and the UI is served to the end user.

Key Docker Setup concerns

- One concern we faced with Swarm is that existing containers cannot be registered to a newly created Docker Swarm setup; we had to create the Swarm setup first and then create the images and containers in the respective clusters.
- UI nodes are deployed in the UI cluster and DB nodes in the DB cluster.
- Docker UCP and the nginx load balancer are deployed on a single host, which is exposed to the external network.
- MySQL DB is deployed on the DB cluster.

Following is the high-level workflow and design:

[High-level workflow and design diagram]
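The implementation steps above can be summarized as the docker CLI invocations a CI job would run; the image name, tag, and published port are hypothetical, and the commands are returned rather than executed so the pipeline can be reviewed first:

```python
def deployment_commands(image="webapp", tag="1.0.0", port=8080):
    """Render the build / deploy / verify commands for one release."""
    return [
        # Build the image from the Dockerfile produced by CI.
        ["docker", "build", "-t", f"{image}:{tag}", "."],
        # Deploy it as a swarm service with the port published.
        ["docker", "service", "create", "--name", image,
         "--publish", f"{port}:{port}", f"{image}:{tag}"],
        # Verify the service's tasks are up and running.
        ["docker", "service", "ps", image],
    ]
```

A Jenkins stage could feed each list to subprocess.run once the commands have been reviewed.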


EXPLORE ALL TAGS
2019 dockercon
Advanced analytics
Agentic AI
agile
AI
AI ML
AIOps
Amazon Aws
Amazon EC2
Analytics
Analytics tools
AndroidThings
Anomaly Detection
Anomaly monitor
Ansible Test Automation
apache
apache8
Apache Spark RDD
app containerization
application containerization
applications
Application Security
application testing
artificial intelligence
asynchronous replication
automate
automation
automation testing
Autonomous Storage
AWS Lambda
Aziro
Aziro Technologies
big data
Big Data Analytics
big data pipeline
Big Data QA
Big Data Tester
Big Data Testing
bitcoin
blockchain
blog
bluetooth
buildroot
business intelligence
busybox
chef
ci/cd
CI/CD security
cloud
Cloud Analytics
cloud computing
Cloud Cost Optimization
cloud devops
Cloud Infrastructure
Cloud Interoperability
Cloud Native Solution
Cloud Security
cloudstack
cloud storage
Cloud Storage Data
Cloud Storage Security
Codeless Automation
Cognitive analytics
Configuration Management
connected homes
container
Containers
container world 2019
container world conference
continuous-delivery
continuous deployment
continuous integration
Coronavirus
Covid-19
cryptocurrency
cyber security
data-analytics
data backup and recovery
datacenter
data protection
data replication
data-security
data-storage
deep learning
demo
Descriptive analytics
Descriptive analytics tools
development
devops
devops agile
devops automation
DEVOPS CERTIFICATION
devops monitoring
DevOps QA
DevOps Security
DevOps testing
DevSecOps
Digital Transformation
disaster recovery
DMA
docker
dockercon
dockercon 2019
dockercon 2019 san francisco
dockercon usa 2019
docker swarm
DRaaS
edge computing
Embedded AI
embedded-systems
end-to-end-test-automation
FaaS
finance
fintech
FIrebase
flash memory
flash memory summit
FMS2017
GDPR faqs
Glass-Box AI
golang
GraphQL
graphql vs rest
gui testing
habitat
hadoop
hardware-providers
healthcare
Heartfullness
High Performance Computing
Holistic Life
HPC
Hybrid-Cloud
hyper-converged
hyper-v
IaaS
IaaS Security
icinga
icinga for monitoring
Image Recognition 2024
infographic
InSpec
internet-of-things
investing
iot
iot application
iot testing
java 8 streams
javascript
jenkins
KubeCon
kubernetes
kubernetesday
kubernetesday bangalore
libstorage
linux
litecoin
log analytics
Log mining
Low-Code
Low-Code No-Code Platforms
Loyalty
machine-learning
Meditation
Microservices
migration
Mindfulness
ML
mobile-application-testing
mobile-automation-testing
monitoring tools
Mutli-Cloud
network
network file storage
new features
NFS
NVMe
NVMEof
NVMes
Online Education
opensource
openstack
opscode-2
OSS
others
Paas
PDLC
Positivty
predictive analytics
Predictive analytics tools
prescriptive analysis
private-cloud
product sustenance
programming language
public cloud
qa
qa automation
quality-assurance
Rapid Application Development
raspberry pi
RDMA
real time analytics
realtime analytics platforms
Real-time data analytics
Recovery
Recovery as a service
recovery as service
rsa
rsa 2019
rsa 2019 san francisco
rsac 2018
rsa conference
rsa conference 2019
rsa usa 2019
SaaS Security
san francisco
SDC India 2019
SDDC
security
Security Monitoring
Selenium Test Automation
selenium testng
serverless
Serverless Computing
Site Reliability Engineering
smart homes
smart mirror
SNIA
snia india 2019
SNIA SDC 2019
SNIA SDC INDIA
SNIA SDC USA
software
software defined storage
software-testing
software testing trends
software testing trends 2019
SRE
STaaS
storage
storage events
storage replication
Storage Trends 2018
storage virtualization
support
Synchronous Replication
technology
tech support
test-automation
Testing
testing automation tools
thought leadership articles
trends
tutorials
ui automation testing
ui testing
ui testing automation
vCenter Operations Manager
vCOPS
virtualization
VMware
vmworld
VMworld 2019
vmworld 2019 san francisco
VMworld 2019 US
vROM
Web Automation Testing
web test automation
WFH
