Cloud Updates

Uncover our latest and greatest product updates

A Complete Guide To Cloud Containers

This article surveys the current technological trends and advances that make cloud-scale orchestration possible. VMware brought physical machine virtualization to the commercial world about a decade ago. Today, container-based microservices are doing it again. Docker, Kubernetes, and Mesos are being discussed everywhere and are projected as the next big thing to watch. This article explores the latest buzz around containers.

1 Introduction

Physical machine virtualization started off a great trend in many areas. Today, virtualization is an umbrella term used widely: any place where a logical handle to a physical resource is provided, enabling sharing of that resource, is deemed virtualized. Virtualization, by extension, enables higher utilization of deployed resources. There is not just compute (CPU) virtualization; there is storage and network virtualization too. It has been known for some time that CPU capacity lay wasted, as its performance is far ahead of either the network or memory components, so it was assumed that virtualizing the CPU would provide the most benefit. The success of virtual machine adoption across varied domains puts any argument against it to rest, beyond any doubt. Companies widely posted success stories describing how they scaled their physical infrastructure and resolved failures, and the industry got busy integrating virtual machines into standard workflows. But then there was Google, who were not just experimenting with but deploying, with great success in live networks, another model: containers. In short, compared to virtual machines, containers are lightweight, fully isolated userland sandboxes for running processes [1].

Before "software-defined anything" was even spoken about, Google had designed its very own Borg cluster running and managing container-based microservices. Google made lots of assumptions to begin with, and in hindsight some of them were great.
That learning from container management is being used in the design and implementation of Kubernetes. Container lifecycle management itself is done by the Docker engine, part of Docker. And then there is Mesos. To understand and appreciate Kubernetes, Mesos, and the Docker engine, it is worth the effort to look at their fundamental building blocks.

Figure 1: VM vs Containers

2 Some History

Solaris projects/zones, BSD jails, and LXC containers all implement userspace compartments. The basis for all of this stems from the chroot system call, introduced way back in 1982! Although chroot provided a new root filesystem view for applications to run in, it opened up the need for the rest of the OS pieces to be virtualized too, and *BSD jails have been doing full container virtualization since time immemorial. But today, Linux seems to rule the world. With the relatively recent additions of control groups and namespaces, Linux gained a highly sandboxed environment for containers, and Docker Inc. open-sourced a suite of tools that provides a clean and easy workflow to distribute, create, and run container-based solutions. Kubernetes and Mesos applications are built over this native OS support for containerization. It is prudent to note that only userspace virtualization is possible in the container world: if different versions of an OS, or different OSes, are needed, then virtual machines are still required. And with Windows also working with Docker on container integration, we will surely see many cloud services being run as containers on multiple OSes. With that background out of the way, let us understand the major pieces Docker has brought in today.

3 Build, Ship and Run Anywhere

Why does everyone love containers? They make development, test, and deployment easy by recreating the same environment from development everywhere. Normally, requirements come from customers and an engineering team starts working on them.
Once the application is signed off by the dev team, the testing team tries to install it, and all of the application's dependencies need to be satisfied. That is in-house, where the environment can be controlled. But deployment is never easy, because a customer's environment can have a conflicting set of applications running, and satisfying the new application's dependencies alongside those existing ones is, to put it lightly, a nightmare. What does the container world do here? Every application carries its own set of libraries in the filesystem defined as part of its image, completely isolated from other processes and applications on the system. Voila! No more deployment nightmares. And setting up entire applications based on containers has been nicely solved by Docker.

4 Docker Suite

Docker comes with a suite of tools that together help organize, manage, and easily deploy containers for real-life applications.

4.1 Docker Engine

The Docker engine is the core technology that enables creating, destroying, and deploying containers. It connects to a Docker Registry to pull and push container images. The Docker engine has two parts: the server, a daemon that manages container lifecycle methods and exposes its functionality through a REST endpoint, and a command-line program that can run anywhere and manage containers by connecting to that REST endpoint over the network.

4.2 Docker Registry

A Docker Registry hosts container images. Public registries are available through Docker Hub, and registries can be set up in-house as well. Docker images are the containers' filesystems, so by defining a method for hosting these images at a registry, Docker has made it really easy to share images. Versioning of Docker images is also supported. Additionally, Docker images are being developed in line with the Open Container Initiative: https://www.opencontainers.org/

4.3 Docker Compose

This is a utility that helps set up a multi-container application environment.
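To make that concrete, here is a minimal sketch of a Compose template for a hypothetical two-container application, a web service plus a Redis cache. The service names and the webapp image are illustrative, not from this article:

```yaml
# docker-compose.yml (classic syntax; hypothetical example)
web:
  image: example/webapp      # application container image (illustrative name)
  ports:
    - "8080:80"              # host:container port mapping
  links:
    - cache                  # make the cache container reachable as "cache"
cache:
  image: redis               # off-the-shelf Redis image from Docker Hub
```

Running "docker-compose up" in the directory containing this file brings both containers up with their links and ports wired together.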
With Docker Compose, a template can be defined that captures all the dependencies. This can then be passed to Docker Compose to create the environment repeatedly and easily every time, as simply as running "docker-compose up".

Figure 2: Cloud Applications

4.4 Docker Machine

Docker Machine can create virtual machines that are readily usable for container deployment. It uses VirtualBox, or other supported drivers (https://docs.docker.com/build/builders/drivers/docker/), to create a Docker-ready virtual machine.

4.5 Kitematic

Kitematic is Docker's native GUI for working with containers locally on a personal machine. It is currently supported on Mac and will support Windows soon. It installs VirtualBox and provisions containers inside a local virtual machine.

4.6 Docker Swarm

Docker Swarm supports managing a cluster of machines on a network that run the Docker engine. Swarm agents running on each machine run a voting algorithm and elect a master node for the cluster; all operations on the cluster are routed to the swarm master node. A distributed key-value store like etcd, ZooKeeper, or Consul is used to keep the swarm nodes in good health and recover from node failures.

5 So What Are Kubernetes and Mesos About?

Kubernetes and Mesos are higher-level software stacks used for managing applications on a cluster built over containers. Just like the technology, applications [2] are changing too. We are used to client-server applications, where the applications were small and the servers powerful (consider databases, workflow apps, etc.). That class of applications is fast changing. Today, cloud-scale applications are another class, where the applications are resource hungry and individual servers can hardly satisfy them (think of YouTube, Twitter, Facebook, etc.). So we should understand that LXC (Linux containers) and the Docker engine play a key part in creating larger frameworks.

Figure 3: Kubernetes Stack
Figure 4: Mesos Cluster Configuration
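As a taste of how Kubernetes describes applications, here is a minimal sketch of a pod manifest. This is a hypothetical example: the names and image are illustrative, and it assumes the v1 API:

```yaml
# pod.yaml: a single-container pod (hypothetical example)
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web
spec:
  containers:
    - name: web
      image: example/webapp   # illustrative image name
      ports:
        - containerPort: 80   # port the container listens on
```

A Replication Controller wraps a pod template like this one with a replica count, and Kubernetes then keeps that many copies running across the cluster.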
But apart from using container technology as their basis, Kubernetes and Mesos approach cluster utilization in two different ways.

Kubernetes [3] understands and manages container-based application lifecycles very well. While the Docker engine eases sharing container images and creating and running containers, applications are a slew of containers that need to be constantly updated, have bugs fixed, gain new enhancements or be downgraded when critical issues are found, provide fault tolerance, and so on. Kubernetes is very good at enforcing this application lifecycle management, and we can really appreciate the power it brings to the table. Google has applied its vast experience in running containers to the design, and it shows: compare Kubernetes with Docker Compose, or consider the concepts of Pods, Replication Controllers, and Services in Kubernetes, which are nonexistent in Docker. But not all applications are microservices that can easily be packaged as containers.

And that is where Mesos excels! Mesos [4] solves this other class of problems by integrating frameworks (http://mesos.apache.org/documentation/latest/mesos-frameworks/). Each framework is a plugin to Mesos, and these frameworks teach Mesos to handle new application classes, like MapReduce, MPI, etc. Mesos natively controls only cluster membership and failover.

Figure 5: Mesosphere Stack

Scheduling is offloaded to the frameworks. Mesos slaves inform the masters about resource availability; the Mesos master runs an allocation policy module, which determines which framework to offer a resource to. Frameworks decide to either accept or reject the offer. If accepted, they provide details of the tasks to run, and the Mesos master shares the task information with the Mesos slaves, which run the tasks and report results and status as necessary.

What if one were to integrate these two software stacks to get the best of both worlds!
Mesosphere [5] did just that; they call it the Datacenter Operating System (DCOS). But that is a story for another day.

References

Docker Inc, https://www.docker.com
Kubernetes, https://kubernetes.io
Mesos, http://mesos.apache.org/
Linux Containers, https://en.wikipedia.org/wiki/LXC
FreeBSD Jails, https://en.wikipedia.org/wiki/FreeBSD_jail
Mesos, http://www.slideshare.net/Docker/building-web-scale-apps-with-docker-and-mesos
Mesosphere, http://www.slideshare.net/mesosphere/apache-mesos-and-mesosphere-live-webcast-by-ceo-and-cofounder-florian-li
Mesos tech paper, http://mesos.berkeley.edu/mesos_tech_report.pdf
Kubernetes, http://www.slideshare.net/wattsteve/kubernetes-48013640
Containers for the masses, http://patg.net/containers,virtualization,docker/2014/06/05/dockerintro/

Aziro Marketing


Cloud Computing for Enterprise VS SMBs- Key Differentiators

With the growing demand and hype around cloud computing and its related services, it is no surprise that boards have endless heated discussions on whether it is time to switch to cloud computing services and leave traditional enterprise software behind. However, the "me too" attitude is only good as long as the environment is as favorable to you as it is to your next-door neighbor. What many organizations, SMEs and enterprises alike, tend to overlook is whether it is the right time, and for the right reasons, to jump to cloud computing services. The answer needs to be carefully weighed and assessed; a marketing pitch alone does not suffice.

Contrary to popular belief, not every cloud computing service is right for every business. A careful examination and comparison is a must, and do not limit this to comparing different cloud providers alone. Before you embark on the cloud, ensure that moving away from your enterprise software is in your best interest. Though cost effectiveness is a great advantage for SMEs, it does not change much for enterprises, and the same holds for overall operating costs; enterprises cannot rely on benefits like these as reasons to move to the cloud. An enterprise reaching out to cloud computing service providers needs to look at things from a slightly different perspective. Security and performance risks should be your focus when sifting through cloud service providers: seemingly minor security breaches, or regular outages and downtime, can cause irreversible damage to an enterprise, and being unable to serve clients in real time can result in a huge loss. Though cloud service providers are quick to scale and expand to adjust to your ongoing needs, have you analyzed their capacity? You need to evaluate their scale-out plans with regard to infrastructure. According to research, many cloud-based ERP systems are reported to lack the infrastructure to accommodate interoperability with existing applications.
Cloud-based services may also limit extensive customization of the system. With users spread across various departments, it becomes imperative to adapt the system to individual teams; constraints on doing so result in inadequate performance and procedural rigidity. In another study, by Forrester, enterprises expressed concern about stability and about dedicating a team to maintain the system. These are some of the factors that enterprises need to think through before availing themselves of cloud-related services. Going by the heavily marketed notions of lower costs and perceived scalability alone will only prove limiting in the future.

Aziro Marketing


How to Make MS Azure Compatible with Chef

Let's get an overview of the Microsoft Azure cloud and how the popular configuration management tool Chef can be installed on Azure to make them work together.

Chef Introduction

Chef is a configuration management tool that turns infrastructure into code. You can easily configure your servers with the help of Chef, which helps you automate, build, deploy, and manage your infrastructure. If you want to know more about Chef, please refer to https://docs.chef.io/index.html. To create a user account on hosted Chef, refer to https://manage.chef.io/signup; alternatively, use open-source Chef by referring to https://docs.chef.io/install_server.html.

Microsoft Azure

Microsoft Azure is a cloud computing platform and infrastructure created by Microsoft for building, deploying, and managing applications and services through a global network of Microsoft-managed datacenters. It provides both PaaS and IaaS services and supports many different programming languages, tools, and frameworks, including both Microsoft-specific and third-party software and systems. For more details refer to http://azure.microsoft.com/en-in/ and http://msdn.microsoft.com/en-us/library/azure/dd163896.aspx.

There are three ways to install the Chef extension on the Azure cloud:

1. The Azure Portal
2. The Azure PowerShell CLI
3. Knife-azure, Chef's CLI tool for the Azure provider

Prerequisites:

An active account on the Azure cloud; see https://manage.windowsazure.com or https://portal.azure.com
An active account on hosted Chef; see https://manage.chef.io/signup
Your Chef account's organization validation key, client.rb, and run_list

Sample format for the client.rb file:

```ruby
log_location     STDOUT
chef_server_url  "https://api.opscode.com/organizations/"
validation_client_name "-validator"
```
1. Azure Portal

1.1 Log into your Azure account at https://portal.azure.com and list the existing virtual machines.
1.2 Select an existing VM.
1.3 Click the Extensions section.
1.4 Select Add Extension.
1.5 Select the Chef extension.
1.6 Click the Create button.
1.7 Upload the Chef configuration files.
1.8 You can now see the Chef extension for the VM.

2. Azure PowerShell CLI Tool

Azure PowerShell is a command-line tool used to manage Azure cloud resources. You can use cmdlets to perform the same tasks that you can perform from the Azure portal. Refer to http://msdn.microsoft.com/en-us/library/azure/jj156055.aspx.

Prerequisites:

Install the Azure PowerShell tool; refer to http://azure.microsoft.com/en-in/documentation/articles/install-configure-powershell/
Your Azure user account's publish settings file

We are going to use Azure PowerShell cmdlets to install the Chef extension on an Azure VM.

2.1 Import your Azure user account into your PowerShell session. Download the subscription credentials for accessing Azure by executing a cmdlet:

```powershell
PS C:\> Get-AzurePublishSettingsFile
```

This will launch your browser and download the credentials file.
```powershell
PS C:\> Import-AzurePublishSettingsFile
PS C:\> Select-AzureSubscription -SubscriptionName ""
PS C:\> Set-AzureSubscription -SubscriptionName "" -CurrentStorageAccountName ""
```

2.2 Create a new Azure VM and install the Chef extension:

```powershell
# Set VM and cloud service names
PS C:\> $vm1 = "azurechef"
PS C:\> $svc = "azurechef"
PS C:\> $username = 'azure'
PS C:\> $password = 'azure@123'
PS C:\> $img =  # Note: try the Get-AzureVMImage cmdlet to list images
PS C:\> $vmObj1 = New-AzureVMConfig -Name $vm1 -InstanceSize Small -ImageName $img

# Add-AzureProvisioningConfig for a Windows OR Linux VM
# For a Windows VM
PS C:\> $vmObj1 = Add-AzureProvisioningConfig -VM $vmObj1 -Password $password -AdminUsername $username -Windows
# or, for a Linux VM
PS C:\> $vmObj1 = Add-AzureProvisioningConfig -VM $vmObj1 -Password $password -LinuxUser $username -Linux

# Set-AzureVMChefExtension for a Windows OR Linux VM
# For a Windows VM
PS C:\> $vmObj1 = Set-AzureVMChefExtension -VM $vmObj1 -ValidationPem "C:\users\azure\msazurechef-validator.pem" -ClientRb "C:\users\azure\client.rb" -RunList "getting-started" -Windows
# or, for a Linux VM
PS C:\> $vmObj1 = Set-AzureVMChefExtension -VM $vmObj1 -ValidationPem "C:\users\azure\msazurechef-validator.pem" -ClientRb "C:\users\azure\client.rb" -RunList "getting-started" -Linux

# Create the VM
PS C:\> New-AzureVM -Location 'West US' -ServiceName $svc -VM $vmObj1
```

2.3 Install the Chef extension on an existing Azure VM:

```powershell
# Get the existing Azure VM
PS C:\> $vmObj1 = Get-AzureVM -ServiceName  -Name

# Set-AzureVMChefExtension for a Windows OR Linux VM
# For a Windows VM
PS C:\> $vmObj1 = Set-AzureVMChefExtension -VM $vmObj1 -ValidationPem "C:\users\azure\msazurechef-validator.pem" -ClientRb "C:\users\azure\client.rb" -RunList "getting-started" -Windows
# or, for a Linux VM
PS C:\> $vmObj1 = Set-AzureVMChefExtension -VM $vmObj1 -ValidationPem "C:\users\azure\msazurechef-validator.pem" -ClientRb "C:\users\azure\client.rb" -RunList "getting-started" -Linux
```

2.4 Use the following cmdlets to remove the Chef extension from a VM:

```powershell
# Get the existing Azure VM
PS C:\> $vmObj1 = Get-AzureVM -ServiceName  -Name
# Remove the Chef extension from the VM
PS C:\> Remove-AzureVMChefExtension -VM $vmObj1
# Update the VM
PS C:\> Update-AzureVM -ServiceName $vmName -Name $vmName -VM $vmObj1
```

2.5 Get the current state of the Chef extension:

```powershell
# Get the existing Azure VM
PS C:\> $vmObj1 = Get-AzureVM -ServiceName  -Name
# Get the Chef extension details from the VM
PS C:\> Get-AzureVMChefExtension -VM $vmObj1
```

3. Knife-Azure, Chef's Plugin

Knife-azure is a knife plugin to create, delete, and enumerate Windows Azure resources to be managed by Chef. The knife-azure plugin (v1.4.0.rc.0) provides features to create a VM and install the Chef extension on the Windows Azure cloud. For more details refer to https://docs.chef.io/plugin_knife_azure.html or https://github.com/opscode/knife-azure.

Prerequisites:

Ruby v1.9.3+
Chef v11.0+
knife-azure v1.4.0.rc.0 plugin
Azure user account publishsettings file
Chef user account configuration files

Install Ruby: on Windows, http://rubyinstaller.org/; on Linux, https://rvm.io/

Install Chef and the knife-azure plugin:

```shell
$ gem install chef
$ gem install knife-azure --pre
```

Download Chef's Starter Kit. The starter kit includes Chef's user/organization-related configuration details, i.e., the user.pem, organization-validator.pem, and knife.rb files. Please refer to https://learn.chef.io/legacy/get-started/#installthestarterkit or https://manage.chef.io/starter-kit.

Run the knife azure command to create a VM and install the Chef extension.

Create a Windows VM:

```shell
$ knife azure server create --azure-source-image  --azure-dns-name  --azure-service-location "" --winrm-user  --winrm-password  --azure-publish-settings-file  -c  --bootstrap-protocol "cloud-api"
```

Create a Linux VM:

```shell
$ knife azure server create -I  -x  -P  --bootstrap-protocol "cloud-api" -c  --azure-service-location ""  --azure-publish-settings-file
```

Note: to list available images:

```shell
$ knife azure image list -c  --azure-publish-settings-file
```

Microsoft Azure is a leading public cloud platform, and Chef is one of the most sought-after continuous integration and delivery tools. When they come together, the ramifications can be great. Please share your comments and questions below.

Aziro Marketing


How Can Aziro (formerly MSys Technologies) Expertise Help You with SaaS, PaaS, and IaaS

Cloud computing has revolutionized how we provide IT software and infrastructure. Today, many software companies are interested in providing their applications and services in the form of a cloud service rather than packaging their software and licensing it out to customers. There are a number of advantages to this type of service delivery model.

Customers have access to applications and data from anywhere. Only a connection to the Internet is necessary for a cloud-based application to run, and data is likewise accessible over the network rather than being confined to a single computer system or an organization's internal network.

Costs come down in this model. You no longer need advanced hardware resources to run an application: a simple thin client can access a cloud-based application from anywhere. The hardware that runs the application resides in the cloud, where it can be shared profitably across any number of systems. The thin client needs only a monitor, I/O devices, and just enough processing power to run the middleware that accesses the application from the cloud.

Cloud computing systems are highly scalable. You don't need to worry about adding hardware to run an application; the cloud takes care of all of that.

Servers and storage devices take up a lot of physical space, and renting that space can cost an organization a great deal of money. With the cloud, you can simply host your products and software on someone else's hardware and save that space on your end. A streamlined cloud hardware infrastructure also brings fewer technical support needs.

Since cloud computing takes advantage of a grid computing system in the back end, the front end doesn't really need to know the infrastructure to run any application of any size.
In simpler terms, advanced calculations that a normal computer would take years to complete can be done in seconds through a cloud computing platform.

Cloud Models

Cloud computing takes three major forms: SaaS, PaaS, and IaaS, expanded as Software, Platform, and Infrastructure as a Service.

In the case of SaaS, users are given access to software applications and their associated databases. The installation and operation of the software is done completely in the cloud, and authenticated access happens from a thin client. The cloud provides the load balancers required to run the application by distributing the work across multiple virtual machines. This complex back end is not even visible to the end user, who simply sees the running application through a single access point. SaaS applications can be offered on a subscription model, in which you pay a monthly or yearly fee for access to the application.

In the PaaS model, the cloud provides a computing platform that typically includes an operating system (Windows, Mac OS X, Linux, etc.), the programming languages required for software development, databases, and web servers, all hosted in the cloud. Instances of the PaaS model include Microsoft Azure and Google App Engine.

In the IaaS model, you have as many virtual machines as you need in the cloud. A hypervisor, such as VMware ESXi, Oracle VirtualBox, XenServer, or Hyper-V, is provided through the IaaS platform. Additionally, virtual machine disk image libraries, raw block storage, object storage, firewalls, load balancers, virtual LANs, etc., are all provided by the IaaS model. This helps any organization successfully deploy its applications to the cloud. The most popular IaaS provider is probably Amazon Web Services.

Deployment Models

Three types of deployment models exist in cloud architecture: private cloud, public cloud, and hybrid cloud. A private cloud is managed and operated internally by a single organization.
A significant amount of virtualization is required for a private cloud deployment, and that can increase the initial investment required. However, when deployed correctly, a private cloud can be highly profitable for any organization.

A public cloud is rendered to the public as a service. For instance, Amazon AWS, Microsoft Azure, etc., are offered to the public for deploying their applications and infrastructure. This type of architecture requires you to analyze the security and communication concerns of the cloud.

A hybrid cloud, as the name implies, can combine private, community, and public cloud deployments, bringing together the advantages of each. Various arrangements are possible: for instance, a company can store sensitive client data in a private cloud while deploying business intelligence services provided by a public cloud vendor.

MSys's Cloud Expertise

Aziro (formerly MSys Technologies) and its subsidiary company Clogeny have done several cloud-based projects in the past. We have analyzed clients' current infrastructure and provided a proper road map to cloud deployment. In implementation, we have taken care of the complete design of the cloud computing model, built test environments to check the validity of the design, and migrated apps and data to go live. We also provide fully functional cloud support through transition plans, service reviews, and service implementations.

We have worked with some of the major cloud service providers in the industry, including Amazon Web Services, Microsoft Azure, Rackspace, HP Cloud, Google Cloud, OpenStack, Salesforce, Google Apps, Netsuite, Office 365, etc. We have also helped organizations take advantage of their data by providing big data services. Leading companies in storage, server imaging, and datacenter provisioning have been our clients since our inception in 2007.
In private and public cloud deployments, a few of our clients include Datapipe, Instance, and Netmagic. Our cloud-based product, PurpleStrike RT, is a load-testing tool that utilizes Amazon's EC2 platform.

Conclusion

Cloud computing may prove to be the most important technology for future IT deployments. Many companies have already moved to the cloud, and many more are in the process of slowly transitioning to it.

Aziro Marketing


How to develop custom knife-cloud plugin using Knife Cloud Gem

Chef Software, Inc. has released the knife-cloud gem. This article discusses what the knife-cloud gem is and how you can use it to develop your own custom knife-cloud plugin.

Knife is a CLI tool used for communication between a local chef-repo and the Chef Server. There are a number of knife subcommands supported by Chef, e.g., knife bootstrap, knife cookbook, knife node, knife client, knife ssh, etc. A knife plugin is an extension of the knife commands that supports additional functionality. There are about 11 knife plugins managed by Chef and many more managed by the community.

The concept of knife-cloud came up because we have a growing number of cloud vendors, and therefore a number of knife plugins, to support cloud-specific operations. The knife-cloud plugins use cloud-specific APIs to provision a VM and bootstrap it with Chef. These plugins perform a number of common tasks, such as connecting to the node over SSH or WinRM and bootstrapping the node with Chef. The knife-cloud gem has been designed to integrate the common tasks of all knife cloud plugins. As a developer of a knife cloud plugin, you will not have to worry about writing this generic code in your plugin. More importantly, if there is a bug or change in the generic code, the fix can be made in knife-cloud itself; today, such changes have to be applied across all the knife plugins that exist.

Knife-cloud is open source, available at https://github.com/opscode/knife-cloud. You may refer to https://github.com/opscode/knife-cloud#writing-your-custom-plugin for the steps to write your custom knife cloud plugin. Aziro (formerly MSys Technologies) has written a knife-cloud scaffolder (https://github.com/MsysTechnologies/knife-cloud-scaffolder) to make your job even simpler. The scaffolder generates stub code for you, with appropriate TODO comments to guide you in writing your cloud-specific code.
To use the knife-cloud-scaffolder:

1. git clone https://github.com/MsysTechnologies/knife-cloud-scaffolder
2. Update properties.json
3. Run the command: ruby knifecloudgen.rb, e.g., ruby knifecloudgen.rb ./knife-myplugin ./properties.json

Your knife-myplugin stub will be ready. Just add your cloud-specific code to it, and you should be ready to use your custom plugin.

Aziro Marketing


How to Dockerize your Ruby-On-Rails Application?

Packaging an application along with all of its bin/lib files and dependencies, and deploying it in complex environments, is much more tedious than it sounds. To alleviate this, Docker, an open-source platform, enables applications to quickly group their components and eliminates the friction between development, QA, and production environments. Docker is a lightweight packaging solution that can be used instead of a virtual machine: an open-source engine to create portable, lightweight containers from any application.

Docker is hardware- and platform-agnostic, which means a Docker container can run on any supported hardware or operating system. The fact that it takes less than a second to spawn a container from a Docker image shows that Docker really is lightweight compared to any other virtualization mechanism. Docker images are also less than a tenth the size of their virtual machine counterparts; images created by extending a Docker base image can be as tiny as a few megabytes, which makes it easier and faster to move your images across environments.

Docker Hub is the central repository for Docker. It stores public as well as private images; private images are only accessible to the user account or team to which they belong. Docker Hub can be linked to GitHub or Bitbucket to trigger automated builds, and the result of such a build is a ready-to-deploy Docker image of the application.

Docker provides mechanisms to separate application dependencies, code, configuration, and data through features such as container linking, data volumes, and port mapping. Dependencies and configuration are specified in the Dockerfile script.
The Dockerfile installs all the dependencies, pulls the application code from a local or remote repository, and builds a ready-to-deploy application image.

Container Linking

Docker's container linking mechanism allows communication between containers without exposing the communication ports and details. The command below spawns a Tomcat application container and links it to mysql-db-container under the alias mysql. The Tomcat application can reach the database through the environment variables (host, port, password, and so on) that Docker injects for the linked container, thereby providing maximum application security.

```shell
docker run --link mysql-db-container:mysql clogeny/tomcat-application
```

Data Volumes

Docker provides data volumes to store, back up, and separate application data from the application. Data volumes can be shared between multiple containers, and read/write policies can be specified for a given data volume. Multiple data volumes can be attached to a container by passing the -v flag multiple times. Docker also allows mounting a host directory as a data volume in a container.

```shell
# create a dbdata volume inside the mysql-instance1 container
docker run -v /dbdata --name mysql-instance1 my-sql

# mount and share all the volumes from the mysql-instance1 container
docker run --volumes-from mysql-instance1 --name my-sql-instance2 my-sql
```

Dockerizing a Ruby on Rails Application

Four simple steps to Dockerize your Ruby on Rails application:

1. Install Docker.

2. Create a Dockerfile like the one below in your application directory:

```dockerfile
# Use the rails image from the Docker Hub central repository
FROM rails
MAINTAINER Clogeny
# Copy the source files from the host into the container.
# A URL to the code repository can also be used.
ADD ./src ./railsapp
RUN bundle install
# Expose port 3000 to communicate with the RoR server
EXPOSE 3000
# Run the RoR server with the "rails s" command
ENTRYPOINT rails s
```

3. Build the application image.
This command creates a ready-to-run Rails image with your Rails application deployed.

docker build -t clogeny/my-ror-app . # -t specifies the name of the image that gets created

4. Push the application image to the central repository so that QA can use it to test the application. The image can be used to speed up and revolutionize the CI/CD workflow.

docker push clogeny/my-ror-app # upload the Docker image to the central repo

Deploying the Dockerized Application

Deployment requires executing just one command to get the application up and running on the test machine. Assuming Docker is installed on the host, all we need to do is execute the "docker run" command to spawn a Docker container.

docker run # spawn a Docker container
-t # allocate a pseudo-TTY so stdout and stderr are shown on the command line
-p 3010:3000 # map container port 3000 to host port 3010
clogeny/my-ror-app # use the "my-ror-app" image uploaded earlier to the repo

And here we are: the Docker container is up and running in a matter of seconds. We can access the application at the URL http://localhost:3010
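On the consuming side, an application reads the linked container's connection details from its environment. A minimal sketch in Ruby of that pattern follows; the variable names are assumptions based on a link alias of db exposing MySQL's port 3306, so check the environment inside your own container for the exact names.

```ruby
# Read the connection details that Docker's legacy --link mechanism
# injects as environment variables into the linked container.
# Names below assume `--link mysql:db` with port 3306 exposed;
# adjust the alias/port to match your own link.
def db_config_from_env(env = ENV)
  {
    host:     env.fetch('DB_PORT_3306_TCP_ADDR', 'localhost'),
    port:     Integer(env.fetch('DB_PORT_3306_TCP_PORT', '3306')),
    password: env['DB_ENV_MYSQL_ROOT_PASSWORD'] # set by the mysql image, if any
  }
end
```

Because the lookup is driven entirely by environment variables, the same application code runs unchanged whether the database lives in a linked container or on the host.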

Aziro Marketing


How to Migrate Your Data and Application to the Cloud

Cloud computing is intended to reduce the expenses of IT organizations by lowering capital expenditure, allowing them to purchase only the required amount of computing and storage resources. Today, due to the enormous advantages of cloud computing, many organizations are exploring how the cloud could be leveraged to make their enterprise applications available on an on-demand basis. In the last few years, thousands of companies have moved to the cloud through public, private, or hybrid cloud offerings, and many others are considering the move. Before you migrate, it is important to look at the major advantages of cloud computing.

When it comes to migration to the cloud, you can take advantage of platforms such as Microsoft's Windows Azure, Google Cloud, Amazon AWS, and Citrix. Companies use these platforms to build websites, web apps, mobile apps, media solutions, and more. By migrating to the cloud, you can potentially create more business profit by taking risks and encouraging experimentation: while risk-taking in the past required you to invest heavily in hardware and software, the cloud allows you to create an application on a completely scalable platform and get it out in the form of a service rather than selling licenses.

Despite these advantages, migration may not be an easy task. For instance, enterprise applications face strict requirements in terms of performance, service uptime, and so on. Migrating them to the cloud requires you to analyze all these requirements very closely and come up with an in-depth migration plan that increases ROI.

The hardware resources required can be greatly reduced by cloud migration. Since pooled resources are better utilized, moving to the public cloud can dramatically decrease the need for in-house servers. This also reduces physical floor space and power consumption. In addition, as mentioned above, migration will reduce operational and management costs.
A number of solution and service providers in the cloud market can help you migrate easily at a reduced cost structure. Aziro (formerly MSys Technologies) has been providing this type of migration service for years.

Things to Check before Cloud Migration

An important thing to consider while migrating to the cloud is the set of changes required in the architecture of the application being migrated. In many cases, the application must undergo a complete architecture change to be fit for the cloud. A service-oriented application works well with the abstraction of cloud services through application programming interfaces (APIs). You should also assess whether the application needs to be altered to take advantage of native cloud features. Direct access to elastic storage, management interfaces, and auto-provisioning services are some of the cloud features you may want to take advantage of.

Migration Roadmap

During the transition, you should also ensure that the level of service provided in the cloud is comparable to or better than the service provided by traditional technical environments. Failure to meet this requirement is the result of an improper migration, and it will lead to higher costs, loss of business, and so on, eliminating any benefits the cloud could provide. The steps involved in migrating an application to the cloud include:

1. Assess Your Applications and Workloads

This step allows organizations to find out what data and applications can be readily moved to the cloud. During this phase, you can also determine the delivery models supported by each application. It of course makes sense to sort the applications to be ported by risk factor, starting with the ones holding a minimal amount of customer data or other sensitive information.

2. Build a Business Case

Building a business case requires you to come up with a proper migration strategy for porting your applications and data to the cloud.
This strategy should incorporate ways to reduce costs, demonstrate advantages, and deliver meaningful business value. Value propositions of cloud migration include the shift of capital expenditure to operational expenditure, cost savings, faster deployment, elasticity, and more.

3. Develop a Technical Approach

There are two potential service models for migrating an existing application: Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). For a PaaS migration, the application itself has to be designed for the runtimes available in the target PaaS; for IaaS, the requirements are less demanding.

4. Adopt a Flexible Integration Model

An application that gets migrated to the cloud might have existing connections with other applications, services, and data. It is important to understand the impact of these connections before migrating. The integration model adopted may involve three types: process integration (an application invokes another application to execute a workflow), data integration (integration of the data shared among applications), and presentation integration (multiple applications sharing results over a single dashboard).

5. Address Security and Privacy Requirements

Two of the most important issues faced in migration to the cloud are security and privacy. Especially for applications that deal with sensitive information, such as credit card numbers and social security numbers, security must be high. Several issues need to be addressed, including how difficult it is for an intruder to steal any data, proper notification of security breaches, the reliability of the cloud service provider's personnel, authorization, and more.

6. Manage the Migration

After thoroughly analyzing the various benefits and issues associated with migration, planning and execution of the migration can happen.
It should be done in a controlled manner with the help of a formal migration plan that tracks durations, resources, costs, and risks.

Aziro (formerly MSys Technologies)' Cloud Expertise

In cloud migration, we have industry-wide expertise in SaaS, PaaS, and IaaS. In IaaS, we have worked on private and public clouds with infrastructures such as OpenStack, Amazon AWS, Windows Azure, Rackspace, VMware, Cloupia, and HP Cloud. Clogeny, Aziro (formerly MSys Technologies)'s subsidiary company, has worked on hybrid cloud migration projects for leading clients in server imaging and datacenter provisioning. We have helped add support for several public vCloud Director implementations, including Bluelock, AT&T, Savvis, and Dell. In addition, we have architected a hybrid cloud migration appliance for VMware vSphere. In enterprise Java PaaS, we have worked on VMware vCloud Director, AWS, HP Cloud, and Rackspace.

Conclusion

Cloud computing provides key benefits for companies, and migration to the cloud can create a better, more modern business model for most tech companies.

Aziro Marketing


How to write Ohai plugin for the Windows Azure IaaS cloud

Chef is an open-source systems management and cloud infrastructure automation framework created by Opscode. It helps you manage your IT infrastructure and applications as code, giving you a way to automate your infrastructure and processes. Knife is a CLI to create, update, search, and delete entities, or manage actions on entities, in your infrastructure: nodes (hosts), cloud resources, metadata (roles, environments), code for infrastructure (recipes, cookbooks), and so on. A Knife plug-in is a set of one or more subcommands that can be added to Knife to support additional functionality beyond the base set of Knife subcommands.

Ohai, Ohai plugins, and the hints system

Ohai is a tool that detects certain properties about a node's environment and provides them to the chef-client during every Chef run. The types of properties Ohai reports on include: platform details, networking usage, memory usage, processor usage, kernel data, host names, fully qualified domain names (FQDNs), and other configuration details. When additional data about a system's infrastructure is required, a custom Ohai plugin can be used to gather that information. An Ohai plugin is written in a Ruby DSL, and there are several community Ohai cloud plugins providing cloud-specific information.

Writing an Ohai plug-in for the Azure IaaS cloud

In simple words, an Ohai plug-in is a Ruby DSL script that populates and returns a Mash object to upload nested data. It can be as simple as:

provides "azure"
azure Mash.new
azure[:version] = "1.2.3"
azure[:description] = "VM created on azure"

And you are done! In practice you would populate this programmatically. This plug-in is now ready, and when the chef-client runs you will see these attributes set for the node. See the Chef documentation for more on how to set up custom plug-ins. Additionally, Ohai includes a hinting system that allows a plugin to receive a hint from the existence of a file.
These files are in JSON format and allow passing additional information about the environment at bootstrap time, such as region or datacenter. This information can then be used by Ohai plug-ins to identify the type of cloud the node was created on and, additionally, any cloud attributes that should be set on the node.

Let's consider a case where you create a virtual machine instance on the Microsoft Windows Azure IaaS cloud using the knife-azure plugin. Typically, once the VM is created and successfully bootstrapped, we can use knife ssh to secure-shell into the VM and run commands. To do so, the public IP or FQDN must be set as an attribute. In the case of Azure, the public FQDN can only be retrieved by querying the Azure management API, which can add a lot of overhead to Ohai. Alternatively, we can handle this using the Ohai hint system: the knife-azure plug-in can figure out the public FQDN as part of VM creation and pass this information on to the VM. An Ohai plug-in can then be written that reads the hints file and determines the public IP address.

Let's see how to achieve this. The hints data can be generated by any cloud plug-in and sent over to the node during bootstrap. For example, say the knife-azure plug-in sets a few attributes within the plug-in code before bootstrap:

Chef::Config[:knife][:hints]["azure"] ||= cloud_attributes

where cloud_attributes is a hash containing the attributes to be set on the node using the Azure Ohai plug-in:

{"public_ip":"137.135.46.202","vm_name":"test-linuxvm-on-cloud","public_fqdn":"my-hosted-svc.cloudapp.net","public_ssh_port":"7931"}

You can also pass this information as a JSON file to the plug-in, if it is not feasible to modify the plug-in code and the data is available before the knife command runs, so that it can be passed as a CLI option:

--hint HINT_NAME[=HINT_FILE]
Specify an Ohai hint to be set on the bootstrap target. Use multiple --hint options to specify multiple hints.
The corresponding Ohai plug-ins that load this information and set the attributes can be seen here: https://github.com/opscode/ohai/blob/master/lib/ohai/plugins/cloud.rb#L234. In the above scenario, this loads an attribute like cloud.public_fqdn on the node, which can then be used by the knife ssh command or for any other purpose.

Knife SSH example

Once the attributes are populated on the Chef node, we can use the knife ssh command as follows:

$ knife ssh 'name:nodename' 'sudo chef-client -v' -a 'cloud.public_fqdn' --identity-file test.pem --ssh-user foo --ssh-port 22
my-hosted-svc.cloudapp.net Chef: 11.4.4

Note the use of the attribute 'cloud.public_fqdn', which is populated from the JSON using the Ohai hint system. This post is meant to explain the basics and showcase a real-world example of Ohai plugins and the hints system.
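As an illustrative sketch of what such a plug-in boils down to (outside of Ohai, using a plain Hash in place of Mash, with an assumed hint path and key mapping), the core logic parses the hint JSON and copies selected keys into the generic cloud attribute namespace:

```ruby
require 'json'

# Simplified stand-in for Ohai's hint lookup: read a JSON hint file
# (Ohai searches hint directories such as /etc/chef/ohai/hints/) and
# parse it, returning nil when no hint is present.
def read_hint(path)
  return nil unless File.exist?(path)
  JSON.parse(File.read(path))
end

# Map the azure hint keys into a generic "cloud" hash, analogous to how
# the cloud.rb plugin referenced above exposes cloud.public_fqdn.
def cloud_from_azure_hint(hint)
  return nil if hint.nil? # no hint file => not on this cloud
  {
    'provider'    => 'azure',
    'public_ip'   => hint['public_ip'],
    'public_fqdn' => hint['public_fqdn']
  }
end
```

With the hint JSON from the example above, this yields a public_fqdn of my-hosted-svc.cloudapp.net, which is exactly the value knife ssh reads via -a 'cloud.public_fqdn'.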

Aziro Marketing


Learn about the Latest Enterprisy Updates to knife-cloudstack!

Opscode's Chef is an open-source systems integration framework built specifically for automating the cloud. Knife is a powerful CLI used by administrators to interact with Chef, and it is easily extensible to support provisioning of cloud resources. There is currently support for over 15 cloud providers, including Amazon EC2, Rackspace, OpenStack, and CloudStack. Ever since the acquisition of Cloud.com by Citrix, CloudStack (now re-christened Citrix CloudPlatform) has been actively morphed into a more enterprise-focused product, with support for production-grade networking appliances like the NetScaler suite, F5 BIG-IP, and Cisco Nexus 1000V, and networking features like inter-VLAN communication and site-to-site VPN. Continuing in that spirit, the knife-cloudstack plugin has recently received major updates targeted at enterprises using CloudStack/CloudPlatform in private environments:

Microsoft Windows Server bootstrapping: Microsoft Windows Server is widely used across enterprises to host a variety of critical internal and external applications, including Microsoft Exchange, SharePoint, and CRM. We have added support to easily provision and bootstrap Windows machines via the WinRM protocol, with the ability to use both Basic and Kerberos modes of authentication.

Support for Projects: CloudStack Projects is a widely used feature in enterprises, allowing business units to isolate their compute, networking, and storage resources for better chargeback, billing, and management of resources. The plugin now supports the ability to spawn servers, choose networks, and allocate IP addresses in specific projects.

Choose between Source NAT and Static NAT: Enterprises host certain applications for their customers, partners, or employees on public IP addresses. Hence they prefer to use static NAT (IP forwarding, EC2-style) rather than source NAT (port forwarding) for increased security and control. Enabling static NAT is as simple as setting a flag.
Ability to choose networks: Typically, enterprises prefer isolating different types of traffic on different networks, e.g. VoIP traffic on higher-QoS networks, separate storage/backup networks, and so on. The plugin now adds the ability to spawn virtual machines, as well as allocate public IP addresses, from specific networks.

Sample Examples

Windows bootstrapping:

knife cs server create --cloudstack-service 'Medium Instance' --cloudstack-template 'w2k8-basic' --winrm-user 'Administrator' --winrm-password 'xxxx' --winrm-port 5985 --port-rules "3389:3389:TCP" --bootstrap-protocol winrm --template-file windows-chef-client-msi.erb

knife cs server create --cloudstack-service "Medium Instance" --cloudstack-template "w2k8-with-AD" --kerberos-realm "ORG_WIDE_AD_DOMAIN" --winrm-port 5985 --port-rules "3389:3389:TCP" --bootstrap-protocol winrm --template-file windows-chef-client-msi.erb

Support for Projects and static NAT:

knife cs server create --cloudstack-service 'Medium Instance' --cloudstack-template 'w2k8-basic' --cloudstack-project 'Engg-Dev' --winrm-user 'Administrator' --winrm-password 'Fr3sca21!' --static-nat --port-rules "3389:TCP" --bootstrap-protocol winrm

Choose specific networks:

knife cs server create "rhel-node-1" --node-name "rhel-node-1" -T "RHEL 5.7-x86" --bootstrap-protocol ssh --ssh-user root --ssh-password **** --service "Small Instance" --networks "Admin-Default" --port-rules '22:tcp'

The plugin is available to download from source at: knife-cloudstack. Update: knife-cloudstack 0.0.13 has been released to rubygems.org with these changes; gem install knife-cloudstack for the latest.
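To make the --port-rules format in the examples above concrete, here is a hypothetical parser sketch in Ruby (an illustration of the format, not the plugin's actual code): a three-part rule like "3389:3389:TCP" reads as public-port:private-port:protocol, while a two-part rule like "22:tcp" gives just a port and protocol, with the private port defaulting to the public one.

```ruby
# Hypothetical parser for knife-cloudstack style --port-rules strings.
# "3389:3389:TCP" => public:private:protocol; "22:tcp" => port:protocol.
def parse_port_rule(rule)
  parts = rule.split(':')
  case parts.length
  when 3
    { public_port: Integer(parts[0]), private_port: Integer(parts[1]),
      protocol: parts[2].upcase }
  when 2
    # Short form: private port defaults to the public port.
    { public_port: Integer(parts[0]), private_port: Integer(parts[0]),
      protocol: parts[1].upcase }
  else
    raise ArgumentError, "unrecognized port rule: #{rule}"
  end
end
```

Parsing each rule into an explicit hash like this is what lets a plugin turn one CLI flag into the separate firewall and port-forwarding API calls the cloud expects.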

Aziro Marketing

