DevOps Updates

Uncover our latest and greatest product updates

2 Approaches to Ensure Success in Your DevOps Journey

DevOps makes continuous software delivery simple for both development and operations teams, with a set of tools and best practices. To understand the power of DevOps, we chose a standard development environment with a suite of applications: Git, Gerrit, Jenkins, JIRA, and Nagios. We studied setting up such a traditional environment and compared it with a more modern approach based on Docker containers.

Introduction

In this article we discuss DevOps and two ways of approaching it: traditional and container-based. For this purpose, we use a fictitious software company (the client) which wants to streamline its development and delivery process.

What is DevOps?

DevOps means many things to many people. The definition closest to our view is: "DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support." It is the process of continuously delivering good-quality code, with the help of tools that make it easy. Many tools work in tandem to ensure that only good-quality code ends up in production. Our client wants to use the following tools.

DevOps Tools

Git – the most popular distributed version control system.
Gerrit – a code review tool.
Jenkins – a continuous integration tool.
JIRA – a bug tracking tool.

Development Workflow

We came up with the following workflow, which captures a typical development life cycle:

1. A developer commits changes to the staging area of a branch.
2. Gerrit watches for commits in the staging area.
3. Jenkins watches Gerrit for new change sets to review, and triggers a set of jobs to run on each patch set.
4. The results are shared with both Gerrit and JIRA; based on the commit message, the appropriate JIRA issue is updated.
5. When reviewers accept the change, it is ready to commit.

To let Jenkins auto-update JIRA issues, a commit-message pattern was enforced for all commits. This allowed us to automate Jenkins to find and update the right JIRA issue (a minimal sketch of one such pattern appears just before the Container Approach section below).

DevOps Operations Workflow

Operations teams were more concerned with provisioning machines (physical or virtual), installing the suite of applications, and eventually monitoring those machines and applications. Alert notifications are important too, so that any anomalies are addressed at the earliest. For monitoring and alert notification we used Nagios.

Two Types of DevOps Approach

Traditional Approach

The traditional approach is to manually install these tools on bare-metal boxes or virtual machines and configure them to talk to each other. Briefly, the steps for a traditional DevOps infrastructure are:

1. Git/Gerrit, Jenkins, and JIRA are installed on one or more machines.
2. Gerrit projects and accounts are created, and access is granted as required.
3. The required plugins are installed on Jenkins (Gerrit Trigger, Git Client, JIRA Issue Updater, Git, etc.).
4. The installed plugins are configured on Jenkins.
5. The Jenkins SSH key is added to the Gerrit account.
6. Accounts and a few sample issues are created in JIRA.

Now the whole DevOps infrastructure is ready to be used.

DevOps Automation Services via Python Script

We automated the installation, configuration, and monitoring workflow using Python scripts. The actual Python code for downloading, installing, and configuring Git, Gerrit, Jenkins, JIRA, and Nagios can be found in the following GitHub repository: https://github.com/sujauddinmullick/dev_ops_traditional
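Coming back to the commit-message convention: here is a minimal, hypothetical sketch of how a JIRA issue key could be extracted from a commit message. It assumes keys of the form PROJ-123 on the first line of the message; the real pattern depends on your JIRA project keys and on how the Jenkins plugins are configured.

# commit_message_pattern.rb - illustrative only
# Hypothetical convention: "PROJ-123: short description of the change"
JIRA_KEY = /\b([A-Z][A-Z0-9]+-\d+)\b/

def jira_issue_for(commit_message)
  # Return the first JIRA-style key found in the message, or nil.
  commit_message[JIRA_KEY, 1]
end

puts jira_issue_for('DEVOPS-42: fix Gerrit trigger timeout')  # => DEVOPS-42

With a convention like this in place, a Jenkins job can reliably map every Gerrit change back to the JIRA issue it belongs to.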
Container Approach

The automation of installation and configuration relieves us of some of the pain in setting up infrastructure. But think of a situation where the client's environment dependencies conflict with our DevOps infrastructure dependencies. To solve this problem, we isolated the DevOps environment from the existing environment, using the Docker engine to set up these tools.

The Docker engine builds on Linux containers. A Linux container is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a single control host [2]. It differs from virtual machines in many ways; one striking difference is that containers share the host's kernel and library files while virtual machines do not. This makes containers more lightweight than virtual machines. Getting a Linux container up and running by hand is, once again, a difficult task; Docker makes it simple. Docker is built on top of Linux containers, and it makes it easy to create, deploy, and run applications using containers.

A Dockerfile is used to create a container image; it contains the instructions for Docker to assemble an image. For example, the following Dockerfile will build a Jenkins image.

Sample Dockerfile for Jenkins:

FROM jenkins
MAINTAINER sujauddin

# Install plugins
COPY plugins.txt /usr/local/etc/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/local/etc/plugins.txt

# Add gerrit-trigger plugin config file
COPY gerrit-trigger.xml /usr/local/etc/gerrit-trigger.xml
COPY gerrit-trigger.xml /var/jenkins_home/gerrit-trigger.xml

# Add Jenkins URL and system admin e-mail config file
COPY jenkins.model.JenkinsLocationConfiguration.xml /usr/local/etc/jenkins.model.JenkinsLocationConfiguration.xml
COPY hudson.plugins.jira.JiraProjectProperty.xml /var/jenkins_home/hudson.plugins.jira.JiraProjectProperty.xml
COPY jenkins.model.JenkinsLocationConfiguration.xml /var/jenkins_home/jenkins.model.JenkinsLocationConfiguration.xml
#COPY jenkins.model.ArtifactManagerConfiguration.xml /var/jenkins_home/jenkins.model.ArtifactManagerConfiguration.xml

# Add setup script.
COPY jenkins-setup.sh /usr/local/bin/jenkins-setup.sh

# Add cloud setting in config file.
COPY config.xml /usr/local/etc/config.xml
COPY jenkins-cli.jar /usr/local/etc/jenkins-cli.jar
COPY jenkins_job.xml /usr/local/etc/jenkins_job.xml

We can run the images built this way inside Docker containers. To set up and run a set of containers, the docker-compose tool is used: a single docker-compose up command takes a docker-compose.yml file and builds and runs all the containers defined there. The actual compose file we used to bring up Gerrit, Jenkins, JIRA, and Nagios is given below.

docker-compose.yml

final_gerrit:
  image: sujauddin/docker_gerrit_final
  restart: always
  ports:
    - 8020:8080
    - 29418:29418

final_JIRA:
  build: ./docker-JIRA
  ports:
    - 8025:8080
  restart: always

final_jenkins:
  build: ./docker-jenkins
  restart: always
  ports:
    - 8023:8080
    - 8024:50000
  links:
    - final_JIRA
    - final_gerrit

final_DevOpsnagios:
  image: tpires/nagios
  ports:
    - 8036:80
    - 8037:5666
    - 8038:2266
  restart: always

With one command we got all the containers up and running, with all the necessary configuration done, so that the whole DevOps workflow runs smoothly. Isn't that cool?

Conclusion

Clearly, the Docker-based approach is easier to set up and more efficient. Containers deploy quickly (usually in a few seconds), can be ported along with the application and its dependencies, and have a minimal memory footprint. The result: a happy client!

Aziro Marketing


Building Package Using Omnibus

Omnibus is a tool for creating full-stack installers for multiple platforms. In general, it simplifies the installation of any software by bundling all of the dependencies for that piece of software. It was written by the people at Chef, who use it to package Chef itself. Omnibus consists of two pieces: omnibus and omnibus-software.

omnibus – the framework, created by Chef Software, with which we create full-stack, cross-platform installers for software. The project is on GitHub at chef/omnibus.

omnibus-software – Chef Software's open source collection of software definitions that are used to build the Chef Client, the Chef Server, and other Chef Software products. The software definitions can be found on GitHub at chef/omnibus-software.

Omnibus provides both a DSL for defining Omnibus projects for your software and a command-line tool for generating installer artifacts from that definition. Omnibus has minimal prerequisites: it requires Ruby 2.0.0+ and Bundler.

Getting Started

To get started, install Omnibus:

> gem install omnibus

You can now create an Omnibus project inside your current directory using the project generator:

> omnibus new demo

This will generate a complete project skeleton in the directory, as follows:

create omnibus-demo/Gemfile
create omnibus-demo/.gitignore
create omnibus-demo/README.md
create omnibus-demo/omnibus.rb
create omnibus-demo/config/projects/demo.rb
create omnibus-demo/config/software/demo-zlib.rb
create omnibus-demo/.kitchen.local.yml
create omnibus-demo/.kitchen.yml
create omnibus-demo/Berksfile
create omnibus-demo/package-scripts/demo/preinst
chmod omnibus-demo/package-scripts/demo/preinst
create omnibus-demo/package-scripts/demo/prerm
chmod omnibus-demo/package-scripts/demo/prerm
create omnibus-demo/package-scripts/demo/postinst
chmod omnibus-demo/package-scripts/demo/postinst
create omnibus-demo/package-scripts/demo/postrm
chmod omnibus-demo/package-scripts/demo/postrm

This creates the omnibus-demo directory inside your current directory, containing all the files related to building the Omnibus package. It is easy to build the empty project without making any changes. First install all of the Omnibus dependencies:

> bundle install --binstubs

Then build the package:

> bin/omnibus build demo

The build command will create the installer inside the pkg directory. Omnibus determines the platform for which to build an installer based on the platform it is currently running on; that is, you can only generate a .deb file on a Debian-based system. To alleviate this caveat, the generated project includes a Test Kitchen setup suitable for building the Omnibus project across a series of platforms.

Back to the Omnibus DSL. Though bin/omnibus build demo will build the package for you, it will not do anything exciting. For that, you need to use the Omnibus DSL to define the specifics of your application.

1) Config

If present, Omnibus will use a top-level configuration file named omnibus.rb at the root of your repository. This file is loaded at runtime and includes a number of configuration options. For example:

# omnibus.rb

# Build locally (instead of /var)
base_dir './local'

# Disable git caching
use_git_caching false

# Enable S3 asset caching
use_s3_caching true
s3_access_key ENV['S3_ACCESS_KEY']
s3_secret_key ENV['S3_SECRET_KEY']
s3_bucket ENV['S3_BUCKET']

Please see the config documentation for more information.
You can use a different configuration file with the --config option on the command line:

$ bin/omnibus --config /path/to/config.rb

2) Project DSL

When you create an Omnibus project, a project DSL file is created inside config/projects with the name you used when creating the project; for the example above, it creates config/projects/demo.rb. It provides the means to define the dependencies and the metadata of the project. Let us look at some contents of the project DSL file:

name "demo"
maintainer "YOUR NAME"
homepage "http://yoursite.com"

install_dir "/opt/demo"
build_version "0.1.0"

# Creates required build directories
dependency "preparation"

# demo dependencies/components
dependency "harvester"

The install_dir option is the location where the package will be installed. There are more DSL methods available for use in this file; please see the Project documentation for more information.

3) Software DSL

The Software DSL defines the individual software components that go into making your overall package. It provides a way to define where to retrieve the software sources, how to build them, and what dependencies they have. Now let us edit config/software/demo.rb:

name "demo"
default_version "1.0.0"

dependency "ruby"
dependency "rubygems"

build do
  # vendor the gems required by the app
  bundle "install --path vendor/bundle"
end

In the above example, consider that we are building a package for a Ruby on Rails application; hence we need to include the ruby and rubygems dependencies. The definitions for these dependencies come from omnibus-software, Chef's collection of software definitions used while building its own products. To use omnibus-software definitions, you need to include the repository path in your Gemfile (a minimal sketch follows at the end of this section). You can also write your own software definitions.

Inside the build block you define how to build your installer. Omnibus provides a Build DSL which you can use inside the build block to describe your build steps; with it you can run Ruby scripts and copy or delete files.

Apart from all these DSL files, Omnibus also creates the package-scripts directory, which contains the pre-install and post-install script files (preinst, postinst) as well as the pre-remove and post-remove ones (prerm, postrm). Inside these files you can write scripts to run before and after the installation of the package, and before and after its removal.

You can use the following references for more examples:

https://github.com/chef/omnibus
https://www.chef.io/blog/2014/06/30/omnibus-a-look-forward/
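As an aside, here is a minimal sketch of such a Gemfile, assuming the software definitions are pulled straight from the chef/omnibus-software GitHub repository:

# Gemfile
source 'https://rubygems.org'

gem 'omnibus'
# Pull in Chef's shared software definitions (ruby, rubygems, etc.)
gem 'omnibus-software', github: 'chef/omnibus-software'

With this in place, dependency "ruby" in the Software DSL above resolves to the definition shipped in omnibus-software.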

Aziro Marketing


Learn How to Orchestrate Your Infrastructure Fleet with Chef Provisioning

Chef Provisioning is a relatively new member of the Chef family. It can be used to build infrastructure topologies using the new machine resource. This blog post shows how this is done.

You bring up and configure individual nodes with Chef all the time. Your standard workflow would be to bootstrap a node, register the node with a Chef server, and then run chef-client to install software and configure the node. You would rinse and repeat this step for every node that you want in your fleet. Maybe you have written a nice wrapper over Chef and Knife to manage your clusters. Until recently, though, Chef did not have any way to understand the concept of a cluster or fleet.

So if you were running a web application with some decent traffic, there would be a bunch of cookbooks and recipes to install and configure: web servers, a DB server, a background processor, a load balancer, etc. Sometimes you might have additional nodes for Redis or RabbitMQ. So let us say your cluster consists of three web servers, one DB server, one server that does all the background processing (like generating PDFs or sending emails), and one load balancer for the three web servers. Now if you wanted to bring up such a cluster for multiple environments, say "testing", "staging", and "production", you would have to repeat the steps for each environment; not to mention, your environments could be powered by different providers: production and staging on AWS, Azure, etc., while testing could be on local infrastructure, maybe in VMs. This is not difficult, but it definitely makes you wonder whether you could do it better; if only you could describe your infrastructure as code that comes up with just one command.

That is exactly what Chef Provisioning does. Chef Provisioning was introduced in Chef version 12. It helps you describe your cluster as code and build it at will, as many times as you want, on various types of clouds, virtual machines, or even bare metal.

The Concepts

Chef Provisioning rests on two main pillars: the machine resource and drivers.

Machine Resource

"machine" is an abstract concept of a node from your infrastructure topology. It could be an AWS EC2 instance or a node on some other cloud provider. It could be a Vagrant-based virtual machine, a Linux container, or a Docker instance. It could even be a real, physical bare-metal machine. "machine" and other related resources (like machine_batch, machine_image, etc.) can be used to describe your cluster infrastructure. Each "machine" resource describes whatever it does using standard Chef recipes. The general convention is to describe your fleet and its topologies using "machine" and other resources in a separate file. We will see this in detail soon, but for now here is how a machine is described:

# setup-cluster.rb
machine 'server' do
  recipe 'nginx'
end

machine 'db' do
  recipe 'mysql'
end

A recipe is one of a "machine" resource's attributes. Later we will see a few more of these, along with examples.

Drivers

As mentioned earlier, with Chef Provisioning you can describe your clusters and their topologies and then deploy them across a variety of clouds, VMs, bare metal, etc. For each such cloud or machine that you would like to provision, there is a driver that does the actual heavy lifting. Drivers convert the abstract "machine" descriptions into physical reality.
Drivers are responsible for acquiring the nodes, connecting to them via the required protocol, bootstrapping them with Chef, and running the recipes described in the "machine" resource. Provisioning drivers need to be installed separately as gems. The following shows how to install the AWS driver and select it via an environment variable:

$ gem install chef-provisioning-aws
$ export CHEF_DRIVER=aws

Running chef-client on the above recipe will create two instances in your AWS account, using your settings in ~/.aws/config. We will see an example run later in the post.

The driver can be set in your knife.rb if you so prefer. Here, we set the chef-provisioning-fog driver for AWS:

driver 'fog:AWS'

It is also possible to set the driver inline in the cluster recipe code:

require 'chef/provisioning/aws_driver'
with_driver 'aws'

machine 'server' do
  recipe 'web-server-app'
end

In the following example, the Vagrant driver is given via the driver attribute, with a driver URL as the value; /opt/vagrantfiles will be searched for Vagrantfiles in this case:

machine 'server' do
  driver 'vagrant:/opt/vagrantfiles'
  recipe 'web-server-app'
end

It is good practice to keep driver details and cluster code separate, as it lets you use the same cluster descriptions with different provisioners by just changing the driver in the environment. It is possible to write your own custom provisioning drivers, but that is beyond the scope of this blog post.

The Provisioner Node

An interesting concept you need to understand is that Chef Provisioning needs a "provisioner node" to provision all machines. This node could be a node in your infrastructure or simply your workstation. chef-client (or chef-solo / chef-zero) runs on this provisioner node against a recipe that defines your cluster. Chef Provisioning then takes care of acquiring a node in your infrastructure, bootstrapping it with Chef, and then running the required recipes on the node. Thus, you will see that chef-client runs twice: once on the provisioner node and then on the node that is being provisioned.

The Real Thing

Let us dig a little deeper now. Let us first bring up a single DB server. Using Chef knife you can upload your cookbooks to the Chef server (you could do it with chef-zero as well). Here I have put all my required recipes in a cookbook called "cluster", uploaded it to a Chef server, and set the chef_server_url in my client.rb and knife.rb. You can find all the examples in the playground repository [2].

# recipes/webapp.rb
require 'chef/provisioning'

machine 'db' do
  recipe 'database-server'
end

machine 'webapp' do
  recipe 'web-app-stack'
end

To run the above recipe:

sudo CHEF_DRIVER=aws chef-client -r "recipe[cluster::webapp]"

This should bring up two nodes in your infrastructure: a DB server and a web application server, as defined by the web-app-stack recipe. The above command assumes that you have uploaded the cluster cookbook, consisting of the required recipes, to the Chef server.

More Machine Goodies

Like any other Chef resource, machine can have multiple actions and attributes that can be used to achieve different results. A "machine" can have a chef_server attribute, which means different machines can talk to different Chef servers. The from_image attribute can be used to set a machine image from which the machine is created. You can read more about the machine resource in the Chef Provisioning documentation [1].
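As a rough illustration of the chef_server attribute, here is a hypothetical pair of machines registered against two different Chef servers. The URLs are invented, and the exact option keys accepted may vary between chef-provisioning versions, so treat this as a sketch rather than a recipe to copy:

machine 'webapp' do
  # Hypothetical: this node registers with the web team's Chef server
  chef_server chef_server_url: 'https://chef-web.example.com/organizations/web'
  recipe 'web-app-stack'
end

machine 'db' do
  # Hypothetical: the DB node reports to a separate Chef server
  chef_server chef_server_url: 'https://chef-data.example.com/organizations/data'
  recipe 'database-server'
end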
Parallelisation Using machine_batch

Now, if you would like to have more than one web application instance in your cluster, say 5 instances, what do you do? Run a loop over your machine resource:

1.upto(5) do |i|
  machine "webapp-#{i}" do
    recipe 'web-app-stack'
  end
end

The above code snippet, when run, should bring up and configure five instances in parallel. The "machine" resource parallelizes by default: if you describe multiple "machine" resources consecutively with the same action, Chef Provisioning combines them into a single machine_batch resource (more about this later) and runs it in parallel. This is great because it saves a lot of time. The following will not parallelize, because the actions are different:

machine 'webapp' do
  action :setup
end

machine 'db' do
  action :destroy
end

Note: if you put other resources between "machine" resources, the automatic parallelization does not happen.

machine 'webapp' do
  action :setup
end

remote_file 'somefile.tar.gz' do
  source 'https://example.com/somefile.tar.gz'
end

machine 'db' do
  action :setup
end

Also, you can explicitly turn off parallelization by setting auto_batch_machines = false in your Chef config (knife.rb or client.rb). Using machine_batch explicitly, we can parallelize and speed up provisioning for multiple machines:

machine_batch do
  action :setup
  machines 'web-app-stack', 'db'
end

Machine Image

It is even possible to define machine images using a machine_image resource, which can then be used by the machine resource to build machines:

machine_image 'web_stack_image' do
  recipe 'web-app-stack'
end

The above code will launch a machine using your chosen driver, install and configure the node as per the given recipes, create an image from this machine, and finally destroy the machine. This is quite similar to how the Packer tool launches a node, configures it, and then freezes it as an image before destroying the node.

machine 'web-app-stack' do
  from_image 'web_stack_image'
end

Here, a machine "web-app-stack", when launched, will already have everything in the recipe "web-app-stack". This saves a lot of time when you want to spin up machines that share common base recipes. Think of a situation where team members need machines with some common software installed, and different people then install their own specific things as per requirement. In such a case, one could create an image with the basic packages (e.g., build-essential, ruby, vim) and use that base image as the source machine image for further work.

Load Balancer

A very common scenario is to put a bunch of machines, say web application servers, behind a load balancer, thus achieving redundancy. Chef Provisioning has a resource specifically for load balancers, aptly called load_balancer. All you need to do is create the machine nodes and then pass the machines to a load_balancer as below:

1.upto(2) do |node_id|
  machine "web-app-stack-#{node_id}"
end

load_balancer 'web-app-load-balancer' do
  machines %w(web-app-stack-1 web-app-stack-2)
end

The above code will bring up two nodes, web-app-stack-1 and web-app-stack-2, and put a load balancer in front of them.

Final Thoughts

If you are using the AWS driver, you can set machine_options as below. This is important if you want to use customized AMIs, users, security groups, etc.
with_machine_options :ssh_username => '',
  :bootstrap_options => {
    :key_name => '',
    :image_id => '',
    :instance_type => '',
    :security_group_ids => ''
  }

If you don't provide the AMI ID, the AWS driver defaults to a certain AMI per region. Whatever AMI you use, you have to use the correct SSH username for the respective AMI [3].

One very important thing to note is that there also exists a Fog driver (chef-provisioning-fog) for various cloud services, including EC2, and the two drivers often use different names for the same parameters. For example, the chef-provisioning-aws driver, which depends on the AWS Ruby SDK, uses instance_type, whereas the Fog driver uses flavor_id. Security groups use the key security_group_ids in the AWS driver and take IDs as values, but the Fog driver uses groups and takes the names of the security groups as values. This can at times lead to confusion if you are moving from one driver to another (a side-by-side sketch follows the references below).

At the time of writing this article, the documentation of the various drivers only helps so much. The best way to understand them is to check the examples provided, run them, and learn from them; maybe even read the source code of the various drivers to understand how they work. Chef Provisioning recently got bumped to 1.0.0. I would highly recommend keeping an eye on the GitHub issues [4] in case you face some trouble.

References

[1] https://docs.chef.io/provisioning.html
[2] https://github.com/pradeepto/chef-provisioning-playground
[3] http://alestic.com/2014/01/ec2-ssh-username
[4] https://github.com/chef/chef-provisioning/issues
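To make the parameter differences above concrete, here is an illustrative side-by-side sketch of the same bootstrap options expressed for each driver. All values are invented placeholders:

# chef-provisioning-aws (AWS Ruby SDK) -- key names as discussed above
with_machine_options :ssh_username => 'ubuntu',
  :bootstrap_options => {
    :image_id           => 'ami-00000000',   # illustrative AMI ID
    :instance_type      => 't2.micro',
    :security_group_ids => ['sg-00000000']   # IDs, not names
  }

# chef-provisioning-fog -- same intent, different key names
with_machine_options :ssh_username => 'ubuntu',
  :bootstrap_options => {
    :image_id  => 'ami-00000000',
    :flavor_id => 't2.micro',                # Fog calls the instance size a "flavor"
    :groups    => ['web-servers']            # security group names, not IDs
  }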

Aziro Marketing


9 Best Practices for a Mature Continuous Delivery Pipeline

Continuous Integration (CI) is a software engineering practice which evolved to support extreme and agile programming methodologies. CI consists of best practices covering build automation, continuous testing, and code quality analysis. The desired result is that software in the mainline can be rapidly built and deployed to production at any point. Continuous Delivery (CD) goes further and automates the deployment of software to QA, pre-production, and production environments. Continuous Delivery enables organizations to make predictable releases, reducing risk, and automation across the pipeline shortens release cycles. CD is no longer optional if you run geographically distributed agile teams.

Aziro (formerly MSys Technologies) has designed and deployed continuous integration and delivery pipelines for start-ups and large organizations, leading to benefits like:

Automation of the entire pipeline – reduced manual effort and accelerated release cycles
Improved release quality – fewer rollbacks and defects
Increased visibility, leading to accountability and process improvements
Cross-team visibility and openness – increased collaboration between development, QA, support, and operations teams
Reduction in costs for deployment and support

A mature continuous delivery pipeline consists of the following steps and principles:

1. Maintain a single code repository for the product or organization

Revision control for the project source code is absolutely mandatory. All the dependencies and artifacts required for the project should be in this repository. Avoid branches per developer, to foster shared ownership and reduce integration defects. Git is a popular distributed version control system that we recommend.

2. Automated builds

Leverage popular build tools like Ant, make, Maven, etc., to standardize the build process. A single command should be capable of building your entire system, including the binaries and distribution media (RPMs, tarballs, MSI files, ISOs). Builds should be fast; larger builds can be broken into smaller jobs and run in parallel.

3. Automated testing for each commit

An automated process where each commit is built and tested is necessary to ensure a stable baseline. A continuous integration server can monitor the version control system and automatically run the builds and tests. Ideally, you should hook up the continuous integration server with Gerrit or ReviewBoard to report the results to reviewers.

4. Static code analysis

Many teams ignore code quality until it is too late and accumulate heavy technical debt. All continuous integration servers have plugins that enable the integration of static code analysis into your CD pipeline, or you can automate this using custom scripts. You should fail builds that do not pass the agreed-upon code quality criteria.

5. Frequent commits into the baseline

Developers should commit their changes frequently into the baseline. This allows fast feedback from the automated system, and there are fewer conflicts and bugs during merges. With automated testing of each commit, developers will know the real-time state of their code.

Integration testing in environments that are production clones

Testing should be done in an environment that is as close to production as possible. The operating system versions, patches, libraries, and dependencies should be the same on the test servers as on the production servers. Configuration management tools like Chef, Puppet, or Ansible should be used to automate and standardize the setup of environments.
6. Well-defined promotion process and managing release artifacts

Create and document a promotion process for your builds and releases. This involves defining when a build is ready for QA or pre-production testing, or which build should be given to the support team. Having a well-defined process set up in your continuous integration servers improves agility within disparate or geographically distributed teams. Most continuous integration servers have features that allow you to set up promotion processes. Large teams tend to have hundreds or thousands of release artifacts across versions, custom builds for specific clients, RC releases, etc. A tool like Nexus or Artifactory can be used to efficiently and predictably store and manage release artifacts.

7. Automate your deployment

An effective CI/CD pipeline is one that is fully automated. Automating deployments is critical to reduce wasted time and avoid the possibility of human error during deployment. Teams should implement scripts to deploy builds and verify, using automated tests, that the build is stable. This way, not only the code but also the deployment mechanisms get tested regularly. It is also possible to set up continuous deployment, which includes automated deployments into production environments along with the necessary checks and balances.

8. Configuration management for deployments

Software stacks have become complicated over the years, and deployments even more so. Customers commonly use virtualized environments, cloud, and multiple datacenters. It is imperative to use configuration management tools like Chef, Puppet, or custom scripts to ensure that you can stand up environments predictably for dev, QA, pre-production, and production. These tools will also enable you to set up and manage multi-datacenter or hybrid environments for your products.

9. Build status and test results should be published across the team

Developers should be automatically notified when a build breaks so that it can be fixed immediately. It should be possible to see whose changes broke the build or the test cases. This feedback can be used constructively by developers and QA to improve processes.

Every CxO and engineering leader is looking to increase the ROI and predictability of their engineering teams. It is proven that these DevOps and Continuous Delivery (CD) practices lead to faster release cycles, better code quality, reduced engineering costs, and enhanced collaboration between teams. Learn more about Aziro (formerly MSys Technologies)' skills and expertise in DevOps/CI/Automation here. Get in touch with us for a free consulting session to embark on your journey to a mature continuous delivery pipeline: email us at marketing@aziro.com.

Aziro Marketing


Chef Knife Plugin for Windows Azure (IaaS)

Chef is an open-source systems management and cloud infrastructure automation framework created by Opscode. It helps you manage your IT infrastructure and applications as code, and it gives you a way to automate your infrastructure and processes.

Knife is a CLI to create, update, search, and delete the entities in your infrastructure, or manage actions on those entities: nodes (hosts), cloud resources, metadata (roles, environments), code for infrastructure (recipes, cookbooks), and so on. A Knife plugin is a set of one or more subcommands that can be added to Knife to support additional functionality that is not built into the base set of Knife subcommands.

knife-azure is a Knife plugin which helps you automate provisioning virtual machines in Windows Azure and bootstrapping them. This article talks about using Chef and the knife-azure plugin to provision Windows/Linux virtual machines in Windows Azure and bootstrap them.

Understanding Windows Azure (IaaS)

To deploy a virtual machine in a region (or service location) in Azure, the following components have to be created:

A Virtual Machine is associated with a DNS (or cloud service). Multiple virtual machines can be associated with a single DNS, with load balancing enabled on certain ports (e.g., 80, 443).
A Virtual Machine has a storage account associated with it, which stores the OS and data disks.
An X509 certificate is required for password-less SSH authentication on Linux VMs and HTTPS-based WinRM authentication on Windows VMs.
A service location is the geographic region in which to create the VMs, storage accounts, etc.

The Storage Account

The storage account holds all the disks (OS as well as data). It is recommended that you create a storage account in a region and use it for the VMs in that region. If you provide the option --azure-storage-account, the knife-azure plugin creates a new storage account with that name if it doesn't already exist, and it uses this storage account to create your VM. If you do not specify the option, the plugin checks for an existing storage account in the service location you have mentioned (using the option --service-location). If no storage account exists in your location, it creates a new storage account whose name is prefixed with the Azure DNS name and suffixed with a 10-character random string.

Azure Virtual Machine

This is also called a Role (specified using the option --azure-vm-name). If you do not specify the VM name, the default VM name is taken from the DNS name (specified using the option --azure-dns-name). The VM name should be unique within a deployment. An Azure VM is analogous to an Amazon EC2 instance: just as an instance in Amazon is created from an AMI, you can create an Azure VM from the stock images provided by Azure. You can also create your own images and save them against your subscription.

Azure DNS

This is also called a Hosted Service or Cloud Service. It is a container for your application deployments in Azure (specified using the option --azure-dns-name). A cloud service is created for each Azure deployment. You can have multiple VMs (Roles) within a deployment, with certain ports configured as load-balanced.

OS Disk

A disk is a VHD that you can boot and mount as a running version of an operating system. After an image is provisioned, it becomes a disk. A disk is always created when you use an image to create a virtual machine. Any VHD that is attached to virtualized hardware and that is running as part of a service is a disk.
An existing OS disk can be used to create a VM as well (specified using the option --azure-os-disk-name).

Certificates

For SSH login without a password, an X509 certificate needs to be uploaded to the Azure DNS/hosted service. As an end user, simply specify your private RSA key using the --identity-file option, and the knife plugin takes care of generating an X509 certificate. The virtual machine which is spawned then contains the required SSH thumbprint.

Gem Install

Run the command:

gem install knife-azure

Install from Source Code

To get the latest changes in the knife-azure plugin, download the source code, then build and install the plugin:

1. Uninstall any existing versions:

$ gem uninstall knife-azure
Successfully uninstalled knife-azure-1.2.0

2. Clone the git repo and build the code:

$ git clone https://github.com/opscode/knife-azure
$ cd knife-azure
$ gem build knife-azure.gemspec
WARNING: description and summary are identical
Successfully built RubyGem
Name: knife-azure
Version: 1.2.0
File: knife-azure-1.2.0.gem

3. Install the gem:

$ gem install knife-azure-1.2.0.gem
Successfully installed knife-azure-1.2.0
1 gem installed
Installing ri documentation for knife-azure-1.2.0...
Building YARD (yri) index for knife-azure-1.2.0...
Installing RDoc documentation for knife-azure-1.2.0...

4. Verify your installation:

$ gem list | grep azure
knife-azure (1.2.0)

To provision a VM in Windows Azure and bootstrap it using Knife, first create a new Windows Azure account, and then download the publish settings file from https://manage.windowsazure.com/publishsettings. The publish settings file contains the certificates used to sign all the HTTP requests (REST APIs).

Azure supports two modes of creating virtual machines: quick create and advanced.

Azure VM Quick Create

You can create a server with minimal configuration. On the Azure Management Portal, this corresponds to the "Quick Create - Virtual Machine" workflow. The corresponding sample command to quick-create a small Windows instance is:

knife azure server create --azure-publish-settings-file '/path/to/your/cert.publishsettingsfile' --azure-dns-name 'myservice' --azure-source-image 'windows-image-name' --winrm-password 'jetstream@123' --template-file 'windows-chef-client-msi.erb' --azure-service-location "West US"

Azure VM Advanced Create

You can set various other options in the advanced create, including the service location (region), storage account, VM name, etc.
The corresponding command to create a Linux instance with advanced options is:

knife azure server create --azure-publish-settings-file "path/to/your/publish/settings/file" --azure-vm-size Medium --azure-dns-name "HelloAzureDNS" --azure-service-location "West US" --azure-vm-name 'myvm01' --azure-source-image "b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-13_04-amd64-server-20130423-en-us-30GB" --azure-storage-account "helloazurestorage1" --ssh-user "helloazure" --identity-file "path/to/your/rsa/pvt/key"

To create a VM and connect it to an existing DNS/service, you can use a command like this:

knife azure server create --azure-publish-settings-file "path/to/your/publish/settings/file" --azure-connect-to-existing-dns --azure-dns-name 'myservice' --azure-vm-name 'myvm02' --azure-service-location 'West US' --azure-source-image 'source-image-name' --ssh-user 'jetstream' --ssh-password 'jetstream@123'

List the available images:

knife azure image list

List the currently available virtual machines:

knife azure server list

Delete and clean up a virtual machine:

knife azure server delete 'myvm02' --azure-dns-name 'myservice' --chef-node-name 'myvm02' --purge

This post is meant to explain the basics and usage of knife-azure.
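As a closing note, the flags that repeat on every invocation can also be set once in knife.rb. Here is a minimal sketch, assuming the plugin follows the usual Knife convention of mapping each CLI flag to a knife[:...] configuration key; the paths and values are illustrative:

# knife.rb (illustrative values)
knife[:azure_publish_settings_file] = '/path/to/your.publishsettings'
knife[:azure_service_location]      = 'West US'
knife[:azure_dns_name]              = 'myservice'

With these set, the server create commands above shrink to just the per-VM options.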

Aziro Marketing

