Tag Archive

Below you'll find a list of all posts that have been tagged as "chef"

Building Packages Using Omnibus

Omnibus is a tool for creating full-stack installers for multiple platforms. In general, it simplifies the installation of any piece of software by bundling all of that software's dependencies. It was written by the people at Chef, who use it to package Chef itself. Omnibus consists of two pieces: omnibus and omnibus-software.

omnibus – the framework, created by Chef Software, with which we create full-stack, cross-platform installers for software. The project is on GitHub at chef/omnibus.

omnibus-software – Chef Software's open source collection of software definitions that are used to build the Chef Client, the Chef Server, and other Chef Software products. The software definitions can be found on GitHub at chef/omnibus-software.

Omnibus provides both a DSL for defining Omnibus projects for your software and a command-line tool for generating installer artifacts from that definition. Omnibus has minimal prerequisites: Ruby 2.0.0+ and Bundler.

Getting Started

To get started, install omnibus:

> gem install omnibus

You can now create an Omnibus project inside your current directory using the project generator feature:

> omnibus new demo

This will generate a complete project skeleton in the directory as follows:

create omnibus-demo/Gemfile
create omnibus-demo/.gitignore
create omnibus-demo/README.md
create omnibus-demo/omnibus.rb
create omnibus-demo/config/projects/demo.rb
create omnibus-demo/config/software/demo-zlib.rb
create omnibus-demo/.kitchen.local.yml
create omnibus-demo/.kitchen.yml
create omnibus-demo/Berksfile
create omnibus-demo/package-scripts/demo/preinst
chmod omnibus-demo/package-scripts/demo/preinst
create omnibus-demo/package-scripts/demo/prerm
chmod omnibus-demo/package-scripts/demo/prerm
create omnibus-demo/package-scripts/demo/postinst
chmod omnibus-demo/package-scripts/demo/postinst
create omnibus-demo/package-scripts/demo/postrm
chmod omnibus-demo/package-scripts/demo/postrm

This creates the omnibus-demo directory inside your current directory, and this directory has all the files related to the Omnibus package build. It is easy to build an empty project without making any changes. Run:

> bundle install --binstubs

bundle install installs all Omnibus dependencies. Then build the package:

> bin/omnibus build demo

The above command will create the installer inside the pkg directory. Omnibus determines the platform for which to build an installer based on the platform it is currently running on. That is, you can only generate a .deb file on a Debian-based system. To alleviate this caveat, the generated project includes a Test Kitchen setup suitable for generating a series of Omnibus projects.

Back to the Omnibus DSL. Though bin/omnibus build demo will build the package for you, it will not do anything exciting. For that, you need to use the Omnibus DSL to define the specifics of your application.

1) Config

If present, Omnibus will use a top-level configuration file named omnibus.rb at the root of your repository. This file is loaded at runtime and includes a number of configuration options. For example:

# omnibus.rb

# Build locally (instead of /var)
# -------------------------------
base_dir './local'

# Disable git caching
# ------------------------------
use_git_caching false

# Enable S3 asset caching
# ------------------------------
use_s3_caching true
s3_access_key ENV['S3_ACCESS_KEY']
s3_secret_key ENV['S3_SECRET_KEY']
s3_bucket ENV['S3_BUCKET']

Please see the config doc for more information.
You can use a different configuration file via the --config option on the command line:

$ bin/omnibus --config /path/to/config.rb

2) Project DSL

When you create an Omnibus project, it creates a project DSL file inside config/projects with the name you used when creating the project; for the above example it will create config/projects/demo.rb. It provides a means to define the dependencies and the metadata of the project. Let us look at some contents of the project DSL file:

name "demo"
maintainer "YOUR NAME"
homepage "http://yoursite.com"

install_dir "/opt/demo"
build_version "0.1.0"

# Creates required build directories
dependency "preparation"

# demo dependencies/components
dependency "harvester"

The 'install_dir' option is the location where the package will be installed. There are more DSL methods available which you can use in this file. Please see the project doc for more information.

3) Software DSL

The Software DSL defines the individual software components that go into making your overall package. It provides a way to define where to retrieve the software sources, how to build them, and what dependencies they have. Now let's edit config/software/demo.rb:

name "demo"
default_version "1.0.0"

dependency "ruby"
dependency "rubygems"

build do
  # vendor the gems required by the app
  bundle "install --path vendor/bundle"
end

In the above example, consider that we are building a package for a Ruby on Rails application, hence we need to include the ruby and rubygems dependencies. The definitions for the ruby and rubygems dependencies come from omnibus-software, the collection of software definitions Chef uses while building its own products. To use omnibus-software definitions you need to include the repo path in your Gemfile. You can also write your own software definitions; a sketch follows at the end of this post.

Inside the build block you can define how to build your installer. Omnibus provides a Build DSL which you can use inside the build block to define your build essentials; for example, you can run Ruby scripts and copy or delete files. Apart from all these DSL files, Omnibus also creates a 'package-scripts' directory consisting of pre-install and post-install script files. In these files you can write scripts that you want to run before and after the installation of the package, as well as before and after its removal.

You can use the following references for more examples:
https://github.com/chef/omnibus
https://www.chef.io/blog/2014/06/30/omnibus-a-look-forward/
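As promised, here is a minimal sketch of a custom software definition modeled on those in omnibus-software. The component name 'libfoo', its URL, and its checksum are hypothetical placeholders; verify the DSL calls against the Omnibus docs for the version you use.

# config/software/libfoo.rb -- 'libfoo' and all values are placeholders
name "libfoo"
default_version "1.2.3"

# where to fetch the sources; the checksum guards the download
source url: "https://example.com/downloads/libfoo-1.2.3.tar.gz",
       sha256: "replace-with-the-real-sha256"

# directory inside the extracted archive to build from
relative_path "libfoo-1.2.3"

build do
  # standard compiler flags pointing at the embedded install location
  env = with_standard_compiler_flags(with_embedded_path)

  # a typical autotools build, installed under the project's install_dir
  command "./configure --prefix=#{install_dir}/embedded", env: env
  command "make -j #{workers}", env: env
  command "make install", env: env
end

Dropping such a file into config/software and adding a matching dependency "libfoo" line to your project DSL is enough for Omnibus to pick it up on the next build.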


What is Chef Automate?

Introduction to Chef Automate

Chef Automate provides a full suite of enterprise capabilities for workflow, node visibility, and compliance. Chef Automate integrates with the open-source products Chef, InSpec, and Habitat. It comes with comprehensive 24×7 support services for the entire platform, including the open source components. These capabilities include the ability to build, deploy, manage, and collaborate across all aspects of software production: infrastructure, applications, and compliance. Each capability represents a set of collective actions and the resulting artifacts.

Collaborate: As software deployment speed increases across your organization, the need for fast, real-time collaboration becomes critical. Different teams may use different tools to accomplish various tasks. The ability to integrate a variety of third-party products is necessary in support of continuous deployment of infrastructure and applications. Chef Automate provides tools for local development and several integration points, including APIs and SDKs, in addition to deployment pipelines that support a common workflow.

Build: Practicing continuous integration and following proper deployment workflows that methodically test all proposed changes help you build code for production use. Packaging code into a reusable artifact ensures that you are testing, approving, and promoting an atomic change that is consistent across multiple environments, and prevents configuration drift.

Deploy: Deployment pipelines increase the speed and efficiency of your software deployments by reducing the number of variables and removing the unpredictable nature of manual steps. Deployment pipelines have a specific beginning, a specific end, and a predictable way of working each time, thereby removing complexity, reducing risk, and improving efficiency. Establishing standard workflows that utilize deployment pipelines gives your operations and development teams a common platform.

Manage: With increased speed comes an increased demand to understand the current state of your underlying software automation. Organizations cannot ship software quickly, yet poorly, and still manage to outperform their competitors. The ability to visualize fleet-wide status and to ensure security and compliance requirements acts as a risk mitigation technique, resolving errors quickly and easily. Removing manual processes and checklist requirements means that shifting management capabilities becomes a key component of moving to continuous automation.

OSS Automation Engines: Chef Automate is powered by three open source engines: Chef, Habitat, and InSpec. Chef is the engine for infrastructure automation. Habitat automates modern applications, such as those that run in containers and are composed of microservices. InSpec lets you specify compliance and security requirements as executable code.

Automate Setup Steps

1: You must have an ACC account
2: Download OpenVPN (https://chef-vpn.chef.co/?src=connect)
3: Download client.ovpn (after logging in via the link above)
4: Install Docker
5: Install Docker Compose
6: Install Vagrant
7: Install VirtualBox
8: Download and install the ChefDK. This will give you the Delivery CLI tool, which will allow you to clone the Workflow project from delivery.shd.chef.co. Remember to log into the VPN to access this site.
9: Add your SSH key. On the Admin page, add your public ssh key (usually found in ~/.ssh/id_rsa.pub) to your account.
This will be necessary in a few minutes.
10: Set up delivery:

delivery setup --ent=chef --org=products --user=pawasthi --server=automate.chef.co -f master

11: Set up a token:

delivery token --ent=chef --org=products --user=pawasthi --server=automate.chef.co

12: Copy the token from the browser and validate.
13: Clone automate via delivery:

delivery clone automate --ent=chef --org=products --user=pawasthi --server=automate.chef.co

14: Go to automate (cd automate), then run `make`.

Note: before running `make`, set up the direnv hook:
1: `apt-get update`
2: `apt-get install direnv`
3: run `direnv hook bash` and put what it prints in your `~/.bashrc` file
4: then `source ~/.bashrc`

Note for an unhealthy cluster: check that the cluster was created first (`docker-compose ps -a`), then clean the whole project (`make clean`) and run `make` again; try to avoid `sudo` to minimise errors.

Note for ports: if a port is reported as already in use, try to release it. For example, run `netstat -tunlp | grep :port`; if this shows a process running on your required port, kill that process with `kill -9 process_id`.

Visibility Web UI:

Developing for the Visibility UI follows the same pattern as the Workflow UI: a local file-system watcher builds and syncs changes into the visibility_ui container that Nginx redirects to. Before developing, you will need to get the docker-compose environment at the root of this repository running:

cd .. && docker-compose up

The visibility_ui container should exit 0, indicating the JavaScript bundle was built successfully. You can run some operations locally. Make sure your version of Node matches what's defined in .nvmrc. We recommend you use nvm to install node if you don't have it already. To install node, first install nvm with the line below:

curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.1/install.sh | bash

Then install node by going into the /visibility-web directory and running the command below:

nvm install

To ensure that node is running with the correct version, compare the output of `node -v` to the file /visibility-web/.nvmrc.

make install – will install the Node modules.
make unit – will run the unit tests locally.
make e2e – will run the end-to-end tests in the Docker Compose test environment with the functional test suite in ../test/functional.sh, and
make startdev – will start a watch process that'll rebuild the bundle whenever you make changes. (Reload the browser to see them.)
make beforecommit – will run TypeScript linting, Sass linting, and the unit tests.

References:
https://learn.chef.io/automate/


Chef Knife Plugin for Windows Azure (IaaS)

Chef is an open-source systems management and cloud infrastructure automation framework created by Opscode. It helps in managing your IT infrastructure and applications as code, and it gives you a way to automate your infrastructure and processes. Knife is a CLI to create, update, search, and delete the entities in your infrastructure (or manage actions on them), such as nodes (hosts), cloud resources, metadata (roles, environments), and code for infrastructure (recipes, cookbooks), etc. A Knife plugin is a set of one or more subcommands that can be added to Knife to support additional functionality that is not built into the base set of Knife subcommands. knife-azure is a Knife plugin which helps you automate virtual machine provisioning in Windows Azure and bootstrapping the machines. This article talks about using Chef and the knife-azure plugin to provision Windows/Linux virtual machines in Windows Azure and bootstrap them.

Understanding Windows Azure (IaaS): To deploy a Virtual Machine in a region (or service location) in Azure, all of the following components have to be created:

A Virtual Machine is associated with a DNS (or cloud service). Multiple Virtual Machines can be associated with a single DNS, with load balancing enabled on certain ports (e.g. 80, 443, etc.).
A Virtual Machine has a storage account associated with it, which stores the OS and data disks.
An X509 certificate is required for password-less SSH authentication on Linux VMs and HTTPS-based WinRM authentication on Windows VMs.
A service location is a geographic region in which to create the VMs, storage accounts, etc.

The Storage Account

The storage account holds all the disks (OS as well as data). It is recommended that you create a storage account in a region and use it for the VMs in that region. If you provide the option --azure-storage-account, the knife-azure plugin creates a new storage account with that name if it doesn't already exist, and uses this storage account to create your VM. If you do not specify the option, then the plugin checks for an existing storage account in the service location you have mentioned (using the option --service-location). If no storage account exists in your location, then it creates a new storage account whose name is prefixed with the azure-dns-name and suffixed with a 10-character random string.

Azure Virtual Machine

This is also called a Role (specified using the option --azure-vm-name). If you do not specify the VM name, the default VM name is taken from the DNS name (specified using the option --azure-dns-name). The VM name should be unique within a deployment. An Azure VM is analogous to an Amazon EC2 instance. Just as an instance in Amazon is created from an AMI, you can create an Azure VM from the stock images provided by Azure. You can also create your own images and save them against your subscription.

Azure DNS

This is also called a Hosted Service or Cloud Service. It is a container for your application deployments in Azure (specified using the option --azure-dns-name). A cloud service is created for each Azure deployment. You can have multiple VMs (Roles) within a deployment, with certain ports configured as load-balanced.

OS Disk

A disk is a VHD that you can boot and mount as a running version of an operating system. After an image is provisioned, it becomes a disk. A disk is always created when you use an image to create a virtual machine. Any VHD that is attached to virtualized hardware and that is running as part of a service is a disk.
An existing OS disk can be used to create a VM as well (specified using the option --azure-os-disk-name).

Certificates

For SSH login without a password, an X509 certificate needs to be uploaded to the Azure DNS/Hosted service. As an end user, simply specify your private RSA key using the --identity-file option, and the knife plugin takes care of generating an X509 certificate. The virtual machine which is spawned then contains the required SSH thumbprint.

Gem Install

Run the command:

gem install knife-azure

Install from Source Code

To get the latest changes in the knife-azure plugin, download the source code, then build and install the plugin:

1. Uninstall any existing versions

$ gem uninstall knife-azure
Successfully uninstalled knife-azure-1.2.0

2. Clone the git repo and build the code

$ git clone https://github.com/opscode/knife-azure
$ cd knife-azure
$ gem build knife-azure.gemspec
WARNING: description and summary are identical
Successfully built RubyGem
Name: knife-azure
Version: 1.2.0
File: knife-azure-1.2.0.gem

3. Install the gem

$ gem install knife-azure-1.2.0.gem
Successfully installed knife-azure-1.2.0
1 gem installed
Installing ri documentation for knife-azure-1.2.0...
Building YARD (yri) index for knife-azure-1.2.0...
Installing RDoc documentation for knife-azure-1.2.0...

4. Verify your installation

$ gem list | grep azure
knife-azure (1.2.0)

To provision a VM in Windows Azure and bootstrap it using Knife: firstly, create a new Windows Azure account, and secondly, download the publish settings file from https://manage.windowsazure.com/publishsettings. The publish settings file contains the certificates used to sign all the HTTP requests (REST APIs).

Azure supports two modes to create virtual machines: quick create and advanced.

Azure VM Quick Create

You can create a server with minimal configuration. On the Azure Management Portal, this corresponds to the "Quick Create – Virtual Machine" workflow. The corresponding sample command for quick create for a small Windows instance is:

knife azure server create --azure-publish-settings-file '/path/to/your/cert.publishsettingsfile' --azure-dns-name 'myservice' --azure-source-image 'windows-image-name' --winrm-password 'jetstream@123' --template-file 'windows-chef-client-msi.erb' --azure-service-location "West US"

Azure VM Advanced Create

You can set various other options in the advanced create, including the service location or region, storage account, VM name, etc.
The corresponding command to create a Linux instance with advanced options is:

knife azure server create --azure-publish-settings-file "path/to/your/publish/settings/file" --azure-vm-size Medium --azure-dns-name "HelloAzureDNS" --azure-service-location "West US" --azure-vm-name 'myvm01' --azure-source-image "b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-13_04-amd64-server-20130423-en-us-30GB" --azure-storage-account "helloazurestorage1" --ssh-user "helloazure" --identity-file "path/to/your/rsa/pvt/key"

To create a VM and connect it to an existing DNS/service, you can use a command like the one below:

knife azure server create --azure-publish-settings-file "path/to/your/publish/settings/file" --azure-connect-to-existing-dns --azure-dns-name 'myservice' --azure-vm-name 'myvm02' --azure-service-location 'West US' --azure-source-image 'source-image-name' --ssh-user 'jetstream' --ssh-password 'jetstream@123'

List available images:

knife azure image list

List currently available virtual machines:

knife azure server list

Delete and clean up a virtual machine:

knife azure server delete 'myvm02' --azure-dns-name 'myservice' --chef-node-name 'myvm02' --purge

This post is meant to explain the basics and usage of knife-azure.
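Most of the options shown above can also be persisted in your knife.rb so they need not be repeated on every invocation. Below is a hedged sketch of that approach: the knife[:azure_*] keys are assumed to mirror the CLI option names, as is the usual Knife plugin convention, so verify them against the README of the plugin version you installed; all values are placeholders taken from the examples above.

# knife.rb -- values are placeholders; the knife[:azure_*] keys are an
# assumption based on the convention of mirroring the CLI option names
knife[:azure_publish_settings_file] = "/path/to/your/cert.publishsettingsfile"
knife[:azure_service_location]      = "West US"
knife[:azure_storage_account]       = "helloazurestorage1"
knife[:azure_source_image]          = "b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-13_04-amd64-server-20130423-en-us-30GB"

With these set, a create command shrinks to just the per-VM options, such as the VM name, the SSH user, and the identity file.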


Learn How to Orchestrate Your Infrastructure Fleet with Chef Provisioning

Chef Provisioning is a relatively new member of the Chef family. It can be used to build infrastructure topologies using the new machine resource. This blog post shows how this is done.

You bring up and configure individual nodes with Chef all the time. Your standard workflow would be to bootstrap a node, register the node to a Chef server, and then run Chef client to install software and configure the node. You would rinse and repeat this step for every node that you want in your fleet. Maybe you have written a nice wrapper over Chef and Knife to manage your clusters. Until recently, however, Chef did not have any way to understand the concept of a cluster or fleet. So if you were running a web application with some decent traffic, there would be a bunch of cookbooks and recipes to install and configure web servers, a DB server, a background processor, a load balancer, etc. Sometimes, you might have additional nodes for Redis or RabbitMQ. So let us say your cluster consists of three web servers, one DB server, one server that does all the background processing (like generating PDFs or sending emails), and one load balancer for the three web servers. Now if you wanted to bring up such a cluster for multiple environments, say "testing", "staging", and "production", you would have to repeat the steps for each environment; not to mention, your environments could possibly be powered by different providers: production and staging on AWS, Azure, etc., while testing could possibly be on local infrastructure, maybe in VMs. This is not difficult, but it definitely makes you wonder whether you could do it better; if only you could describe your infrastructure as code that comes up with just one command. That is exactly what Chef Provisioning does.

Chef Provisioning was introduced in Chef version 12. It helps you describe your cluster as code and build it at will, as many times as you want, on various types of clouds, virtual machines, or even on bare metal.

The Concepts

Chef Provisioning depends on two main pillars: the machine resource and drivers.

Machine Resource

"machine" is an abstract concept of a node from your infrastructure topology. It could be an AWS EC2 instance or a node on some other cloud provider. It could be a Vagrant-based virtual machine, a Linux container, or a Docker instance. It could even be a real, physical bare-metal machine. "machine" and other related resources (like machine_batch, machine_image, etc.) can be used to describe your cluster infrastructure. Each "machine" resource describes whatever it does using standard Chef recipes. The general convention is to describe your fleet and its topologies using "machine" and other resources in a separate file. We will see this in detail soon, but for now here is how a machine is described:

# setup-cluster.rb
machine 'server' do
  recipe 'nginx'
end

machine 'db' do
  recipe 'mysql'
end

A recipe is one of a "machine" resource's attributes. Later we will see a few more of these along with their examples.

Drivers

As mentioned earlier, with Chef Provisioning you can describe your clusters and their topologies and then deploy them across a variety of clouds, VMs, bare metal, etc. For each such cloud or machine that you would like to provision, there are drivers that do the actual heavy lifting. Drivers convert the abstract "machine" descriptions into physical reality.
Drivers are responsible for acquiring the node data, connecting to the nodes via the required protocol, bootstrapping them with Chef, and running the recipes described in the "machine" resource. Provisioning drivers need to be installed separately as gems. The following shows how to install the AWS driver and select it via an environment variable:

$ gem install chef-provisioning-aws
$ export CHEF_DRIVER=aws

Running chef-client on the above recipe will create two instances in your AWS account, referenced by your settings in "~/.aws/config". We will see an example run later in the post. The driver can be set in your knife.rb if you so prefer. Here, we set the chef-provisioning-fog driver for AWS:

driver 'fog:AWS'

It is also possible to set the driver inline in the cluster recipe code:

require 'chef/provisioning/aws_driver'
with_driver 'aws'

machine 'server' do
  recipe 'web-server-app'
end

In the following example, the Vagrant driver is given via the driver attribute, with a driver URL as the value; "/opt/vagrantfiles" will be looked up for Vagrantfiles in this case:

machine 'server' do
  driver 'vagrant:/opt/vagrantfiles'
  recipe 'web-server-app'
end

It's a good practice to keep driver details and cluster code separate, as it lets you use the same cluster descriptions with different provisioners by just changing the driver in the environment. It is possible to write your own custom provisioning drivers, but that is beyond the scope of this blog post.

The Provisioner Node

An interesting concept you need to understand is that Chef Provisioning needs a "provisioner node" to provision all machines. This node could be a node in your infrastructure or simply your workstation. chef-client (or chef-solo / chef-zero) runs on this "provisioner node" against a recipe that defines your cluster. Chef Provisioning then takes care of acquiring a node in your infrastructure, bootstrapping it with Chef, and then running the required recipes on the node. Thus, you will see that chef-client runs twice: once on the provisioner node and then on the node that is being provisioned.

The Real Thing

Let us dig a little deeper now. Let us first bring up a single DB server. Using Chef Knife you can upload your cookbooks to the Chef server (you could do it with chef-zero as well). Here I have put all my required recipes in a cookbook called "cluster", uploaded it to a Chef server, and set the "chef_server_url" in my "client.rb" and "knife.rb". You can find all the examples here.

# recipes/webapp.rb
require 'chef/provisioning'

machine 'db' do
  recipe 'database-server'
end

machine 'webapp' do
  recipe 'web-app-stack'
end

To run the above recipe:

sudo CHEF_DRIVER=aws chef-client -r "recipe[cluster::webapp]"

This should bring up two nodes in your infrastructure: a DB server and a web application server as defined by the web-app-stack recipe. The above command assumes that you have uploaded the cluster cookbook consisting of the required recipes to the Chef server.

More Machine Goodies

Like any other Chef resource, machine has multiple actions and attributes that can be used to achieve different results. A "machine" can have a "chef_server" attribute, which means different machines can talk to different Chef servers. The "from_image" attribute can be used to set a machine image from which the machine is created. You can read more about the machine resource here.
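To make these two attributes concrete, here is a hedged sketch. The image name, server URL, client name, and key path are all hypothetical, and the chef_server option keys follow the Cheffish conventions used by Chef Provisioning, which may vary between versions.

# a sketch only -- 'base-ruby-image' is assumed to be a machine_image defined
# elsewhere, and the Chef server details are hypothetical placeholders
machine 'reporting' do
  # point this particular machine at a different Chef server
  chef_server chef_server_url: 'https://chef2.example.com/organizations/ops',
              options: {
                client_name: 'provisioner',
                signing_key_filename: '/etc/chef/provisioner.pem'
              }
  # build on top of a previously created machine image
  from_image 'base-ruby-image'
  recipe 'reporting-app'
end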
Parallelisation Using machine_batch

Now, if you would like to have more than one web application instance in your cluster, say 5 instances, what do you do? Run a loop over your machine resource:

1.upto(5) do |i|
  machine "webapp-#{i}" do
    recipe 'web-app-stack'
  end
end

The above code snippet, when run, should bring up and configure five instances in parallel. The "machine" resource parallelizes by default: if you describe multiple "machine" resources consecutively with the same actions, then Chef Provisioning combines them into a single resource ("machine_batch", more about this later) and runs it in parallel. This is great because it saves a lot of time. The following will not parallelize, because the actions are different:

machine 'webapp' do
  action :setup
end

machine 'db' do
  action :destroy
end

Note: if you put other resources between "machine" resources, the automatic parallelization does not happen.

machine 'webapp' do
  action :setup
end

remote_file 'somefile.tar.gz' do
  url 'https://example.com/somefile.tar.gz'
end

machine 'db' do
  action :setup
end

Also, you can explicitly turn off parallelization by setting "auto_batch_machines = false" in your Chef config (knife.rb or client.rb). Using "machine_batch" explicitly, we can parallelize and speed up provisioning for multiple machines:

machine_batch do
  action :setup
  machines 'web-app-stack', 'db'
end

Machine Image

It is even possible to define machine images using a "machine_image" resource, which can then be used by the "machine" resource to build machines:

machine_image 'web_stack_image' do
  recipe 'web-app-stack'
end

The above code will launch a machine using your chosen driver, install and configure the node as per the given recipes, create an image from this machine, and finally destroy the machine. This is quite similar to how the Packer tool launches a node, configures it, and then freezes it as an image before destroying the node.

machine 'web-app-stack' do
  from_image 'web_stack_image'
end

Here, a machine "web-app-stack", when launched, will already have everything in the recipe "web-app-stack". This saves a lot of time when you want to spin up machines which have common base recipes. Think of a situation where team members need machines with some common stuff installed, and different people install their own specific things as per requirement. In such a case, one could create an image with the basic packages (e.g., build-essential, ruby, vim, etc.), and that base image could be used as the source machine image for further work.

Load Balancer

A very common scenario is to put a bunch of machines, say web application servers, behind a load balancer, thus achieving redundancy. Chef Provisioning has a resource specifically for load balancers, aptly called "load_balancer". All you need to do is create the machine nodes and then pass the machines to a "load_balancer" as below:

1.upto(2) do |node_id|
  machine "web-app-stack-#{node_id}"
end

load_balancer "web-app-load-balancer" do
  machines %w(web-app-stack-1 web-app-stack-2)
end

The above code will bring up two nodes, web-app-stack-1 and web-app-stack-2, and put a load balancer in front of them.

Final Thoughts

If you are using the AWS driver, you can set machine_options as below. This is important if you want to use customized AMIs, users, security groups, etc.
with_machine_options :ssh_username => '',
  :bootstrap_options => {
    :key_name => '',
    :image_id => '',
    :instance_type => '',
    :security_group_ids => ''
  }

If you don't provide the AMI ID, the AWS driver defaults to a certain AMI per region. Whatever AMI you use, you have to use the correct ssh username for the respective AMI. [3]

One very important thing to note is that there also exists a Fog driver (chef-provisioning-fog) for various cloud services including EC2, so there are often different names for the parameters that you might want to use. For example, the chef-provisioning-aws driver, which depends on the AWS Ruby SDK, uses "instance_type", whereas the Fog driver uses "flavor_id". Security groups use the key "security_group_ids" in the AWS driver, which takes IDs as values, but the Fog driver uses "groups" and takes the names of the security groups as its values. This can at times lead to confusion if you are moving from one driver to another. While writing this article, I took help from the documentation of the various drivers. The best way to understand them is to check the examples provided, run them, and learn from them; maybe even read the source code of the various drivers to understand how they work.

Chef Provisioning recently got bumped to 1.0.0. I would highly recommend keeping an eye on the GitHub issues in case you face some trouble. (A consolidated example recipe follows the references below.)

References
[1] https://docs.chef.io/provisioning.html
[2] https://github.com/pradeepto/chef-provisioning-playground
[3] http://alestic.com/2014/01/ec2-ssh-username
[4] https://github.com/chef/chef-provisioning/issues
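As promised, here is a minimal end-to-end sketch putting the pieces together, assuming the chef-provisioning-aws driver and a 'web-app-stack' recipe already uploaded to the Chef server. The AMI ID, key pair, security group, instance type, and SSH user are hypothetical placeholders.

# cluster.rb -- a sketch only; every concrete value below is a placeholder
require 'chef/provisioning/aws_driver'

with_driver 'aws'

# fill in the machine options discussed above; the ssh username must match the AMI [3]
with_machine_options :ssh_username => 'ubuntu',
  :bootstrap_options => {
    :key_name           => 'my-keypair',
    :image_id           => 'ami-0123456789abcdef0',
    :instance_type      => 't2.micro',
    :security_group_ids => 'sg-0123456789abcdef0'
  }

# provision two identical web machines in parallel
machine_batch do
  action :converge
  1.upto(2) do |i|
    machine "web-app-stack-#{i}" do
      recipe 'web-app-stack'
    end
  end
end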


