Tag Archive

Below you'll find a list of all posts that have been tagged as "others"

10 Steps to Set Up and Manage a Hadoop Cluster Using Ironfan

Recently, we faced a unique challenge: setting up DevOps and management for a relatively complex Hadoop cluster on the Amazon EC2 cloud. The obvious choice was to use a configuration management tool. Having extensively used Opscode's Chef, and given the flexibility and extensibility Chef provides, it was an obvious choice. While looking around for best practices to manage a Hadoop cluster using Chef, we stumbled upon Ironfan.

What is Ironfan? In short, Ironfan, open-sourced by InfoChimps, provides an abstraction on top of Chef, allowing users to easily provision, deploy, and manage a cluster of servers, be it a simple web application or a complex Hadoop cluster. After a few experiments, we were convinced that Ironfan was the right tool to use, as it simplifies a lot of configuration and avoids repetition while retaining the goodness of Chef. This blog shows how easy it is to set up and manage a Hadoop cluster using Ironfan.

Prerequisites:
- A Chef account (hosted or private) with knife.rb set up correctly on your client machine.
- A Ruby setup (using RVM or otherwise).

Installation: Install Ironfan on your machine using the steps mentioned here. Once you have all the packages set up correctly, perform these sanity checks:
- Ensure that the environment variable CHEF_USERNAME is your Chef Server username (unless your USER environment variable is the same as your Chef username).
- Ensure that the environment variable CHEF_HOMEBASE points to the location which contains the expanded-out knife.rb.
- ~/.chef should be a symbolic link to the knife directory in your CHEF_HOMEBASE.
- Your knife/knife.rb file should not be modified.
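The environment-variable checks above can be sketched as a small Ruby helper. This is a hypothetical illustration, not part of Ironfan or Chef; the function name and return shape are our own:

```ruby
# Hypothetical helper mirroring the sanity checks above: given an environment
# hash (like ENV), report which required Chef/Ironfan settings are missing.
def missing_chef_settings(env)
  missing = []
  # CHEF_USERNAME may be omitted when USER already matches the Chef username.
  missing << "CHEF_USERNAME" unless env["CHEF_USERNAME"] || env["USER"]
  # CHEF_HOMEBASE must point at the expanded-out homebase directory.
  missing << "CHEF_HOMEBASE" unless env["CHEF_HOMEBASE"]
  missing
end

missing_chef_settings({})  # => ["CHEF_USERNAME", "CHEF_HOMEBASE"]
missing_chef_settings({ "USER" => "alice", "CHEF_HOMEBASE" => "/home/alice/homebase" })  # => []
```

In a real setup you would call it as `missing_chef_settings(ENV)` before running any knife cluster command.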
- Your Chef user PEM file should be in knife/credentials/{username}.pem.
- Your organization's Chef validator PEM file should be in knife/credentials/{organization}-validator.pem.
- Your knife/credentials/knife-{organization}.rb file should contain your Chef organization, the chef_server_url, the validation_client_name, the path to the validation_key, the aws_access_key_id/aws_secret_access_key, and, in ec2_image_info, an AMI ID of an AMI you'd like to be able to boot.

Finally, in the homebase, rename the example_clusters directory to clusters. These are sample clusters that come with Ironfan. Perform a knife cluster list command:

$ knife cluster list
Cluster Path: /.../homebase/clusters
+----------------+-------------------------------------------------------+
| cluster        | path                                                  |
+----------------+-------------------------------------------------------+
| big_hadoop     | /.../homebase/clusters/big_hadoop.rb                  |
| burninator     | /.../homebase/clusters/burninator.rb                  |
...

Defining a Cluster: Now let's define a cluster. A cluster in Ironfan is defined by a single file which describes all the configuration essential for that cluster. You can customize your cluster spec as follows:
- Define cloud provider settings.
- Define base roles.
- Define various facets.
- Define facet-specific roles and recipes.
- Override properties of a particular facet server instance.

Defining cloud provider settings: Ironfan currently supports the AWS and Rackspace cloud providers. We will take the AWS cloud provider as an example. For AWS you can provide configuration such as:
- The region in which the servers will be deployed.
- The availability zones to be used.
- EBS-backed or instance-store-backed servers.
- The base images (AMIs) to be used to spawn servers.
- The security groups with the allowed port ranges.

Defining Base Roles: You can define global roles for a cluster. These roles will be applied to all servers unless explicitly overridden for a particular facet or server.
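For illustration, a base role such as :systemwide is just a Chef role file in the roles directory. A hypothetical roles/systemwide.rb might look like this (the role name follows the cluster snippets in this post; the recipes in the run list are placeholders for your own cookbooks):

```ruby
# roles/systemwide.rb -- hypothetical base role applied to every node
name "systemwide"
description "Baseline configuration shared by all nodes in the cluster"
run_list(
  "recipe[ntp]",   # placeholder recipe
  "recipe[motd]"   # placeholder recipe
)
```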
All the available roles are defined in the $CHEF_HOMEBASE/roles directory. You can create a custom role and use it in your cluster config.

Defining the Environment: Environments in Chef provide a mechanism for managing different environments such as production, staging, development, testing, etc. with one Chef setup (or one organization on Hosted Chef). With environments, you can specify per-environment run lists in roles, per-environment cookbook versions, and environment attributes. The available environments can be found in the $CHEF_HOMEBASE/environments directory. Custom environments can be created and used.

Ironfan.cluster 'my_first_cluster' do
  # Environment under which chef nodes will be placed
  environment :dev

  # Global roles for all servers
  role :systemwide
  role :ssh

  # Global ec2 cloud settings
  cloud(:ec2) do
    permanent true
    region 'us-east-1'
    availability_zones ['us-east-1c', 'us-east-1d']
    flavor 't1.micro'
    backing 'ebs'
    image_name 'ironfan-natty'
    chef_client_script 'client.rb'
    security_group(:ssh).authorize_port_range(22..22)
    mount_ephemerals
  end

Defining Facets: Facets are groups of servers within a cluster. Facets share common attributes and roles. For example, if your cluster has 2 app servers and 2 database servers, you can group the app servers under an app_server facet and the database servers under a database facet.

Defining facet-specific roles and recipes: You can define roles and recipes particular to a facet. Even the global cloud settings can be overridden for a particular facet.

  facet :master do
    instances 1
    recipe 'nginx'
    cloud(:ec2) do
      flavor 'm1.small'
      security_group(:web) do
        authorize_port_range(80..80)
        authorize_port_range(443..443)
      end
    end
    role :hadoop_namenode
    role :hadoop_secondarynn
    role :hadoop_jobtracker
    role :hadoop_datanode
    role :hadoop_tasktracker
  end

  facet :worker do
    instances 2
    role :hadoop_datanode
    role :hadoop_tasktracker
  end

In the above example we have defined a facet for the Hadoop master node and a facet for the worker nodes.
The number of instances of master is set to 1 and that of worker is set to 2. Each of the master and worker facets has been assigned a set of roles. For the master facet we have overridden the EC2 flavor setting to m1.small. Also, the security group for the master node is set to accept incoming traffic on ports 80 and 443.

Cluster Management: Now that we are ready with the cluster configuration, let's get hands-on with cluster management. All the cluster configuration files are placed under the $CHEF_HOMEBASE/clusters directory. We will place our new config file as hadoop_job001_cluster.rb. Now our new cluster should be listed in the cluster list.

List Clusters:

$ knife cluster list
Cluster Path: /.../homebase/clusters
+---------------+--------------------------------------------+
| cluster       | path                                       |
+---------------+--------------------------------------------+
| hadoop_job001 | HOMEBASE/clusters/hadoop_job001_cluster.rb |
+---------------+--------------------------------------------+

Show Cluster Configuration:

$ knife cluster show hadoop_job001
Inventorying servers in hadoop_job001 cluster, all facets, all servers
my_first_cluster: Loading chef
my_first_cluster: Loading ec2
my_first_cluster: Reconciling DSL and provider information
+-----------------------------+-------+-------------+----------+------------+-----+
| Name                        | Chef? | State       | Flavor   | AZ         | Env |
+-----------------------------+-------+-------------+----------+------------+-----+
| hadoop_job001-master-0      | no    | not running | m1.small | us-east-1c | dev |
| hadoop_job001-client-0      | no    | not running | t1.micro | us-east-1c | dev |
| hadoop_job001-client-1      | no    | not running | t1.micro | us-east-1c | dev |
+-----------------------------+-------+-------------+----------+------------+-----+

Launch the whole cluster:

$ knife cluster launch hadoop_job001
Loaded information for 3 computer(s) in cluster my_first_cluster
+-----------------------------+-------+---------+----------+------------+-----+------------+----------------+----------------+------------+
| Name                        | Chef? | State   | Flavor   | AZ         | Env | MachineID  | Public IP      | Private IP     | Created On |
+-----------------------------+-------+---------+----------+------------+-----+------------+----------------+----------------+------------+
| hadoop_job001-master-0      | yes   | running | m1.small | us-east-1c | dev | i-c9e117b5 | 101.23.157.51  | 10.106.57.77   | 2012-12-10 |
| hadoop_job001-client-0      | yes   | running | t1.micro | us-east-1c | dev | i-cfe117b3 | 101.23.157.52  | 10.106.57.78   | 2012-12-10 |
| hadoop_job001-client-1      | yes   | running | t1.micro | us-east-1c | dev | i-cbe117b7 | 101.23.157.52  | 10.106.57.79   | 2012-12-10 |
+-----------------------------+-------+---------+----------+------------+-----+------------+----------------+----------------+------------+

Launch a single instance of a facet:
$ knife cluster launch hadoop_job001 master 0

Launch all instances of a facet:
$ knife cluster launch hadoop_job001 worker

Stop the whole cluster:
$ knife cluster stop hadoop_job001

Stop a single instance of a facet:
$ knife cluster stop hadoop_job001 master 0

Stop all instances of a facet:
$ knife cluster stop hadoop_job001 worker

Setting up and managing a Hadoop cluster cannot get easier than this! To recap: Ironfan, open-sourced by InfoChimps, is a systems provisioning and deployment tool which automates entire systems configuration to enable the entire Big Data stack, including tools for data ingestion, scraping, storage, computation, and monitoring. There is another tool that we are exploring for Hadoop cluster management: Apache Ambari. We will post our findings and comparisons soon; stay tuned!
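As a recap of the configuration pieces shown above, here is a sketch of what the whole clusters/hadoop_job001_cluster.rb could look like, assembled from the snippets in this post (values are illustrative only, not a tested definition):

```ruby
# clusters/hadoop_job001_cluster.rb -- illustrative Ironfan cluster definition
Ironfan.cluster 'hadoop_job001' do
  environment :dev

  # Global roles applied to every server
  role :systemwide
  role :ssh

  # Global EC2 cloud settings
  cloud(:ec2) do
    region             'us-east-1'
    availability_zones ['us-east-1c']
    flavor             't1.micro'
    backing            'ebs'
    image_name         'ironfan-natty'
    security_group(:ssh).authorize_port_range(22..22)
  end

  # Master facet: one instance with an overridden flavor
  facet :master do
    instances 1
    cloud(:ec2) { flavor 'm1.small' }
    role :hadoop_namenode
    role :hadoop_jobtracker
  end

  # Worker facet: two identical instances
  facet :worker do
    instances 2
    role :hadoop_datanode
    role :hadoop_tasktracker
  end
end
```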

Aziro Marketing


Chef Knife Plugin for Windows Azure (IaaS)

Chef is an open-source systems management and cloud infrastructure automation framework created by Opscode. It helps you manage your IT infrastructure and applications as code, and gives you a way to automate your infrastructure and processes. Knife is a CLI to create, update, search, and delete the entities in your infrastructure, or manage actions on them: nodes (hosts), cloud resources, metadata (roles, environments), and infrastructure code (recipes, cookbooks), etc. A Knife plugin is a set of one or more subcommands that can be added to Knife to support additional functionality that is not built into the base set of Knife subcommands. knife-azure is a Knife plugin which helps you automate provisioning a virtual machine in Windows Azure and bootstrapping it. This article talks about using Chef and the knife-azure plugin to provision Windows/Linux virtual machines in Windows Azure and bootstrap them.

Understanding Windows Azure (IaaS): To deploy a virtual machine in a region (or service location) in Azure, all of the following components have to be created:
- A virtual machine is associated with a DNS (or cloud service). Multiple virtual machines can be associated with a single DNS with load balancing enabled on certain ports (e.g. 80, 443, etc.).
- A virtual machine has a storage account associated with it, which stores the OS and data disks.
- An X509 certificate is required for password-less SSH authentication on Linux VMs and HTTPS-based WinRM authentication on Windows VMs.
- A service location is a geographic region in which to create the VMs, storage accounts, etc.

The Storage Account: The storage account holds all the disks (OS as well as data). It is recommended that you create a storage account in a region and use it for the VMs in that region. If you provide the option --azure-storage-account, the knife-azure plugin creates a new storage account with that name if it doesn't already exist, and uses this storage account to create your VM.
If you do not specify the option, the plugin checks for an existing storage account in the service location you have specified (using the option --service-location). If no storage account exists in that location, it creates a new storage account with a name prefixed with the azure-dns-name and suffixed with a 10-character random string.

Azure Virtual Machine: This is also called a Role (specified using the option --azure-vm-name). If you do not specify the VM name, the default VM name is taken from the DNS name (specified using the option --azure-dns-name). The VM name should be unique within a deployment. An Azure VM is analogous to an Amazon EC2 instance. Just as an instance in Amazon is created from an AMI, you can create an Azure VM from the stock images provided by Azure. You can also create your own images and save them against your subscription.

Azure DNS: This is also called a Hosted Service or Cloud Service. It is a container for your application deployments in Azure (specified using the option --azure-dns-name). A cloud service is created for each Azure deployment. You can have multiple VMs (Roles) within a deployment, with certain ports configured as load-balanced.

OS Disk: A disk is a VHD that you can boot and mount as a running version of an operating system. After an image is provisioned, it becomes a disk. A disk is always created when you use an image to create a virtual machine. Any VHD that is attached to virtualized hardware and that is running as part of a service is a disk. An existing OS disk can also be used (specified using the option --azure-os-disk-name) to create a VM.

Certificates: For SSH login without a password, an X509 certificate needs to be uploaded to the Azure DNS/hosted service. As an end user, simply specify your private RSA key using the --identity-file option and the knife plugin takes care of generating an X509 certificate. The virtual machine which is spawned then contains the required SSH thumbprint.
Installation:

Gem install: Run the command gem install knife-azure.

Install from source code: To get the latest changes in the knife-azure plugin, download the source code, build, and install the plugin:

1. Uninstall any existing versions:
$ gem uninstall knife-azure
Successfully uninstalled knife-azure-1.2.0

2. Clone the git repo and build the code:
$ git clone https://github.com/opscode/knife-azure
$ cd knife-azure
$ gem build knife-azure.gemspec
WARNING: description and summary are identical
Successfully built RubyGem
Name: knife-azure
Version: 1.2.0
File: knife-azure-1.2.0.gem

3. Install the gem:
$ gem install knife-azure-1.2.0.gem
Successfully installed knife-azure-1.2.0
1 gem installed
Installing ri documentation for knife-azure-1.2.0...
Building YARD (yri) index for knife-azure-1.2.0...
Installing RDoc documentation for knife-azure-1.2.0...

4. Verify your installation:
$ gem list | grep azure
knife-azure (1.2.0)

To provision a VM in Windows Azure and bootstrap it using knife, first create a new Windows Azure account at this link, and then download the publish settings file from https://manage.windowsazure.com/publishsettings. The publish settings file contains certificates used to sign all the HTTP requests (REST APIs).

Azure supports two modes to create virtual machines: quick create and advanced.

Azure VM Quick Create: You can create a server with minimal configuration. On the Azure Management Portal, this corresponds to the "Quick Create – Virtual Machine" workflow.
The corresponding sample command for quick create of a small Windows instance is:

knife azure server create --azure-publish-settings-file '/path/to/your/cert.publishsettingsfile' --azure-dns-name 'myservice' --azure-source-image 'windows-image-name' --winrm-password 'jetstream@123' --template-file 'windows-chef-client-msi.erb' --azure-service-location "West US"

Azure VM Advanced Create: You can set various other options in the advanced create, including the service location (region), storage account, VM name, etc. The corresponding command to create a Linux instance with advanced options is:

knife azure server create --azure-publish-settings-file "path/to/your/publish/settings/file" --azure-vm-size Medium --azure-dns-name "HelloAzureDNS" --azure-service-location "West US" --azure-vm-name 'myvm01' --azure-source-image "b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-13_04-amd64-server-20130423-en-us-30GB" --azure-storage-account "helloazurestorage1" --ssh-user "helloazure" --identity-file "path/to/your/rsa/pvt/key"

To create a VM and connect it to an existing DNS/service, you can use a command like this:

knife azure server create --azure-publish-settings-file "path/to/your/publish/settings/file" --azure-connect-to-existing-dns --azure-dns-name 'myservice' --azure-vm-name 'myvm02' --azure-service-location 'West US' --azure-source-image 'source-image-name' --ssh-user 'jetstream' --ssh-password 'jetstream@123'

List available images:
knife azure image list

List currently available virtual machines:
knife azure server list

Delete and clean up a virtual machine:
knife azure server delete 'myvm02' --azure-dns-name 'myservice' --chef-node-name 'myvm02' --purge

This post is meant to explain the basics and usage of knife-azure.
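As a convenience, knife options like the --azure-* flags shown above can typically also be set once in your knife.rb instead of being repeated on every command line (standard knife plugin behavior; the exact keys below mirror the CLI flag names and the values are placeholders):

```ruby
# knife.rb -- hypothetical defaults for the knife-azure plugin
knife[:azure_publish_settings_file] = "/path/to/your.publishsettings"  # placeholder path
knife[:azure_service_location]      = "West US"
knife[:azure_source_image]          = "source-image-name"              # placeholder image
```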

Aziro Marketing


How to install Microsoft SQL Client Libraries Using CFEngine

CFEngine is an IT infrastructure automation framework that helps engineers, system admins, and other stakeholders in an IT organization manage IT infrastructure while ensuring service levels and compliance. Here we use CFEngine to solve one of the many problems within automation: deploying the Microsoft SQL Server client utilities. We will take a dive into CFEngine syntax and try to program it (well, in configuration-management terminology, declare the state of the system).

What does it take to install the Microsoft SQL Server client libraries using CFEngine? You need two things:
1. The Microsoft SQL Server client installers.
2. An understanding of CFEngine – we will learn this as we write the policy.

Microsoft SQL Server 2008 R2 Native Client: Let's try to install the native client for a 64-bit system. The installer is available here. So we first need to download the installer and then use it to install the application. Let's break this down into smaller tasks and figure out how to do each using CFEngine. Basically, we need to figure out two things:
1. How to download the installer file from the URL.
2. How to use the downloaded file and invoke the installer.

CFEngine defines the term "promise" to describe the final state of a resource or part of a system. All such promises are written into a file referred to as a "policy file". CFEngine supports a large number of "promise types" that help you achieve day-to-day infrastructure tasks such as creating users or files with specific attributes, installing packages, etc. CFEngine has its own language syntax, known as its DSL, that helps you define how to automate the system. All of this is well described in the documentation. The things we need to know are variables, bundles (think of these as methods, i.e., groups of promises), and classes (think of these as events or conditionals).
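To make those three concepts concrete for readers coming from general-purpose languages, here is a loose Ruby analogy (this is not CFEngine code, just an illustration of the mental model; the names and the placeholder else-branch are our own):

```ruby
# Illustrative analogy only: a CFEngine bundle behaves like a method taking
# arguments, classes behave like boolean guards, and variables are strings
# initialized only under the matching guard.
def ensure_package(architecture, mssqlversion)   # "bundle" with arguments
  x86_64    = (architecture == "x86_64")         # "classes" as conditions
  is_2008r2 = (mssqlversion == "2008R2")

  if x86_64 && is_2008r2                         # guarded "variable" definition
    "sqlncli"
  else
    "other-installer"                            # placeholder branch
  end
end

ensure_package("x86_64", "2008R2")  # => "sqlncli"
```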
Then there is "ordering", which defines the flow of execution; this is mostly implied ordering, though you can have explicitly defined ordering using "depends_on". Well, I feel that I have described the whole CFEngine language in two paragraphs, which will be hard to understand unless you read the CFEngine docs! But even if you do read them, these paragraphs should help you follow along with a real-life example.

Jumping back to the above breakdown of tasks, let's have a look at how to download the installer .msi file from a web URL. The URLs will be different for different versions of the SQL Server client and for different architectures. Let's define some variables using classes (as conditions):

x86_64.2008R2.native_client::
  "package_url" string => "http://download.microsoft.com/download/B/6/3/B63CAC7F-44BB-41FA-92A3-CBF71360F022/1033/x64/sqlncli.msi";
  "msi_name" string => "sqlncli";
  "msi_args" string => "IACCEPTSQLNCLILICENSETERMS=YES";
  "package_name" string => "Microsoft SQL Server 2008 R2 Native Client";

The above CFEngine code defines string variables initialized to values under the condition (expressed using classes) that we are targeting 64-bit x86 systems and trying to install "Microsoft SQL Server 2008 R2 Native Client". Note here that 'x86_64' is one CFEngine class and '2008R2' is another. You can define and initialize different values for these variables under other conditions, say x86.2008R2.native_client:: for 32-bit x86 systems. So the next question is: how do we define these classes?

"class_name_in_quotes" expression => condition based on classes or functions
Before we get into defining our classes, let's write a definition for a bundle (think of it as writing a method) that takes a few input arguments:

# isserver - 0/1
# architecture - x86/x86_64
# mssqlversion - sql version 2008R2/2012
# type - native_client, cli, clr_types, management_objects, sql_powershell_ext
bundle agent ensure(runenv, metadata, isserver, purge, architecture, mssqlversion, installer_type)
{
  ...
}

I hope this is self-explanatory; just note that bundles are the CFEngine way of grouping a set of promises, and they may or may not take arguments. Logically, bundles can hold variables, classes, or methods in order to define a state of the system in a certain context. Let's come back to defining the classes required for our solution: classes can be based on system state or on bundle arguments. This is how we can define the classes we require:

bundle agent ensure(runenv, metadata, isserver, purge, architecture, mssqlversion, installer_type)
{
  classes:
    "$(mssqlversion)" expression => "any";
    "$(architecture)" expression => "any";
    "$(installer_type)" expression => "any";
  ...

We are defining our classes to be named the same as the argument values; for example, the parameter architecture can be set to 'x86_64', and the argument mssqlversion can be set to '2008R2'. These are defined in the 'any' case, but one can have a conditional expression as well. For example, define a soft class (i.e., a user-defined class) 'starnix' if the current platform is either Linux or Solaris, where 'linux' is a hard class already defined by CFEngine:

"starnix" expression => "linux||solaris";

Download: Now that we have the basics, let's write a bundle to download the installer from a web URL.
Since we are doing this on Windows, we have two options to download the package from the Internet: using WScript or using PowerShell cmdlets. To use WScript we would have to write a script and trigger it via the CFEngine 'commands' promise. Using PowerShell, the script is short and elegant compared to the older WScript style. Here is how we do the download:

bundle agent download_from_url(url, localpath)
{
  classes:
    "already_downloaded" expression => fileexists("$(localpath)");

  reports:
    already_downloaded::
      "File is present at $(localpath)."
        classes => if_repaired("download_success");
    !already_downloaded::
      "Downloading from $(url) to $(localpath)";

  commands:
    !already_downloaded.windows::
      "(new-object System.Net.WebClient).DownloadFile('$(url)', '$(localpath)')"
        contain => pscontainbody,
        classes => if_repaired("download_success");

  reports:
    !already_downloaded.download_success::
      "Package was downloaded successfully from $(url) into $(localpath).";
    !already_downloaded.!download_success::
      "Package download failed from $(url).";
}

body contain pscontainbody
{
  useshell => "powershell";
}

Note above that we only try to download if the file is not already present, a condition that is set by defining the class 'already_downloaded' using the CFEngine function fileexists() in an expression. The 'commands' promise helps trigger a DOS/PowerShell/Unix command. The command we use creates an object of the 'System.Net.WebClient' class in PowerShell and calls its DownloadFile() method to download the installer from a web URL. Note that we have to escape quotes to keep CFEngine happy and delimit at the proper places. Additionally, if the download was successful, we define a new class to record that condition. This is achieved using:

classes => if_repaired("download_success")

Another important CFEngine concept used here is the 'body', which can help modularize the specification of attributes.
We just use it to define the 'useshell' attribute; for larger examples see this.

Install: For installation we have to use the installer in silent, non-interactive mode. This can be achieved by passing the '/qn' flag to msiexec.exe. Here is how we can perform the install:

bundle agent install_using_msi(installer, install_log, msi_args)
{
  reports:
    "Installing Package using $(installer)";

  commands:
    windows::
      "Start-Process -FilePath \"msiexec.exe\" -ArgumentList '/qn /log $(install_log) /i $(installer) $(msi_args)' -Wait -Passthru"
        contain => pscontainbody,
        classes => if_repaired("installed_package");
}

This should be easy to understand now; just note that we are reusing the 'body' concept here in the form of 'pscontainbody', which was defined with the download bundle earlier. The Start-Process cmdlet with the '-Wait' option runs the installation synchronously.

Now that we know how to download and install using bundles, we need to invoke these in order within the "ensure" bundle we looked at above while defining variables. For this we will use the "methods" promise type:

methods:
  "fetch" usebundle => download_from_url("$(package_url)", "$(local_temp_dir)$(msi_name).msi");
  !purge.download_success::
    "install" usebundle => install_using_msi("$(local_temp_dir)$(msi_name).msi", "$(local_temp_dir)install_$(msi_name).log", "$(msi_args)");

The methods promises are named 'fetch' and 'install' and invoke the download_from_url and install_using_msi bundles respectively, passing in the variable values. The "install" promise is evaluated only if the download was successful, which is flagged using the "download_success" CFEngine class.

Given here is the complete source code for installing the various SQL Server client utilities.

# Copyright:: Copyright (c) 2014 Clogeny Technologies.
#
# License:: Apache License, Version 2.0
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file
# except in compliance with the License.
You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software distributed under the # License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, # either express or implied. See the License for the specific language governing permissions # and limitations under the License. body common control { inputs => { "c:Program FilesCfengineinputslibrariescfengine_stdlib.cf" }; bundlesequence => { "install_sql_client", "install_sql_cli", "install_sql_clr_types", "install_sql_management_objects", "install_sql_powershell_ext" }; # bundlesequence => { "uninstall_sql_powershell_ext", "uninstall_sql_management_objects", "uninstall_sql_clr_types", "uninstall_sql_cli", "uninstall_sql_client" }; } ################ TEST SPECS ################ bundle agent install_sql_client { methods: "any" usebundle => ensure("runenv", "metadata", "0", "0", "x86_64", "2008R2", "native_client"), classes => if_repaired("installed_sqlclient"); reports: installed_sqlclient:: "Installed Microsoft SQL Server 2008 R2 Native Client successfully."; } bundle agent install_sql_cli { methods: "any" usebundle => ensure("runenv", "metadata", "0", "0", "x86_64", "2008R2", "cli"), classes => if_repaired("installed_sqlcli"); reports: installed_sqlcli:: "Installed Microsoft SQL Server 2008 R2 Command Line Utilities successfully."; } bundle agent install_sql_clr_types { methods: "any" usebundle => ensure("runenv", "metadata", "0", "0", "x86_64", "2008R2", "clr_types"), classes => if_repaired("installed_sql_clr_types"); reports: installed_sql_clr_types:: "Installed Microsoft SQL Server System CLR Types (x64) successfully."; } bundle agent install_sql_management_objects { methods: "any" usebundle => ensure("runenv", "metadata", "0", "0", "x86_64", "2008R2", "management_objects"), classes => if_repaired("installed_sql_management_objects"); reports: installed_sql_management_objects:: "Installed 
Microsoft SQL Server 2008 R2 Management Objects (x64) successfully."; }

bundle agent install_sql_powershell_ext {
  methods:
    "any" usebundle => ensure("runenv", "metadata", "0", "0", "x86_64", "2008R2", "sql_powershell_ext"),
          classes   => if_repaired("installed_sql_powershell_ext");
  reports:
    installed_sql_powershell_ext::
      "Installed Windows PowerShell Extensions for SQL Server 2008 R2 successfully.";
}

####### UNINSTALL ##########

bundle agent uninstall_sql_client {
  methods:
    "any" usebundle => ensure("runenv", "metadata", "0", "1", "x86_64", "2008R2", "native_client"),
          classes   => if_repaired("uninstalled_sqlclient");
  reports:
    uninstalled_sqlclient::
      "Uninstalled Microsoft SQL Server 2008 R2 Native Client successfully.";
}

bundle agent uninstall_sql_cli {
  methods:
    "any" usebundle => ensure("runenv", "metadata", "0", "1", "x86_64", "2008R2", "cli"),
          classes   => if_repaired("uninstalled_sqlcli");
  reports:
    uninstalled_sqlcli::
      "Uninstalled Microsoft SQL Server 2008 R2 Command Line Utilities successfully.";
}

bundle agent uninstall_sql_clr_types {
  methods:
    "any" usebundle => ensure("runenv", "metadata", "0", "1", "x86_64", "2008R2", "clr_types"),
          classes   => if_repaired("uninstalled_sql_clr_types");
  reports:
    uninstalled_sql_clr_types::
      "Uninstalled Microsoft SQL Server System CLR Types (x64) successfully.";
}

bundle agent uninstall_sql_management_objects {
  methods:
    "any" usebundle => ensure("runenv", "metadata", "0", "1", "x86_64", "2008R2", "management_objects"),
          classes   => if_repaired("uninstalled_sql_management_objects");
  reports:
    uninstalled_sql_management_objects::
      "Uninstalled Microsoft SQL Server 2008 R2 Management Objects (x64) successfully.";
}

bundle agent uninstall_sql_powershell_ext {
  methods:
    "any" usebundle => ensure("runenv", "metadata", "0", "1", "x86_64", "2008R2", "sql_powershell_ext"),
          classes   => if_repaired("uninstalled_sql_powershell_ext");
  reports:
    uninstalled_sql_powershell_ext::
      "Uninstalled Windows PowerShell Extensions for SQL Server 2008 R2 successfully.";
}

################ ACTUAL CODE ################
# isserver       - 0/1
# purge          - 0 (install) / 1 (uninstall)
# architecture   - x86/x86_64
# mssqlversion   - SQL version: 2008R2/2012
# installer_type - native_client, cli, clr_types, management_objects, sql_powershell_ext
bundle agent ensure(runenv, metadata, isserver, purge, architecture, mssqlversion, installer_type) {
  classes:
    "server"            expression => strcmp($(isserver), "1");
    "client"            expression => strcmp($(isserver), "0");
    "purge"             expression => strcmp($(purge), "1");
    "$(mssqlversion)"   expression => "any";
    "$(architecture)"   expression => "any";
    "$(installer_type)" expression => "any";

  vars:
    "local_temp_dir" string => execresult("$env:temp", "powershell");

    # ************ x86_64 configurations ************
    x86_64.2008R2.native_client::
      "package_url"  string => "http://download.microsoft.com/download/B/6/3/B63CAC7F-44BB-41FA-92A3-CBF71360F022/1033/x64/sqlncli.msi";
      "msi_name"     string => "sqlncli";
      "msi_args"     string => "IACCEPTSQLNCLILICENSETERMS=YES";
      "package_name" string => "Microsoft SQL Server 2008 R2 Native Client";

    x86_64.2008R2.cli::
      "package_url"  string => "http://download.microsoft.com/download/B/6/3/B63CAC7F-44BB-41FA-92A3-CBF71360F022/1033/x64/SqlCmdLnUtils.msi";
      "msi_name"     string => "SqlCmdLnUtils";
      "msi_args"     string => "";
      "package_name" string => "Microsoft SQL Server 2008 R2 Command Line Utilities";

    x86_64.2008R2.clr_types::
      "package_url"  string => "http://download.microsoft.com/download/B/6/3/B63CAC7F-44BB-41FA-92A3-CBF71360F022/1033/x64/SQLSysClrTypes.msi";
      "msi_name"     string => "SQLSysClrTypes";
      "msi_args"     string => "";
      "package_name" string => "Microsoft SQL Server System CLR Types (x86_64)";

    x86_64.2008R2.management_objects::
      "package_url"  string => "http://download.microsoft.com/download/B/6/3/B63CAC7F-44BB-41FA-92A3-CBF71360F022/1033/x64/SharedManagementObjects.msi";
      "msi_name"     string => "SQLSharedManagementObjects";
      "msi_args"     string => "";
      "package_name" string => "Microsoft SQL Server 2008 R2 Management Objects (x86_64)";

    x86_64.2008R2.sql_powershell_ext::
      "package_url"  string => "http://download.microsoft.com/download/B/6/3/B63CAC7F-44BB-41FA-92A3-CBF71360F022/1033/x64/PowerShellTools.msi";
      "msi_name"     string => "SQLPowerShellTools";
      "msi_args"     string => "";
      "package_name" string => "Windows PowerShell Extensions for SQL Server 2008 R2";

    # ************ x86 configurations ************
    x86.2008R2.native_client::
      "package_url"  string => "http://download.microsoft.com/download/B/6/3/B63CAC7F-44BB-41FA-92A3-CBF71360F022/1033/x86/sqlncli.msi";
      "msi_name"     string => "sqlncli";
      "msi_args"     string => "IACCEPTSQLNCLILICENSETERMS=YES";
      "package_name" string => "Microsoft SQL Server 2008 R2 Native Client";

    x86.2008R2.cli::
      "package_url"  string => "http://download.microsoft.com/download/B/6/3/B63CAC7F-44BB-41FA-92A3-CBF71360F022/1033/x86/SqlCmdLnUtils.msi";
      "msi_name"     string => "SqlCmdLnUtils";
      "msi_args"     string => "";
      "package_name" string => "Microsoft SQL Server 2008 R2 Command Line Utilities";

    x86.2008R2.clr_types::
      "package_url"  string => "http://download.microsoft.com/download/B/6/3/B63CAC7F-44BB-41FA-92A3-CBF71360F022/1033/x86/SQLSysClrTypes.msi";
      "msi_name"     string => "SQLSysClrTypes";
      "msi_args"     string => "";
      "package_name" string => "Microsoft SQL Server System CLR Types (x86)";

    x86.2008R2.management_objects::
      "package_url"  string => "http://download.microsoft.com/download/B/6/3/B63CAC7F-44BB-41FA-92A3-CBF71360F022/1033/x86/SharedManagementObjects.msi";
      "msi_name"     string => "SQLSharedManagementObjects";
      "msi_args"     string => "";
      "package_name" string => "Microsoft SQL Server 2008 R2 Management Objects (x86)";

    x86.2008R2.sql_powershell_ext::
      "package_url"  string => "http://download.microsoft.com/download/B/6/3/B63CAC7F-44BB-41FA-92A3-CBF71360F022/1033/x86/PowerShellTools.msi";
      "msi_name"     string => "SQLPowerShellTools";
      "msi_args"     string => "";
      "package_name" string => "Windows PowerShell Extensions for SQL Server 2008 R2";

  methods:
    "fetch" usebundle => download_from_url("$(package_url)", "$(local_temp_dir)$(msi_name).msi");

    !purge.download_success::
      "install" usebundle => install_using_msi("$(local_temp_dir)$(msi_name).msi",
                                              "$(local_temp_dir)install_$(msi_name).log",
                                              "$(msi_args)");
    purge.download_success::
      "uninstall" usebundle => uninstall_using_msi("$(local_temp_dir)$(msi_name).msi",
                                                  "$(local_temp_dir)uninstall_$(msi_name).log",
                                                  "$(msi_args)");

  reports:
    installed_package::
      "Installed Package successfully.";
}

bundle agent download_from_url(url, localpath) {
  classes:
    "already_downloaded" expression => fileexists("$(localpath)");

  reports:
    already_downloaded::
      "File is present at $(localpath)."
        classes => if_repaired("download_success");
    !already_downloaded::
      "Downloading from $(url) to $(localpath)";

  commands:
    !already_downloaded.windows::
      "(new-object System.Net.WebClient).DownloadFile('$(url)', '$(localpath)')"
        contain => pscontainbody,
        classes => if_repaired("download_success");

  reports:
    !already_downloaded.download_success::
      "Package was downloaded successfully from $(url) into $(localpath).";
    !already_downloaded.!download_success::
      "Package download failed from $(url).";
}

body contain pscontainbody {
  useshell => "powershell";
}

bundle agent install_using_msi(installer, install_log, msi_args) {
  reports:
    "Installing Package using $(installer)";

  commands:
    windows::
      "Start-Process -FilePath \"msiexec.exe\" -ArgumentList '/qn /log $(install_log) /i $(installer) $(msi_args)' -Wait -Passthru"
        contain => pscontainbody,
        classes => if_repaired("installed_package");
}

bundle agent uninstall_using_msi(installer, uninstall_log, msi_args) {
  reports:
    "Uninstalling Package using $(installer)";

  commands:
    windows::
      "Start-Process -FilePath \"msiexec.exe\" -ArgumentList '/qn /log $(uninstall_log) /x $(installer) $(msi_args)' -Wait -Passthru"
        contain => pscontainbody,
        classes => if_repaired("uninstalled_package"),
        comment => "Uninstalling Package using $(installer)";
}
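Stripped of CFEngine syntax, the flow of the ensure bundle is: skip the download if the MSI is already cached locally, otherwise fetch it, then hand it to msiexec with /i or /x depending on the purge flag. As a hedged illustration (not part of the policy above), that idempotent flow can be sketched in plain Ruby; ensure_msi, fetch and run_msiexec are hypothetical stand-ins so the logic can be exercised without a network or a Windows host:

```ruby
require "fileutils"
require "tmpdir"

# Idempotent fetch-then-(un)install flow, mirroring the "ensure" bundle.
# `fetch` and `run_msiexec` are injectable stand-ins for the real
# download_from_url and msiexec steps.
def ensure_msi(url:, msi_path:, log_path:, purge: false, fetch:, run_msiexec:)
  downloaded = File.exist?(msi_path)      # "already_downloaded" class
  unless downloaded
    fetch.call(url, msi_path)             # download_from_url bundle
    downloaded = File.exist?(msi_path)    # "download_success" class
  end
  return :download_failed unless downloaded

  op = purge ? "/x" : "/i"                # purge selects uninstall
  run_msiexec.call("msiexec.exe #{op} #{msi_path} /qn /log #{log_path}")
  purge ? :uninstalled : :installed
end

Dir.mktmpdir do |tmp|
  msi   = File.join(tmp, "sqlncli.msi")
  calls = []
  fake_fetch = ->(url, path) { calls << [:fetch, url]; FileUtils.touch(path) }
  fake_msi   = ->(cmd)       { calls << [:msiexec, cmd] }

  ensure_msi(url: "http://example.com/sqlncli.msi", msi_path: msi,
             log_path: File.join(tmp, "install.log"),
             fetch: fake_fetch, run_msiexec: fake_msi)
  # Second run: the MSI is already cached, so no re-download happens.
  ensure_msi(url: "http://example.com/sqlncli.msi", msi_path: msi,
             log_path: File.join(tmp, "install.log"),
             fetch: fake_fetch, run_msiexec: fake_msi)
  puts calls.count { |c| c[0] == :fetch }   # fetched only once
end
```

The same convergent behaviour is what the already_downloaded and download_success classes give the CFEngine policy for free.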

Aziro Marketing


How to write an Ohai plugin for the Windows Azure IaaS cloud

Chef is an open-source systems management and cloud infrastructure automation framework created by Opscode. It lets you manage your IT infrastructure and applications as code, giving you a way to automate your infrastructure and processes. Knife is a CLI to create, update, search, and delete entities in your infrastructure, or to manage actions on them: nodes (hosts), cloud resources, metadata (roles, environments), and infrastructure code (recipes, cookbooks). A Knife plug-in is a set of one or more subcommands that can be added to Knife to support functionality beyond the base set of Knife subcommands.

Ohai, Ohai plugins and the hints system:

Ohai is a tool that detects properties of a node's environment and provides them to the chef-client during every Chef run. The types of properties Ohai reports on include:

- Platform details
- Networking usage
- Memory usage
- Processor usage
- Kernel data
- Host names
- Fully qualified domain names (FQDN)
- Other configuration details

When additional data about a system is required, a custom Ohai plugin can be used to gather that information. An Ohai plugin is written in a Ruby DSL, and there are several community Ohai cloud plugins that provide cloud-specific information.

Writing an Ohai plug-in for the Azure IaaS cloud:

In simple words, an Ohai plug-in is a Ruby DSL that populates and returns a Mash object holding nested data. It can be as simple as:

provides "azure"
azure Mash.new
azure[:version] = "1.2.3"
azure[:description] = "VM created on azure"

And you are done! In practice you would populate this programmatically. The plug-in is now ready, and when the chef-client runs you will see these attributes set on the node. See the Ohai documentation for more on setting up custom plug-ins. Additionally, Ohai includes a hinting system that allows a plugin to receive a hint through the existence of a file.
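Populated programmatically, the plug-in body amounts to building a nested structure of attributes from whatever the plug-in discovers. A hedged sketch of that population step in plain Ruby, where a Hash stands in for Ohai's Mash and build_azure_attributes is a hypothetical helper with illustrative values:

```ruby
# Sketch of the attribute-population step of an Ohai plug-in.
# A plain Hash stands in for Ohai's Mash (a Hash with indifferent access);
# in a real plug-in these keys surface as node["azure"][...] attributes.
def build_azure_attributes(vm_metadata)
  azure = {}
  azure[:version]     = "1.2.3"                    # illustrative values
  azure[:description] = "VM created on azure"
  vm_metadata.each { |k, v| azure[k.to_sym] = v }  # merge discovered data
  azure
end

attrs = build_azure_attributes("vm_name" => "test-linuxvm-on-cloud")
puts attrs[:vm_name]   # => "test-linuxvm-on-cloud"
```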
These files are in JSON format and allow passing additional information about the environment at bootstrap time, such as the region or datacenter. Ohai plug-ins can then use this information to identify the type of cloud the node was created on, along with any cloud attributes that should be set on the node.

Let's consider a case where you create a virtual machine instance on the Microsoft Windows Azure IaaS cloud using the knife-azure plugin. Typically, once the VM is created and successfully bootstrapped, we can use knife ssh to shell into the VM and run commands. For that, the public IP or FQDN must be set as a node attribute. In the case of Azure, the public FQDN can only be retrieved by querying the Azure management API, which would add a lot of overhead to Ohai. Alternatively, we can handle this with the Ohai hint system: the knife-azure plug-in determines the public FQDN as part of VM creation and passes this information to the VM, and an Ohai plug-in then reads the hints file and determines the public address. Let's see how to achieve this.

The hint data can be generated by any cloud plug-in and sent to the node during bootstrap. For example, the knife-azure plug-in sets a few attributes within the plug-in code before bootstrap:

Chef::Config[:knife][:hints]["azure"] ||= cloud_attributes

where cloud_attributes is a hash containing the attributes to be set on the node by the azure Ohai plug-in:

{"public_ip":"137.135.46.202","vm_name":"test-linuxvm-on-cloud","public_fqdn":"my-hosted-svc.cloudapp.net","public_ssh_port":"7931"}

If it is not feasible to modify the plug-in code and the data is available before the knife command executes, you can instead pass this information as a JSON file via a CLI option:

--hint HINT_NAME[=HINT_FILE]
Specify an Ohai hint to be set on the bootstrap target. Use multiple --hint options to specify multiple hints.
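The hint file itself is just JSON dropped into Ohai's hints directory on the node. A minimal sketch of writing one the way a knife plug-in would at bootstrap, and reading it back the way an Ohai plug-in would, using only Ruby's stdlib (write_hint/read_hint are hypothetical helpers; real nodes use Ohai's configured hints path, e.g. /etc/chef/ohai/hints):

```ruby
require "json"
require "tmpdir"

# Write an Azure hint file as a knife plug-in would at bootstrap time.
def write_hint(dir, name, data)
  File.write(File.join(dir, "#{name}.json"), JSON.generate(data))
end

# Read a hint back as an Ohai plug-in would; a missing file means the
# node was not created on that cloud.
def read_hint(dir, name)
  path = File.join(dir, "#{name}.json")
  return nil unless File.exist?(path)
  JSON.parse(File.read(path))
end

Dir.mktmpdir do |hints_dir|
  write_hint(hints_dir, "azure",
             "public_ip"       => "137.135.46.202",
             "public_fqdn"     => "my-hosted-svc.cloudapp.net",
             "public_ssh_port" => "7931")

  hint = read_hint(hints_dir, "azure")
  puts hint["public_fqdn"]                  # => "my-hosted-svc.cloudapp.net"
  puts read_hint(hints_dir, "ec2").inspect  # => nil (no EC2 hint present)
end
```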
The corresponding Ohai plug-in that loads this information and sets the attributes can be seen here: https://github.com/opscode/ohai/blob/master/lib/ohai/plugins/cloud.rb#L234

In the scenario above, this loads attributes like cloud.public_fqdn onto the node, which can then be used by the knife ssh command or for any other purpose.

Knife SSH example:

Once the attributes are populated on the Chef node, we can use the knife ssh command as follows:

$ knife ssh 'name:nodename' 'sudo chef-client -v' -a 'cloud.public_fqdn' --identity-file test.pem --ssh-user foo --ssh-port 22
my-hosted-svc.cloudapp.net  Chef: 11.4.4

Note the use of the attribute 'cloud.public_fqdn', which is populated from the JSON via the Ohai hint system. This post is meant to explain the basics and showcase a real-world example of Ohai plugins and the hints system.
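Conceptually, the cloud.rb plug-in linked above copies provider-specific attributes into a generic cloud namespace so that tools like knife ssh work the same on any provider. A hedged sketch of that mapping for the Azure case, handling only the keys from the hint JSON shown earlier (cloud_mash_from_azure is a hypothetical helper, not the plug-in's real API):

```ruby
# Map Azure-specific attributes into a generic "cloud" namespace, in the
# spirit of Ohai's cloud.rb plug-in; only the hint keys shown earlier are
# handled in this sketch.
def cloud_mash_from_azure(azure)
  return nil if azure.nil? || azure.empty?  # no azure data => no cloud mash
  {
    "provider"    => "azure",
    "public_ips"  => [azure["public_ip"]].compact,
    "public_fqdn" => azure["public_fqdn"],
  }
end

cloud = cloud_mash_from_azure(
  "public_ip"   => "137.135.46.202",
  "public_fqdn" => "my-hosted-svc.cloudapp.net"
)
puts cloud["public_fqdn"]   # => "my-hosted-svc.cloudapp.net"
```

With attributes normalised this way, 'cloud.public_fqdn' resolves regardless of which cloud plug-in populated it.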

Aziro Marketing
