Tag Archive

Below you'll find a list of all posts that have been tagged as "cloud computing"

It’s CLOUDy out there!

“Cloud” or “cloud computing” has been a buzzword in the technology space for the last decade or so. But for a layperson, what exactly does it mean? How does it affect or benefit us, or any organization for that matter? What does the future look like when it comes to the cloud? And, most importantly, is it really worth all the hype? Let’s try to look into and answer as many of these questions as possible here.

Cloud Philosophy – Simplified:

To understand the cloud, we can start with a simple, relatable example. Every family needs milk at home. The quantity of milk needed per family may vary, but it is more or less constant for each family every day, or on average over a week. There could be situations where guests are visiting or a festival is on, during which milk consumption rises. There could also be times when the family goes on vacation, or some members are out of town, during which milk consumption drops. What does the family do during such spikes or dips in requirement? They simply buy less milk, or ask the milk vendor to deliver only the required quantity for the specified duration. So the question here is: “Would you buy the cow for your intermittently fluctuating milk requirements?” The answer is no!

Now consider the cow being “the cloud”, which instead of milk gives us “resources” to order in the right quantity based on our needs at a given time. Simple, isn’t it? We don’t spend huge amounts of money on the infrastructure (the cow); we pay as per use for the resources (the milk), quite literally ‘milking’ the benefits of the cloud (the cow).

We all use the cloud:

What if I told you that all of us used the cloud even before we knew about it? Yes, we did. Consider that you have a Word file saved on your desktop at the office and you need to access that file at home for further modification. Can you really just open up your computer at home and start working on the file? No, because it is saved on your office computer’s hard drive; you would have to either email it to yourself so you can download it at home, or carry it on a pen drive. Now consider you were working on the same file on a third-party platform such as Google Docs in your G-Drive. All you would have to do is have an internet connection at home and sign in to G-Drive with the same account. That’s it. You accessed the Google cloud, where they had saved your file on their server. The same happens when you access your emails. Be it Google, Yahoo, or Microsoft email, these are never on ‘a particular computer’ but on the cloud, i.e., a server. This makes it possible for us to log into any machine and check our emails simply by signing in with our username and password. The cloud was never an alien concept; it’s just that it is more commercialized now, and smaller businesses and startups that aren’t financially strong enough to own the infrastructure are moving ahead to reap its benefits.

Top Players in the Cloud:

Many organizations have joined the ‘cloud party’, but the top contributors as per the latest 2018 survey are AWS (Amazon Web Services), Azure, Google, and IBM. The following chart shows the market share of each player and how they compete with each other in terms of market adoption, year-on-year growth, and footprint.

Types of Clouds:

Going further, there are various flavors of cloud computing that a business can choose from.
Depending on the needs of the organization, a decision can be taken on whether an enterprise needs a public, private, or hybrid cloud. Let’s briefly look at each of these in a bit more detail.

Public Cloud: This is when an enterprise or business wants its resources to be available to everyone on the internet. The public cloud model allows users to utilize software that is hosted and managed by a third party and accessed through the internet, such as Google Drive. By allowing a third party to host and manage various aspects of computing, businesses can scale faster and save money on setup and management.

Private Cloud: Private cloud infrastructure can be hosted in on-site data centers or by a third party, but it is managed by and accessible to the company alone. Companies can tailor private cloud infrastructure to meet their unique needs, specifically security and privacy needs. As opposed to the public cloud model, private clouds are not meant to be sold “as-a-service”, but are instead built and managed by each company, similar to a local or shared drive.

Hybrid/Multi Cloud: This is a combination of the private and public cloud. Here a company decides the nature of cloud services depending on the resources and who needs access to them.

Benefits of Cloud:

Cost savings: The pay-as-you-go model also applies to the data storage space needed to serve your stakeholders and clients. This means that you get, and pay for, exactly as much space as you need.

Security: A cloud host’s full-time job is to carefully monitor security, which is significantly more efficient than a conventional in-house system, where an organization must divide its efforts among a myriad of IT concerns, with security being only one of them.

Flexibility: The cloud offers businesses more flexibility overall versus hosting on a local server. If you need extra bandwidth, a cloud-based service can meet that demand instantly, rather than your undergoing a complex (and expensive) update to your IT infrastructure. This improved freedom and flexibility can make a significant difference to the overall efficiency of your organization.

Mobility: Cloud computing allows mobile access to corporate data via smartphones and other devices. This keeps everyone up to date, which matters considering over 2.6 billion smartphones are in use globally today.

Disaster recovery: Downtime in your services leads to lost productivity, revenue, and brand reputation. While there may be no way for you to prevent or even anticipate the disasters that could potentially harm your organization, there is something you can do to help speed your recovery. Cloud-based services provide quick data recovery for all kinds of emergency scenarios, from natural disasters to power outages. While 20 percent of cloud users claim disaster recovery in four hours or less, only 9 percent of non-cloud users can claim the same.

Automatic software updates: For those who have a lot to get done, there isn’t anything more irritating than having to wait for a system update to be installed. Cloud-based applications automatically refresh and update themselves, instead of forcing an IT department to perform a manual organization-wide update.

Competitive edge: While cloud computing is increasing in popularity, there are still those who prefer to keep everything local.
That’s their choice, but doing so places them at a distinct disadvantage when competing with those who have the benefits of the cloud at their fingertips.

My Experiences with Cloud:

Talking of my own first-hand experience with the cloud, I have a habit of maintaining and updating my own notes on the tasks I am performing. At the very early stages of my working career, I often maintained notes in Word files or Notepad. But as is the problem with traditional storage, accessing these notes irrespective of place and time was a hindrance. I soon realized that Microsoft’s OneNote was quite a solution to this problem. My notes got synced with my Microsoft account and were accessible to me anywhere and everywhere I needed them. Later on, there were other apps such as Evernote that synced with my mobile phone and offered me greater flexibility and control over my notes and data. Providing cloud-based storage to users may be a small update from a company’s viewpoint; however, from the user’s perspective, it is a very significant change. It can alter the way you work and makes one’s life far easier.

I am also quite an avid reader, and I have a Kindle to satisfy my need to read. I also have the Kindle app on my mobile phone. Now, if it weren’t for the cloud, I would have to carry either my mobile phone or the Kindle everywhere to continue my reading. But the Amazon cloud syncs the Kindle application on the phone and the Kindle device to such a level that I can pick up reading on my phone from where I left off on the Kindle, and vice versa. The cloud synchronizes whatever I read on either device to make life easier for me.

Moreover, I have drafted and worked on this article as and when I could find time, at the office, at home, or even during my commute on the bus! How was this possible? Yes, the cloud. I worked in MS Word Online, and I could jot down my points and expand, add, or edit them whenever something interesting struck my mind.

Verdict:

Cloud computing has been changing the way businesses operate. Companies of all shapes and sizes have been adopting this technology. Industry experts believe that cloud computing will continue to benefit mid-sized and large companies in the coming years. The cloud is here to stay, and the future is all “cloudy” (in a good way, of course) with the growing needs and resource consumption of organizations and their clients. This is surely the way forward for small businesses and individuals too, who now need not worry about price overheads or infrastructure and can just focus on their tasks. And it isn’t rocket science to understand that when businesses focus on the actual tasks to be performed rather than the overheads involved, they flourish.

Data Sources: State of the Cloud 2018 Report, Salesforce.com

Aziro Marketing


Learn about the Latest Enterprisey Updates to knife-cloudstack!

Opscode’s Chef is an open-source systems integration framework built specifically for automating the cloud. Knife is a powerful CLI used by administrators to interact with Chef, and it is easily extensible to support provisioning of cloud resources. There is currently support for over 15 cloud providers, including Amazon EC2, Rackspace, OpenStack, and CloudStack.

Ever since the acquisition of Cloud.com by Citrix, CloudStack (now re-christened Citrix CloudPlatform) has been actively morphed into a more enterprise-focused product, with support for production-grade networking appliances like the NetScaler suite, F5 BIG-IP, and Cisco Nexus 1000V, and networking features like inter-VLAN communication and site-to-site VPN. Continuing in that spirit, the knife-cloudstack plugin has recently received major updates targeted at enterprises using CloudStack/CloudPlatform in private environments:

Microsoft Windows Server bootstrapping: Microsoft Windows Server is widely used across enterprises to host a variety of critical internal and external applications, including Microsoft Exchange, SharePoint, and CRM. We have added support to easily provision and bootstrap Windows machines via the WinRM protocol, with the ability to use both Basic and Kerberos modes of authentication.

Support for Projects: CloudStack Projects is one of the most widely used features in enterprises, allowing business units to isolate their compute, networking, and storage resources for better chargeback, billing, and management of resources. The plugin now supports the ability to spawn servers, choose networks, and allocate IP addresses in specific projects.

Choose between Source NAT and Static NAT: Enterprises host certain applications for their customers, partners, or employees on public IP addresses. Hence they prefer to use static NAT (IP forwarding, EC2 style) rather than source NAT (port forwarding) for increased security and control. Enabling static NAT is as simple as setting a flag.

Ability to choose networks: Typically, enterprises prefer isolating different types of traffic on different networks, e.g., VoIP traffic on higher-QoS networks, separate storage/backup networks, and so on. The plugin now adds the ability to spawn virtual machines as well as allocate public IP addresses from specific networks.

Examples:

Windows bootstrapping:

    knife cs server create --cloudstack-service 'Medium Instance' --cloudstack-template 'w2k8-basic' --winrm-user 'Administrator' --winrm-password 'xxxx' --winrm-port 5985 --port-rules "3389:3389:TCP" --bootstrap-protocol winrm --template-file windows-chef-client-msi.erb

    knife cs server create --cloudstack-service "Medium Instance" --cloudstack-template "w2k8-with-AD" --kerberos-realm "ORG_WIDE_AD_DOMAIN" --winrm-port 5985 --port-rules "3389:3389:TCP" --bootstrap-protocol winrm --template-file windows-chef-client-msi.erb

Support for Projects and static NAT:

    knife cs server create --cloudstack-service 'Medium Instance' --cloudstack-template 'w2k8-basic' --cloudstack-project 'Engg-Dev' --winrm-user 'Administrator' --winrm-password 'Fr3sca21!' --static-nat --port-rules "3389:TCP" --bootstrap-protocol winrm

Choose specific networks:

    knife cs server create "rhel-node-1" --node-name "rhel-node-1" -T "RHEL 5.7-x86" --bootstrap-protocol ssh --ssh-user root --ssh-password **** --service "Small Instance" --networks "Admin-Default" --port-rules '22:tcp'

The plugin is available to download from the source at: knife-cloudstack

Update: knife-cloudstack-0.0.13 has been released to rubygems.org with these changes.
Run gem install knife-cloudstack to get the latest version.
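The plugin also needs to know how to reach your CloudStack management server, and it reads these settings from knife.rb. Below is a minimal, illustrative sketch; the endpoint URL and keys are placeholders, not values from this post, and the same settings can alternatively be supplied as command-line options.

    # knife.rb -- CloudStack connection settings read by knife-cloudstack (all values are placeholders)
    knife[:cloudstack_url]        = "http://cloudstack.example.com:8080/client/api"
    knife[:cloudstack_api_key]    = "YOUR_API_KEY"
    knife[:cloudstack_secret_key] = "YOUR_SECRET_KEY"

Keeping these in knife.rb keeps the server create commands shown above shorter.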

Aziro Marketing


How to Dockerize your Ruby-On-Rails Application?

Packaging an application along with all of its bin/lib files and dependencies and deploying it in complex environments is much more tedious than it sounds. To alleviate this, Docker, an open-source platform, enables applications to quickly group their components and eliminates the friction between development, QA, and production environments. Docker is a lightweight packaging solution that can be used instead of a virtual machine: an open-source engine to create portable, lightweight containers from any application.

Docker is hardware- and platform-agnostic, which means a Docker container can run on any supported hardware or operating system. The fact that it takes less than a second to spawn a container from a Docker image shows that Docker really is lightweight compared to other virtualization mechanisms. Docker images are also less than a tenth the size of their virtual machine counterparts, and images created by extending a Docker base image can be as tiny as a few megabytes. This makes it easier and faster to move your images across different environments.

Docker Hub is the central repository for Docker. It stores all public as well as private images; private images are only accessible to a given user’s account or the team to which they belong. Docker Hub can be linked to GitHub or Bitbucket to trigger automatic builds, and the result of such a build is a ready-to-deploy Docker image of the application.

Docker provides mechanisms to separate application dependencies, code, configuration, and data through features such as container linking, data volumes, and port mapping. Dependencies and configuration are specified in the Dockerfile script. The Dockerfile installs all the dependencies, pulls the application code from the local or remote repository, and builds a ready-to-deploy application image.

Container Linking

Docker’s container linking mechanism allows communication between containers without exposing the communication ports and details. The command below spawns a Tomcat application container and links it to the MySQL database container. The Tomcat application can communicate with the database using the environment variables (host, port, password, and so on) exposed by the linked container, thereby providing maximum application security.

    docker run --link mysql:mysql-db-container clogeny/tomcat-application

Data Volumes

Docker provides data volumes to store, back up, and separate the application data from the application. Data volumes can be shared between multiple containers, and read/write policies can be specified for a given data volume. Multiple data volumes can be attached to a container by using the -v flag multiple times. Docker also allows mounting a host directory as a data volume inside a container.

    docker run -v /dbdata --name mysql-instance1 my-sql
    # this creates the /dbdata volume inside the mysql-instance1 container

    docker run --volumes-from mysql-instance1 --name my-sql-instance2 my-sql-server
    # mounts and shares all the volumes from the mysql-instance1 container

Dockerizing a Ruby on Rails Application

Four simple steps to Dockerize your Ruby on Rails application:

1. Install Docker.

2. Create a Dockerfile like the one below in your application directory.

    FROM rails                # use the rails image from the Docker Hub central repository
    MAINTAINER Clogeny
    ADD ./src ./railsapp      # copies the source files from the host into the container;
                              # a URL to the code repository can also be used
    RUN bundle install        # install the application's gem dependencies
    EXPOSE 3000               # expose port 3000 to communicate with the RoR server
    ENTRYPOINT rails s        # run the RoR server with the "rails s" command

3. Build the application image. This command creates a ready-to-run Rails image with your application deployed.

    docker build -t clogeny/my-RoR-app .
    # -t specifies the name of the image that gets created

4. Push the image to the central repository so that QA can use it to test the application. The image can be used to speed up and revolutionize the CI/CD workflow.

    docker push clogeny/my-RoR-app
    # upload the Docker image to the central repo

Deploying the Dockerized Application

Deployment requires executing just one command to get the application up and running on the test machine. Assuming Docker is installed on the host, all we need to do is execute "docker run" to spawn a Docker container.

    docker run -t -p 3010:3000 clogeny/my-RoR-app
    # -t shows stdout and stderr on the command line
    # -p 3010:3000 maps container port 3000 (where the Rails server listens) to host port 3010
    # clogeny/my-RoR-app is the image uploaded to the repo earlier

And here we are: the Docker container is up and running in a matter of a few seconds, and we can access the application at http://localhost:3010.
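Inside the Rails app itself, one way to point the database connection at a linked MySQL container is to read the environment variables that Docker's legacy --link mechanism injects. The initializer below is only a minimal sketch under that assumption; the "mysql" alias, the *_ENV_* variable names, and the fallback values are illustrative and depend on how your database container and image are set up.

    # config/initializers/docker_database.rb -- minimal sketch, assumes a MySQL container linked with alias "mysql"
    # Docker's legacy --link injects variables such as MYSQL_PORT_3306_TCP_ADDR and MYSQL_PORT_3306_TCP_PORT.
    if ENV["MYSQL_PORT_3306_TCP_ADDR"]
      ActiveRecord::Base.establish_connection(
        adapter:  "mysql2",
        host:     ENV["MYSQL_PORT_3306_TCP_ADDR"],
        port:     ENV.fetch("MYSQL_PORT_3306_TCP_PORT", "3306").to_i,
        username: ENV.fetch("MYSQL_ENV_MYSQL_USER", "root"),        # depends on the db image; illustrative
        password: ENV["MYSQL_ENV_MYSQL_PASSWORD"],                  # depends on the db image; illustrative
        database: ENV.fetch("MYSQL_ENV_MYSQL_DATABASE", "railsapp") # illustrative default
      )
    end

An alternative is to interpolate the same variables into config/database.yml with ERB; either way, the image stays environment-agnostic and the wiring happens at "docker run" time.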

Aziro Marketing


How to Make MS Azure Compatible with Chef

Let’s get an overview of the Microsoft Azure cloud and how the popular configuration management tool Chef can be installed on Azure to make them work together.

Chef Introduction

Chef is a configuration management tool that turns infrastructure into code. You can easily configure your servers with the help of Chef, and it will help you automate, build, deploy, and manage your infrastructure. If you want to know more about Chef, please refer to https://docs.chef.io/index.html. To learn how to create a user account on hosted Chef, please refer to https://manage.chef.io/signup; alternatively, use open-source Chef by referring to https://docs.chef.io/install_server.html.

Microsoft Azure

Microsoft Azure is a cloud computing platform and infrastructure created by Microsoft for building, deploying, and managing applications and services through a global network of Microsoft-managed datacenters. It provides both PaaS and IaaS services and supports many different programming languages, tools, and frameworks, including both Microsoft-specific and third-party software and systems. For more details refer to http://azure.microsoft.com/en-in/ and http://msdn.microsoft.com/en-us/library/azure/dd163896.aspx.

There are three ways to install the Chef extension on the Azure cloud:

1. The Azure Portal
2. The Azure PowerShell CLI
3. knife-azure – Chef's CLI tool for the Azure provider

Prerequisites

- An active account on the Azure cloud; see https://manage.windowsazure.com or https://portal.azure.com.
- An active account on hosted Chef; see https://manage.chef.io/signup.
- Your Chef account's organization validation key, client.rb, and run_list.

Sample format for the client.rb file:

    log_location STDOUT
    chef_server_url "https://api.opscode.com/organizations/<your_org>"
    validation_client_name "<your_org>-validator"

1. From the Azure Portal

Log into your Azure account at https://portal.azure.com.

1.1 List the existing virtual machines.
1.2 Select an existing VM.
1.3 Click the Extensions section.
1.4 Select Add Extension.
1.5 Select the Chef extension.
1.6 Click the Create button.
1.7 Upload the Chef configuration files.
1.8 You can now see the Chef extension for the VM.

2. Azure PowerShell CLI Tool

Azure PowerShell is a command-line tool used to manage Azure cloud resources. You can use cmdlets to perform the same tasks that you can perform from the Azure portal. Refer to http://msdn.microsoft.com/en-us/library/azure/jj156055.aspx.

Prerequisites:

- Install the Azure PowerShell tool; refer to http://azure.microsoft.com/en-in/documentation/articles/install-configure-powershell/.
- Your Azure user account's publish settings file.

We are going to use Azure PowerShell cmdlets to install the Chef extension on an Azure VM.

2.1 Import your Azure user account into your PowerShell session. Download the subscription credentials for accessing Azure by executing the following cmdlet, which will launch your browser and download the credentials file:

    PS C:\> Get-AzurePublishSettingsFile
    PS C:\> Import-AzurePublishSettingsFile <path-to-publishsettings-file>
    PS C:\> Select-AzureSubscription -SubscriptionName "<subscription-name>"
    PS C:\> Set-AzureSubscription -SubscriptionName "<subscription-name>" -CurrentStorageAccountName "<storage-account-name>"

2.2 Create a new Azure VM and install the Chef extension.

    # Set VM and Cloud Service names
    PS C:\> $vm1 = "azurechef"
    PS C:\> $svc = "azurechef"
    PS C:\> $username = 'azure'
    PS C:\> $password = 'azure@123'
    PS C:\> $img = "<image-name>"    # Note: try the Get-AzureVMImage cmdlet to list images
    PS C:\> $vmObj1 = New-AzureVMConfig -Name $vm1 -InstanceSize Small -ImageName $img

    # Add-AzureProvisioningConfig for a Windows or Linux VM
    # For a Windows VM
    PS C:\> $vmObj1 = Add-AzureProvisioningConfig -VM $vmObj1 -Password $password -AdminUsername $username -Windows
    # or, for a Linux VM
    PS C:\> $vmObj1 = Add-AzureProvisioningConfig -VM $vmObj1 -Password $password -LinuxUser $username -Linux

    # Set-AzureVMChefExtension for a Windows or Linux VM
    # For a Windows VM
    PS C:\> $vmObj1 = Set-AzureVMChefExtension -VM $vmObj1 -ValidationPem "C:\users\azure\msazurechef-validator.pem" -ClientRb "C:\users\azure\client.rb" -RunList "getting-started" -Windows
    # or, for a Linux VM
    PS C:\> $vmObj1 = Set-AzureVMChefExtension -VM $vmObj1 -ValidationPem "C:\users\azure\msazurechef-validator.pem" -ClientRb "C:\users\azure\client.rb" -RunList "getting-started" -Linux

    # Create the VM
    PS C:\> New-AzureVM -Location 'West US' -ServiceName $svc -VM $vmObj1

2.3 Install the Chef extension on an existing Azure VM.

    # Get the existing Azure VM
    PS C:\> $vmObj1 = Get-AzureVM -ServiceName <service-name> -Name <vm-name>

    # Set-AzureVMChefExtension for a Windows or Linux VM
    # For a Windows VM
    PS C:\> $vmObj1 = Set-AzureVMChefExtension -VM $vmObj1 -ValidationPem "C:\users\azure\msazurechef-validator.pem" -ClientRb "C:\users\azure\client.rb" -RunList "getting-started" -Windows
    # or, for a Linux VM
    PS C:\> $vmObj1 = Set-AzureVMChefExtension -VM $vmObj1 -ValidationPem "C:\users\azure\msazurechef-validator.pem" -ClientRb "C:\users\azure\client.rb" -RunList "getting-started" -Linux

2.4 Use the following cmdlets to remove the Chef extension from a VM.

    # Get the existing Azure VM
    PS C:\> $vmObj1 = Get-AzureVM -ServiceName <service-name> -Name <vm-name>
    # Remove the Chef extension from the VM
    PS C:\> Remove-AzureVMChefExtension -VM $vmObj1
    # Update the VM
    PS C:\> Update-AzureVM -ServiceName <service-name> -Name <vm-name> -VM $vmObj1

2.5 Get the current state of the Chef extension with the following cmdlets.

    # Get the existing Azure VM
    PS C:\> $vmObj1 = Get-AzureVM -ServiceName <service-name> -Name <vm-name>
    # Get the Chef extension details from the VM
    PS C:\> Get-AzureVMChefExtension -VM $vmObj1

3. Knife-Azure: Chef's Plugin

knife-azure is a knife plugin to create, delete, and enumerate Windows Azure resources to be managed by Chef. The knife-azure plugin (v1.4.0.rc.0) provides features to create a VM and install the Chef extension on the Windows Azure cloud. For more details refer to https://docs.chef.io/plugin_knife_azure.html or https://github.com/opscode/knife-azure.

Prerequisites:

- Ruby v1.9.3+
- Chef v11.0+
- knife-azure v1.4.0.rc.0 plugin
- Your Azure user account's publishsettings file
- Your Chef user account's configuration files

Install Ruby: on Windows, see http://rubyinstaller.org/; on Linux, see https://rvm.io/.

Install Chef:

    $ gem install chef

Install the knife-azure plugin:

    $ gem install knife-azure --pre

Download Chef's Starter Kit. This starter kit includes Chef's user/organization-related configuration details.
That is, the user.pem, organization-validator.pem, and knife.rb files. Please refer to https://learn.chef.io/legacy/get-started/#installthestarterkit or https://manage.chef.io/starter-kit. (A minimal sketch of a knife.rb is shown at the end of this post.)

Run the knife azure command to create a VM and install the Chef extension.

Create a Windows VM:

    $ knife azure server create --azure-source-image <image-name> --azure-dns-name <dns-name> --azure-service-location "<location>" --winrm-user <username> --winrm-password <password> --azure-publish-settings-file <publishsettings-file> -c <path-to-knife.rb> --bootstrap-protocol "cloud-api"

Create a Linux VM:

    $ knife azure server create -I <image-name> -x <username> -P <password> --bootstrap-protocol "cloud-api" -c <path-to-knife.rb> --azure-service-location "<location>" --azure-publish-settings-file <publishsettings-file>

Note: to list the available images, run:

    $ knife azure image list -c <path-to-knife.rb> --azure-publish-settings-file <publishsettings-file>

Microsoft Azure is one of the leading public cloud platforms out there, and Chef is one of the most sought-after continuous integration and delivery tools. When they come together, the results can be powerful. Please share your comments and questions below.
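For reference, the knife.rb that ships in the starter kit generally looks like the minimal sketch below; the organization name, user name, and paths are placeholders and will differ for your account.

    # knife.rb -- minimal sketch; USERNAME and ORGNAME are placeholders
    current_dir = File.dirname(__FILE__)
    log_level                :info
    log_location             STDOUT
    node_name                "USERNAME"
    client_key               "#{current_dir}/USERNAME.pem"
    validation_client_name   "ORGNAME-validator"
    validation_key           "#{current_dir}/ORGNAME-validator.pem"
    chef_server_url          "https://api.opscode.com/organizations/ORGNAME"
    cookbook_path            ["#{current_dir}/../cookbooks"]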

Aziro Marketing


How to write an Ohai plugin for the Windows Azure IaaS cloud

Chef is an open-source systems management and cloud infrastructure automation framework created by Opscode. It helps you manage your IT infrastructure and applications as code and gives you a way to automate your infrastructure and processes. Knife is a CLI to create, update, search, and delete the entities in your infrastructure, or to manage actions on them: nodes (hosts), cloud resources, metadata (roles, environments), infrastructure code (recipes, cookbooks), and so on. A Knife plug-in is a set of one or more subcommands that can be added to Knife to support additional functionality that is not built into the base set of Knife subcommands.

Ohai, Ohai plugins, and the hints system:

Ohai is a tool used to detect certain properties about a node’s environment and provide them to the chef-client during every Chef run. The types of properties Ohai reports on include platform details, networking usage, memory usage, processor usage, kernel data, host names, fully qualified domain names (FQDN), and other configuration details. When additional data about a system’s infrastructure is required, a custom Ohai plugin can be used to gather that information. An Ohai plugin is written in a Ruby DSL, and there are several community Ohai cloud plugins providing cloud-specific information.

Writing an Ohai plug-in for the Azure IaaS cloud:

In simple words, an Ohai plug-in is a piece of Ruby DSL that populates and returns a Mash object holding nested data. It can be as simple as:

    provides "azure"
    azure Mash.new
    azure[:version] = "1.2.3"
    azure[:description] = "VM created on azure"

And you are done! In practice, of course, you would populate this programmatically. This plug-in is now ready, and when the chef-client runs, you will see these attributes set for the node. More details on setting up custom plug-ins are available in the Ohai documentation.

Additionally, Ohai includes a hinting system that allows a plugin to receive a hint through the existence of a file. These files are in JSON format and allow passing additional information about the environment at bootstrap time, such as the region or datacenter. This information can then be used by Ohai plug-ins to identify the type of cloud the node was created on and, additionally, any cloud attributes that should be set on the node.

Let’s consider a case where you create a virtual machine instance on the Microsoft Windows Azure IaaS cloud using the knife-azure plugin. Typically, once the VM is created and successfully bootstrapped, we can use knife ssh to secure-shell into the VM and run commands. To secure-shell into the VM, the public IP or FQDN should be set as an attribute. In the case of Azure, the public FQDN can only be retrieved by querying the Azure management API, which would add a lot of overhead to Ohai. Alternatively, we can handle this using the Ohai hint system: the knife-azure plug-in can figure out the public FQDN as part of VM creation and pass this information on to the VM. An Ohai plug-in can then be written that reads the hints file and determines the public IP address and FQDN.

Let’s see how to achieve this. The hints data can be generated by any cloud plug-in and sent over to the node during bootstrap. For example, the knife-azure plug-in sets a few attributes within the plug-in code before bootstrap:

    Chef::Config[:knife][:hints]["azure"] ||= cloud_attributes

where cloud_attributes is a hash containing the attributes to be set on the node by the azure Ohai plug-in, for example:
    {"public_ip":"137.135.46.202","vm_name":"test-linuxvm-on-cloud","public_fqdn":"my-hosted-svc.cloudapp.net","public_ssh_port":"7931"}

If it is not feasible to modify the plug-in code and the data is available before the knife command is executed, you can also pass this information as a JSON file using a CLI option:

    --hint HINT_NAME[=HINT_FILE]
    Specify an Ohai hint to be set on the bootstrap target. Use multiple --hint options to specify multiple hints.

The corresponding Ohai plug-ins that load this information and set the attributes can be seen here: https://github.com/opscode/ohai/blob/master/lib/ohai/plugins/cloud.rb#L234. Taking the above scenario, this will load an attribute like cloud.public_fqdn on the node, which can then be used by the knife ssh command or for any other purpose.

Knife SSH example:

Once the attributes are populated on the Chef node, we can use the knife ssh command as follows:

    $ knife ssh 'name:nodename' 'sudo chef-client -v' -a 'cloud.public_fqdn' --identity-file test.pem --ssh-user foo --ssh-port 22
    my-hosted-svc.cloudapp.net  Chef: 11.4.4

Note the use of the attribute ‘cloud.public_fqdn’, which is populated from the JSON using the Ohai hint system.

This post is meant to explain the basics and showcase a real-world example of Ohai plugins and the hints system.
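To tie the pieces together, here is a minimal sketch of an Ohai 6-style plug-in driven by the hints file, along the lines of the azure and cloud plug-ins linked above. The file name and the exact attribute handling are illustrative assumptions, not the canonical plug-in code.

    # azure.rb -- minimal sketch of a hint-driven Ohai plug-in (Ohai 6-style DSL)
    provides "azure"

    # hint?("azure") returns the parsed contents of the azure hints file
    # (e.g. /etc/chef/ohai/hints/azure.json) if it exists, or nil otherwise.
    azure_hints = hint?("azure")

    if azure_hints
      azure Mash.new
      # copy every hinted attribute (public_ip, public_fqdn, public_ssh_port, ...) onto the node
      azure_hints.each { |key, value| azure[key] = value }
    else
      Ohai::Log.debug("No azure hints found; not setting azure attributes.")
      azure nil
    end

During the next chef-client run these values land under node['azure'], and the cloud plug-in referenced above surfaces them as cloud.* attributes such as cloud.public_fqdn, which is exactly what the knife ssh example relies on.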

Aziro Marketing


So You Want to Build Your Own Data Center?

In today’s internet-of-things world, companies run their applications 24×7, and this generally results in a lot of users and data. This data needs to be stored, analyzed, and post-processed; in essence, some action needs to be taken on it. We are looking at huge, fluctuating workloads. The scale of operations is enormous, and to handle operations of this magnitude, clusters are built. In the age of commodity hardware, clusters are easy to build, but clusters with a specific software stack that can do only one type of task (static partitioning of resources) lead to suboptimal resource utilization, because it is possible that no task of that type is running at a given time.

For example, Jenkins slaves in a CI cluster could be sitting idle at night or during a common vacation period when developers are not pushing code. But when a product release is near, developers may be pushing and hacking away at code so frequently that the build queue grows because there are not enough slaves to run the CI jobs. Both situations are undesirable and reduce the efficiency and ROI of the company.

Dynamic partitioning of resources is the solution to this issue. Here, you pool your resources (CPU, memory, IO, etc.) such that the nodes of your cluster act as one huge computer. Based on your current requirements, resources are allocated to the task that needs them, so the same pool of hardware runs your Hadoop, MySQL, Jenkins, and Storm jobs. You can call this “node abstraction.” This achieves diverse cluster computing on commodity hardware through fine-grained resource sharing. To put it simply, different distributed applications run on the same cluster.

Google has mastered this kind of cluster computing for almost two decades now. The outside world does not know much about their project known as Borg, or its successor, Omega. Ben Hindman, an American entrepreneur, and a group of researchers from UC Berkeley came up with an open-source solution inspired by Borg and Omega.

Enter Mesos!

What Is Mesos?

Mesos is a scalable and distributed resource manager designed to manage resources for data centers. Mesos can be thought of as a “distributed kernel” that achieves resource sharing via APIs in various languages (C++, Python, Go, etc.). Mesos relies on cgroups for process isolation on top of distributed file systems (e.g., HDFS). Using Mesos you can create and run clusters running heterogeneous tasks. Let us see what it is all about, along with some fundamentals on getting Mesos up and running.

Basic Terminology and Architecture

Mesos follows a master-slave architecture. It can also have multiple masters and slaves; a multi-master architecture makes Mesos fault-tolerant, with the leader elected through ZooKeeper.

A Mesos application, or in Mesos parlance a “framework,” is a combination of a scheduler and an executor. The framework’s scheduler is responsible for registering with the Mesos master and for accepting or rejecting resource offers from the Mesos master. An executor is a process running on a Mesos slave that actually runs the task. Mesos uses a distributed, two-level scheduling mechanism called resource offers. A “resource offer” can be thought of as a two-step process, where the Mesos master first sends a message to a particular framework describing what resources (CPU, memory, IO, etc.) on a Mesos slave are available to it.
The framework decides which offers it should accept or reject, and which tasks to run on them. A task could be a Jenkins job inside a Jenkins slave, a Hadoop MapReduce job, or even a long-running service like a Rails application. Tasks run in isolated environments, which can be achieved via cgroups, Linux containers, or even Zones on Solaris. Since Mesos v0.20.0, native Docker support has been added as well.

Examples of useful existing frameworks include Storm, Spark, Hadoop, Jenkins, etc. Custom frameworks can be written against the API provided by the Mesos kernel in various languages: C++, Python, Java, etc.

Image credit: Mesos documentation

A Mesos slave informs the Mesos master about the available resources it is ready to share. The Mesos master, based on its allocation policy, makes “resource offers” to a framework. The framework’s scheduler decides whether or not to accept the offers. Once it accepts, the framework sends the Mesos master a description of the tasks it needs to run (and their resource requirements). The Mesos master then sends these tasks to the Mesos slave, where they are executed by the framework’s executor. Finally, the framework’s executor launches the tasks. Once a task is complete and the Mesos slave is idle, it reports back to the Mesos master about the freed resources.

Mesos is used by Twitter for many of their services, including analytics, typeahead, etc. Many other companies with large cluster and big data requirements, such as Airbnb, Atlassian, eBay, and Netflix, use Mesos as well.

What Do You Get With Mesos?

Arguably the most important feature you get out of Mesos is “resource isolation,” where the resource can be CPU, memory, and so on. Mesos allows running multiple distributed applications on the same cluster, and this gives us increased utilization and efficiency, reduced latency, and better ROI.

How to Build and Run Mesos on Your Local Machine?

Enough with the theory! Now let us do the fun bits of actually building the latest Mesos from Git and running Mesos and the test frameworks. The steps below assume you are running Ubuntu 14.04 LTS.

* Get Mesos

    git clone https://git-wip-us.apache.org/repos/asf/mesos.git

* Install dependencies

    sudo apt-get update
    sudo apt-get install build-essential openjdk-6-jdk python-dev python-boto libcurl4-nss-dev libsasl2-dev maven libapr1-dev libsvn-dev autoconf libtool

* Build Mesos

    cd mesos
    ./bootstrap
    mkdir build && cd build
    ../configure
    make

A lot of things can be configured, enabled, or disabled before building Mesos. Most importantly, you can choose where you want to install Mesos by passing the directory to "--prefix" at the configure step. You can optionally use system-installed versions of ZooKeeper, gmock, protocol buffers, etc., instead of building them, and thus save some time. You can also save time by disabling language bindings that you might not need. As a general rule, it is nice to use a beefy machine with at least 8 GB of RAM and a fast enough processor if you are building Mesos locally on your test machine.

* Run tests

    make check

Note that these tests take a lot of time to build (if they are not built by default) and run.

* Install Mesos

    make install

This is an optional step; if you skip it, you can run Mesos from the build directory you created earlier. If you choose to install it, it will be installed in the $PREFIX that you chose during the configure step.
If you do not provide a custom $PREFIX, it will be installed to /usr/local/bin.

* Prepare the system

    sudo mkdir /var/lib/mesos
    sudo chown <user> /var/lib/mesos

The above two steps are mandatory; Mesos will throw an error if the directory is not there or if permissions and ownership are not set correctly. You can choose some other directory, but you then have to provide it as the work_dir. Refer to the next command.

* Run the Mesos master

    ./bin/mesos-master.sh --ip=127.0.0.1 --work_dir=/var/lib/mesos

It is mandatory to pass --work_dir with the correct directory as the value; the Mesos master uses it for the replicated log registry.

* Run a Mesos slave

    ./bin/mesos-slave.sh --master=127.0.0.1:5050

And voila! Now you have a Mesos master and a Mesos slave running.

Mesos by itself is incomplete; it uses frameworks to run distributed applications. Let's run a sample framework. Mesos comes with a bunch of example frameworks located in the "mesos/src/examples" folder of your Mesos Git clone. For this article, I will run the Python framework that you should find in "mesos/src/examples/python". You can play with the example code for more fun and profit. See what happens when you increase the value of TOTAL_TASKS in "mesos/src/examples/python/test_framework.py", or try to simulate different task durations by inserting a random amount of sleep in the run_task() method inside "mesos/src/examples/python/test_executor.py".

* Run the framework

    cd mesos/build
    ./src/examples/python/test-framework 127.0.0.1:5050

Assuming that you have followed the above steps, you can view the Mesos dashboard at http://127.0.0.1:5050. Here is how it looked on our test box.

Conclusion

Marathon, a meta-framework on top of Mesos, is a distributed init.d: it takes care of starting, stopping, and restarting services. Chronos is a scheduler; think of it as a distributed and fault-tolerant cron (the *nix scheduler) that takes care of scheduling tasks. Mesos even has a CLI tool (pip install mesos.cli) with which you can interact (tail, cat, find, ls, ps, etc.) with your Mesos cluster from the command line and feel geeky about it. A lot can be achieved with Mesos, Marathon, and Chronos together, but more about these in a later post. I hope you have enjoyed reading about Mesos. Please share your questions through the comments.

Aziro Marketing

EXPLORE ALL TAGS
2019 dockercon
Advanced analytics
Agentic AI
agile
AI
AI ML
AIOps
Amazon Aws
Amazon EC2
Analytics
Analytics tools
AndroidThings
Anomaly Detection
Anomaly monitor
Ansible Test Automation
apache
apache8
Apache Spark RDD
app containerization
application containerization
applications
Application Security
application testing
artificial intelligence
asynchronous replication
automate
automation
automation testing
Autonomous Storage
AWS Lambda
Aziro
Aziro Technologies
big data
Big Data Analytics
big data pipeline
Big Data QA
Big Data Tester
Big Data Testing
bitcoin
blockchain
blog
bluetooth
buildroot
business intelligence
busybox
chef
ci/cd
CI/CD security
cloud
Cloud Analytics
cloud computing
Cloud Cost Optimization
cloud devops
Cloud Infrastructure
Cloud Interoperability
Cloud Native Solution
Cloud Security
cloudstack
cloud storage
Cloud Storage Data
Cloud Storage Security
Codeless Automation
Cognitive analytics
Configuration Management
connected homes
container
Containers
container world 2019
container world conference
continuous-delivery
continuous deployment
continuous integration
Coronavirus
Covid-19
cryptocurrency
cyber security
data-analytics
data backup and recovery
datacenter
data protection
data replication
data-security
data-storage
deep learning
demo
Descriptive analytics
Descriptive analytics tools
development
devops
devops agile
devops automation
DEVOPS CERTIFICATION
devops monitoring
DevOps QA
DevOps Security
DevOps testing
DevSecOps
Digital Transformation
disaster recovery
DMA
docker
dockercon
dockercon 2019
dockercon 2019 san francisco
dockercon usa 2019
docker swarm
DRaaS
edge computing
Embedded AI
embedded-systems
end-to-end-test-automation
FaaS
finance
fintech
FIrebase
flash memory
flash memory summit
FMS2017
GDPR faqs
Glass-Box AI
golang
GraphQL
graphql vs rest
gui testing
habitat
hadoop
hardware-providers
healthcare
Heartfullness
High Performance Computing
Holistic Life
HPC
Hybrid-Cloud
hyper-converged
hyper-v
IaaS
IaaS Security
icinga
icinga for monitoring
Image Recognition 2024
infographic
InSpec
internet-of-things
investing
iot
iot application
iot testing
java 8 streams
javascript
jenkins
KubeCon
kubernetes
kubernetesday
kubernetesday bangalore
libstorage
linux
litecoin
log analytics
Log mining
Low-Code
Low-Code No-Code Platforms
Loyalty
machine-learning
Meditation
Microservices
migration
Mindfulness
ML
mobile-application-testing
mobile-automation-testing
monitoring tools
Mutli-Cloud
network
network file storage
new features
NFS
NVMe
NVMEof
NVMes
Online Education
opensource
openstack
opscode-2
OSS
others
Paas
PDLC
Positivty
predictive analytics
Predictive analytics tools
prescriptive analysis
private-cloud
product sustenance
programming language
public cloud
qa
qa automation
quality-assurance
Rapid Application Development
raspberry pi
RDMA
real time analytics
realtime analytics platforms
Real-time data analytics
Recovery
Recovery as a service
recovery as service
rsa
rsa 2019
rsa 2019 san francisco
rsac 2018
rsa conference
rsa conference 2019
rsa usa 2019
SaaS Security
san francisco
SDC India 2019
SDDC
security
Security Monitoring
Selenium Test Automation
selenium testng
serverless
Serverless Computing
Site Reliability Engineering
smart homes
smart mirror
SNIA
snia india 2019
SNIA SDC 2019
SNIA SDC INDIA
SNIA SDC USA
software
software defined storage
software-testing
software testing trends
software testing trends 2019
SRE
STaaS
storage
storage events
storage replication
Storage Trends 2018
storage virtualization
support
Synchronous Replication
technology
tech support
test-automation
Testing
testing automation tools
thought leadership articles
trends
tutorials
ui automation testing
ui testing
ui testing automation
vCenter Operations Manager
vCOPS
virtualization
VMware
vmworld
VMworld 2019
vmworld 2019 san francisco
VMworld 2019 US
vROM
Web Automation Testing
web test automation
WFH
