Tag Archive

Below you'll find a list of all posts that have been tagged as "automation"

4 easy steps for benchmark testing (using vSphere replication)

This blog discusses vSphere Replication for the case where no third-party replication product is needed (i.e., there is no SRA to configure). Before that, we need to know about VMware vSphere Site Recovery Manager (SRM). SRM is a disaster recovery management product from VMware that provides automated failover and disaster recovery testing. It automates the process of synchronizing recovery data between the primary and backup data center sites by using either a third-party replication product or vSphere Replication to copy virtual machine data to a secondary site.

SRM replication choices:
- Array-based/storage array replication
- vSphere Replication

Description
Here we will discuss vSphere Replication. Note: vSphere Replication is not available in the trial version of SRM; to overcome this, we need to deploy the vSphere Replication appliance on both the primary and secondary sites.

vSphere Replication
vSphere Replication is a feature of vSphere 5.0 and Site Recovery Manager (SRM) 5.0 that automates the failover of virtual servers to a recovery site. VMware vSphere Replication is a hypervisor-based, asynchronous replication solution for vSphere virtual machines. It delivers flexible, reliable and cost-efficient replication to enable data protection and disaster recovery for all virtual machines in your environment.

Use Cases
- Data protection locally, within a single site
- Disaster recovery and avoidance between two sites
- Disaster recovery and avoidance to a service provider cloud
- Data center migration

Configure vSphere Replication
To be noted: this blog was developed with the following versions in mind: ESXi 6.0, VMware vSphere Replication 6.0.0.1, IOMeter 1.0.

Step 1: Add one ESXi host to the primary vCenter Server (Site A) and another ESXi host to the secondary vCenter Server (Site B).
Step 2: Deploy the vSphere Replication appliance on both ESXi hosts (as a VM).
The appliance OVF file is available at the following link: https://my.vmware.com/web/vmware/details?productId=491&downloadGroup=VR6001
Console of the vSphere Replication Appliance

Step 3: Connect the two sites to each other as follows:
3.1 Go to Home and click the "vSphere Replication" icon under Inventories.
3.2 Manage ==> vSphere Replication ==> Target Sites ==> click the "Connect to target site" icon.
3.3 Provide each other's vCenter Server IP and credentials.
Step 4: Create a VM on the ESXi host available in the primary vCenter Server and initiate I/O on the VM disk using IOMeter.

Once the above steps are complete, the vSphere Replication process is explained diagrammatically below.
vSphere Replication

How to initiate replication:
- Right-click the VM created on the primary site.
- Go to All vSphere Replication Actions ==> Configure Replication.
- Complete the configuration wizard with the correct information.
Once the above steps are done, replication is initiated from the primary to the secondary site.

Monitor the replication process:
From the primary site: Home ==> vSphere Replication ==> Monitor ==> vSphere Replication ==> Outgoing Replications
From the secondary site: Home ==> vSphere Replication ==> Monitor ==> vSphere Replication ==> Incoming Replications

Sample output of the performance test (the output may differ based on the hardware).

A few flexible configurations:
- Recovery point objective (RPO) from 15 minutes to 24 hours
- Protection for up to 2,000 virtual machines per vCenter Server environment
- Linux file system quiescing

Conclusion
This blog was all about doing replication for DR, and HA of services, without having to configure an adapter (SRA) on the storage side, which also makes it cost effective.
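As a footnote to Step 4: the post uses IOMeter to drive I/O on the replicated disk. If the test VM happens to run Linux and you only need quick write traffic before checking the outgoing replication view, a simple dd loop can stand in for a full IOMeter profile. This is a minimal sketch under that assumption; the block size, pass count and target path are arbitrary choices, not a substitute for a proper IOMeter workload.

#!/bin/bash
# Generate sustained write I/O inside the guest so vSphere Replication has changed blocks to ship.
# Each pass writes 1 GiB and flushes it to disk; dd then reports the throughput of that pass.
for i in 1 2 3 4 5; do
    dd if=/dev/zero of=/var/tmp/vr-bench-$i.bin bs=1M count=1024 conv=fdatasync 2>&1 | tail -n 1
done
# Clean up the test files afterwards.
rm -f /var/tmp/vr-bench-*.bin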

Aziro Marketing


Aziro (formerly MSys Technologies) 2019 Tech Predictions: Smart Storage, Cloud’s Bull Run, Ubiquitous DevOps, and Glass-Box AI

2019 brings us to the second-last leg of this decade. For the last few years, IT professionals have been repeating the same rhetoric: that the technology landscape is seeing a revolutionary change. But most of those "revolutionary" changes have, over time, lost their credibility. Thanks to awe-inspiring technologies like AI, robotics, and the upcoming 5G networks, most tech pundits consider this decade to be a game changer for the technology sector. As we make headway into 2019, the internet is bombarded with numerous tech prophecies. Aziro (formerly MSys Technologies) presents its 2019 tech predictions based on our Storage, Cloud, DevOps and digital transformation expertise.

1. Software Defined Storage (SDS)
2019 definitely looks promising for Software Defined Storage. It will be driven by changes in Autonomous Storage, Object Storage, Self-Managed DRaaS and NVMe. But SDS will also be required to push the envelope to acclimatize and evolve. Let's understand why.

1.1 Autonomous Storage to Garner Momentum
Backed by users' demand, we'll witness the growth of self-healing storage in 2019. Here, Artificial Intelligence powered by intelligent algorithms will play a pivotal role. Consequently, companies will strive to ensure uninterrupted application performance, round the clock.

1.2 Self-Managed Disaster Recovery as a Service (DRaaS) will be Prominent
Self-Managed DRaaS reduces human interference and proactively recovers business-critical data. It then duplicates the data in the Cloud. This brings relief during an unforeseen event and ultimately cuts costs. In 2019, this will strike a chord with enterprises globally, and we'll witness DRaaS gaining prominence.

1.3 The Pendulum will Swing Back to Object Storage as a Service (STaaS)
Object Storage makes a perfect case for cost-effective storage. Its flat structure creates a scale-out architecture and induces Cloud compatibility. It also assigns unique metadata and an ID to each object within storage, which accelerates data retrieval and recovery. Thus, in 2019, we expect companies to embrace Object Storage to support their Big Data needs.

1.4 NVMe Adoption to Register Traction
In 2019, Software Defined Storage will accelerate the adoption rate of NVMe. It smooths over the glitches associated with traditional storage to ensure seamless data migration while adopting NVMe. With SDS, enterprises need not worry about the 'rip and replace' hardware procedure. We'll see vendors design storage platforms that support the NVMe protocol. For 2019, NVMe growth will mostly be led by FC-NVMe and NVMe-oF.

2. Hyperconverged Infrastructure (HCI)
In 2019, HCI will remain the trump card for creating a multi-layer infrastructure with centralized management. We'll see more companies utilize HCI to deploy applications quickly, centered around a policy-based and data-centric architecture.

3. Hybridconverged Infrastructure will Mark its Footprint
Hybridconverged Infrastructure (HCI.2) comes with all the features of its big brother, Hyperconverged Infrastructure (HCI.1). But one extended functionality makes it smarter: unlike HCI.1, it allows connecting to an external host. This will help HCI.2 mark its footprint in 2019.

4. Virtualization
In 2019, Virtualization's growth will be centered around Software Defined Data Centers and Containers.

4.1 Containers
Container technology is the ace in the hole for delivering the promises of multi-cloud: cost efficacy, operational simplicity, and team productivity.
Per IDC, 76 percent of users leverage containers for mission-critical applications.

4.1.1 Persistent Storage will be a Key Concern
In 2019, container users will envision a cloud-ready persistent storage platform with flash arrays. They'll expect their storage service providers to implement synchronous mirroring, continuous data protection (CDP) and auto-tiering.

4.1.2 Kubernetes Explosion is Imminent
The upcoming Kubernetes version is rumored to include a pre-defined configuration template. If true, it'll make Kubernetes easier to deploy and use. This year, we also expect many more containers to be orchestrated with Kubernetes, which will make Kubernetes security a burgeoning concern. So, in 2019, we should expect stringent security protocols around Kubernetes deployments, such as multi-step authentication or encryption at the cluster level.

4.1.3 Istio to Ease Kubernetes Deployment Headaches
Istio is an open source service mesh. It addresses microservices application deployment challenges like failure recovery, load balancing, rate limiting, A/B testing, and canary testing. In 2019, companies might combine Istio and Kubernetes. This can facilitate smooth container orchestration, resulting in effortless application and data migration.

4.2 Software Defined Data Centers
More companies will embark on their journey to Multi-Cloud and Hybrid-Cloud. They'll expect a seamless migration of existing applications to a heterogeneous Cloud environment. As a result, SDDC will undergo a strategic shift to accommodate the new Cloud requirements. In 2019, companies will start combining DevOps and SDDC. The pursuit of DevOps in SDDC will instigate a revamp of COBIT and ITIL practices. Frankly, without wielding DevOps, cloud-based SDDC will remain in a vacuum.

5. DevOps
In 2019, companies will implement a programmatic DevOps approach to accelerate the development and deployment of software products. Per this survey, DevOps enabled 46x more frequent code deployments and 2556x faster deployment lead times. This year, AI/ML, automation, and FaaS will orchestrate changes to DevOps.

5.1 DevOps Practice Will Experience a Spur with AI/ML
In 2019, AI/ML-centric applications will experience an upsurge. Data science teams will leverage DevOps to unify complex operations across the application lifecycle. They'll also look to automate the workflow pipeline to rebuild, retest and redeploy concurrently.

5.2 DevOps will Add Value to Functions as a Service (FaaS)
Functions as a Service aims to achieve a serverless architecture. It enables hassle-free application development without requiring companies to manage a monolithic REST server. It is like a panacea moment for developers. Hitherto, FaaS hasn't achieved full-fledged status. Although FaaS is inherently scalable, selecting the wrong use cases will increase the bills. Thus, in 2019, we'll see companies leveraging DevOps to identify productive use cases and bring down costs drastically.

5.3 Automation will be Mainstream in DevOps
Manual DevOps is time-consuming, less efficient, and error-prone. As a result, in 2019, CI/CD automation will become central to the DevOps practice. Consequently, Infrastructure as Code will be in the driver's seat.

6. Cloud's Bull Run to Continue
In 2019, organizations will reimagine the use of Cloud. There will be a new class of 'born-in-cloud' start-ups that extract more value through intelligent Cloud operations. This will be centered around Multi-Cloud, Cloud Interoperability, and High Performance Computing.
More companies will look to establish a Cloud Center of Excellence (CoE). Per a RightScale survey, 57 percent of enterprises already have one.

6.1 Companies will Drift from the "One-Cloud Approach"
In 2018, companies realized that a 'One-Cloud Approach' encumbers their competitiveness. In 2019, Cloud leadership teams will build upon Hybrid-Cloud architecture. Hybrid-Cloud will be the new normal within Cloud computing in 2019.

6.2 Cloud Interoperability will be a Major Concern
In 2019, companies will start addressing the issues of interoperability by standardizing Cloud architecture. The use of Application Programming Interfaces (APIs) will also accelerate. APIs will be the key to instilling language neutrality, which augments system portability.

6.3 High Performance Computing (HPC) will Get its Place on the Cloud
Industries such as Finance, Deep Learning, Semiconductors and Genomics are facing the brunt of competition. They'll look to deliver high-end, compute-intensive applications with high performance. To entice such industries, Cloud providers will start adding HPC capabilities to their platforms. We'll also witness large-scale automation in the Cloud.

7. Artificial Intelligence
In 2019, AI/ML will come out of the research and development model and be widely implemented in organizations. Customer engagement, infrastructure optimization, and Glass-Box AI will be at the forefront.

7.1 AI to Revive Customer Engagement
Businesses (startups or enterprises) will leverage AI/ML to enable a rich end-user experience. Per Adobe, the number of enterprises using AI will more than double in 2019. Tech and non-tech companies alike will strive to offer personalized services leveraging Natural Language Processing. The focus will remain on creating a cognitive customer persona to generate tangible business impact.

7.2 AI for Infrastructure Optimization
In 2019, there will be a spur in the development of AI-embedded monitoring tools. This will help companies create a nimble infrastructure that responds to changing workloads. With such AI-driven machines, they'll aim to cut down infrastructure latency, infuse robustness into applications, enhance performance, and amplify output.

7.3 Glass-Box AI will be Crucial in Retail, Finance, and Healthcare
This is where Explainable AI will play its role. Glass-Box AI will surface key customer insights along with the underlying methods, errors and biases. In this way, retailers don't necessarily have to follow every suggestion; they can sort out the responses that fit the present scenario. The bottom line will be to avoid disputes with customers and bring fairness into the process.

Aziro Marketing


What Makes Protractor the Best for Testing AngularJS based Web Applications

AngularJS allows you to extend HTML vocabulary for your application. The resulting environment is extraordinarily expressive, readable, and quick to develop, and it can be used across different computing devices. It requires a testing tool that can readily adapt to its properties, and not just any testing tool will do. Protractor is an end-to-end (e2e) test framework for AngularJS applications. It is a Node.js program built on top of WebDriverJS. Protractor runs tests against your application running in a real browser (or headless), interacting with it as a real user would. It is the preferred tool for AngularJS-based web applications.

Advantages
- Based on AngularJS, which makes it easy for AngularJS developers
- Provides advanced locators for locating elements in AngularJS web applications, which helps with writing code
- Avoids explicit sleeps to optimize test execution
- Runs tests in multiple browsers, including headless browsers, using a grid
- Combines powerful tools such as Node.js, Selenium, WebDriver, Jasmine, Cucumber and Mocha
- Allows tests to be organized with Jasmine; you can write both unit and functional tests using Jasmine

4 Reasons that make Protractor Best for Testing AngularJS based Web Applications

1. Protractor is built on top of WebDriverJS
It uses native events and browser-specific drivers to interact with your application just as a real-world user would. Protractor supports Angular-specific locator strategies, which allow you to test Angular-specific elements without any additional setup effort. New locators such as by.binding, by.repeater, by.textarea and by.model are introduced to deal with AngularJS web applications. Element Explorer is packaged with Protractor; it identifies elements based on locators. Protractor also automatically executes the next step in your test as soon as the web page finishes its pending tasks, so there is no need to write additional sleep calls to stay in sync with the page. Protractor runs on top of Selenium and implicitly provides all of its benefits.

2. The Protractor framework is integrated with Jasmine
This makes it easy to write, execute and organize tests. Jasmine is compatible with Protractor: values extracted from the browser are returned as promises, and these promises are resolved using Jasmine's expect() command.

3. Protractor is a framework for automation
It is used for functional tests covering the user's acceptance criteria, but this does not mean that we should not write unit tests and integration tests; they are still required.

4. Protractor has strong testing community support
The community keeps evolving the framework to follow AngularJS, making it compatible with new Selenium WebDriver and Jasmine releases. The Protractor project is open on GitHub, which lets all its users look for help, report any issues they find, and have developers take them forward into successive releases.

References
AngularJS – https://angularjs.org
Node.js – https://www.npmjs.com/package/protractor
Releases – https://github.com/angular/protractor/releases
Release Contents – https://github.com/angular/protractor/blob/master/CHANGELOG.md
AngularJS Guide – https://angular.github.io/protractor/#/
From Specialist – http://ramonvictor.github.io/protractor/slides/#/
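As a quick-start footnote: getting a first Protractor run going takes only a few commands. A minimal sketch, assuming Node.js and npm are already installed and that conf.js is your Protractor configuration file (a placeholder name here):

# Install Protractor globally; this also installs the webdriver-manager helper.
npm install -g protractor
# Download or refresh the Selenium server and browser driver binaries.
webdriver-manager update
# Start a local Selenium server and leave it running in one terminal.
webdriver-manager start
# In another terminal, run the suite described by your configuration file.
protractor conf.js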

Aziro Marketing


What is Chef Automate?

Introduction to Chef Automate
Chef Automate provides a full suite of enterprise capabilities for workflow, node visibility, and compliance. Chef Automate integrates with the open-source products Chef, InSpec, and Habitat. It comes with comprehensive 24x7 support services for the entire platform, including the open source components. These capabilities include the ability to build, deploy, manage, and collaborate across all aspects of software production: infrastructure, applications, and compliance. Each capability represents a set of collective actions and the resulting artifacts.

Collaborate
As software deployment speed increases across your organization, the need for fast, real-time collaboration becomes critical. Different teams may use different tools to accomplish various tasks. The ability to integrate a variety of third-party products is necessary to support continuous deployment of infrastructure and applications. Chef Automate provides tools for local development, several integration points including APIs and SDKs, and deployment pipelines that support a common workflow.

Build
Practicing continuous integration and following proper deployment workflows that methodically test all proposed changes help you build code for production use. Packaging code into a reusable artifact ensures that you are testing, approving, and promoting an atomic change that is consistent across multiple environments, and prevents configuration drift.

Deploy
Deployment pipelines increase the speed and efficiency of your software deployments by reducing the number of variables and removing the unpredictable nature of manual steps. Deployment pipelines have a specific beginning, a specific end, and a predictable way of working each time, thereby removing complexity, reducing risk, and improving efficiency. Establishing standard workflows that utilize deployment pipelines gives your operations and development teams a common platform.

Manage
With increased speed comes an increased demand to understand the current state of your underlying software automation. Organizations cannot ship software quickly yet poorly and still manage to outperform their competitors. The ability to visualize fleet-wide status and ensure security and compliance requirements acts as a risk mitigation technique to resolve errors quickly and easily. Removing manual processes and checklist requirements means that shifting management capabilities becomes a key component of moving to continuous automation.

OSS Automation Engines
Chef Automate is powered by three open source engines: Chef, Habitat and InSpec.
- Chef is the engine for infrastructure automation.
- Habitat automates modern applications such as those that run in containers and are composed of microservices.
- InSpec lets you specify compliance and security requirements as executable code.

Automate Setup Steps
1. You must have an ACC account.
2. Download OpenVPN (https://chef-vpn.chef.co/?src=connect).
3. Download client.ovpn (after logging in at the link above).
4. Install Docker.
5. Install Docker Compose.
6. Install Vagrant.
7. Install VirtualBox.
8. Download and install the ChefDK. This will give you the Delivery CLI tool, which will allow you to clone the Workflow project from delivery.shd.chef.co. Remember to log into the VPN to access this site.
9. Add your SSH key: on the Admin page, add your public ssh key (usually found in ~/.ssh/id_rsa.pub) to your account.
This will be necessary in a few minutes.
10. Set up delivery:
delivery setup --ent=chef --org=products --user=pawasthi --server=automate.chef.co -f master
11. Set up a token:
delivery token --ent=chef --org=products --user=pawasthi --server=automate.chef.co
12. Copy the token from the browser and validate it.
13. Clone automate via delivery:
delivery clone automate --ent=chef --org=products --user=pawasthi --server=automate.chef.co
14. Go to the automate directory (cd automate), then run `make`.

Note: before running `make`, add the direnv hook:
1. `apt-get update`
2. `apt-get install direnv`
3. Run `direnv hook bash` and put what it prints in your `~/.bashrc` file.
4. Then `source ~/.bashrc`.

Note for an "unhealthy cluster" error: check whether the cluster was created first (`docker-compose ps -a`), then clean the whole project (`make clean`) and run `make` again. Try to avoid `sudo` to minimize errors.
Note for ports: if a port is reported as already in use, try to release it. For example, run `netstat -tunlp | grep :port`; if this shows a process running on your required port, kill that process with `kill -9 process_id`.

Visibility Web UI
Developing for the Visibility UI follows the same pattern as the Workflow UI: a local file-system watcher builds and syncs changes into the visibility_ui container that Nginx redirects to. Before developing, you will need the docker-compose environment at the root of this repository running:
cd .. && docker-compose up
The visibility_ui container should Exit 0, indicating the JavaScript bundle was built successfully.
You can run some operations locally. Make sure your version of Node matches what's defined in .nvmrc. We recommend you use nvm to install node if you don't have it already. To install node, first install nvm with the line below:
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.1/install.sh | bash
Then install node by going into the /visibility-web directory and running:
nvm install
To ensure that node is running the correct version, compare the output of `node -v` to the file /visibility-web/.nvmrc.
make install – installs the Node modules.
make unit – runs the unit tests locally.
make e2e – runs the end-to-end tests in the Docker Compose test environment with the functional test suite in ../test/functional.sh.
make startdev – starts a watch process that rebuilds the bundle whenever you make changes (reload the browser to see them).
make beforecommit – runs TypeScript linting, Sass linting, and the unit tests.

References:
https://learn.chef.io/automate/
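The port-conflict note above can be wrapped into a small helper so you don't have to eyeball the netstat output each time. A minimal sketch under the same assumptions as the note (the script and port number are illustrative; netstat needs root to show PIDs for processes you don't own):

#!/bin/bash
# free_port.sh - kill whatever process is listening on a given TCP/UDP port.
# Usage: ./free_port.sh 9200   (9200 is only an example port)
PORT="${1:?usage: $0 <port>}"
# The last netstat column is "PID/Program name"; take the PID part of the first match.
PID=$(netstat -tunlp 2>/dev/null | grep ":${PORT} " | awk '{print $NF}' | cut -d/ -f1 | head -n 1)
if [ -n "$PID" ] && [ "$PID" != "-" ]; then
  echo "Port ${PORT} is held by PID ${PID}; killing it."
  kill -9 "$PID"
else
  echo "Port ${PORT} looks free (or its owner is not visible without sudo)."
fi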

Aziro Marketing


Learn how to automate cloud infrastructure with ceph storage

The success of virtual machines (VMs) is well known today, with their mass adoption everywhere. We now have well-established workflows and tool sets to help manage VM life cycles and associated services. The proliferation and growth of this cycle ultimately led to cloud deployments. Amazon Web Services and Google Cloud Engine are a few of the dozens of service providers today who offer terms and services that make provisioning VMs anywhere easier. With the proliferation of cloud providers and the scale of the cloud come today's newer set of problems: configuration management and provisioning of those VMs has become a nightmare. While the dependency on physical infrastructure has been virtually eliminated, it has resulted in another domain, configuration management of those VMs (and clusters), that needs to be addressed. A slew of tool sets came out to address it. Chef, Puppet, Ansible and SaltStack are widely known and used everywhere, SaltStack being the latest entrant to this club. Given our Python background, we look at SaltStack as a configuration management tool. We also used another new entrant, Terraform, for provisioning the VMs needed in the cluster and bootstrapping them to run SaltStack.

Introduction
With a proliferation of cloud providers providing Infrastructure as a Service, there has been constant innovation to deliver more. Microsoft Azure, Amazon Web Services and Google Cloud Engine are a few to name here. This has resulted in the Platform as a Service model, wherein not just the infrastructure is managed, but more tools and workflows are defined to make application development and deployment easier. Google App Engine was one of the earliest success stories here. Nevertheless, for any user of these cloud platforms this resulted in several headaches:
- Vendor lock-in of technologies, since services and interfaces for the cloud are not standardized.
- Re-creating the platform from development to elsewhere was a pain.
- Migration from one cloud provider to another was a nightmare.
The need for the following requirements flows from the earlier pain points and has dawned on everyone using cloud deployments:
- A specification for infrastructure, so it can be captured and restored as need be. By infrastructure we mean a cluster of VMs and associated services, so network configuration, high availability and other services as dictated by the service provider have to be captured.
- A way to capture the bootstrap logic and configuration for that infrastructure.
- The configuration captured should ideally be agnostic to the cloud provider.
- All of this in a tool that is understood by everyone, so it is simple and easily adoptable, is a major plus.
When I looked at the suite of tools, Ruby and Ruby on Rails were alien to me; Python was native. SaltStack had some nice features that we could really consider. If SaltStack can bootstrap and initialize resources, Terraform can help customize external environments as well. Put them to work together and we do see a great marriage on the cards. But will they measure up? Let us brush through some of their designs, get to a real-life scenario and see how they scale up indeed.

2 Our Cloud Toolbox

2.1 Terraform
Terraform is a successor to Vagrant from the stable of HashiCorp. Vagrant made spawning VMs a breeze for developers. The key tenets of Vagrant that made it well loved are its ability to create lightweight, reproducible and portable environments. Today, the power of Vagrant is well known.
As I see it, the need to bootstrap distributed cluster applications was not easily doable with it. So we have Terraform, from the same makers, who understood the limitations of Vagrant and made bootstrapping clustered environments easier. Terraform defines extensible providers, which encapsulate connectivity information specific to each cloud provider. Terraform defines resources, which encapsulate services from each cloud provider. Each resource can be extended by one or more provisioners. A provisioner has the same concept as in Vagrant but is much more extensible: a provisioner in Vagrant can only provision newly created VMs, and here enters the power of Terraform.
Terraform has support for local-exec and remote-exec, through which one can automate extensible scripts either locally or on remote machines. As the name implies, local-exec runs locally on the node where the script is invoked, while remote-exec executes on the targeted remote machine. Several properties of the new VM are readily available, and additional custom attributes can be defined through output specifications as well. Additionally, there exists a null_resource, a pseudo-resource which, along with support for explicit dependencies, turns Terraform into a powerhouse. All of these provide much greater flexibility for setting up complex environments beyond just provisioning and bootstrapping VMs.
A better place to understand Terraform in all its glory is their documentation page [3].

2.2 SaltStack
SaltStack is used to deploy, manage and automate infrastructure and applications at cloud scale. SaltStack is written in Python and uses the Jinja template engine. SaltStack is architected to have a Master node and one or more Minion nodes. Multiple Master nodes can also be set up to create a highly available environment. SaltStack brings some newer terminology with it that needs some familiarity, but once it is understood, it is fairly easy to use it to suit our purpose. I shall briefly touch upon SaltStack here, and would rightly point to their rich source of documentation [4].
To put it succinctly, Grains are read-only key-value attributes of Minions. All Minions export their immutable attributes to the Salt Master as Grains. As an example, one can find CPU speed, CPU make, CPU cores, memory capacity, disk capacity, OS flavor, OS version, network cards and much more, all available as part of a node's Grains. Pillar is part of the Salt Master, holding all the customization needed across the cluster. Configuration kept in Pillar can be targeted to minions, and only those minions will have that information available. To help with an example: using Pillar, one can define two sets of users/groups to be configured on nodes in the cluster. Minion nodes that are part of the Finance domain will have one set of users applied, while those in the Dev domain will have another set. The user/group definition is written once on the Salt Master as a Pillar file, and can be targeted based on a Minion node's domain name, which is part of its Grains. A few other examples: package variations across distributions can be handled easily. Any operations person can relate to the nightmare of automating a simple request to install the Apache web server on any Linux distribution (hint: the complexity lies in the non-standard Linux distributions). Pillar is your friend in this case.
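Staying with that Apache example, grain-based targeting is easy to try straight from the Salt Master's command line. A small sketch; the grain values shown are only illustrative, and `salt -G` matches on whatever grains your minions actually report:

# Ping only the CentOS minions, matching on the 'os' grain.
salt -G 'os:CentOS' test.ping
# Install the web server with the package name appropriate to each distribution family.
salt -G 'os_family:RedHat' pkg.install httpd
salt -G 'os_family:Debian' pkg.install apache2
# Apply a 'users' state only to minions whose domain grain marks them as Finance machines.
salt -G 'domain:finance.example.com' state.apply users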
All of this configuration, whether part of Pillar or of Salt State, is (confusingly) written in the same file format (.sls), called Salt State files. These Salt State files (.sls) specify the configuration to be applied, either explicitly or through Jinja templating. A top-level state file exists for both Pillar [default location: /srv/pillar/top.sls] and State [default location: /srv/state/top.sls], wherein targeting of configuration can be accomplished.

3 Case Study
Let us understand the power of Terraform and SaltStack together in action, for a real-life deployment. Ceph is an open source distributed storage cluster. Needless to say, setting up a Ceph cluster is a challenge even with all the documents available [6]. Even while using the ceph-deploy script, one needs to satisfy pre-flight prerequisites before it can be used. This case study shall first set up a cluster with the prerequisites met and then use ceph-deploy over it to bring up the Ceph cluster.
Let us try to use the power of the tools we have chosen and summarize our findings while setting up the Ceph cluster. Is it really that powerful and easy to create and replicate the environment anywhere? Let us find out.
We shall replicate a setup similar to the one provided in the Ceph documentation [Figure 1]. We shall have 4 VM nodes in the cluster: ceph-admin, ceph-monitor-0, ceph-osd-0 and ceph-osd-1. Even though our cluster has only a single ceph-monitor node, I have suffixed it with an instance number. This is to allow later expansion of monitors as needed, since Ceph does allow multiple monitor nodes too. It is assumed that the whole setup is being created from one's personal desktop/laptop environment, which is behind a company proxy and cannot act as the Salt Master. We shall use Terraform to create the needed VMs and bootstrap them with the appropriate configuration to run as either Salt Master or Salt Minion. The ceph-admin node shall act as the Salt Master node as well and hold all the configuration necessary to install, initialize and bring up the whole cluster.

3.1 Directory structure
We shall host all files in the below directory structure. This structure is assumed in the scripts. The files are referenced below in greater detail.
We shall use DigitalOcean as our cloud provider for this case study. I am assuming the work machine is signed up with DigitalOcean to enable automatic provisioning of systems. I will use my local work machine for this purpose. To work with DigitalOcean and provision VMs automatically, there are two steps involved:
1. Create a Personal Access Token (PAN), a form of authentication token that enables auto-provisioning of resources [7]. The key created has to be saved securely, as it cannot be recovered again from their console.
2. Use the PAN to add the public key of the local work machine, to enable auto login into newly provisioned VMs easily [8]. This is necessary to allow passwordless ssh sessions, which enable further customization auto-magically on those created VMs.
The next step is to define these details as part of Terraform; let us name this file provider.tf.
variable "do_token" {} variable "pub_key" {} variable "pvt_key" {} variable "ssh_fingerprint" {} provider "digitalocean" { token = "${var.do_token}" }
The above defines input variables that need to be properly set up for provisioning services with a particular cloud provider. do_token is the PAN obtained during registration from DigitalOcean directly.
The other three properties are used to set up the provisioned VMs to enable auto login into them from our local work machine. The ssh_fingerprint can be obtained by running ssh-keygen as below.
user@machine> ssh-keygen -lf ~/.ssh/myrsakey.pub 2048 00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff /home/name/.ssh/myrsakey.pub (RSA)
The above input variables can be assigned values in a file, so they are automatically initialized instead of prompting end users every time the scripts are invoked. The special file Terraform looks for to initialize the input variables is terraform.tfvars. Below is sample content for that file.
do_token="07a91b2aa4bc7711df3d9fdec4f30cd199b91fd822389be92b2be751389da90e" pub_key="/home/name/.ssh/id_rsa.pub" pvt_key="/home/name/.ssh/id_rsa" ssh_fingerprint="0:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff"
The above settings should ensure a successful connection with the DigitalOcean cloud provider and enable one to provision services through automation scripts.

3.2 Ceph-Admin
Now let us spawn and create a VM to act as our Ceph-Admin node. For each node type, let us create a separate Terraform file to hold the configuration. It is not a must, but it helps keep sanity while perusing the code and is self-explanatory.
For Ceph-Admin we have captured the bootstrapping as part of the Terraform configuration, while the rest of the nodes' configuration is captured in Salt state files. It is possible to run a salt minion on the Ceph-Admin node as well and apply configuration that way. We instead chose Terraform for bootstrapping Ceph-Admin entirely, to help us understand both ways. In either case, the configuration is captured as part of the spec and is readily replicable anywhere. The power of Terraform lies not just in configuration/provisioning of VMs but in external environments as well.
The Ceph-Admin node shall not only satisfy the Ceph cluster installation prerequisites, but shall have the Salt Master running on it as well. It shall have two users defined: cephadm, with sudo privileges over the entire cluster, and a demo user. The ssh keys are generated every time the cluster is provisioned, without caching and replicating the keys, and the user profile is replicated on all nodes in the cluster. The Salt configuration and state files have to be set up additionally. Setting up this configuration based on the attributes of the provisioned cluster creates a dependency here. This dependency is handled very nicely by Terraform through its null resources and explicit dependency chains.

3.2.1 admin.tf – Terraform Listing
Below is listed admin.tf, which holds the configuration necessary to bring up the ceph-admin node, with embedded comments.
# resource maps directly to services provided by cloud providers. # it is always of the form x_y, wherein x is the cloud provider and y is the targeted service. # the last part that follows is the name of the resource. # below initializes attributes that are defined by the cloud provider to create VM. resource "digitalocean_droplet" "admin" { image = "centos-7-0-x64" name = "ceph-admin" region = "sfo1" size = "1gb" private_networking = true ssh_keys = [ "${var.ssh_fingerprint}" ] # below defines the connection parameters necessary to do ssh for further customization. # For this to work passwordless, the ssh keys should be pre-registered with cloud provider. connection { user = "root" type = "ssh" key_file = "${var.pvt_key}" timeout = "10m" } # All below provisioners perform the actual customization and run # in the order specified in this file.
# "remote-exec" performs action on the remote VM over ssh. # Below one could see some necessary directories are being created. provisioner "remote-exec" { inline = [ "mkdir -p /opt/scripts /srv/salt /srv/pillar", "mkdir -p /srv/salt/users/cephadm/keys /srv/salt/users/demo/keys"', "mkdir -p /srv/salt/files", ] } # "file" provisioner copies files from local workmachine (where the script is being run) to # remote VM. Note the directories should exist, before this can pass. # The below copies the whole directory contents from local machine to remote VM. # These scripts help setup the whole environment and can be depended to be available at # /opt/scripts location. Note, the scripts do not have executable permission bits set. # Note the usage of "path.module", these are interpolation extensions provided by Terraform. provisioner "file" { source = "${path.module}/scripts/" destination = "/opt/scripts/" } # A cephdeploy.repo file has to be made available at yum repo, for it to pick ceph packages. # This requirement comes from setting up ceph storage cluster. provisioner "file" { source = "${path.module}/scripts/cephdeploy.repo" destination = "/etc/yum.repos.d/cephdeploy.repo" } # Setup handcrafted custom sudoers file to allow running sudo through ssh without terminal connection. # Also additionally provide necessary sudo permissions to cephadm user. provisioner "file" { source = "${path.module}/scripts/salt/salt/files/sudoers" destination = "/etc/sudoers" } # Now, setup yum repos and install packages as necessary for Ceph admin node. # Additionally ensure salt-master is installed. # Create two users, cephadm privileged user with sudo access for managing the ceph cluster and demo guest user. # The passwords are also set accordingly. # Remember to set proper permissions to the scripts. # The provisioned VM attributes can be easily used to customize several properties as needed. In our case, # the IP address (public and private), VM host name are used to customize the environment further. # For ex: hosts file, salt master configuration file and ssh_config file are updated accordingly. provisioner "remote-exec" { inline = [ "export PATH=$PATH:/usr/bin", "chmod 0440 /etc/sudoers", "yum install -y epel-release yum-utils", "yum-config-manager --enable cr", "yum install -y yum-plugin-priorities", "yum clean all", "yum makecache", "yum install -y wget salt-master", "cp -af /opt/scripts/salt/* /srv", "yum install -y ceph-deploy --nogpgcheck", "yum install -y ntp ntpdate ntp-doc", "useradd -m -G wheel cephadm", "echo \"cephadm:c3ph@dm1n\" | chpasswd", "useradd -m -G docker demo", "echo \"demo:demo\" | chpasswd", "chmod +x /opt/scripts/*.sh", "/opt/scripts/fixadmin.sh ${self.ipv4_address} ${self.ipv4_address_private} ${self.name}", ] } }3.2.2 Dependency scripts – fixadmin.shBelow we list the scripts referenced from above Terraform file. fixadmin.sh script will be used to customize the VM further after creation. This script shall per- form the following functions. It shall update cluster information in /opt/nodes directory, to help further customization to know the cluster attributes (read net- work address etc). Additionally, it patches several configuration files to enable automation without intervention.intervention.#!/bin/bash # Expects ./fixadmin.sh # Performs the following. # a. caches cluster information in /opt/nodes # b. patches /etc/hosts file to connect through private-ip for cluster communication. # c. patches ssh_config file to enable auto connect without asking confirmation for given node. # d. 
creates 2 users, with appropriate ssh keys # e. customize salt configuration with cluster properties. mkdir -p /opt/nodes chmod 0755 /opt/nodes echo "$1" > /opt/nodes/admin.public echo "$2" > /opt/nodes/admin.private rm -f /opt/nodes/masters* sed -i '/demo-admin/d' /etc/hosts echo "$2 demo-admin" >> /etc/hosts sed -i '/demo-admin/,+1d' /etc/ssh/ssh_config echo "Host demo-admin" >> /etc/ssh/ssh_config echo " StrictHostKeyChecking no" >> /etc/ssh/ssh_config for user in cephadm demo; do rm -rf /home/${user}/.ssh su -c "cat /dev/zero | ssh-keygen -t rsa -N \"\" -q" ${user} cp /home/${user}/.ssh/id_rsa.pub /srv/salt/users/${user}/keys/key.pub cp /home/${user}/.ssh/id_rsa.pub /home/${user}/.ssh/authorized_keys done systemctl enable salt-master systemctl stop salt-master sed -i '/interface:/d' /etc/salt/master echo "#script changes below" >> /etc/salt/master echo "interface: ${2}" >> /etc/salt/master systemctl start salt-master3.2.3 Dependency – Ceph yum repo speccephdeploy.repo defines a yum repo to fetch the ceph related packages. Below is customized to install on CentOS with ceph Hammer package. This comes directly from ceph pre-requisite.[ceph-noarch] name=Ceph noarch packages baseurl=http://download.ceph.com/rpm-hammer/el7/noarch enabled=1 gpgcheck=1 type=rpm-md gpgkey=https://download.ceph.com/keys/release.asc3.3 Ceph-MonitorLet monitor.tf be the file that holds all configuration necessary to bring up ceph-monitor node.# resource specifies the attributes required to bring up ceph-monitor node. # Note have the node name has been customized with an index, and the usage of 'count' # 'count' is a special attribute that lets one create multiple instances of the same spec. # That easy! resource "digitalocean_droplet" "master" { image = "centos-7-0-x64" name = "ceph-monitor-${count.index}" region = "sfo1" size = "512mb" private_networking = true ssh_keys = [ "${var.ssh_fingerprint}" ] count=1 connection { user = "root" type = "ssh" key_file = "${var.pvt_key}" timeout = "10m" } provisioner "remote-exec" { inline = [ "mkdir -p /opt/scripts /opt/nodes", ] } provisioner "file" { source = "${path.module}/scripts/" destination = "/opt/scripts/" } # This provisioner has implicit dependency on admin node to be available. # below we use admin node's property to fix ceph-monitor's salt minion configuration file, # so it can reach salt master. provisioner "remote-exec" { inline = [ "export PATH=$PATH:/usr/bin", "yum install -y epel-release yum-utils", "yum-config-manager --enable cr", "yum install -y yum-plugin-priorities", "yum install -y salt-minion", "chmod +x /opt/scripts/*.sh", "/opt/scripts/fixsaltminion.sh ${digitalocean_droplet.admin.ipv4_address_private} ${self.name}", ] } }3.4 Ceph-OsdLet minion.tf file contain configuration necessary to bring up ceph-osd nodes.resource "digitalocean_droplet" "minion" {    image = "centos-7-0-x64"    name = "ceph-osd-${count.index}"    region = "sfo1"    size = "1gb"    private_networking = true    ssh_keys = [      "${var.ssh_fingerprint}"    ]        # Here we specify two instances of this specification. Look above though the        # hostnames are customized already by using interpolation.    
count=2  connection {      user = "root"      type = "ssh"      key_file = "${var.pvt_key}"      timeout = "10m"  }  provisioner "remote-exec" {    inline = [      "mkdir -p /opt/scripts /opt/nodes",    ]  }  provisioner "file" {     source = "${path.module}/scripts/"     destination = "/opt/scripts/"  }  provisioner "remote-exec" {    inline = [      "export PATH=$PATH:/usr/bin",      "yum install -y epel-release yum-utils yum-plugin-priorities",      "yum install -y salt-minion",      "chmod +x /opt/scripts/*.sh",      "/opt/scripts/fixsaltminion.sh ${digitalocean_droplet.admin.ipv4_address_private} ${self.name}",    ]  } }3.4.1 Dependency – fixsaltminion.sh scriptThis script enables all saltminion nodes to fix their configuration, so it can reach the salt master. Other salt minion attributes are customized as well below.#!/bin/bash # The below script ensures salt-minion nodes configuration file # are patched to reach Salt master. # args: systemctl enable salt-minion systemctl stop salt-minion sed -i -e '/master:/d' /etc/salt/minion echo "#scripted below config changes" >> /etc/salt/minion echo "master: ${1}" >> /etc/salt/minion echo "${2}" > /etc/salt/minion_id systemctl start salt-minion3.5 Cluster Pre-flight SetupNull resources are great extensions to Terraform for providing the flexibility needed to configure complex cluster environments. Let one create cluster-init.tf to help fixup the configuration dependencies in cluster.resource "null_resource" "cluster-init" {    # so far we have relied on implicit dependency chain without specifying one.        # Here we will ensure that this resources gets run only after successful creation of its        # dependencies.    depends_on = [        "digitalocean_droplet.admin",        "digitalocean_droplet.master",        "digitalocean_droplet.minion",    ]  connection {      host = "${digitalocean_droplet.admin.ipv4_address}"      user = "root"      type = "ssh"      key_file = "${var.pvt_key}"      timeout = "10m"  }  # Below we run few other scripts based on the cluster configuration.    # And finally ensure all the other nodes in the cluster are ready for    # ceph installation.  provisioner "remote-exec" {    inline = [        "/opt/scripts/fixmasters.sh ${join(\" \", digitalocean_droplet.master.*.ipv4_address_private)}",        "/opt/scripts/fixslaves.sh ${join(\" \", digitalocean_droplet.minion.*.ipv4_address_private)}",        "salt-key -Ay",        "salt -t 10 '*' test.ping",        "salt -t 20 '*' state.apply common",        "salt-cp '*' /opt/nodes/* /opt/nodes",        "su -c /opt/scripts/ceph-pkgsetup.sh cephadm",    ]  } }3.5.1 Dependency – fixmaster.sh script#!/bin/bash # This script fixes host file and collects cluster info under /opt/nodes # Also updates ssh_config accordingly to ensure passwordless ssh can happen to # other nodes in the cluster without prompting for confirmation. 
# args: NODES="" i=0 for ip in "$@" do    NODE="ceph-monitor-$i"    sed -i "/$NODE/d" /etc/hosts    echo "$ip $NODE" >> /etc/hosts    echo $NODE >> /opt/nodes/masters    echo "$ip" >> /opt/nodes/masters.ip    sed -i "/$NODE/,+1d" /etc/ssh/ssh_config    NODES="$NODES $NODE"    i=$[i+1] done echo "Host $NODES" >> /etc/ssh/ssh_config echo "  StrictHostKeyChecking no" >> /etc/ssh/ssh_config3.5.2 Dependency – fixslaves.sh script3.6.3 Dependency – ceph-pkgsetup.sh script#!/bin/bash # This script fixes host file and collects cluster info under /opt/nodes # Also updates ssh_config accordingly to ensure passwordless ssh can happen to # other nodes in the cluster without prompting for confirmation. # args: NODES="" i=0 mkdir -p /opt/nodes chmod 0755 /opt/nodes rm -f /opt/nodes/minions* for ip in "$@" do    NODE="ceph-osd-$i"    sed -i "/$NODE/d" /etc/hosts    echo "$ip $NODE" >> /etc/hosts    echo $NODE >> /opt/nodes/minions    echo "$ip" >> /opt/nodes/minions.ip    sed -i "/$NODE/,+1d" /etc/ssh/ssh_config    NODES="$NODES $NODE"    i=$[i+1] done echo "Host $NODES" >> /etc/ssh/ssh_config echo "  StrictHostKeyChecking no" >> /etc/ssh/ssh_config#!/bin/bash # has to be run as user 'cephadm' with sudo privileges. # install ceph packages on all nodes in the cluster. mkdir -p $HOME/my-cluster cd $HOME/my-cluster OPTIONS="--username cephadm --overwrite-conf" echo "Installing ceph components" RELEASE=hammer for node in `sudo cat /opt/nodes/masters` do    ceph-deploy $OPTIONS install --release ${RELEASE} $node done for node in `sudo cat /opt/nodes/minions` do    ceph-deploy $OPTIONS install --release ${RELEASE} $node done3.6 Cluster BootstrappingWith the previous section, we have completed successfully setting up the cluster to meet all pre-requisites for installing ceph. The below final bootstrap script, just ensures that the needed ceph functionality gets applied before they are brought up online.File: cluster-bootstrap.tfresource "null_resource" "cluster-bootstrap" {    depends_on = [        "null_resource.cluster-init",    ]  connection {      host = "${digitalocean_droplet.admin.ipv4_address}"      user = "root"      type = "ssh"      key_file = "${var.pvt_key}"      timeout = "10m"  }  provisioner "remote-exec" {    inline = [        "su -c /opt/scripts/ceph-install.sh cephadm",        "salt 'ceph-monitor-*' state.highstate",        "salt 'ceph-osd-*' state.highstate",    ]  } }3.6.1 Dependency – ceph-install.sh script#!/bin/bash # This script has to be run as user 'cephadm', because  this user has # sudo privileges set all across the cluster. OPTIONS="--username cephadm --overwrite-conf" # pre-cleanup. rm -rf $HOME/my-cluster for node in `cat /opt/nodes/masters /opt/nodes/minions` do    ssh $node "sudo rm -rf /etc/ceph/* /var/local/osd* /var/lib/ceph/mon/*"    ssh $node "find /var/lib/ceph -type f | xargs sudo rm -rf" done mkdir -p $HOME/my-cluster cd $HOME/my-cluster echo "1. Preparing for ceph deployment" ceph-deploy $OPTIONS new ceph-monitor-0 # Adjust the configuration to suit our cluster. echo "osd pool default size = 2" >> ceph.conf echo "osd pool default pg num = 16" >> ceph.conf echo "osd pool default pgp num = 16" >> ceph.conf echo "public network = `cat /opt/nodes/admin.private`/16" >> ceph.conf echo "2. Add monitor and gather the keys" ceph-deploy $OPTIONS mon create-initial echo "3. 
Create OSD directory on each minions" i=0 OSD="" for node in `cat /opt/nodes/minions` do    ssh $node sudo mkdir -p /var/local/osd$i    ssh $node sudo chown -R cephadm:cephadm /var/local/osd$i    OSD="$OSD $node:/var/local/osd$i"    i=$[i+1] done echo "4. Prepare OSD on minions - $OSD" ceph-deploy $OPTIONS osd prepare $OSD echo "5. Activate OSD on minions" ceph-deploy $OPTIONS osd activate $OSD echo "6. Copy keys to all nodes" for node in `cat /opt/nodes/masters` do    ceph-deploy $OPTIONS admin $node done for node in `cat /opt/nodes/minions` do    ceph-deploy $OPTIONS admin $node done echo "7. Set permission on keyring" sudo chmod +r /etc/ceph/ceph.client.admin.keyring echo "8. Add in more monitors in cluster if available" for mon in `cat /opt/nodes/masters` do    if [ "$mon" != "ceph-monitor-0" ]; then        ceph-deploy $OPTIONS mon create $mon    fi done3.6.2 SaltStack Pillar setupAs mentioned in the directory structure section, pillar specific files are located in a specific directory. The configuration and files are customized for each node with specific functionality.# file: top.sls base:  "*":    - users# file: users.sls groups: users:  cephadm:    fullname: cephadm    uid: 5000    gid: 5000    shell: /bin/bash    home: /home/cephadm    groups:      - wheel    password: $6$zYFWr3Ouemhtbnxi$kMowKkBYSh8tt2WY98whRcq.    enforce_password: True    key.pub: True  demo:    fullname: demo    uid: 5031    gid: 5031    shell: /bin/bash    home: /home/demo    password: $6$XmIJ.Ox4tNKHa4oYccsYOEszswy1    key.pub: True3.6.3 SaltStack State files# file: top.sls base:    "*":        - common    "ceph-admin":        - admin    "ceph-monitor-*":        - master    "ceph-osd-*":        - minion# file: common.sls {% for group, args in pillar['groups'].iteritems() %} {{ group }}:  group.present:    - name: {{ group }} {% if 'gid' in args %}    - gid: {{ args['gid'] }} {% endif %} {% endfor %} {% for user, args in pillar['users'].iteritems() %} {{ user }}:  group.present:    - gid: {{ args['gid'] }}  user.present:    - home: {{ args['home'] }}    - shell: {{ args['shell'] }}    - uid: {{ args['uid'] }}    - gid: {{ args['gid'] }} {% if 'password' in args %}    - password: {{ args['password'] }} {% if 'enforce_password' in args %}    - enforce_password: {{ args['enforce_password'] }} {% endif %} {% endif %}    - fullname: {{ args['fullname'] }} {% if 'groups' in args %}    - groups: {{ args['groups'] }} {% endif %}    - require:      - group: {{ user }} {% if 'key.pub' in args and args['key.pub'] == True %} {{ user }}_key.pub:  ssh_auth:    - present    - user: {{ user }}    - source: salt://users/{{ user }}/keys/key.pub  ssh_known_hosts:    - present    - user: {{ user }}    - key: salt://users/{{ user }}/keys/key.pub    - name: "demo-master-0" {% endif %} {% endfor %} /etc/sudoers:  file.managed:    - source: salt://files/sudoers    - user: root    - group: root    - mode: 440 /opt/nodes:  file.directory:    - user: root    - group: root    - mode: 755 /opt/scripts:  file.directory:    - user: root    - group: root    - mode: 755# file: admin.sls include:  - master bash /opt/scripts/bootstrap.sh:  cmd.run# file: master.sls # one can include any packages, configuration to target ceph monitor nodes here. masterpkgs:    pkg.installed:    - pkgs:      - ntp      - ntpdate      - ntp-doc# file: minion.sls # one can include any packages, configuration to target ceph osd nodes here. 
minionpkgs:    pkg.installed:    - pkgs:      - ntp      - ntpdate      - ntp-doc# file: files/sudoers # customized for setting up environment to satisfy # ceph pre-flight checks. # ## Sudoers allows particular users to run various commands as ## the root user, without needing the root password. ## ## Examples are provided at the bottom of the file for collections ## of related commands, which can then be delegated out to particular ## users or groups. ## ## This file must be edited with the 'visudo' command. ## Host Aliases ## Groups of machines. You may prefer to use hostnames (perhaps using ## wildcards for entire domains) or IP addresses instead. # Host_Alias     FILESERVERS = fs1, fs2 # Host_Alias     MAILSERVERS = smtp, smtp2 ## User Aliases ## These aren't often necessary, as you can use regular groups ## (ie, from files, LDAP, NIS, etc) in this file - just use %groupname ## rather than USERALIAS # User_Alias ADMINS = jsmith, mikem ## Command Aliases ## These are groups of related commands... ## Networking # Cmnd_Alias NETWORKING = /sbin/route, /sbin/ifconfig, /bin/ping, /sbin/dhclient, /usr/bin/net, /sbin/iptables, /usr/bin/rfcomm, /usr/bin/wvdial, /sbin/iwconfig, /sbin/mii-tool ## Installation and management of software # Cmnd_Alias SOFTWARE = /bin/rpm, /usr/bin/up2date, /usr/bin/yum ## Services # Cmnd_Alias SERVICES = /sbin/service, /sbin/chkconfig ## Updating the locate database # Cmnd_Alias LOCATE = /usr/bin/updatedb ## Storage # Cmnd_Alias STORAGE = /sbin/fdisk, /sbin/sfdisk, /sbin/parted, /sbin/partprobe, /bin/mount, /bin/umount ## Delegating permissions # Cmnd_Alias DELEGATING = /usr/sbin/visudo, /bin/chown, /bin/chmod, /bin/chgrp ## Processes # Cmnd_Alias PROCESSES = /bin/nice, /bin/kill, /usr/bin/kill, /usr/bin/killall ## Drivers # Cmnd_Alias DRIVERS = /sbin/modprobe # Defaults specification # # Disable "ssh hostname sudo ", because it will show the password in clear. #         You have to run "ssh -t hostname sudo ". # Defaults:cephadm    !requiretty # # Refuse to run if unable to disable echo on the tty. This setting should also be # changed in order to be able to use sudo without a tty. See requiretty above. # Defaults   !visiblepw # # Preserving HOME has security implications since many programs # use it when searching for configuration files. Note that HOME # is already set when the the env_reset option is enabled, so # this option is only effective for configurations where either # env_reset is disabled or HOME is present in the env_keep list. # Defaults    always_set_home Defaults    env_reset Defaults    env_keep =  "COLORS DISPLAY HOSTNAME HISTSIZE INPUTRC KDEDIR LS_COLORS" Defaults    env_keep += "MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE" Defaults    env_keep += "LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES" Defaults    env_keep += "LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE" Defaults    env_keep += "LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY" # # Adding HOME to env_keep may enable a user to run unrestricted # commands via sudo. # # Defaults   env_keep += "HOME" Defaults    secure_path = /sbin:/bin:/usr/sbin:/usr/bin ## Next comes the main part: which users can run what software on ## which machines (the sudoers file can be shared between multiple ## systems). ## Syntax: ## ##      user    MACHINE=COMMANDS ## ## The COMMANDS section may have other options added to it. 
## ## Allow root to run any commands anywhere root    ALL=(ALL)       ALL ## Allows members of the 'sys' group to run networking, software, ## service management apps and more. # %sys ALL = NETWORKING, SOFTWARE, SERVICES, STORAGE, DELEGATING, PROCESSES, LOCATE, DRIVERS ## Allows people in group wheel to run all commands # %wheel  ALL=(ALL)       ALL ## Same thing without a password %wheel        ALL=(ALL)       NOPASSWD: ALL ## Allows members of the users group to mount and unmount the ## cdrom as root # %users  ALL=/sbin/mount /mnt/cdrom, /sbin/umount /mnt/cdrom ## Allows members of the users group to shutdown this system # %users  localhost=/sbin/shutdown -h now ## Read drop-in files from /etc/sudoers.d (the # here does not mean a comment) #includedir /etc/sudoers.d3.7 Putting it all togetherI agree, that was a lengthy setup process. But with the configuration above in place, we will now see what it takes to fire a ceph storage cluster up. Hold your breath now, since just typing terraform apply does it. Really! Yes that is easy. Not just that, to bring down the cluster, just type terraform destroy, and to look at the cluster attributes, type terraform show. One can create any number of ceph storage clusters at one go; replicate, and recreate it any number of times. So, if one wants to expand the number of ceph monitors, then update the count attribute to your liking, similar to the rest of the VMs in the cluster. And not to forget, Terraform also lets you setup your local environment based on the cluster properties through their local-exec provisioner. The combination seems to get just too exciting, and the options endless.4 ConclusionTerraform and Saltstack both have various functionalities that intersect. But the above case study has enabled us to understand the power those tools bring to the table together. Specifying infrastructure and its dependencies not just as a specification, but allowing it to be reproducible anywhere is truly a marvel. Cloud Technologies and myraid tools that are emerging in the horizon are truly redefining the way of software development and deployment lifecycles. A marvel indeed! References[1] HashiCorp, https://hashicorp.com [2] Vagrant from HashiCorp, https://www.vagrantup.com[3] Terraform from HashiCorp Inc., https://terraform.io/docs/index.html[4] SaltStack Documentation, https://docs.saltstack.com/en/latest/contents.html[5] Ceph Storage Cluster, http://ceph.com[6] Ceph Storage Cluster Setup, http://docs.ceph.com/docs/master/start/[7] DigitalOcean Personal Access Token,https://cloud.digitalocean.com/settings/applications#access-tokensThis blog was the winning entry of the Aziro (formerly MSys Technologies) Blogging Championship 2015.
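To recap section 3.7: once the .tf files, scripts and Salt trees above are in place, the whole cluster lifecycle collapses into a handful of Terraform commands run from the directory holding the configuration. A short sketch (terraform plan is optional, but handy for previewing changes before applying):

# Preview what would be created, using the values from terraform.tfvars.
terraform plan
# Provision the entire Ceph cluster: admin, monitor and OSD droplets, plus all bootstrapping.
terraform apply
# Inspect the attributes (names, public and private IPs) of the running cluster.
terraform show
# Tear everything down again when done.
terraform destroy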

Aziro Marketing

EXPLORE ALL TAGS
2019 dockercon
Advanced analytics
Agentic AI
agile
AI
AI ML
AIOps
Amazon Aws
Amazon EC2
Analytics
Analytics tools
AndroidThings
Anomaly Detection
Anomaly monitor
Ansible Test Automation
apache
apache8
Apache Spark RDD
app containerization
application containerization
applications
Application Security
application testing
artificial intelligence
asynchronous replication
automate
automation
automation testing
Autonomous Storage
AWS Lambda
Aziro
Aziro Technologies
big data
Big Data Analytics
big data pipeline
Big Data QA
Big Data Tester
Big Data Testing
bitcoin
blockchain
blog
bluetooth
buildroot
business intelligence
busybox
chef
ci/cd
CI/CD security
cloud
Cloud Analytics
cloud computing
Cloud Cost Optimization
cloud devops
Cloud Infrastructure
Cloud Interoperability
Cloud Native Solution
Cloud Security
cloudstack
cloud storage
Cloud Storage Data
Cloud Storage Security
Codeless Automation
Cognitive analytics
Configuration Management
connected homes
container
Containers
container world 2019
container world conference
continuous-delivery
continuous deployment
continuous integration
Coronavirus
Covid-19
cryptocurrency
cyber security
data-analytics
data backup and recovery
datacenter
data protection
data replication
data-security
data-storage
deep learning
demo
Descriptive analytics
Descriptive analytics tools
development
devops
devops agile
devops automation
DEVOPS CERTIFICATION
devops monitoring
DevOps QA
DevOps Security
DevOps testing
DevSecOps
Digital Transformation
disaster recovery
DMA
docker
dockercon
dockercon 2019
dockercon 2019 san francisco
dockercon usa 2019
docker swarm
DRaaS
edge computing
Embedded AI
embedded-systems
end-to-end-test-automation
FaaS
finance
fintech
FIrebase
flash memory
flash memory summit
FMS2017
GDPR faqs
Glass-Box AI
golang
GraphQL
graphql vs rest
gui testing
habitat
hadoop
hardware-providers
healthcare
Heartfullness
High Performance Computing
Holistic Life
HPC
Hybrid-Cloud
hyper-converged
hyper-v
IaaS
IaaS Security
icinga
icinga for monitoring
Image Recognition 2024
infographic
InSpec
internet-of-things
investing
iot
iot application
iot testing
java 8 streams
javascript
jenkins
KubeCon
kubernetes
kubernetesday
kubernetesday bangalore
libstorage
linux
litecoin
log analytics
Log mining
Low-Code
Low-Code No-Code Platforms
Loyalty
machine-learning
Meditation
Microservices
migration
Mindfulness
ML
mobile-application-testing
mobile-automation-testing
monitoring tools
Mutli-Cloud
network
network file storage
new features
NFS
NVMe
NVMEof
NVMes
Online Education
opensource
openstack
opscode-2
OSS
others
Paas
PDLC
Positivty
predictive analytics
Predictive analytics tools
prescriptive analysis
private-cloud
product sustenance
programming language
public cloud
qa
qa automation
quality-assurance
Rapid Application Development
raspberry pi
RDMA
real time analytics
realtime analytics platforms
Real-time data analytics
Recovery
Recovery as a service
recovery as service
rsa
rsa 2019
rsa 2019 san francisco
rsac 2018
rsa conference
rsa conference 2019
rsa usa 2019
SaaS Security
san francisco
SDC India 2019
SDDC
security
Security Monitoring
Selenium Test Automation
selenium testng
serverless
Serverless Computing
Site Reliability Engineering
smart homes
smart mirror
SNIA
snia india 2019
SNIA SDC 2019
SNIA SDC INDIA
SNIA SDC USA
software
software defined storage
software-testing
software testing trends
software testing trends 2019
SRE
STaaS
storage
storage events
storage replication
Storage Trends 2018
storage virtualization
support
Synchronous Replication
technology
tech support
test-automation
Testing
testing automation tools
thought leadership articles
trends
tutorials
ui automation testing
ui testing
ui testing automation
vCenter Operations Manager
vCOPS
virtualization
VMware
vmworld
VMworld 2019
vmworld 2019 san francisco
VMworld 2019 US
vROM
Web Automation Testing
web test automation
WFH
