Cloud Updates

Uncover our latest and greatest product updates

It’s CLOUDy out there!

“Cloud” or “cloud computing” has remained a buzzword in the technology space for the last decade or so. But for a layman, what exactly does it mean? How does it affect or benefit us, or any organization for that matter? What does the future look like when it comes to the cloud? And, most importantly, is it really worth all the hype? Let’s try to answer as many of these questions as possible here.

Cloud Philosophy – Simplified:

Firstly, to understand the cloud, consider a simple, relatable example. Every family needs milk at home. The quantity needed may vary per family, but it is more or less constant for each family every day, or on average in a week. There are situations, such as guests visiting or festivals, in which milk consumption rises. There are also scenarios where the family goes on vacation, or some members are out of town, during which consumption drops for those days. What does the family do during such spikes or drops in requirement? They simply buy more or less milk, asking the vendor to deliver only the required quantity for the specified duration.

So, the question here is: “Would you buy the cow for your intermittently fluctuating milk requirements?” The answer is no! Now consider the cow to be “the cloud”, which instead of milk gives us “resources” to order in the right quantity based on our needs at a given period. Simple, isn’t it? We don’t spend huge amounts of money on the infrastructure (the cow); we pay per use for the resources (the milk), quite literally ‘milking’ the benefits of the cloud.

We all use the cloud:

What if I told you that all of us used the cloud even before we knew about it? Yes, we do. Consider a Word file saved on your desktop at the office that you need to access at home for further modification. Can you really just open your computer at home and start working on the file? No, because it is saved on your office computer’s hard drive; you would have to either email it to yourself and download it at home, or carry it on a pen drive. Now consider working on the same file on a third-party platform such as Google Docs in your Google Drive. All you need at home is an internet connection and a sign-in with the same account. That’s it.

Basically, you accessed the Google cloud, where your file was saved on their servers. The same happens when you access your email. Be it Google, Yahoo, or Microsoft, your mail never lives on ‘a particular computer’ but on the cloud, or server, which makes it possible to log into any machine and check email by signing in with a username and password. The cloud was never an alien concept; it is just more commercialized now, and smaller businesses and startups that are not financially strong enough to own the infrastructure are moving ahead to reap its benefits.

Top Players in the Cloud:

Many organizations have joined the ‘cloud party’, but the top contributors, as per the 2018 State of the Cloud survey, are AWS (Amazon Web Services), Azure, Google, and IBM, each competing with the others on market adoption, year-on-year growth, and footprint.

Types of Clouds:

Going further, there are various flavors of cloud computing that a business can choose from.
Depending on the needs of the organization, a decision can be taken on whether an enterprise needs a public, private, or hybrid cloud. Let’s look at each briefly.

Public Cloud: This is for when an enterprise or business wants its resources to be available to everyone on the internet. The public cloud model allows users to utilize software that is hosted and managed by a third party and accessed through the internet, such as Google Drive. By allowing a third party to host and manage various aspects of computing, businesses can scale faster and save money on setup and management.

Private Cloud: Private cloud infrastructure can be hosted in on-site data centers or by a third party, but it is managed by, and accessible to, the company alone. Companies can tailor private cloud infrastructure to their unique needs, specifically security and privacy needs. As opposed to the public cloud model, private clouds are not meant to be sold “as a service”; each one is instead built and managed by the company itself, similar to a local or shared drive.

Hybrid/Multi-Cloud: This is the combination of the private and public cloud. Here a company decides the nature of its cloud services depending on the resources and who needs access to them.

Benefits of Cloud:

Cost savings: The pay-as-you-go system also applies to the data storage space needed to serve your stakeholders and clients. This means you get, and pay for, exactly as much space as you need.

Security: A cloud host’s full-time job is to carefully monitor security, which is significantly more effective than a conventional in-house system, where an organization must divide its efforts among a myriad of IT concerns, security being only one of them.

Flexibility: The cloud offers businesses more flexibility overall than hosting on a local server. If you need extra bandwidth, a cloud-based service can meet that demand instantly, rather than requiring a complex (and expensive) update to your IT infrastructure. This improved freedom and flexibility can make a significant difference to the overall efficiency of your organization.

Mobility: Cloud computing allows mobile access to corporate data via smartphones and other devices. With over 2.6 billion smartphones in use globally today, this ensures everyone stays up to date.

Disaster recovery: Downtime in your services leads to lost productivity, revenue, and brand reputation. While there may be no way to prevent, or even anticipate, the disasters that could harm your organization, you can speed up your recovery. Cloud-based services provide quick data recovery for all kinds of emergency scenarios, from natural disasters to power outages. While 20 percent of cloud users claim disaster recovery in four hours or less, only 9 percent of non-cloud users can claim the same.

Automatic software updates: For those with a lot to get done, there isn’t anything more irritating than waiting for a system update to be installed. Cloud-based applications automatically refresh and update themselves, instead of forcing an IT department to perform a manual organization-wide update.

Competitive edge: While cloud computing is increasing in popularity, there are still those who prefer to keep everything local.
That’s their choice, but doing so places them at a distinct disadvantage when competing with those who have the benefits of the cloud at their fingertips.

My Experiences with the Cloud:

Talking of my own first-hand experience with the cloud: I have a habit of maintaining and updating my own notes on the tasks I am performing. At the very early stages of my working career, I often kept notes in Word files or Notepad. But, as is the problem with traditional storage, accessing these notes irrespective of place and time was a hindrance. I soon realized that Microsoft’s OneNote was quite a solution to this problem: my notes got synced with my Microsoft account and were accessible anywhere and everywhere I needed them. Later on, other apps such as Evernote synced with my mobile phone and offered me greater flexibility and control over my notes and data. Providing cloud-based storage to users may be a small update from a company’s viewpoint; from the user’s perspective, however, it is a very significant change. It can alter the way you work and make your life far easier.

I am also quite an avid reader, and I have a Kindle to satisfy my need to read, along with the Kindle app on my mobile phone. If it weren’t for the cloud, I would have to carry either the phone or the Kindle everywhere to continue my reading. But the Amazon cloud syncs the Kindle application on the phone with the Kindle device, to the point that I can pick up reading on my phone from where I left off on the Kindle, and vice versa. Basically, the cloud synchronizes whatever I read on either device to make life easier for me.

Moreover, I drafted and worked on this article whenever I could find time: at the office, at home, or even during my commute on the bus! How was this possible? Yes, the cloud. I worked in Word Online, where I could jot down my points and expand, add, or edit them whenever something interesting struck my mind.

Verdict:

Cloud computing has been changing the way businesses operate. Companies of all shapes and sizes are adapting to this new technology, and industry experts believe that cloud computing will continue to benefit mid-sized and large companies in the coming years. The cloud is here to stay, and the future is all “cloudy” (in a good way, of course) with the growing needs and consumption of resources by organizations and their clients. This is also a way forward for small businesses and individuals, who need no longer worry about price overheads or infrastructure and can just focus on their tasks. And it isn’t rocket science to understand that when businesses focus on the actual tasks to be performed, rather than the overheads involved, they flourish.

Data Sources:
– State of the Cloud 2018 Reports
– Salesforce.com

Aziro Marketing


How to configure Storage Box Services with OpenStack

OpenStack is a set of software tools for building and managing cloud computing platforms for public and private clouds. It is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed through a dashboard.

Accessing SAN Storage in OpenStack

Cinder is the OpenStack component that provides access to, and manages, block storage. The Cinder interface specifies a number of discrete functions, such as creating, deleting, and attaching volumes/drives. Cinder provides persistent block storage resources (volumes) to VMs; these volumes can be detached from one instance and re-attached to another. Cinder supports drivers that allow Cinder volumes to be created and presented using storage solutions from vendors, and third-party storage vendors use Cinder’s plug-in architecture to do the necessary integration work.

Advantages:
– The driver helps create LUNs and extend storage capacity
– The driver helps back up data using the backup service
– It supports both managed and unmanaged LUNs
– It helps clone volumes and create volumes from a VM image

Minimum system requirements to configure the OpenStack controller

Hardware requirements:
– Dual-core CPU
– 2 GB RAM
– 5 GB disk

Supported OS:
– RHEL
– CentOS
– Fedora
– Ubuntu
– OpenSUSE
– SUSE Linux Enterprise

Network:
– 2 network interface cards with 100 Mbps/1 Gbps speed
– One network for the OpenStack installation
– Another network for the storage connection to the SAN

Note: “OpenStack controller” here means a node that includes all the services, such as nova, cinder, glance, and neutron.

SAN Storage: You need a third-party storage subsystem to configure storage with OpenStack.

Essential steps to configure SAN storage with OpenStack

Step 1

Set the following property values in the OpenStack Cinder configuration file (/etc/cinder/cinder.conf). A consolidated example of the resulting file appears after the troubleshooting tips below.

i. Enable the storage backend:

enabled_backends = storage name    // for example, NetApp

ii. Specify the volume name template, to identify a particular volume in storage:

volume_name_template = openstack-%s

iii. Add the NFS storage information and backup driver, to back up data:

backup_driver = cinder.backup.drivers.nfs
backup_share = nfs storage IP:/nfsshare

iv. Add the storage box information:

[storagename]    // for example, we would specify NetApp here
volume_driver = storage driver
volume_backend_name = storage name
san_login = storage username
san_password = storage password
san_ip = storage ip

v. Enable multipath:

use_multipath_for_image_xfer = True

Note: The storage box information and the multipath setting have to be added at the end of the configuration file.

Step 2

The array vendor’s Cinder driver has to be placed in the location below for creating volumes in the storage:

/usr/lib/python2.7/site-packages/cinder/volume/drivers/

Step 3

Restart the Cinder services:

systemctl restart openstack-cinder-api.service
systemctl restart openstack-cinder-backup.service
systemctl restart openstack-cinder-scheduler.service
systemctl restart openstack-cinder-volume.service

Note: If the services are not restarted, the properties set above won’t take effect.

Tips and tricks for troubleshooting

Serial 1: Specify the network ID

Symptom:

Error (Conflict): Multiple possible networks found, use a Network ID to be more specific. (HTTP 409) (Request-ID: req-251e6d02-5358-41f7-95a4-b58c52cbc74b)

This error usually occurs only when the given name is ambiguous. It occurred in OpenStack Liberty: the network name was specified while creating the instance, and the request failed because that name was ambiguous.

Approach to tackle the symptom:

Affected version: Liberty – the instance is created using the network name, which can be ambiguous. Fixed version: Mitaka – the instance is created using the network ID. The issue was fixed in Mitaka, the next release of OpenStack after Liberty. To solve the issue, we need to specify the net ID while creating the instance.

Steps:

1. Log in to the OpenStack controller node using PuTTY.

2. List all the volumes created on the OpenStack controller node:

[root@mitaka-hos1 ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+------------------+------+------+-------------+----------+-------------+-------------+
|                  ID                  |   Status  | Migration Status | Name | Size | Volume Type | Bootable | Multiattach | Attached to |
+--------------------------------------+-----------+------------------+------+------+-------------+----------+-------------+-------------+
| eee3f8fc-3306-44c2-84c8-d2ab1ab4c775 | available |     success      | vol2 |  5   |    array    |   true   |    False    |             |
+--------------------------------------+-----------+------------------+------+------+-------------+----------+-------------+-------------+

3. List the available networks on the OpenStack controller node:

[root@mitaka-hos1 ~(keystone_admin)]# neutron net-list
+--------------------------------------+---------+------------------------------------------------------+
| id                                   | name    | subnets                                              |
+--------------------------------------+---------+------------------------------------------------------+
| 489a3170-0ee3-4ae0-a5ef-8a766c50249f | public  | 20ae85c9-a89b-4689-9b76-1c395f842b01 172.24.4.224/28 |
| ade84d1d-343c-42ee-a603-df2e84274bd4 | private | ef2e62bc-0b94-44f3-bb2c-82963c2eb705 10.0.0.0/24     |
+--------------------------------------+---------+------------------------------------------------------+

4. Create the instance using the net ID and the volume ID:

nova boot --flavor m1.tiny --boot-volume eee3f8fc-3306-44c2-84c8-d2ab1ab4c775 --availability-zone nova:host1 inst3 --nic net-id=489a3170-0ee3-4ae0-a5ef-8a766c50249f
+--------------------------------------+--------------------------------------------------+
| Property                             | Value                                            |
+--------------------------------------+--------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                           |
| OS-EXT-AZ:availability_zone          | nova                                             |
| …                                    | …                                                |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000009                                |
| OS-EXT-STS:task_state                | scheduling                                       |
| created                              | 2016-08-31T08:39:20Z                             |
| flavor                               | m1.tiny (1)                                      |
| hostId                               |                                                  |
| id                                   | 8d202079-a9c2-4175-b5ff-7bc0638e06f4             |
| image                                | Attempt to boot from volume - no image supplied  |
| key_name                             | -                                                |
| metadata                             | {}                                               |
| name                                 | inst3                                            |
| os-extended-volumes:volumes_attached | [{"id": "eee3f8fc-3306-44c2-84c8-d2ab1ab4c775"}] |
| progress                             | 0                                                |
| security_groups                      | default                                          |
| status                               | BUILD                                            |
| tenant_id                            | 8c786c64ee8143b4b83bd1109b413ce5                 |
| updated                              | 2016-08-31T08:39:21Z                             |
| user_id                              | 7aa859512fa146c4ba355d3499fffa14                 |
+--------------------------------------+--------------------------------------------------+

Bug: https://bugs.launchpad.net/python-novaclient/+bug/1569840

Serial 2: Specify multipath in the Cinder file

Symptom:

2016-07-29 04:53:29.728 2103 ERROR cinder.scheduler.manager [req-9de37842-0da5-4c05-9ce3-38b4b38aa1bf 91327080eb604f0596eec6f3191f8b76 322494d5ae904c9680b318e7231bbeff - - -] Failed to schedule_manage_existing: No valid host was found. Cannot place volume 7c1a314a-b46e-475f-8c03-823ba2ca6179 on host

Approach to tackle the symptom:

To rectify this issue, add the line below at the end of cinder.conf:

use_multipath_for_image_xfer = True

Serial 3: Specify the pool name in the Cinder file

Symptom:

2016-07-20 08:27:44.268 718 ERROR oslo_messaging.rpc.dispatcher     response = self._execute_create_vol(volume, pool_name, reserve)
2016-07-20 08:27:44.268 718 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/xxx.py", line 533, in inner_connection_checker
2016-07-20 08:27:44.268 718 ERROR oslo_messaging.rpc.dispatcher     return func(self, *args, **kwargs)
2016-07-20 08:27:44.268 718 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/xxx.py", line 522, in inner_response_checker
2016-07-20 08:27:44.268 718 ERROR oslo_messaging.rpc.dispatcher     raise xxxAPIException(msg)
2016-07-20 08:27:44.268 718 ERROR oslo_messaging.rpc.dispatcher xxxAPIException: API _execute_create_vol failed with error string SM-err-pool-not-found

Approach to tackle the symptom:

Specify the pool name at the end of cinder.conf:

vendor_pool_name = pool name

Serial 4: Specify the OpenStack controller IP

Symptom:

Unable to connect to the OpenStack instance console over VNC using OpenStack Horizon.
Error message: Failed to connect to server (code 1006)
Environment: OpenStack RDO Juno, CentOS 7

Approach to tackle the symptom:

Update vncserver_proxyclient_address with the OpenStack controller IP address, or the novncproxy_base_url IP address, in nova.conf (/etc/nova/nova.conf):

vncserver_proxyclient_address = openstack controller IP address

Then restart your compute service:

/etc/init.d/openstack-nova-compute restart

Serial 5: Specify the NFS driver in the Cinder config

Symptom:

[root@hiqa-rhel1 ~(keystone_admin)]# cat /var/log/cinder/backup.log | grep unsupport
2017-02-10 04:21:17.143 22496 DEBUG cinder.service [req-5210ae1c-ae31-41e5-b927-9102c776e941 - - - - -] enable_unsupported_driver : False wait /usr/lib/python2.7/site-packages/cinder/service.py:611
2017-02-10 04:21:17.222 22496 DEBUG oslo_service.service [req-5210ae1c-ae31-41e5-b927-9102c776e941 - - - - -] enable_unsupported_driver      = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2622
2017-02-10 04:34:43.918 27333 DEBUG cinder.service [req-95201c9a-8766-4589-b4be-3d076890fc54 - - - - -] enable_unsupported_driver : False wait /usr/lib/python2.7/site-packages/cinder/service.py:611
2017-02-10 04:34:43.977 27333 DEBUG oslo_service.service [req-95201c9a-8766-4589-b4be-3d076890fc54 - - - - -] enable_unsupported_driver      = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2622
2017-02-11 22:17:41.654 17423 DEBUG cinder.service [req-9b064618-21cd-4400-a428-70b909c3d141 - - - - -] enable_unsupported_driver : False wait /usr/lib/python2.7/site-packages/cinder/service.py:611
2017-02-11 22:17:41.709 17423 DEBUG oslo_service.service [req-9b064618-21cd-4400-a428-70b909c3d141 - - - - -] enable_unsupported_driver      = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2622

Approach to tackle the symptom:

To rectify this issue, specify the NFS backup driver in the cinder.conf file:

backup_driver = cinder.backup.drivers.nfs

Serial 6: Grant permission to the backup volume on the NFS server

Symptom:

OSError: [Errno 13] Permission denied: '/var/lib/cinder/backup_mount/f

Approach to tackle the symptom:

chown cinder:cinder -R /var/lib/cinder/backup_mount/
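Putting Step 1 together, below is a minimal, hedged sketch of how the relevant cinder.conf entries could look once filled in. The backend name, credentials, and IP addresses are placeholders for illustration, not values from a real deployment, and the NetApp driver path shown is the commonly used unified driver; confirm the exact driver class and option names against your vendor’s documentation.

# /etc/cinder/cinder.conf (sketch; all values below are placeholders)
[DEFAULT]
enabled_backends = netapp1
volume_name_template = openstack-%s
# NFS backup target (see Serial 5; the mount must be owned by cinder:cinder, see Serial 6)
backup_driver = cinder.backup.drivers.nfs
backup_share = 192.168.10.5:/nfsshare

# storage box section and multipath flag at the end of the file, as noted above
[netapp1]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
volume_backend_name = netapp1
san_login = admin
san_password = secret
san_ip = 192.168.20.5
use_multipath_for_image_xfer = True

After editing, restart the four Cinder services from Step 3 for the settings to take effect.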

Aziro Marketing


What are the Benefits of Azure Resource Manager?

Ever since cloud technology started becoming affordable and easily available, users have been happy to deploy resources in the cloud with the help of cloud giants such as Microsoft Azure, Amazon AWS, and Google, to name a few. Microsoft Azure has two different management frameworks for deploying and managing resources. Let’s understand and compare these two frameworks.

Infrastructure as a Service (IaaS) involves the hosting of hardware, software, servers, storage, and related components. IaaS platforms are highly scalable and hence best suited for unexpectedly changing workloads. IaaS includes automation of administrative tasks, dynamic scaling, desktop virtualization, and policy-based services. Many complex applications running on IaaS require a combination of resources, for example a virtual network, a virtual machine, a storage account, and a network interface.

To help users with this combination, Azure first introduced the Azure Service Management (ASM) API. Being a REST-based API, it lets users manage deployments, hosted services, and storage accounts. ASM deployments include viewing, creating, deleting, and modifying configuration settings, among other functions. ASM, now known as Classic, is a service that gives users programmatic access to many of the functions available through the Management Portal. In this management model, a cloud service acts as a container for resource deployments. Load balancers, with a public IP, are tightly coupled to the cloud service to give internet access to the VMs in the cloud service and balance the traffic coming to them. Though this works, there are some major concerns that users face:

– ASM has resource-specific APIs and no facility to create a resource group. Hence, it is not possible to manage all the resources in a single coordinated operation; deploying, starting, configuring, and managing every resource individually is time-consuming and tedious.
– ASM lacks control over access to resources. Any user who has access to the cloud service has access to all the resources in it, so users whose role is not specific to a resource still get access to it, which is not safe.
– ASM uses XML templates, which are harder to maintain than JSON templates.
– Dynamic IP addresses of VMs are routable only within the cloud service. Hence, VMs in two different cloud services, though part of the same subscription, cannot communicate directly; those cloud services need to be brought under the same virtual network.
– Since every resource has to be provisioned independently, dependencies are not taken care of automatically. For example, before provisioning a VM, a storage account needs to be created, because the VM needs it for storing its components. Such needs are not managed automatically.
– As there is no grouping facility, the user has to track the relationships amongst resources on their own to determine the total cost of a web application. This process is not easy.
– If automated provisioning fails while provisioning dependent resources, it has to be handled manually, which is not the right thing to do.

Considering these drawbacks, Azure came up with a new management framework, Azure Resource Manager (ARM). It helps users deploy the related and interdependent components of a single unit as a group, following the concept of a resource group: resources can be grouped logically.
Though ARM is not a complete replacement for ASM as of now, it definitely addresses these issues and makes resource management simpler. The benefits of ARM are:

– Users can spin up a JSON template which includes instructions for creating all the resources of a resource group (for example a VM, storage, database, and network) in real time. This facilitates deploying all the resources in a single operation, and managing JSON templates is simpler than managing XML templates (a minimal template sketch appears at the end of this post).
– Since a resource group is treated as a single unit of management, it becomes easier to identify the costs for an entire resource group and manage accounting.
– ARM provides Role-Based Access Control (RBAC) to secure the resources. Hence, a user with a role specific to a resource has access only to that resource.
– Since the deployment is template-based, ARM identifies the already existing resources and provisions only those that are missing.
– All the VMs are part of a virtual network, so all VMs can communicate with each other.
– ARM provisions resources that are not interdependent simultaneously. This is handled automatically by its management framework.
– A cloud service is not needed, as the availability set itself is the container that indicates the availability of resources. A virtual IP is needed only when users create a load balancer to manage traffic coming to the virtual machines.
– There are three fault domains in ARM, which helps in having more VMs in an availability set. Since fault domains are racks in which VMs are provisioned, if one rack fails due to a network or power issue, the VMs in other racks continue to provide services.
– The network adapter object is placed individually in the virtual network and then attached to a VM. This helps because, if the VM to which it is attached fails, the same network adapter object can be attached to another VM.
– Resource Manager helps in tagging the resources. Tags are key/value pairs that identify resources with properties the user defines. If resources from the same category are tagged with the same tag, they can be viewed together even if they lie in different resource groups.
– ARM takes care of the dependencies of the resources in the resource group. It identifies the dependent resources and provisions the resources that need to exist before them.

ARM has started supporting most of the services, but some are yet unsupported. Users can continue to use ASM for their existing deployments and consider ARM for new deployments. A tailored strategy to adopt ARM, supported by a trusted cloud consulting services provider, can be fruitful.
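To make the template idea concrete, here is a minimal, hedged sketch of an ARM JSON template that deploys a single storage account into a resource group; the account name, resource group name, and API version are illustrative assumptions, not values from any real subscription.

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "comments": "One entry per resource; VMs, networks, and databases follow the same pattern.",
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2016-01-01",
      "name": "demostore001",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "Storage",
      "properties": {}
    }
  ]
}

Saved as azuredeploy.json, such a template can be pushed as a single operation with the Azure CLI, for example az group create --name demo-rg --location eastus followed by az group deployment create --resource-group demo-rg --template-file azuredeploy.json (the resource names are hypothetical).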

Aziro Marketing


3 Reasons Why Converged Data Centers Actually Exist

Technically, a converged infrastructure is named so because it pools compute, network, and storage resources to simplify management, making it easier for shared resources to scale up or down, move, and better support the fluctuating demands of a data center. But is it really the way to go? Is it business-critical and decisive for the development of next-generation data center infrastructure? Let’s take a look under the hood.

Why Convergence?

There are several parameters contributing to the emergence of converged data centers. However, we will limit ourselves to these 3:

COTS – Commercial Off-the-shelf Hardware
The emergence of commercial off-the-shelf (COTS) hardware has essentially led to the disaggregation of software and hardware, now making it possible for data centers to be managed by virtualized software platforms. This means we no longer need a wide range of proprietary, specialized systems, but virtualized platforms that can connect, or converge, into software-defined infrastructure.

Reduced CAPEX and OPEX
While COTS established industry standards for hardware commodities, SDx infrastructure has made sure to confer benefits such as:
– Reduced CAPEX – storage, network, and compute resources are now coupled as one appliance
– Reduced OPEX – less consumption of real estate and power

Readiness for Hybrid Cloud
Converged infrastructure has become the underlying foundation for hybrid cloud. The ‘converged’ approach makes it possible to have a scaled-out infrastructure that can be integrated and orchestrated easily with a hybrid cloud environment.

Are the Top Players Game?

EMC
With VMware as a strategic partner, and along with VCE, EMC has pretty much all the components of converged and hyperconverged to itself. Hence, it is set to reap convergence benefits via its own storage, VMware’s virtualization platform, and Dell’s hardware.

VMware
VMware’s VSAN pretty much takes care of everything converged or hyperconverged. With products such as EVO:RAIL and EVO:SDDC, VMware is already set to lead the converged and hyperconverged trajectory.

Cisco
Earlier with UCS and FCoE, and now with its HyperFlex line of servers, Cisco wins the title of an early mover as well as a sustained player in the game of converged and hyperconverged.

Up for Some Numbers?

According to IDC, worldwide converged systems revenue increased by 8.3% in 2015, to $10.6 billion.
(Source: IDC Worldwide Converged Systems Tracker, March 31, 2016)

Aziro Marketing


Cloud Orchestration: Everything you want to know

Have you ever wondered how complex online systems are? Systems such as online airline ticket booking, internet services, scientific research data systems, and social networking sites make the end user’s job simple. In reality, these systems have complex structures, with complex processes running in the background, that make them work as a single workflow.

Consider a case where a user orders a service using an application hosted in the cloud. The interface makes the entire process of ordering, approving, and provisioning look simple, as if it were a single application hosted on one cloud. Most of the time, it is actually a set of applications hosted in various cloud environments, some to process data and some to store it, with various platforms and infrastructure involved. From the user’s point of view this makes no difference, but the service provider, whose system consists of various applications behind a single interface, needs to manage the parts of the system (modules of an app, and various interlinked apps) hosted in those various cloud environments. Managing all the parts of the system needs automation, to minimize admin intervention.

So, what exactly does the service provider need to manage? Service providers need to ensure the system is up and running all the time. As traffic grows, the system needs to be scaled by creating new environments. Creating a new environment involves functions such as spinning up a VM, adding new instances during an auto-scaling event (with auto scaling groups and elastic load balancers), and configuring the OS. Automating all these functions is part of the cloud automation process; the functions may also involve deployment automation tools, which engineers must arrange in a definite order under specific security groups. All of this involves numerous manual tasks that engineers need to complete to create an environment. This is where cloud orchestration helps.

Cloud orchestration is a way to manage, coordinate, and provision all the components of a cloud platform automatically, from a common interface. It orchestrates the physical as well as the virtual resources of the cloud platform. Cloud orchestration is a must because cloud services scale up arbitrarily and dynamically, include fulfillment, assurance, and billing, and require workflows in various business and technical domains.

Orchestration tools combine automated tasks by interconnecting the processes running across heterogeneous platforms in multiple locations. They use declarative templates to convert the interconnected processes into a single workflow, orchestrated so that creating a new environment takes a single API call (see the sketch at the end of this post). Creating these declarative templates, though complex and time-consuming, is simplified by the orchestration tools.

Cloud orchestration includes two types of models: the single-cloud model and the multi-cloud model. In the single-cloud model, all the applications designed for a system run on the same IaaS platform (the same cloud service provider). Applications interconnected to create a single workflow, but running on various cloud platforms for the same organization, define the multi-cloud model. The IaaS requirements of some applications, though designed for the same system, might vary. This results in availing the services of multiple cloud service providers.
For example, an application holding patients’ sensitive medical data might reside in one IaaS, whereas the application for online OPD appointment booking might reside in another, but they are interconnected to form one system. This is called multi-cloud orchestration. Multi-cloud models provide high redundancy compared with single-IaaS deployments, which reduces the risk of downtime.

Key features of the Multi-Cloud Model
– Flexibility to run applications on various IaaS platforms depending on the applications’ needs
– Higher redundancy than single-cloud models, thus reducing the downtime risk
– Interoperability across multiple cloud environments

Benefits of Cloud Orchestration

Reduce overall IT costs:
– By reducing, to a large extent, the number of administrators required per server
– By reusing IT resources depending upon business demand, saving the cost of new purchases
– By paying only for those resources that are actually being used

Improve delivery times and free up engineering time for new projects:
– By reducing provisioning time from weeks to hours
– By increasing capacity with virtual servers, avoiding the addition of physical hardware
– By providing a self-service management facility for end users

Have smooth coordination between systems teams and development teams:
– By standardizing service descriptions, policies, and SLAs
– By building automated provisioning templates

Make the catalog of services available through a single pane of glass:
– By aligning the business perspective with the IT perspective

Conclusion

Without cloud orchestration it is difficult to optimize cloud computing to its maximum potential. Owing to its multi-fold benefits, you can be assured that cloud orchestration helps service providers scale, reduce downtime risks, and seamlessly align their various processes for a great user experience.
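To make the “declarative template, single call” idea concrete, here is a minimal, hedged sketch of an OpenStack Heat template; the image, flavor, and network names are illustrative assumptions, and real templates typically declare many interdependent resources.

heat_template_version: 2016-04-08

description: Minimal sketch - one server is declared, not scripted

resources:
  app_server:
    type: OS::Nova::Server
    properties:
      name: app-server
      image: centos7        # hypothetical image name
      flavor: m1.small      # hypothetical flavor
      networks:
        - network: private  # hypothetical network

The orchestration engine turns the whole declaration into a running environment with a single call, for example openstack stack create -t app_stack.yaml app-stack; tools such as AWS CloudFormation and Terraform follow the same declarative pattern.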

Aziro Marketing


Learn how to automate cloud infrastructure with ceph storage

The success of virtual machines (VMs) is well known today, with their mass adoption everywhere. We have well-established workflows and tool sets to help manage VM life cycles and associated services, and the proliferation and growth of this cycle ultimately led to cloud deployments. Amazon Web Services and Google Cloud Engine are a few of the dozens of service providers today who offer terms and services that make provisioning VMs anywhere easier. With both the proliferation of cloud providers and the scale of the cloud come today’s newer set of problems: configuration management and provisioning of those VMs has become a nightmare. While the dependency on physical infrastructure has been virtually eliminated, it has created another domain, configuration management of those VMs (and clusters), that needs to be addressed. A slew of tool sets came out to address this. Chef, Puppet, Ansible, and SaltStack are widely known and used everywhere, SaltStack being the latest entrant to this club. Given our Python background, we look at SaltStack as a configuration management tool. We also use another new entrant, Terraform, for provisioning the VMs needed in the cluster and bootstrapping them to run SaltStack.

1 Introduction

With a proliferation of cloud providers offering Infrastructure as a Service, there has been constant innovation to deliver more; Microsoft Azure, Amazon Web Services, and Google Cloud Engine, to name a few. This has resulted in the Platform as a Service model, wherein not just the infrastructure is managed, but more tools and workflows are defined to make application development and deployment easier. Google App Engine was one of the earliest success stories here. Nevertheless, any user of these cloud platforms runs into several headaches:

– Vendor lock-in of technologies, since services and interfaces for the cloud are not a standard.
– Re-creating a platform from development to elsewhere is a pain.
– Migration from one cloud provider to another is a nightmare.

The following requirements flow from these pain points and have dawned on everyone using cloud deployments:

– A specification for infrastructure, so it can be captured and restored as need be. By infrastructure we mean a cluster of VMs and associated services, so network configuration, high availability, and other services as dictated by the service provider have to be captured.
– A way to capture the bootstrap logic and configuration on that infrastructure.
– The captured configuration should ideally be agnostic to the cloud provider.
– All of this in a tool that is understood by everyone, so it is simple and easily adoptable, is a major plus.

When I looked at the suite of tools, Ruby and Ruby on Rails were alien to me; Python was native. SaltStack had some nice features that we could really consider. If SaltStack can bootstrap and initialize resources, Terraform can help customize external environments as well. Put them to work together and we see a great marriage on the cards. But will they measure up? Let us brush through some of their designs, get to a real-life scenario, and see how they scale up indeed.

2 Our Cloud Toolbox

2.1 Terraform

Terraform is a successor to Vagrant from the stable of HashiCorp. Vagrant made spawning VMs a breeze for developers. The key tenets of Vagrant that made it well loved are its lightweight, reproducible, and portable environments. Today, the power of Vagrant is well known.
As I see it, bootstrapping distributed cluster applications was not easily doable with Vagrant. So we have Terraform from the same makers, who understood the limitations of Vagrant and made bootstrapping clustered environments easier. Terraform defines extensible providers, which encapsulate connectivity information specific to each cloud provider, and resources, which encapsulate services from each cloud provider. Each resource can be extended by one or more provisioners. A provisioner is the same concept as in Vagrant but is much more extensible: a provisioner in Vagrant can only provision newly created VMs, and here enters the power of Terraform.

Terraform has support for local-exec and remote-exec, through which one can run extensible scripts either locally or on remote machines. As the names imply, local-exec runs locally on the node where the script is invoked, while remote-exec executes on the targeted remote machine. Several properties of the new VM are readily available, and additional custom attributes can be defined through output specifications as well. Additionally, there exists a null-resource, a pseudo resource which, together with explicit dependency support, transforms Terraform into a powerhouse. All of these provide much greater flexibility for setting up complex environments, beyond just provisioning and bootstrapping VMs. A better place to understand Terraform in all its glory is their documentation page [3].

2.2 SaltStack

SaltStack is used to deploy, manage, and automate infrastructure and applications at cloud scale. SaltStack is written in Python and uses the Jinja template engine. SaltStack is architected to have a Master node and one or more Minion nodes; multiple Master nodes can also be set up to create a highly available environment. SaltStack brings some new terminology with it that needs some familiarity, but once it is understood, it is fairly easy to use for our purpose. I shall briefly touch upon SaltStack here, and would rightly point to their rich documentation [4].

To put it succinctly, Grains are read-only key-value attributes of Minions. All Minions export their immutable attributes to the Salt Master as Grains. As an example, one can find CPU speed, CPU make, CPU cores, memory capacity, disk capacity, OS flavor and version, network cards, and much more as part of a node’s Grains. Pillar is part of the Salt Master, holding all the customization needed across the cluster. Configuration kept in Pillar can be targeted to minions, and only those minions will have that information available. As an example, using Pillar one can define two sets of users/groups to be configured on nodes in the cluster: minion nodes that are part of the Finance domain get one set of users applied, while those in the Dev domain get another set. The user/group definition is written once on the Salt Master as a Pillar file, and can be targeted based on a minion node’s domain name, part of its Grains. A few other examples: package variations across distributions can be handled easily. Any operations person can relate to the nightmare of automating a simple request to install the Apache web server on any Linux distribution (hint: the complexity lies in the non-standard Linux distributions). Pillar is your friend in this case.
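Before moving on, a quick hedged illustration of how Grains and Pillar are inspected, and how Grains drive targeting, from the Salt Master; the minion IDs and grain values below are hypothetical, but the commands and grain names are standard SaltStack:

# Inspect a few read-only Grains exported by one minion
salt 'ceph-osd-0' grains.item os num_cpus mem_total

# Show the Pillar data rendered for that same minion
salt 'ceph-osd-0' pillar.items

# Target minions by a grain value (-G) instead of by minion ID
salt -G 'os:CentOS' test.ping

The case study below uses exactly this targeting style from top.sls, applying different states to the ceph-monitor-* and ceph-osd-* nodes.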
All of this configuration, part of either Pillar or the state tree, is (confusingly) written in the same file format (.sls) and called Salt States. These Salt State files specify the configuration to be applied, either explicitly or through Jinja templating. A top-level state file exists for both Pillar [default location: /srv/pillar/top.sls] and State [default location: /srv/salt/top.sls], wherein the targeting of configuration can be accomplished.

3 Case Study

Let us understand the power of Terraform and SaltStack together in action, for a real-life deployment. Ceph is an open source distributed storage cluster. Needless to say, setting up a Ceph cluster is a challenge even with all the documents available [6]. Even while using the ceph-deploy script, one needs to satisfy pre-flight prerequisites before it can be used. This case study shall first set up a cluster with the prerequisites met and then use ceph-deploy over it to bring up the Ceph cluster.

Let us try to use the power of the tools we have chosen and summarize our findings while setting up the Ceph cluster. Is it really that powerful and easy to create and replicate the environment anywhere? Let us find out.

We shall replicate a setup similar to the one provided in the Ceph documentation. We shall have 4 VM nodes in the cluster: ceph-admin, ceph-monitor-0, ceph-osd-0, and ceph-osd-1. Even though our cluster has only a single ceph-monitor node, I have suffixed it with an instance number. This allows later expansion of the monitors as needed, since Ceph does allow multiple monitor nodes. It is assumed that the whole setup is being created from one’s personal desktop/laptop environment, which is behind a company proxy and cannot act as the Salt Master. We shall use Terraform to create the needed VMs and bootstrap them with the appropriate configuration to run as either Salt Master or Salt Minion. The ceph-admin node shall act as the Salt Master node as well, and hold all the configuration necessary to install, initialize, and bring up the whole cluster.

3.1 Directory structure

We shall host all files in the directory structure below; this structure is assumed in the scripts, and the files are referenced below in greater detail. (The original listing was lost in publishing; the layout below is a plausible reconstruction from the paths referenced in this article.)

./provider.tf
./terraform.tfvars
./admin.tf
./monitor.tf
./minion.tf
./cluster-init.tf
./cluster-bootstrap.tf
./scripts/                 (shell helpers, copied to /opt/scripts on every node)
./scripts/cephdeploy.repo
./scripts/salt/salt/       (Salt state tree, copied to /srv/salt on ceph-admin)
./scripts/salt/pillar/     (Pillar tree, copied to /srv/pillar on ceph-admin)

We shall use DigitalOcean as our cloud provider for this case study. I am assuming the work machine is signed up with DigitalOcean to enable automatic provisioning of systems; I will use my local work machine for this purpose. To work with DigitalOcean and provision VMs automatically, there are two steps involved:

1. Create a Personal Access Token (PAN), which is a form of authentication token to enable auto-provisioning of resources [7]. The key created has to be saved securely, as it cannot be recovered again from their console.
2. Use the PAN to add the public key of the local work machine, to enable automatic login into newly provisioned VMs [8]. This is necessary to allow passwordless ssh sessions, which enable further customization auto-magically on those created VMs.

The next step is to define these details as part of Terraform; let us name this file provider.tf.

variable "do_token" {}
variable "pub_key" {}
variable "pvt_key" {}
variable "ssh_fingerprint" {}

provider "digitalocean" {
  token = "${var.do_token}"
}

The above defines input variables that need to be properly set up for provisioning services with a particular cloud provider. do_token is the PAN obtained during registration from DigitalOcean directly.
The other three properties are used to set up the provisioned VMs to enable automatic login into them from our local work machine. The ssh fingerprint can be obtained by running ssh-keygen as below:

user@machine> ssh-keygen -lf ~/.ssh/myrsakey.pub
2048 00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff /home/name/.ssh/myrsakey.pub (RSA)

The above input variables can be assigned values in a file, so they are automatically initialized instead of being requested from the end user every time the scripts are invoked. The special file Terraform looks for to initialize the input variables is terraform.tfvars. Below is sample content for that file:

do_token="07a91b2aa4bc7711df3d9fdec4f30cd199b91fd822389be92b2be751389da90e"
pub_key="/home/name/.ssh/id_rsa.pub"
pvt_key="/home/name/.ssh/id_rsa"
ssh_fingerprint="0:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff"

These settings should ensure a successful connection with the DigitalOcean cloud provider and enable one to provision services through automation scripts.

3.2 Ceph-Admin

Now let us spawn and create a VM to act as our Ceph-Admin node. For each node type, let us create a separate Terraform file to hold the configuration. It is not a must, but it helps keep one's sanity while perusing the code and is self-explanatory.

For Ceph-Admin we have captured the bootstrapping as part of the Terraform configuration, while the configuration of the remaining nodes is captured in Salt state files. It is possible to run a salt minion on the Ceph-Admin node as well and apply configuration that way; we instead chose Terraform for bootstrapping Ceph-Admin entirely, to help us understand both ways. In either case, the configuration is captured as a spec and is readily replicable anywhere. The power of Terraform lies not just in the configuration and provisioning of VMs but of external environments as well.

The Ceph-Admin node shall not only satisfy the Ceph cluster installation prerequisites, but also have the Salt Master running on it. It shall have two users defined: cephadm, with sudo privileges over the entire cluster, and the demo user. The ssh keys are generated every time the cluster is provisioned, without caching and replicating the keys, and the user profile is replicated on all nodes in the cluster. The Salt configuration and state files have to be set up additionally; setting up this configuration based on the attributes of the provisioned cluster introduces a dependency here. This dependency is very nicely handled by Terraform through its null resources and explicit dependency chains.

3.2.1 admin.tf – Terraform Listing

Below is admin.tf, which holds the configuration necessary to bring up the ceph-admin node, with embedded comments.

# resource maps directly to services provided by cloud providers.
# it is always of the form x_y, wherein x is the cloud provider and y is the targeted service.
# the last part that follows is the name of the resource.
# below initializes attributes that are defined by the cloud provider to create the VM.
resource "digitalocean_droplet" "admin" {
  image = "centos-7-0-x64"
  name = "ceph-admin"
  region = "sfo1"
  size = "1gb"
  private_networking = true
  ssh_keys = [
    "${var.ssh_fingerprint}"
  ]

  # below defines the connection parameters necessary to do ssh for further customization.
  # For this to work passwordless, the ssh keys should be pre-registered with the cloud provider.
  connection {
    user = "root"
    type = "ssh"
    key_file = "${var.pvt_key}"
    timeout = "10m"
  }

  # All the provisioners below perform the actual customization and run
  # in the order specified in this file.
# "remote-exec" performs action on the remote VM over ssh. # Below one could see some necessary directories are being created. provisioner "remote-exec" { inline = [ "mkdir -p /opt/scripts /srv/salt /srv/pillar", "mkdir -p /srv/salt/users/cephadm/keys /srv/salt/users/demo/keys"', "mkdir -p /srv/salt/files", ] } # "file" provisioner copies files from local workmachine (where the script is being run) to # remote VM. Note the directories should exist, before this can pass. # The below copies the whole directory contents from local machine to remote VM. # These scripts help setup the whole environment and can be depended to be available at # /opt/scripts location. Note, the scripts do not have executable permission bits set. # Note the usage of "path.module", these are interpolation extensions provided by Terraform. provisioner "file" { source = "${path.module}/scripts/" destination = "/opt/scripts/" } # A cephdeploy.repo file has to be made available at yum repo, for it to pick ceph packages. # This requirement comes from setting up ceph storage cluster. provisioner "file" { source = "${path.module}/scripts/cephdeploy.repo" destination = "/etc/yum.repos.d/cephdeploy.repo" } # Setup handcrafted custom sudoers file to allow running sudo through ssh without terminal connection. # Also additionally provide necessary sudo permissions to cephadm user. provisioner "file" { source = "${path.module}/scripts/salt/salt/files/sudoers" destination = "/etc/sudoers" } # Now, setup yum repos and install packages as necessary for Ceph admin node. # Additionally ensure salt-master is installed. # Create two users, cephadm privileged user with sudo access for managing the ceph cluster and demo guest user. # The passwords are also set accordingly. # Remember to set proper permissions to the scripts. # The provisioned VM attributes can be easily used to customize several properties as needed. In our case, # the IP address (public and private), VM host name are used to customize the environment further. # For ex: hosts file, salt master configuration file and ssh_config file are updated accordingly. provisioner "remote-exec" { inline = [ "export PATH=$PATH:/usr/bin", "chmod 0440 /etc/sudoers", "yum install -y epel-release yum-utils", "yum-config-manager --enable cr", "yum install -y yum-plugin-priorities", "yum clean all", "yum makecache", "yum install -y wget salt-master", "cp -af /opt/scripts/salt/* /srv", "yum install -y ceph-deploy --nogpgcheck", "yum install -y ntp ntpdate ntp-doc", "useradd -m -G wheel cephadm", "echo \"cephadm:c3ph@dm1n\" | chpasswd", "useradd -m -G docker demo", "echo \"demo:demo\" | chpasswd", "chmod +x /opt/scripts/*.sh", "/opt/scripts/fixadmin.sh ${self.ipv4_address} ${self.ipv4_address_private} ${self.name}", ] } }3.2.2 Dependency scripts – fixadmin.shBelow we list the scripts referenced from above Terraform file. fixadmin.sh script will be used to customize the VM further after creation. This script shall per- form the following functions. It shall update cluster information in /opt/nodes directory, to help further customization to know the cluster attributes (read net- work address etc). Additionally, it patches several configuration files to enable automation without intervention.intervention.#!/bin/bash # Expects ./fixadmin.sh # Performs the following. # a. caches cluster information in /opt/nodes # b. patches /etc/hosts file to connect through private-ip for cluster communication. # c. patches ssh_config file to enable auto connect without asking confirmation for given node. # d. 
# d. creates two users, with appropriate ssh keys
# e. customizes the salt configuration with cluster properties.
mkdir -p /opt/nodes
chmod 0755 /opt/nodes
echo "$1" > /opt/nodes/admin.public
echo "$2" > /opt/nodes/admin.private
rm -f /opt/nodes/masters*
sed -i '/demo-admin/d' /etc/hosts
echo "$2 demo-admin" >> /etc/hosts
sed -i '/demo-admin/,+1d' /etc/ssh/ssh_config
echo "Host demo-admin" >> /etc/ssh/ssh_config
echo " StrictHostKeyChecking no" >> /etc/ssh/ssh_config
for user in cephadm demo; do
  rm -rf /home/${user}/.ssh
  su -c "cat /dev/zero | ssh-keygen -t rsa -N \"\" -q" ${user}
  cp /home/${user}/.ssh/id_rsa.pub /srv/salt/users/${user}/keys/key.pub
  cp /home/${user}/.ssh/id_rsa.pub /home/${user}/.ssh/authorized_keys
done
systemctl enable salt-master
systemctl stop salt-master
sed -i '/interface:/d' /etc/salt/master
echo "#script changes below" >> /etc/salt/master
echo "interface: ${2}" >> /etc/salt/master
systemctl start salt-master

3.2.3 Dependency – Ceph yum repo spec

cephdeploy.repo defines a yum repo from which to fetch the ceph-related packages. Below it is customized to install the ceph Hammer packages on CentOS; this comes directly from the ceph prerequisites.

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-hammer/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

3.3 Ceph-Monitor

Let monitor.tf be the file that holds all the configuration necessary to bring up the ceph-monitor node.

# resource specifies the attributes required to bring up the ceph-monitor node.
# Note how the node name has been customized with an index, and the usage of 'count'.
# 'count' is a special attribute that lets one create multiple instances of the same spec.
# That easy!
resource "digitalocean_droplet" "master" {
  image = "centos-7-0-x64"
  name = "ceph-monitor-${count.index}"
  region = "sfo1"
  size = "512mb"
  private_networking = true
  ssh_keys = [
    "${var.ssh_fingerprint}"
  ]
  count = 1

  connection {
    user = "root"
    type = "ssh"
    key_file = "${var.pvt_key}"
    timeout = "10m"
  }

  provisioner "remote-exec" {
    inline = [
      "mkdir -p /opt/scripts /opt/nodes",
    ]
  }

  provisioner "file" {
    source = "${path.module}/scripts/"
    destination = "/opt/scripts/"
  }

  # This provisioner has an implicit dependency on the admin node being available.
  # Below we use the admin node's property to fix ceph-monitor's salt minion configuration
  # file, so it can reach the salt master.
  provisioner "remote-exec" {
    inline = [
      "export PATH=$PATH:/usr/bin",
      "yum install -y epel-release yum-utils",
      "yum-config-manager --enable cr",
      "yum install -y yum-plugin-priorities",
      "yum install -y salt-minion",
      "chmod +x /opt/scripts/*.sh",
      "/opt/scripts/fixsaltminion.sh ${digitalocean_droplet.admin.ipv4_address_private} ${self.name}",
    ]
  }
}

3.4 Ceph-Osd

Let minion.tf contain the configuration necessary to bring up the ceph-osd nodes.

resource "digitalocean_droplet" "minion" {
  image = "centos-7-0-x64"
  name = "ceph-osd-${count.index}"
  region = "sfo1"
  size = "1gb"
  private_networking = true
  ssh_keys = [
    "${var.ssh_fingerprint}"
  ]

  # Here we specify two instances of this specification. As above, the
  # hostnames are already customized by using interpolation.
  count = 2

  connection {
    user = "root"
    type = "ssh"
    key_file = "${var.pvt_key}"
    timeout = "10m"
  }

  provisioner "remote-exec" {
    inline = [
      "mkdir -p /opt/scripts /opt/nodes",
    ]
  }

  provisioner "file" {
    source = "${path.module}/scripts/"
    destination = "/opt/scripts/"
  }

  provisioner "remote-exec" {
    inline = [
      "export PATH=$PATH:/usr/bin",
      "yum install -y epel-release yum-utils yum-plugin-priorities",
      "yum install -y salt-minion",
      "chmod +x /opt/scripts/*.sh",
      "/opt/scripts/fixsaltminion.sh ${digitalocean_droplet.admin.ipv4_address_private} ${self.name}",
    ]
  }
}

3.4.1 Dependency – fixsaltminion.sh script

This script lets each salt minion node fix its configuration so it can reach the Salt master. Other salt minion attributes are customized as well.

#!/bin/bash
# Ensures the salt-minion node's configuration file is patched to reach the Salt master.
# args: <salt-master-ip> <minion-id>  (argument list inferred from the callers above)
systemctl enable salt-minion
systemctl stop salt-minion
sed -i -e '/master:/d' /etc/salt/minion
echo "#scripted below config changes" >> /etc/salt/minion
echo "master: ${1}" >> /etc/salt/minion
echo "${2}" > /etc/salt/minion_id
systemctl start salt-minion

3.5 Cluster Pre-flight Setup

Null resources are great extensions to Terraform, providing the flexibility needed to configure complex cluster environments. Let us create cluster-init.tf to fix up the configuration dependencies in the cluster.

resource "null_resource" "cluster-init" {
  # So far we have relied on the implicit dependency chain without specifying one.
  # Here we ensure that this resource gets run only after the successful creation of its
  # dependencies.
  depends_on = [
    "digitalocean_droplet.admin",
    "digitalocean_droplet.master",
    "digitalocean_droplet.minion",
  ]

  connection {
    host = "${digitalocean_droplet.admin.ipv4_address}"
    user = "root"
    type = "ssh"
    key_file = "${var.pvt_key}"
    timeout = "10m"
  }

  # Below we run a few other scripts based on the cluster configuration,
  # and finally ensure all the other nodes in the cluster are ready for
  # ceph installation.
  provisioner "remote-exec" {
    inline = [
      "/opt/scripts/fixmasters.sh ${join(\" \", digitalocean_droplet.master.*.ipv4_address_private)}",
      "/opt/scripts/fixslaves.sh ${join(\" \", digitalocean_droplet.minion.*.ipv4_address_private)}",
      "salt-key -Ay",
      "salt -t 10 '*' test.ping",
      "salt -t 20 '*' state.apply common",
      "salt-cp '*' /opt/nodes/* /opt/nodes",
      "su -c /opt/scripts/ceph-pkgsetup.sh cephadm",
    ]
  }
}

3.5.1 Dependency – fixmasters.sh script

#!/bin/bash
# This script fixes the hosts file and collects cluster info under /opt/nodes.
# It also updates ssh_config accordingly, to ensure passwordless ssh can happen to
# other nodes in the cluster without prompting for confirmation.
3.5.2 Dependency – fixslaves.sh script

This script mirrors fixmasters.sh for the OSD nodes.

#!/bin/bash
# This script fixes the hosts file and collects cluster info under /opt/nodes.
# It also updates ssh_config to ensure passwordless ssh can happen to the
# other nodes in the cluster without prompting for confirmation.
# args: <osd-ip> [<osd-ip> ...]
NODES=""
i=0
mkdir -p /opt/nodes
chmod 0755 /opt/nodes
rm -f /opt/nodes/minions*
for ip in "$@"
do
    NODE="ceph-osd-$i"
    sed -i "/$NODE/d" /etc/hosts
    echo "$ip $NODE" >> /etc/hosts
    echo $NODE >> /opt/nodes/minions
    echo "$ip" >> /opt/nodes/minions.ip
    sed -i "/$NODE/,+1d" /etc/ssh/ssh_config
    NODES="$NODES $NODE"
    i=$[i+1]
done
echo "Host $NODES" >> /etc/ssh/ssh_config
echo "  StrictHostKeyChecking no" >> /etc/ssh/ssh_config

3.5.3 Dependency – ceph-pkgsetup.sh script

#!/bin/bash
# Has to be run as user 'cephadm' with sudo privileges.
# Installs the ceph packages on all nodes in the cluster.
mkdir -p $HOME/my-cluster
cd $HOME/my-cluster
OPTIONS="--username cephadm --overwrite-conf"
echo "Installing ceph components"
RELEASE=hammer
for node in `sudo cat /opt/nodes/masters`
do
    ceph-deploy $OPTIONS install --release ${RELEASE} $node
done
for node in `sudo cat /opt/nodes/minions`
do
    ceph-deploy $OPTIONS install --release ${RELEASE} $node
done

3.6 Cluster Bootstrapping

With the previous section, we have successfully set up the cluster to meet all the prerequisites for installing ceph. The final bootstrap script below ensures that the needed ceph functionality gets applied before the nodes are brought online.

File: cluster-bootstrap.tf

resource "null_resource" "cluster-bootstrap" {
    depends_on = [
        "null_resource.cluster-init",
    ]

    connection {
        host = "${digitalocean_droplet.admin.ipv4_address}"
        user = "root"
        type = "ssh"
        key_file = "${var.pvt_key}"
        timeout = "10m"
    }

    provisioner "remote-exec" {
        inline = [
            "su -c /opt/scripts/ceph-install.sh cephadm",
            "salt 'ceph-monitor-*' state.highstate",
            "salt 'ceph-osd-*' state.highstate",
        ]
    }
}

3.6.1 Dependency – ceph-install.sh script

#!/bin/bash
# This script has to be run as user 'cephadm', because this user has
# sudo privileges set all across the cluster.
OPTIONS="--username cephadm --overwrite-conf"

# pre-cleanup.
rm -rf $HOME/my-cluster
for node in `cat /opt/nodes/masters /opt/nodes/minions`
do
    ssh $node "sudo rm -rf /etc/ceph/* /var/local/osd* /var/lib/ceph/mon/*"
    ssh $node "find /var/lib/ceph -type f | xargs sudo rm -rf"
done
mkdir -p $HOME/my-cluster
cd $HOME/my-cluster

echo "1. Preparing for ceph deployment"
ceph-deploy $OPTIONS new ceph-monitor-0
# Adjust the configuration to suit our cluster.
echo "osd pool default size = 2" >> ceph.conf
echo "osd pool default pg num = 16" >> ceph.conf
echo "osd pool default pgp num = 16" >> ceph.conf
echo "public network = `cat /opt/nodes/admin.private`/16" >> ceph.conf

echo "2. Add monitor and gather the keys"
ceph-deploy $OPTIONS mon create-initial

echo "3. Create OSD directory on each minion"
i=0
OSD=""
for node in `cat /opt/nodes/minions`
do
    ssh $node sudo mkdir -p /var/local/osd$i
    ssh $node sudo chown -R cephadm:cephadm /var/local/osd$i
    OSD="$OSD $node:/var/local/osd$i"
    i=$[i+1]
done

echo "4. Prepare OSD on minions - $OSD"
ceph-deploy $OPTIONS osd prepare $OSD

echo "5. Activate OSD on minions"
ceph-deploy $OPTIONS osd activate $OSD

echo "6. Copy keys to all nodes"
for node in `cat /opt/nodes/masters`
do
    ceph-deploy $OPTIONS admin $node
done
for node in `cat /opt/nodes/minions`
do
    ceph-deploy $OPTIONS admin $node
done

echo "7. Set permission on keyring"
sudo chmod +r /etc/ceph/ceph.client.admin.keyring

echo "8. Add in more monitors in cluster if available"
for mon in `cat /opt/nodes/masters`
do
    if [ "$mon" != "ceph-monitor-0" ]; then
        ceph-deploy $OPTIONS mon create $mon
    fi
done
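Once this script has run, it is worth sanity-checking the deployment. As a quick, optional check (not part of the original scripts), one can query the cluster status from the admin node, since the monitor already holds the admin keyring:

# run as cephadm from the admin node; both are standard ceph CLI commands.
ssh ceph-monitor-0 sudo ceph health   # expect HEALTH_OK once both OSDs are in
ssh ceph-monitor-0 sudo ceph -s       # fuller status: monitors, OSD map, PG states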
3.6.2 SaltStack Pillar setup

As mentioned in the directory-structure section, the pillar files live in a specific directory. The configuration and files are customized for each node's specific functionality.

# file: top.sls
base:
  "*":
    - users

# file: users.sls
groups:
users:
  cephadm:
    fullname: cephadm
    uid: 5000
    gid: 5000
    shell: /bin/bash
    home: /home/cephadm
    groups:
      - wheel
    password: $6$zYFWr3Ouemhtbnxi$kMowKkBYSh8tt2WY98whRcq.
    enforce_password: True
    key.pub: True
  demo:
    fullname: demo
    uid: 5031
    gid: 5031
    shell: /bin/bash
    home: /home/demo
    password: $6$XmIJ.Ox4tNKHa4oYccsYOEszswy1
    key.pub: True

3.6.3 SaltStack State files

# file: top.sls
base:
    "*":
        - common
    "ceph-admin":
        - admin
    "ceph-monitor-*":
        - master
    "ceph-osd-*":
        - minion

# file: common.sls
{% for group, args in pillar['groups'].iteritems() %}
{{ group }}:
  group.present:
    - name: {{ group }}
{% if 'gid' in args %}
    - gid: {{ args['gid'] }}
{% endif %}
{% endfor %}

{% for user, args in pillar['users'].iteritems() %}
{{ user }}:
  group.present:
    - gid: {{ args['gid'] }}
  user.present:
    - home: {{ args['home'] }}
    - shell: {{ args['shell'] }}
    - uid: {{ args['uid'] }}
    - gid: {{ args['gid'] }}
{% if 'password' in args %}
    - password: {{ args['password'] }}
{% if 'enforce_password' in args %}
    - enforce_password: {{ args['enforce_password'] }}
{% endif %}
{% endif %}
    - fullname: {{ args['fullname'] }}
{% if 'groups' in args %}
    - groups: {{ args['groups'] }}
{% endif %}
    - require:
      - group: {{ user }}
{% if 'key.pub' in args and args['key.pub'] == True %}
{{ user }}_key.pub:
  ssh_auth:
    - present
    - user: {{ user }}
    - source: salt://users/{{ user }}/keys/key.pub
  ssh_known_hosts:
    - present
    - user: {{ user }}
    - key: salt://users/{{ user }}/keys/key.pub
    - name: "demo-master-0"
{% endif %}
{% endfor %}

/etc/sudoers:
  file.managed:
    - source: salt://files/sudoers
    - user: root
    - group: root
    - mode: 440

/opt/nodes:
  file.directory:
    - user: root
    - group: root
    - mode: 755

/opt/scripts:
  file.directory:
    - user: root
    - group: root
    - mode: 755

# file: admin.sls
include:
  - master

bash /opt/scripts/bootstrap.sh:
  cmd.run

# file: master.sls
# one can include any packages and configuration targeting the ceph monitor nodes here.
masterpkgs:
    pkg.installed:
    - pkgs:
      - ntp
      - ntpdate
      - ntp-doc

# file: minion.sls
# one can include any packages and configuration targeting the ceph osd nodes here.
minionpkgs:
    pkg.installed:
    - pkgs:
      - ntp
      - ntpdate
      - ntp-doc
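Before letting cluster-bootstrap apply the highstate everywhere, it can be reassuring to dry-run a state against a single minion. Salt's built-in test mode (a standard state.apply flag, not something specific to this setup) reports what would change without changing anything:

# from the admin (salt-master) node: dry-run the 'common' state on one OSD node.
salt 'ceph-osd-0' state.apply common test=True
# then apply for real, or rely on cluster-bootstrap's state.highstate call.
salt 'ceph-osd-0' state.apply common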
# file: files/sudoers
# customized for setting up the environment to satisfy
# ceph pre-flight checks.
#
## Sudoers allows particular users to run various commands as
## the root user, without needing the root password.
##
## Examples are provided at the bottom of the file for collections
## of related commands, which can then be delegated out to particular
## users or groups.
##
## This file must be edited with the 'visudo' command.

## Host Aliases
## Groups of machines. You may prefer to use hostnames (perhaps using
## wildcards for entire domains) or IP addresses instead.
# Host_Alias     FILESERVERS = fs1, fs2
# Host_Alias     MAILSERVERS = smtp, smtp2

## User Aliases
## These aren't often necessary, as you can use regular groups
## (ie, from files, LDAP, NIS, etc) in this file - just use %groupname
## rather than USERALIAS
# User_Alias ADMINS = jsmith, mikem

## Command Aliases
## These are groups of related commands...

## Networking
# Cmnd_Alias NETWORKING = /sbin/route, /sbin/ifconfig, /bin/ping, /sbin/dhclient, /usr/bin/net, /sbin/iptables, /usr/bin/rfcomm, /usr/bin/wvdial, /sbin/iwconfig, /sbin/mii-tool

## Installation and management of software
# Cmnd_Alias SOFTWARE = /bin/rpm, /usr/bin/up2date, /usr/bin/yum

## Services
# Cmnd_Alias SERVICES = /sbin/service, /sbin/chkconfig

## Updating the locate database
# Cmnd_Alias LOCATE = /usr/bin/updatedb

## Storage
# Cmnd_Alias STORAGE = /sbin/fdisk, /sbin/sfdisk, /sbin/parted, /sbin/partprobe, /bin/mount, /bin/umount

## Delegating permissions
# Cmnd_Alias DELEGATING = /usr/sbin/visudo, /bin/chown, /bin/chmod, /bin/chgrp

## Processes
# Cmnd_Alias PROCESSES = /bin/nice, /bin/kill, /usr/bin/kill, /usr/bin/killall

## Drivers
# Cmnd_Alias DRIVERS = /sbin/modprobe

# Defaults specification
#
# Disable "ssh hostname sudo", because it will show the password in clear.
#         You have to run "ssh -t hostname sudo".
#
Defaults:cephadm    !requiretty
#
# Refuse to run if unable to disable echo on the tty. This setting should also be
# changed in order to be able to use sudo without a tty. See requiretty above.
#
Defaults    !visiblepw
#
# Preserving HOME has security implications since many programs
# use it when searching for configuration files. Note that HOME
# is already set when the env_reset option is enabled, so
# this option is only effective for configurations where either
# env_reset is disabled or HOME is present in the env_keep list.
#
Defaults    always_set_home
Defaults    env_reset
Defaults    env_keep =  "COLORS DISPLAY HOSTNAME HISTSIZE INPUTRC KDEDIR LS_COLORS"
Defaults    env_keep += "MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE"
Defaults    env_keep += "LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES"
Defaults    env_keep += "LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE"
Defaults    env_keep += "LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY"
#
# Adding HOME to env_keep may enable a user to run unrestricted
# commands via sudo.
#
# Defaults   env_keep += "HOME"
Defaults    secure_path = /sbin:/bin:/usr/sbin:/usr/bin

## Next comes the main part: which users can run what software on
## which machines (the sudoers file can be shared between multiple
## systems).
## Syntax:
##
##      user    MACHINE=COMMANDS
##
## The COMMANDS section may have other options added to it.
##
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL

## Allows members of the 'sys' group to run networking, software,
## service management apps and more.
# %sys ALL = NETWORKING, SOFTWARE, SERVICES, STORAGE, DELEGATING, PROCESSES, LOCATE, DRIVERS

## Allows people in group wheel to run all commands
# %wheel  ALL=(ALL)       ALL

## Same thing without a password
%wheel        ALL=(ALL)       NOPASSWD: ALL

## Allows members of the users group to mount and unmount the
## cdrom as root
# %users  ALL=/sbin/mount /mnt/cdrom, /sbin/umount /mnt/cdrom

## Allows members of the users group to shutdown this system
# %users  localhost=/sbin/shutdown -h now

## Read drop-in files from /etc/sudoers.d (the # here does not mean a comment)
#includedir /etc/sudoers.d

3.7 Putting it all together

I agree, that was a lengthy setup process. But with the configuration above in place, we can now see what it takes to fire up a ceph storage cluster. Hold your breath: just typing terraform apply does it. Really! Yes, it is that easy. Not just that: to bring down the cluster, just type terraform destroy, and to look at the cluster attributes, type terraform show. One can create any number of ceph storage clusters in one go, and replicate and recreate them any number of times. If one wants to expand the number of ceph monitors, simply update the count attribute to your liking, and likewise for the rest of the VMs in the cluster. And not to forget, Terraform also lets you set up your local environment based on the cluster properties through its local-exec provisioner. The combination gets just too exciting, and the options seem endless.
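As a minimal sketch of that last point (purely illustrative; the admin.ip file name is ours, not part of the original setup), a local-exec provisioner added to the cluster-init resource could record the admin node's public address on the machine that runs Terraform, for use by local tooling:

provisioner "local-exec" {
    # Unlike remote-exec, this runs locally on the workstation invoking terraform.
    command = "echo ${digitalocean_droplet.admin.ipv4_address} > admin.ip"
}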
4 Conclusion

Terraform and SaltStack have various functionalities that intersect, but the case study above shows the power the two tools bring to the table together. Specifying infrastructure and its dependencies not just as a specification, but in a form that is reproducible anywhere, is truly a marvel. Cloud technologies, and the myriad tools emerging on the horizon, are redefining software development and deployment lifecycles.

References
[1] HashiCorp, https://hashicorp.com
[2] Vagrant from HashiCorp, https://www.vagrantup.com
[3] Terraform from HashiCorp, https://terraform.io/docs/index.html
[4] SaltStack Documentation, https://docs.saltstack.com/en/latest/contents.html
[5] Ceph Storage Cluster, http://ceph.com
[6] Ceph Storage Cluster Setup, http://docs.ceph.com/docs/master/start/
[7] DigitalOcean Personal Access Token, https://cloud.digitalocean.com/settings/applications#access-tokens

This blog was the winning entry of the Aziro (formerly MSys Technologies) Blogging Championship 2015.

Aziro Marketing


7 Reasons Why Cloud Computing Is The Best For Agile Software Development

Businesses today are reaping huge benefits from cloud computing. Cloud computing has churned out a completely new gamut of services and solutions that lets businesses exhibit their software development prowess. The innovative features and user-friendly nature of the cloud have made it appealing to the IT community as a whole. Cloud computing services now span such a wide range that some of them were barely imaginable a few years ago. The disadvantages of cloud computing are often shrouded by its advantages; this, however, doesn't deter users from optimizing its potential.

The Nexus between Cloud Computing and Agile Software Development

Agile development methods, being iterative and continuous in nature, can experience slack due to various infrastructure and software shortcomings. This is best addressed by cloud computing services spanning cloud platforms, software, and virtualized machines. Cloud computing and virtualization are fast, interactive, and flexible, so the development process runs smoothly right up to production. They also make it easy for Agile development teams to seamlessly combine multiple development, test, and production environments with other cloud services. Let's look at some reasons why cloud computing is best for Agile software development.

How cloud computing aids the Agile software development process

1. Saves time due to multiple servers
A developer using physical servers is restricted to one server for development, staging, and production, leading to slower processes. Developers working on the cloud have access to an effectively unlimited number of servers, virtual servers, or cloud instances, which speeds up their work; they are no longer dependent on physical servers being available in order to keep working.

2. Provisioning servers to suit your needs
With a physical environment, developers rely on IT staff to provision servers or install the desired platforms and software. Even with responsive development methods, this can introduce delays. With cloud computing, developers can install the necessary software or platforms on their own, without depending on the IT department.

3. Cloud computing encourages innovation via investigation
Agile development teams can create instances on the go, as and when the need arises. They can also experiment with new instances whenever they stumble upon an interesting user story. As these instances can be coded and tested simultaneously, there is no waiting time involved. Developers can build experimental instances and test them in a cloud environment, staying true to the Agile philosophy of innovation through experimentation.

4. Boosts continuous integration and continuous delivery
Builds and automation take time to develop. For code that doesn't yield results during automation, the Agile team has to code and test repeatedly until the desired results appear. With a large number of virtual machines available, Agile teams can fix errors faster. Cloud computing accelerates the speed of delivery, and virtualization in turn enhances integration and delivery.

5. Cloud computing simplifies code branching
In Agile, the development cycle outlasts the release cycle. Code refactoring is generally needed during the production phase, and at such times code branching becomes absolutely necessary so that modifications can happen in parallel along the branches. Using cloud infrastructure also reduces the cost of renting servers for this purpose.
6. Increases accessibility of development platforms and external services
Agile development needs several project-management, issue-management, and automated-testing environments. Many of these services are available as SaaS, including Salesforce and Basecamp; then there are IaaS offerings like AWS, OpSource, and Rackspace Cloud, and PaaS offerings like Oracle Database Cloud Service and Google App Engine. These services are known to specifically assist Agile development.

7. Parallel testing
Another advantage of the cloud is the ability to create multiple environments, where you can easily build a new environment and isolate the versions of the code you are testing. One developer can test a feature in one environment while another environment is created for a different developer testing a different feature. This lets multiple people work on different parts of the code in parallel, as sketched below.

Agile Development for Cloud-Related Services

IaaS platforms offer great functionality around provisioning new instances, with a full range of features and configuration options. In the hands of system administrators and Agile developers, these platforms provide the flexibility to create custom environments perfectly suited to an application's requirements. Cloud computing and its related services are essential when Agile teams aim to ship products via continuous integration and delivery; this makes Agile development a parallel activity rather than a linear one. Virtual servers also eliminate provisioning delays. Enterprises thus use this combination to innovate on standard business ideas.
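To make points 2 and 7 concrete: with an IaaS provider and an infrastructure tool like Terraform (used elsewhere on this blog), a developer can self-provision an isolated, disposable test server from one short file. A minimal, purely illustrative sketch, assuming a DigitalOcean account with credentials already configured (the file name and variable are ours):

# file: test-env.tf (hypothetical) -- one disposable test server per feature branch.
variable "branch" {}    # e.g. terraform apply -var 'branch=feature-login'

resource "digitalocean_droplet" "test_env" {
    image  = "centos-7-0-x64"
    name   = "test-${var.branch}"
    region = "sfo1"
    size   = "512mb"
}

The environment appears in minutes with terraform apply, and terraform destroy reclaims it, so parallel feature testing costs only the hours each instance actually runs.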

Aziro Marketing


What are the Common Misconceptions about Cloud Computing?

Cloud computing has become indispensable to businesses. Over the last decade it has gained unprecedented importance, and it continues to do so with each technological advance. Despite this, misconceptions about cloud computing persist: while 51% of companies have cheerfully migrated to or taken up the cloud computing model, many are still apprehensive about it. Undeniably, every business, small or big, private or public, is using cloud technology at some level. What is interesting is that while enterprises have been liberal with their cloud computing budgets (IDC recently reported that enterprises increased their cloud spending by 19% in the last year!), other businesses are proving to be a skeptical lot.

These apprehensions are not entirely unfounded, though. Incidents like cloud hacks and server failures, doubts about practicality, and ambiguity about pricing often lead decision makers to back out before they have taken even a step forward. With proper research, knowledge, and experience, one can begin to dispel most of these "myths". Cloud-based services have become so prevalent that, for many organizations, it is impossible to imagine using applications and technology infrastructure components without the cloud. Compared with traditional environments, cloud computing has brought efficiencies that let companies reduce capital costs and increase business flexibility. According to experts, the cloud has had a dramatic impact on how web hosts and data centers operate.

Despite convincing case studies and business cases, why are companies reluctant to embrace cloud computing? A study points out that while enterprises are sparing no expense to adopt the cloud, small businesses are busy convincing themselves that the cloud may not be necessary after all. Many people still believe cloud computing is reserved for elite enterprise businesses. This is completely untrue: the flexible and practical nature of cloud computing is highly suitable for small businesses. For a business that generates a vast amount of data across the internet and wants to serve its customers in a competitive environment, cloud computing and its attendant services are inevitable. With experts managing the more complicated side of data management, hosting, recovery, and security, businesses are free to focus on their products and services.

Security is of utmost importance to users today, and perhaps the most rampant misconception about the cloud is that it is not a secure gateway. Many users perceive the cloud as an open pool of data that anyone can access and manipulate. The truth is that cloud infrastructure can be safer than your local storage devices. Cloud infrastructure providers harden the cloud against security breaches, and employees with access to cloud data are screened for their past experience and criminal background. Cloud providers aren't taking security issues for granted either; recent developments like Amazon Inspector, and similar efforts by Microsoft, are a testimony to that. Security threats in the cloud computing environment are not a myth, but backing out of cloud computing because of them is a grave mistake, as viable solutions for tackling them are widely available.

While cloud users worry about security on the provider's side, they often fail to secure their own devices. Misplaced mobile devices and insecure local hosts can all contribute to a less than secure cloud computing environment. Data loss is more likely through lost handheld devices, USB sticks, or other storage media than through cloud storage. Besides, cloud computing eliminates delays in adopting the latest security patches; clouds detect and eliminate threats faster, giving you uninterrupted security.

Users should remember that cloud security depends on them too. For instance, it is recommended to have strong verification mechanisms for the devices (especially mobile) used to access the cloud. The rise of BYOD requires organizations to maintain tighter access controls over the data being accessed; re-establishing access control and authorization greatly reduces the possibility of a breach. Additionally, cloud providers can use vulnerability testing to maintain a healthy environment. As companies and users alike transition to cloud system resources, more personal data, such as bank information, transaction reports, domain services, and even fully imaged operating systems, becomes targeted and vulnerable. This data is susceptible to attackers if continuous monitoring and maintenance are not kept up, which is what gives rise to the need for vulnerability testing: testing for vulnerabilities beforehand, and at regular intervals, can mitigate impending risks and alert cloud providers.

Another common misbelief among nonbelievers is that cloud computing services are expensive. Consider the cost of traditional computing: extensive data generation from multiple touchpoints forces businesses to maintain fast, large servers of their own. Such servers are very expensive yet very limiting, not to mention the cost of maintaining the infrastructure and the IT staff for its upkeep. Email hosting and a common UI for organizational data are just some of the applications you would have to invest in, and on top of it all you would have to install some kind of data security or firewall system to safeguard your data.

With cloud computing, you save most of these expenses. Cloud computing means saving your data in the cloud: the backend infrastructure is developed and maintained by your cloud provider. Depending on the flexibility and expertise of your provider, you can easily expand or scale your business without worrying about additional infrastructure expenses, and since you don't have to buy or install anything, you save on upfront capital expenditure. Cloud installations use virtualization to decouple the software from the characteristics of physical servers, enabling scalability for customers. Virtualization also enables data backup and recovery during power failures or server downtime; saving crucial data and keeping it available at all times would be a heavy expense for organizations to manage on their own. Besides having the expertise to do all this, cloud computing providers follow a very practical pricing approach, namely the pay-as-you-go model, which ensures you pay only for the services you use and for the time span you use them. Your cloud provider will also provide data security at nominal charges. Expect dramatic cost savings for your business in such a setup.

This leads us to conclude that, contrary to popular belief, cloud computing is not what people fear it to be; it is not the average "risk" involved in running a business. In case skepticism still prevails, consider partnering with a company that provides cloud computing services catering to your exact needs. With an effective, reliable, and experienced cloud service provider, you will be able to worry less about these things. Such an arrangement can benefit you with reduced costs, ease of use, and a secure environment.

Aziro Marketing

