Virtualization Updates

Uncover our latest and greatest product updates

4 most common challenges of desktop virtualization

IT managers are increasingly considering desktop virtualization as an alternative to traditional distributed software deployment, driven by mounting pressure around manageability, security, regulatory compliance, and cost control. What is desktop virtualization? Desktop virtualization is a technology that separates an individual's PC applications from his or her desktop. Virtualized desktops are generally hosted on a remote central server rather than on the hard drive of the personal computer. Desktop virtualization is also known as client virtualization because it relies on the client-server computing model. In this article, we will look at some common challenges of desktop virtualization and how you can evaluate and choose the model that works best for your business environment.

Challenges of desktop virtualization

Network
Network latency is a common deterrent to the optimal performance of desktop virtualization. Even desktop delivery systems that perform flawlessly on a LAN can fail in WAN environments where latencies exceed worst-case LAN limits. You may need to upgrade your network before implementing VDI.

Storage
A great amount of data that was once stored on local systems is now saved in data centers. This puts more stress on storage systems, especially if the VDI implementation is not planned carefully. One challenging VDI storage issue is performance, which can be compromised when multiple VMs on the same server access shared physical resources at the same time. IT may face this problem when introducing employees to VDI; it can be tackled by putting a sound storage infrastructure in place before VDI is installed. Another challenge for VDI storage is large, random I/O performance demands.
While an individual desktop may issue sequential disk operations, when several desktops run on a single machine as VMs, the resulting storage workload becomes random. Most of these challenges can be met by engaging experts in storage development services.

User experience
Getting users on board with a new VDI can be quite a task. They expect it to be as good and as easy to use as their traditional desktops. Analysts and IT experts say virtual desktop projects tend to sideline the user experience during planning. Yet user experience is a critical component that directly affects the success and feasibility of any VDI project. End-user requirements must be gathered and implemented so the deployment actually benefits its users, and the end-user experience needs to be proactively managed and monitored to make sure the projected benefits of VDI result in happy employees. IT staff need to walk end users through the change and explain the benefits of a desktop virtualization system over other client models.

Cost efficiency
VDI is perceived as a costly investment, or one that does not deliver ROI quickly. VDI projects often require investment in thin clients, along with enhancements to the existing network and storage infrastructure, which can make for a costly project. The ROI case for VDI typically breaks even only 3-5 years after installation. Even though vendors like Microsoft have reduced the cost of virtual desktop operating system licenses and simplified the pricing scheme, desktop virtualization can still be mighty expensive. On the other hand, desktop virtualization can significantly extend the life of client devices, including regular desktop PCs, lowering costs in the long run.

Conclusion
Desktop virtualization is an ace solution for connecting users and applications. Your desktop virtualization solution should satisfy end users with better boot time, access, performance, and support than before.
It should also tolerate WAN latency while delivering agility, economy, and security. Remember that the VDI market is still in its early stages, so tread carefully by evaluating and qualifying a full VDI solution: compute, virtualization, networking, storage, and management.
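To make the WAN-latency concern above concrete, here is a minimal Python sketch estimating how round-trip time inflates the perceived response time of a chatty display protocol. The numbers are illustrative assumptions, not measurements from any specific product:

```python
def perceived_delay_ms(rtt_ms: float, round_trips: int) -> float:
    """Rough perceived delay for an interaction that needs several
    protocol round trips (screen update, input echo, and so on)."""
    return rtt_ms * round_trips

# Hypothetical interaction needing 4 round trips:
lan = perceived_delay_ms(rtt_ms=1, round_trips=4)    # ~4 ms on a LAN
wan = perceived_delay_ms(rtt_ms=80, round_trips=4)   # ~320 ms over a WAN

print(lan, wan)
```

The same protocol that feels instant on a LAN becomes visibly sluggish once WAN round trips dominate, which is why latency testing belongs in any VDI evaluation.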

Aziro Marketing


5 Easy Steps To Deploy A Virtual Desktop Infrastructure

Though the buzz around VDI has faded since its inception, the challenges in the virtual desktop infrastructure space are far from addressed. The main reasons organizations choose VDI are:

- Simpler management (fewer panes of glass to manage)
- Centralized storage
- Better storage and resource utilization
- Security (data resides only in the data center)
- Reduced costs (though very difficult to measure)

Once a decision for VDI transformation has been made, it's time to pick the desktops that would be good fits for VDI; some would still be better off remaining physical desktops. On top of this, there are other key points to consider:

- VDI model: VMware/Xen/Hyper-V
- User experience and usage patterns: different storage, compute, and network expectations for different sets of users (defining IOPS, SLAs, etc.)
- Profile/persona management
- Persistent or non-persistent desktops
- Floating or fixed desktop pools
- App virtualization, layering, and delivery
- Client devices: thin clients, zero clients, mobile, BYOD, BYOPC

A Typical VDI Deployment
A typical VDI environment (e.g. with VMware), like the one shown above, has centralized storage, View Manager, View Composer, a Connection Server, and the end devices.

- Centralized Storage: stores all the virtual desktop data, profile data, parent images, and linked clones.
- VMware View Connection Server: connects the virtual desktops to the end devices.
- VMware View Composer: enables the creation of linked clones for better space utilization while deploying desktop pools or provisioning virtual desktops.
- VMware Linked Clones: the difference between a full clone and a linked clone is that only a single parent VM, or master image, is created for a desktop pool. The usable VM has a linked clone disk that is unique to that desktop VM.
In essence, a desktop pool is a combination of the linked clone disks and the master image disk (which is common to all desktops in a given pool). For example, a Windows 7 desktop pool will have in its master image the base OS and all the required software packages. With this as a parent, individual Windows 7 desktops can be provisioned on the fly just by creating the linked clone disk, which is unique to that desktop and also smaller than the master.

1. Creating the Master Image
- Create a VM with the required settings, install the required OS, and log in to the VM.
- Install View Agent.
- Join the machine to the domain.
- Release the IP using the Windows command ipconfig /release and shut down the VM.
- Take a snapshot.

2. Creating the Desktop Pool
- Open VMware View Administrator and enter the vCenter Server settings to connect to the View system.
- Once vCenter is added successfully, a new pool can be added by selecting Add Pool and choosing Automated (the most popular option).
- The User assignment setting can be set to Dedicated if users should receive the same desktop every time; otherwise, Floating desktops can be selected.
- On the vCenter Server page, select the linked clone option and specify a name.
- Specify the maximum number of desktops.
- Select the snapshot of the master VM created in the previous step (Creating the Master Image) and confirm.
- Once this is done, the desktop pool gets created, and inside the pool, desktops start getting provisioned using linked clone technology (master VM snapshot + linked clone disk).

3. Connecting to an Individual Desktop
- Open View Client.
- Specify the Connection Server IP with the domain credentials of the end user.
- Select the protocol, either PCoIP or RDP.
- Connect. A remote session will open with a desktop drawn from the provisioned desktops in the pool.

4. Refresh
Once the user logs off from the session, the desktop goes through a refresh wherein the linked clone disk is reverted to its original state.
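The space savings of linked clones over full clones can be sketched with back-of-the-envelope arithmetic. The disk sizes below are hypothetical illustrations, not VMware-published figures:

```python
def pool_storage_gb(desktops: int, master_gb: float, per_clone_gb: float) -> float:
    """Storage for a linked-clone pool: one shared master image
    plus one small delta disk per desktop."""
    return master_gb + desktops * per_clone_gb

# Hypothetical sizes: 40 GB master image, 2 GB delta disk per desktop.
linked = pool_storage_gb(100, master_gb=40, per_clone_gb=2)  # 240 GB total
full = 100 * 40                                              # 4000 GB with full clones

print(linked, full)
```

Even with generous delta-disk growth, the shared master image is what keeps a large pool's footprint small, which is the whole point of View Composer's linked clones.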
The master VM is in any case read-only, since all the desktops in the pool access it. Once the linked clone is reverted to its original state, the desktop resurfaces as an available desktop in the pool, ready to be used as a fresh Windows system by another user.

5. Recompose
This process recreates the desktops using a different master image. For example, if a patch needs to be applied to one of the applications in the master image (say, an anti-virus update), the patch can be applied and a new snapshot taken. In the Recompose option, the new snapshot can be selected, and a new set of desktops is created with the application patch installed.

VDI Challenges
- Scalability constraints: though VDI deployments with converged storage are scalable, they may not keep up with performance expectations. For example, when scaling up VMs, you might add additional storage but may not have the IOPS or bandwidth available to service them immediately.
- Boot storms: high IOPS is expected during user boot times, which usually coincide with the opening hours of a virtualized office. Similarly, high load is placed on the storage during virus scans.
- ROI: VDI costs need to be lower than those of the physical infrastructure it replaces, but ROI calculation for VDI is a tricky exercise.
- Experience: the desktop experience needs to be consistent, or at least on par with its physical counterpart. This entails regular monitoring and re-configuration of the infrastructure.

Hyperconvergence for VDI
One solution to the above challenges is hyperconverged storage for VDI. As the new data center enabler, it is only natural that hyperconvergence is put to good use in VDI deployments. In a hyperconverged environment, storage, compute, and network elements are optimized to work together on commodity equipment. Each system or node has its own physical resources, including traditional disks and flash storage for accessing 'hot' data.
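The boot-storm challenge above can be made concrete with a rough estimate. The per-desktop IOPS figures are illustrative assumptions, not benchmarks:

```python
def boot_storm_iops(desktops: int, iops_each: int, concurrency: float) -> int:
    """Peak storage IOPS when a fraction of the pool boots at once."""
    return int(desktops * concurrency * iops_each)

# Hypothetical: 200 desktops drawing 50 IOPS each while booting,
# with 30% of users logging in within the same window.
peak = boot_storm_iops(200, iops_each=50, concurrency=0.3)    # 3000 IOPS
steady = boot_storm_iops(200, iops_each=10, concurrency=1.0)  # 2000 IOPS at steady state

print(peak, steady)
```

Sizing storage for the steady state alone is what makes 9 a.m. logins painful; the peak figure is the one the array has to absorb.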
So in a hyperconverged deployment, when you scale up, you scale compute, storage, and network proportionately, which is exactly what is required during a VDI scale-up. With many VDI deployments on centralized storage having failed to deliver the expected advantages, we will have to wait and see whether hyperconverged infrastructure can unleash the true power of VDI!
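That proportional scaling can be sketched as simple per-node multiplication. All per-node figures below are hypothetical:

```python
def cluster_capacity(nodes: int, cores: int, ram_gb: int, storage_tb: float) -> dict:
    """Hyperconverged scaling: adding a node adds compute, memory,
    and storage in fixed proportion, unlike scaling a central array."""
    return {
        "cores": nodes * cores,
        "ram_gb": nodes * ram_gb,
        "storage_tb": nodes * storage_tb,
    }

# Hypothetical node spec: 32 cores, 256 GB RAM, 10 TB of disk/flash.
print(cluster_capacity(4, cores=32, ram_gb=256, storage_tb=10))
```

Each added node brings its own IOPS and bandwidth along with its capacity, which is what centralized-storage VDI deployments often lacked.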

Aziro Marketing


Why Hyper-Converged Is the Best Solution for Storage, Networking, and Virtualization

The storage industry is evolving, driven by an increasing need for easier data center management. With the advent of hyper-converged solutions, running data centers has become significantly easier. In this article, we will discuss data center scenarios that can be addressed with a hyper-converged solution.

Scenario 1: The need to build a data center that requires storage, networking, and virtualization.
Scenario 2: System failures in the storage system; in the storage industry, triaging failures is a very tedious process.

Solution
The efficient answer to both problems is a hyper-converged solution, which provides a one-stop resolution for your storage, networking, and virtualization requirements. Let's discuss this in detail. Hyper-converged solutions are sold as a single appliance with built-in storage, networking, and virtualization. In the first scenario, where you want to build a data center automation system, you would otherwise need to rely on multiple vendors; hyper-converged infrastructure is a highly scalable platform that gives you everything in a single box, seamlessly integrating your SAN, servers, and virtualization software. In the second scenario, that of system failure, you may not know the root cause, and triaging the failure usually requires vendor involvement. With a hyper-converged solution, you can identify the root cause of a failure yourself, as hyper-converged infrastructures provide a single-pane application offering easy fault detection. Cost optimization, data optimization, data rebalancing, and high availability are other key features provided by hyper-converged solutions. Most hyper-converged solutions provide 24/7 data availability, greatly reducing downtime and the possibility of data loss.
Hence most hyper-converged solutions aim for zero RPO (Recovery Point Objective), which corresponds to no data loss, and zero RTO (Recovery Time Objective), which corresponds to no downtime. In the storage industry, data protection is crucial, and hyper-converged solutions provide efficient answers here as well: disaster recovery and data protection, both vital for storage systems, are managed very well with their help. Hyper-converged solutions ship as bare metal with pre-installed operating systems, which eases management and avoids compatibility issues. Aziro (formerly MSys Technologies) specializes in storage, networking, and virtualization, and is a global leader in providing services to build hyper-converged solutions. Aziro can play an important role in delivering best-in-class hyper-converged solutions for your organization.
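The RPO/RTO terminology above can be illustrated with a tiny sketch. The intervals are purely illustrative; real objectives are set per workload:

```python
def recovery_objectives(replication_interval_min: float, failover_min: float) -> dict:
    """RPO intuition: with periodic (asynchronous) replication, worst-case
    data loss equals the replication interval; synchronous replication
    drives it to zero. RTO is the time until service is restored."""
    return {"rpo_min": replication_interval_min, "rto_min": failover_min}

print(recovery_objectives(15, 10))  # async replication every 15 min: up to 15 min of data at risk
print(recovery_objectives(0, 0))    # synchronous replication + instant failover: zero RPO/RTO
```

The "zero RPO, zero RTO" claim therefore implies synchronous replication and automatic failover, which is worth verifying with any vendor.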

Aziro Marketing


Why Is Automating Virtualized Data Centers the Need of the Hour?

Virtualization has been applied to many technical and non-technical processes for a long time. This post deals with a more recent development: virtualized data centers and their automation.

Why automate data centers?
With the increasing number of virtualized servers, storage, and computing resources, managing those resources poses a difficult task for IT organizations. Another key reason for the evolution of this technology is cloud computing: its dynamic nature has pushed data center workloads and storage to seek greener pastures. Consumers and producers of data have grown disproportionately; from industries to smartphone users to interconnected devices to futuristic marketing enthusiasts, data is churned out by the second. Rapidly growing data-driven companies expect data center providers to scale accordingly and to avoid redundancy and inefficiency. Data center automation can apply pre-defined, systematic policies to automatically manage resources and filter out unnecessary data. Automation saves time for overloaded IT staff by monitoring and managing both virtualized and regular infrastructure, thus reducing costs.

Components of automated data centers
A typical high-availability architecture includes application servers with apps replicated multiple times. The applications in this scenario are virtualized in every application delivery controller. This architecture supports high availability and performance requirements. However, it may not be highly efficient, as the resources provisioned for each app are not efficiently reallocated to other apps. An automation architecture based on server virtualization requires images to be created and software to be installed on each physical server in order to support automated provisioning of applications through VM images. Additional storage, whether for server virtualization or OS virtualization, incurs additional cost.
While server virtualization deploys applications across all usable resources, OS virtualization stores them locally. In the case of OS virtualization, however, not only the application but also the application server and the virtual image are stored. To reduce the impact of either methodology, storage virtualization can be used. Storage virtualization is the concept of amalgamating multiple storage networks into a single storage unit. Usually, this storage unit is managed and utilized through software, making the storage software-defined. Virtualization abstracts away the underlying storage systems, allowing any kind of storage network to become part of the virtualized environment. Storage virtualization makes tasks such as backup, archiving, and recovery easier and faster. Since virtualized storage can be shared across all physical as well as virtual servers, it reduces the impact on server virtualization. All kinds of files can be stored in one place and accessed by applications with the help of a proper storage virtualization solution. Removing the need to physically install applications makes the creation and deployment of virtual images much simpler in the case of storage virtualization.

Automated data centers should be a mandate, not just a requirement
Automation is drastically changing the IT scene. Almost 40% of technology professionals recently surveyed claim to employ automation services in some capacity, and this includes automating data centers. Automation of virtual data centers provides numerous benefits. Unavoidable repetitive tasks are a cause of many errors in a traditional environment; this can be wiped out by implementing automation. Automation also frees up critical IT team time for more value-adding tasks, leading to a more responsive business and accelerating time-to-market for IT services.
Automated virtualized data centers also reduce configuration issues by maintaining consistent systems across the data center. Like most automated processes, automation minimizes risk.
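The pre-defined policies mentioned above can be sketched as simple condition/action pairs evaluated against each VM. The metric names, thresholds, and action names below are invented for illustration and do not correspond to any specific automation product:

```python
# A minimal sketch of policy-based automation: each policy pairs a
# predicate over a VM's metrics with the name of a remediation action.
def evaluate_policies(vm: dict, policies: list) -> list:
    """Return the actions whose conditions match this VM."""
    return [action for condition, action in policies if condition(vm)]

policies = [
    (lambda vm: vm["cpu_util"] < 0.05, "flag-for-reclaim"),       # nearly idle VM
    (lambda vm: vm["snapshot_age_days"] > 30, "delete-old-snapshot"),
]

vm = {"name": "app-01", "cpu_util": 0.02, "snapshot_age_days": 45}
print(evaluate_policies(vm, policies))  # ['flag-for-reclaim', 'delete-old-snapshot']
```

A real automation engine adds scheduling, approval workflows, and rollback, but the core loop is this: evaluate policies against inventory, then execute the matched actions.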

Aziro Marketing


How the CPU Helped in the Evolution of Virtualization

Data Center (R)evolution
The first decade of the 21st century saw the emergence of many dominant players in enterprise data centers. Inheriting from traditional ideas, players such as GE, IBM, and Apple forayed into the arena of data centers and adjacent technologies. Out of this disruptive move was born a new-age idea, known today as 'virtualization'. VMware, founded in 1998, popularized the idea for modern systems, and the same ideology was also pursued by other players such as Citrix (Xen) and Microsoft.

There is an age-old debate over whether software propels the need for hardware innovation or hardware innovation propels software innovation. Though most would side with the former, I would argue it is the business, or the customer, who propels the change; when software and hardware are in sync with those requirements, change takes place. Technological change has not always been 'disruptive'. The past three decades were decades of the 'upgrade', and specifically the 'non-disruptive' upgrade. When FC-SAN evolved from 2 Gbps to 16 Gbps, when parallel SCSI evolved through ever-faster generations and eventually to SAS, now at 12 Gbps (Serial Attached SCSI), or when InfiniBand and FCoE crossed lines, it did not change the way applications were deployed or managed. In the last couple of years, however, there have been significant changes in the way infrastructure is deployed and managed. In recent times, there has been a rapid change in how IT investors and innovators think about possible changes in storage infrastructure.
This thinking is influenced by developments in cloud (access anywhere, always available), big data analytics, hyper-convergence, and more. Let's review one major technology that influenced this change and contributed significantly to what virtualization is today.

Virtualization: A Brief History
It was IBM, in the early 60s, that came up with the innovation of a time-sharing computer to do away with batch processing on a mainframe, in response to a similar solution by GE. IBM's CP-67 (CP/CMS) was the first commercial mainframe system to support virtualization. The approach of this time-sharing computer was to divide memory and other system resources between users. Multics was one such operating system, whose ideas later fed into Unix. In fact, the idea of application virtualization was materialized in the Unix OS, with pioneering work done by Sun Microsystems in the early 90s. In 1987, SoftPC, developed by Insignia Solutions, was hardware virtualization software able to run DOS on a UNIX workstation, and later on the Mac. Connectix's Virtual PC, in 2001, could run Windows in a Mac environment and was considered the most capable hosted virtualization solution until VMware came up with the ESX/GSX product series, which began in 2001 but saw its significant breakthrough after 2005. Until 2005, virtualization was all about hosting software on an operating system that performed device emulation, binary translation, and so on. The virtual OS was a guest at the behest of resources enjoyed and controlled by the host operating system. That was a deadlock condition in which the performance of the virtual OS was neither reliable nor scalable, and the field of use for such a software-driven OS was significantly small. That was the era of 'paravirtualization'.

Hardware-Assisted Virtualization
Performance was a key challenge for virtualization to overcome.
Paravirtualization failed to evolve to a point where it could win virtualization a place in enterprises. Imagine a PC or workstation with a single CPU, barely 256 MB of RAM, and a single NIC trying to run two operating systems in parallel. The resource requirements could never be optimally shared, and when they were shared, the other operating system (host or guest) had to be put into a freeze mode with no real access to the underlying hardware. With transactional databases, ERP, or web-server workloads, server OEMs could not see a way for a virtualization solution to be deployed on servers running mission-critical applications, even though those applications did not use the scaled-up server hardware resources efficiently or optimally. VMware tried to challenge this ecosystem by building efficient GSX/ESX operating systems, but these drained the CPU and network resources of the host operating systems. Mostly, the solution was used as a test bed at production support sites to simulate customer problems by running real-time applications on a virtual OS. All this created a need for processor manufacturers to find a way for resource optimization and sharing to be achieved through cost-effective virtualization solutions. Intel and AMD dared to change the way their processors (CPUs) handled access from multiple operating systems hosted on them. They did this by finding a different way to handle the privilege and de-privilege of operating systems in their CPU rings. Thus was born hardware-assisted, or CPU-assisted, virtualization. The CPU is the heart of any computing system. Traditionally, resource management requests were handled by the kernel of the host operating system (a single OS).
So the solution that hardware virtualization aimed to provide was a way for one or more virtual OSs to access hardware resources as readily as the host operating system. The CPU logically operates at different access levels known as 'rings', with Ring 0 the most privileged and Ring 3 the least. Before hardware/CPU-assisted virtualization, the CPU rings were organized as follows:

- Ring 0: the innermost operational level of the CPU, with root access. The OS kernel runs in this ring.
- Ring 1: OS-approved device drivers and hosted OSs. Any virtual OS had to reside here.
- Ring 2: third-party and lower-privilege device drivers.
- Ring 3: user applications (both those hosted on the host OS and those in the VMs).

Intel's VT-x/VT-i (2005) and AMD-V (SVM, 2006) enabled hardware-assisted virtualization by providing root access to the Virtual Machine Monitor (VMM), a.k.a. the hypervisor. This meant that a VM OS, traditionally a guest OS, became a host that could access the Ring 0 privilege of the CPU, gaining complete access control over the resources vested in it by the VMM. For example, if the VMM decides to allocate four CPU cores, one to each of four VMs, each VM OS gets dedicated access to a single CPU core. With scalable hosts such as blade servers, this capability meant that resource and application efficiency and performance could scale proportionally with the number of CPU cores, memory, and NIC/IO ports available in the server. And this opened up many opportunities that ushered in the age of the virtualized data center, where mission-critical applications could run on VMs.

Here is what changed with hardware-assisted virtualization:

- Root (new access level): VMM/hypervisor plus memory/IO virtualization (resource sharing).
- Ring 0: VM OS.
- Ring 1: eliminated/shadowed.
- Ring 2: eliminated/shadowed.
- Ring 3: user applications hosted on the VMs.

Figure 1. Role of the CPU in the evolution of virtualization

A Few Major Impacts of the Change
- The operating system runs directly on the hardware using the core CPU functionality.
- Reduced or limited binary translation, as the VM OS can handle its own I/O, interrupts, and resource requests.
- Elimination of delayed device simulation, as the VMM allocates each VM OS an isolated (at times dedicated) resource.
- Optimal resource utilization through network/storage/CPU isolation, with resource lock and release handled by the VMM.
- Enhanced security, availability, and reliability through device isolation.
- A scalable hardware and software architecture that enabled VM migration and replication across hosts.
- Hops across OS stacks and VM entry/exit traversal times were greatly reduced, bringing down I/O latency.
- The possibility of complete server virtualization, the idea that launched the hyper-converged storage era.

Conclusion
Hardware-assisted virtualization changed the way virtualization was perceived and deployed. In the last and current decades, we have seen increased investment and innovation in enterprises deploying mission-critical workloads on virtual machines, bringing enormous savings in TCO and opex. The credit for this goes to efficient deployment and optimal utilization of server hardware. Since 2005, Intel VT-x and AMD-V have continued to evolve to match real-time application needs, with lower latencies, reduced power consumption, and a host experience as reliable as on physical infrastructure. Multicore architectures and evolutions in high-speed RAM, terabyte-scale hard disks, SSDs, and the like are enabling virtualization to evolve into software-defined data centers and webscale IT infrastructure. Who would have thought that one powerful innovation in CPU architecture could prove to be such a significant catalyst in redefining virtualization and the way data centers are deployed and managed?
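The before/after ring layout described above can be captured in a small Python sketch. This is a toy model for intuition only, not a description of any real CPU's data structures:

```python
# Toy model of x86 privilege rings before and after hardware-assisted
# virtualization (VT-x / AMD-V). Purely illustrative.
RINGS_BEFORE = {
    0: "host OS kernel",
    1: "hosted/guest OS, approved drivers",
    2: "third-party drivers",
    3: "user applications",
}

RINGS_AFTER = {
    "root": "VMM / hypervisor",   # new mode introduced by VT-x / AMD-V
    0: "VM guest OS kernel",
    3: "user applications",       # rings 1 and 2 effectively unused
}

def who_runs_guest_kernel(rings: dict) -> str:
    """Where does a guest OS kernel live in this model?"""
    return "ring 0" if "guest" in rings.get(0, "") else "ring 1"

print(who_runs_guest_kernel(RINGS_BEFORE))  # ring 1
print(who_runs_guest_kernel(RINGS_AFTER))   # ring 0
```

The key shift is visible in the two dictionaries: the hypervisor moves below Ring 0 into a new root mode, letting each guest kernel run at its natural privilege level without binary translation.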

Aziro Marketing


vRealize Operations Manager: Everything you want to Know!

vROps is a tool from VMware that helps IT administrators monitor, troubleshoot, and manage the health and capacity of their virtual environment. The VMware vRealize Operations (vROps) Management Suite provides complete information about the performance, capacity, and health of our infrastructure. vRealize Operations Manager collects performance data from each object at every level of the virtual environment, stores and analyzes it, and uses that analysis to provide real-time information about issues in the environment. vROps Manager delivers intelligent operations management with application-to-storage visibility across physical, virtual, and cloud infrastructures. We can automate key processes and improve IT efficiency using policy-based automation. Using the data collected from system resources, vROps Manager identifies issues before the customer notices a problem and suggests the actions needed to fix it.

vRealize Operations Manager Architecture
vRealize Operations Manager tracks and analyzes the operation of multiple data sources within the software-defined data center. It uses specialized analytics algorithms to learn and predict the behavior of every object it monitors. Views, reports, and dashboards give users access to all of this information.

Image source: VMware

Types of Nodes and Clusters
We can deploy several vRealize Operations Manager instances in a cluster, with various roles, for HA and scalability.

- Master Node: manages all other nodes in large-scale environments; runs as a single standalone vROps Manager node in small-scale environments.
- Master Replica Node: enables HA of the master node.
- Data Node: enables scale-out of vRealize Operations Manager in larger environments.
- Remote Collector Node: remote collector nodes only gather objects for the inventory and forward the collected data to the data nodes.
They do not store data or perform analysis.

- Analytics cluster: tracks, analyzes, and predicts the operation of monitored systems. It consists of a master node, data nodes, and optionally a master replica node.
- Remote collector cluster: only collects diagnostic data, without storage or analysis. It consists solely of remote collector nodes.

vRealize Operations Manager Logical Node Architecture

Image source: VMware

The components of a vRealize Operations Manager node perform these tasks:

- Admin / Product UI server: a web application that serves as both the user and administration interface.
- REST API / Collector: the Collector gathers data from all components in the data center.
- Controller: handles the data flow between the UI server, Collector, and analytics engine.
- Analytics: the analytics engine creates all associations and correlations between data sets, handles all super-metric calculations, performs all capacity-planning functions, and triggers alerts.
- Persistence: the persistence layer handles read and write operations on the databases across all nodes.
- FSDB: the File System Database stores collected metrics in raw format; it is available on all nodes.
- xDB (HIS): stores data from the Historical Inventory Service (HIS); available only on the master and master replica nodes.
- Global xDB: stores user preferences, alerts and alarms, and customizations related to vRealize Operations Manager; available only on the master and master replica nodes.
- Management Packs: contain extensions and third-party integration software. They add dashboards, alert definitions, policies, reports, and other content to the vRealize Operations Manager inventory.

vROps Badges
VMware vRealize Operations (vROps) uses badges as a way to assess objects. The three crucial badges are Health, Risk, and Efficiency.

1. Health Badge (major badge, deals with immediate issues): a high-level indicator of the overall status of your environment. It is the first badge an administrator should look at, and it calls for action as soon as possible.
2. Risk Badge (deals with future issues): indicates potential problems that could degrade system performance. Risk flags problems that might require your attention in the near future, but not immediately.
3. Efficiency Badge (deals with optimization opportunities): does not point to current or future performance problems, but shows how to run a more efficient data center.

It is imperative to understand these badges; they help us take the actions needed to correct and avoid problems. vROps is a robust operations management solution with numerous facets and use cases. To fully understand how you can derive optimum value from vROps, reach out to us; we would love to help.

Reference: www.vmware.com
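As a toy illustration of badge-style scoring, here is a sketch that maps a 0-100 score to a status color. The thresholds are invented for the example and are not vROps internals:

```python
def badge_color(score: int) -> str:
    """Map a 0-100 badge score to a status color, in the spirit of
    the Health/Risk/Efficiency badges. Thresholds are illustrative."""
    if score >= 75:
        return "green"
    if score >= 50:
        return "yellow"
    if score >= 25:
        return "orange"
    return "red"

# Hypothetical scores for one object:
for name, score in {"Health": 90, "Risk": 60, "Efficiency": 20}.items():
    print(name, badge_color(score))
```

The value of the badge model is triage: an administrator scans three colors per object instead of hundreds of raw metrics, and drills down only where the color demands it.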

Aziro Marketing


What are the Best Practices to Automate vSphere Web Client 6.0 & Plugins

This blog walks you through the different phases involved, in order to automate Flex vSphere Web Client & Web Client Plugins. Also this blog provides developers with the ability to create applications that use the customized selenium Flex Automation APIs.Constraints in automating Flex Application compare to any standard Web ApplicationsIn order to interface with the Flex GUI Widgets, Adobe provided an external API, which enables interaction between action scripts and the Flash Player Container – for example, an HTML page with JavaScript. Adobe recommends using ExternalInterface for all JavaScript to ActionScript interaction.For Example: Function call b/w ActionScript function in Flash Player, JavaScript function from the HTML page is possible through the use of ExternalInterface.Flex Automation ToolsFollowing are the set of tools that support the Web Based Flex GUI Automation. Let’s take a look at each one about their nature and pros & cons.ToolsPros & ConsOpen SourceSelenium Flex APIAutomates Flex mx components, but need to extend APIs for supporting spark and other custom made components.AZIRO FLEX Automation ToolAutomates most of the Flex Components including mx and spark, but has a problem in automating VMware pop-up and dialogsOpen ScriptAutomation based on mouse coordinatesGenieObserved that it has a support for Flex components which needs to be extended and compiled.ProprietaryRanorex($$)Automates vSphere Web Client with trial build. But it has language support only for VB Script and C#.RIA($) Test Complete($$) Silk Test($$$$)Involves statically compiling Test Agent and automation libraries with source code of application under test.Automation ApproachesIncase if we are moving forward with the open source tool choice, then there are numerous challenges, which have to be mitigated for automating the web based Flex GUI application.Let’s take a look at a couple of different approach to mitigate the challenges:1. 
1. Dynamic Loader

This is a wrapper with dynamic loading capability that loads the Flex application's SWF file. It loads the entire Flex application, which is instrumented to enable record and playback.

Issues identified when using the Dynamic Loader (VMware-specific):
- It can automate most VMware components.
- It hits a hard problem when automating dialog and pop-up windows, which are involved in scenarios such as creating or editing a datastore.
- Analysis showed that the VMware dialog and pop-up widgets use FlexGlobals.topLevelApplication, which ends up calling their parent application. Since the entire vSphere Web Client page is loaded into the dedicated custom loader (which becomes the parent application), this call always fails and stops the web-based Flex GUI from loading.

2. Static Compiling

- Requires access to the source code of the Flex application under test.
- The application is compiled with the automation SWC files by using the compiler's include-libraries option.
- The problem encountered with the dynamic loader approach does not apply here: no loader is involved, because the automation libraries are compiled into the source code of the application under test.

Pros and cons of the two approaches:
Dynamic Loader:
- Able to automate Flex mx and Spark components.
- Able to automate most Flex components, but has problems automating VMware pop-ups and wizards.
- Needs to load the entire application into a loader, which in turn carries the automation libraries required for record and playback.
- No source code access is needed, as the application is tested inside the loader.

Static Compiling:
- The open-source tool supports only mx components, but can be extended to support Spark components.
- Pop-ups and wizards are easy to automate, since no loader is involved.
- Because there is no loader, functionality can be tested by launching the Web Client directly in a browser.
- Access to the source code of the application under test is required in order to compile in the automation libraries.

Automation approach selection:
VMware suggests choosing the static compiling approach to mitigate the problems seen with the dynamic loader, with added capability from SFAPI (open source). VMware supports this by extending the Selenium Flex API to cover Spark and VMware custom components.

A short note on the Selenium Flex API:
An open-source tool capable of automating the vSphere Web Client; its automation API (sfapi.swc) has to be compiled with the VMware/vCenter plugin source code. VMware has extended this tool to support Spark and VMware custom components. The tool can also be expanded with the required automation libraries, then compiled and deployed as a custom build, to support automation of VMware and other required components. To automate a component, its Flex properties must be identified; this can be done with Flash inspection tools, whose library file (.swc) also needs to be compiled along with the custom build mentioned above.

Reference:
SFAPI source code: https://github.com/hirsivaja/sfapi

vSphere 6.0: requesting a custom build from VMware with extended support for third-party automation libraries

VMware provided a custom build with extended support for including third-party automation libraries such as SFAPI.
This custom build does not support automating Spark and custom components, so the SFAPI framework has to be extended to cover them. The following work is required to enable automation support for vSphere 6.0:

Extending SFAPI:
a. Extend the existing SFAPI framework with libraries for automating components that the existing SFAPI does not support, including Spark and other custom components.

Inspection tool library: Monster Debugger (.swc)
a. The respective .swc also needs to be compiled along with the custom SFAPI.

Enable automation support in vCenter Server.

Reference:
vSphere 6.0 build steps from VMware: https://communities.vmware.com/thread/520914

Web Client not ready for automation (screenshot):
Web Client ready for automation (screenshot):

Sample automation script:

   @BeforeClass
   public static void setupClass() throws Exception {
      // Initialize and start the Selenium server
      remoteControlConfiguration = new RemoteControlConfiguration();
      remoteControlConfiguration.setTrustAllSSLCertificates(true);
      seleniumServer = new SeleniumServer(remoteControlConfiguration);
      seleniumServer.boot();
      seleniumServer.start();

      // Start the Selenium client session against the vCenter URL
      selenium = new DefaultSelenium(SERVER_HOST,
            Integer.parseInt(SELENIUM_PORT),
            FIREFOX_LOCATION, VC_URL);
      selenium.start();
      selenium.setTimeout("30000");
   }

   @Test
   public void sampleTest() {
      // Enter user name and password, then click the login button
      enterText(VC_USERNAME_TEXT_BOX, VC_USERNAME);
      enterText(VC_PASSWORD_TEXT_BOX, VC_PASSWORD);
      click(VC_LOGIN_BUTTON, "");
   }

   @AfterClass
   public static void tearDownClass() throws Exception {
      if (seleniumServer != null) {
         seleniumServer.stop();
         seleniumServer = null;
      }
   }

   // Helpers that delegate to the Selenium Flex API wrapper, which invokes
   // the named SFAPI function on the embedded Flash object
   public void enterText(String textField, String text) {
      call("doFlexType", textField, text);
   }

   public String click(String objectId, String optionalButtonLabel) {
      return call("doFlexClick", objectId, optionalButtonLabel);
   }
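The call() helper used in the script above is not shown; in the Selenium Flex API it ultimately asks Selenium to evaluate a JavaScript expression against the embedded Flash object, which is how JavaScript reaches the SFAPI ActionScript functions through ExternalInterface. A minimal, hypothetical sketch of how such an expression could be assembled (the object id, method, and widget names here are illustrative, not VMware's code):

```java
// Hypothetical helper: builds the JavaScript expression that a Selenium
// getEval() call could run to invoke an SFAPI function exposed on the
// embedded Flash object via ExternalInterface. All names are illustrative.
public class SfapiCallSketch {
    static String buildEval(String flashObjectId, String function, String... args) {
        StringBuilder js = new StringBuilder();
        js.append("window.document['").append(flashObjectId).append("'].")
          .append(function).append("(");
        for (int i = 0; i < args.length; i++) {
            if (i > 0) js.append(", ");
            js.append("'").append(args[i]).append("'"); // quote each argument
        }
        return js.append(")").toString();
    }

    public static void main(String[] args) {
        // e.g. typing into a (hypothetical) username text box
        System.out.println(buildEval("container_app", "doFlexType",
                "usernameTextBox", "administrator"));
    }
}
```

In a real run, the returned string would be handed to the Selenium session for evaluation, and the Flex application must have been compiled with the SFAPI libraries for a function such as doFlexType to exist at all.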

Aziro Marketing


How to Achieve Rapid Deployment of Virtual Machines Using SAN Copy Technology?

Introduction

Deploying virtual machines from a template over the network takes considerable time to build an environment and consumes a huge amount of resources in the datacenter infrastructure.

Challenges involved in deploying a VM over the network:
- It is a time-consuming task.
- It consumes large amounts of server resources such as RAM and CPU.
- It involves heavy I/O traffic.

To address these challenges, Microsoft introduced SCVMM Rapid Provisioning, which delivers adaptive performance with high availability, capacity savings, and efficient data protection in limited time, using a SAN copy-capable virtual hard disk (VHD).

Advantages of rapid provisioning using SAN copy:
- Quick VM deployment
- Data transfer happens within the storage subsystem (SAN box)
- Lower server/hardware utilization

Description

SCVMM Rapid Provisioning provides a method for deploying new virtual machines to storage arrays without copying the virtual machines over the network. SCVMM takes advantage of the storage area network (SAN) infrastructure to clone virtual machines, and uses the VM template to customize the guest operating system.

Rapid provisioning can be achieved with either of the following techniques:

Cloning
A SAN-capable source VM or template is cloned using snapshot-based clone copies. This operation is performed at the volume level; typically the VHD resides on the source volume. A snapshot of the source volume is created once we deploy the VM.

Snapshot
A SAN-capable source VM or template is cloned using SAN volume snapshot copies. This operation is performed at the block level and uses storage capacity only as blocks change on the originating volume.
A snapshot of the source volume is created once we deploy the VM.

Step-by-step procedure for creating a SAN-capable VHD with the rapid provisioning clone technique:

Rapid provisioning using SAN copy:
When a new virtual machine is created from a SAN copy-capable template, SCVMM quickly creates a read-write copy of the logical unit that contains the virtual hard disk (VHD) and places the virtual machine files on the new logical unit. When deploying a virtual machine by using rapid provisioning through SAN copy, SCVMM uses a SAN transfer instead of a network transfer.

In a SAN transfer, a SAN copy of the logical unit that contains the virtual machine is created and the copied LUN is assigned to the destination host or host cluster. The virtual machine files are not actually moved over the network; they stay on the storage subsystem where the source LUN resides. Hence this operation is much faster than a transfer over a standard network.

High-level steps/process to follow in SCVMM:
1. Navigate to Add Resources in the SCVMM toolbar to add the storage provider.
2. Add the storage provider using the following type: SAN and NAS devices discovered and managed by an SMI-S provider.
3. Select the pool of the array and the host groups.
4. Once the storage is added successfully, navigate to 'Arrays' to select the method (snapshot or clone) to be used for rapid provisioning. Set the array to use clone for deploying VMs.
5. Add the iSCSI session and create a volume for the particular host. Create a logical unit from a storage pool managed by VMM and allocate it to the host group where the library server resides. Assign the logical unit to the library server.
6. Map the volume to the host by creating a mount point. On the library server, mount the logical unit to a folder path in the library share.
7. Create a virtual hard disk on the mount point in Hyper-V.
8. Add the library server of the host in SCVMM.
Once it is added, add the library share (the mount point where the VHDX resides).
9. Create a SAN copy-capable template by using the virtual hard disk file.

VMM creates a clone of the logical unit, which automatically creates a new logical unit from the storage pool, and automatically unmasks the new logical unit to the host. While deploying a VM, ensure that the template is SAN copy-capable.

Conclusion:
Rapid provisioning through SAN copy enables quick virtual machine creation from a SAN copy-capable template, which is much faster than traditional VM creation. Aziro (formerly MSys Technologies) has been offering top-notch SAN storage solutions, vouched for by its leading ISV and enterprise clientele. If you think you need help, contact us today.

Aziro Marketing


How to do Rapid VM Backup and Clone by Using Native Storage APIs?

Introduction

VMware supports native snapshot and clone technology at the VM level, where users can take snapshots or clones for VM backup and provision VMs quickly from a clone. In the vSphere UI, a user can right-click a VM and initiate snapshot or clone operations. Command-line options are also available to initiate a VM snapshot. VMware provides an option to revert to a given snapshot through the Snapshot Manager in case of data corruption at the VM level, or when a user intentionally wants to revert a VM to a particular snapshot.

Within this technology stack, this is the best offering by VMware. However, performance degrades as VM sizes grow, which eventually happens in enterprise datacenters, and the operation can no longer be instantaneous. So how do we make VM backup or clone even faster in an enterprise datacenter deployment?

It is well known that snapshot and clone features are also offered by storage vendors, though the granularity is at block or file level depending on the type of storage solution. By leveraging the storage snapshot and clone technologies for VM backup, it is possible to increase VM backup and clone performance.

Please refer to the "VMware Infrastructure" diagram below. The limitations of existing snapshot and clone offerings stem from the layers of the technology stack:
- Hypervisor layer
- Enterprise server layer
- Enterprise network layer
- Enterprise storage layer

Any snapshot or clone offering by a hypervisor sits at the top of this stack, i.e. at the hypervisor layer. This reduces performance, since each I/O needs to traverse the whole stack before being committed to disk. What if we could bypass some layers of the stack, or minimize it? This can be achieved by taking advantage of the storage vendor's snapshot and clone technologies.

Details

Storage vendors offer their snapshot and clone technologies and make them available to end users via REST/SOAP SDKs.
By leveraging the storage APIs, it is possible to take a snapshot or clone of a volume. Since this is a volume-level backup, the respective VMware APIs and storage APIs can be used together to correlate a VM, its datastore, and the associated volume, in order to achieve a VM-level backup or clone.

In VMware terms, a VM is made up of files (*.vmdk, *.vmx, *.vswp, etc.) stored in datastores, and a datastore maps directly to a volume. Using the VMware APIs, you can get the file structure of a VM and its storage details: volume properties, host properties, and so on. Once this information is available, invoke the native storage APIs and initiate a snapshot of the volume. You then need to maintain the relationship between the VM and the volume snapshot and present this association to the user. Similarly, for a clone, take a clone of the volume and present its VM clone to vSphere.

The above solution could be offered as:
- A command-line interface (CLI)
- A VMware UI plugin

Design

VMware exposes the vSphere API as a web service running on vSphere server systems. The API provides access to the vSphere management components that can be used to manage and control the life-cycle operations of virtual machines, and is made available via the VMware vSphere Web Services SDK. Storage vendors also expose their APIs for snapshot and clone operations, which can be used for building integration solutions. By leveraging the VMware APIs and the storage vendor's APIs, the following solution is developed:

Plugin UI
This is the user-visible component; it sits inside the vSphere GUI, from where the user can list all VMs and request a snapshot or clone. Any user-driven request goes to the Plugin Server via REST APIs. It offers the following features to the end user:
- A list view of VMs
- A drop-down menu option for snapshot and clone of a VM
- A list view of a VM's snapshots

Plugin Server
This is a REST-based server application that acts as both client and server.
It takes requests from the Plugin UI and acts as a client of VMware vCenter Server and the storage arrays. Its primary responsibility is to process Plugin UI requests and invoke vCenter Server APIs to get the necessary VM details; if the request is for a snapshot or clone, it further invokes the storage APIs and takes the snapshot or clone at volume level. The relationship between storage volumes, snapshots, and VMs is stored locally.

Conclusion

The performance of a VM snapshot or clone is significantly increased, not only because the technology stack is minimized, but also because the best native snapshot or clone technology offered by the storage vendor is used. The intention here is to give a fair perspective on the various redundancy features and technology solutions available, which can be combined to achieve the desired end-user performance.
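The locally stored relationship between VMs, volumes, and snapshots that the Plugin Server maintains can be modeled as a small catalog. A minimal, hypothetical sketch (class, method, and identifier names are assumptions for illustration, not the actual plugin code):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: the Plugin Server must remember which volume-level
// snapshot backs which VM, so a snapshot taken via the storage APIs can be
// presented back to the user as a VM-level backup.
public class SnapshotCatalog {
    // VM name -> ids of volume snapshots taken for that VM
    private final Map<String, List<String>> vmToSnapshots = new HashMap<>();

    public void record(String vmName, String volumeSnapshotId) {
        vmToSnapshots.computeIfAbsent(vmName, k -> new ArrayList<>())
                     .add(volumeSnapshotId);
    }

    public List<String> snapshotsOf(String vmName) {
        return vmToSnapshots.getOrDefault(vmName, Collections.emptyList());
    }

    public static void main(String[] args) {
        SnapshotCatalog catalog = new SnapshotCatalog();
        // After asking the storage array to snapshot the volume backing the
        // datastore of "web-vm-01", store the association locally.
        catalog.record("web-vm-01", "vol-17-snap-001");
        catalog.record("web-vm-01", "vol-17-snap-002");
        System.out.println(catalog.snapshotsOf("web-vm-01"));
    }
}
```

A production implementation would persist this mapping (the article notes the relationship is stored locally) and would key on stable identifiers such as VM UUIDs and array-side snapshot ids rather than display names.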

Aziro Marketing

