Storage Updates

Uncover our latest and greatest product updates

Immunize Customer Experience With These Cloud Storage Security Practices

Cloud Storage, a Great Choice

A 21st-century industry looking for uncompromising scalability and performance cannot come across cloud storage and say, "I'll pass." Be it fintech or healthcare, small customers or multi-national clients, cloud storage is there to store and protect business-sensitive data for every use case. While modern services like smart data lakes, automated data backup and restore, mobility, and IoT revamp the customer experience, cloud storage ensures impeccable infrastructure for data configuration, management, and durability. Any enterprise working with cloud storage can expect:

- Optimized storage costs
- Minimized operational overhead
- Continuous monitoring
- Latency-based data tiering
- Automated data backup, archival, and restore
- Throughput-intensive storage
- Smart workload management

However, such benefits come with a prerequisite: the security of the cloud storage infrastructure. The data center and the network it operates in need to be well protected from internal and external mishaps. Therefore, in this blog, we will discuss the practices that help you secure your cloud storage infrastructure. To give these practices a more technical grounding, we will refer to one of the most popular cloud storage services, Amazon S3; however, the discussion is kept generic so you can apply it to any cloud storage vendor of your choice.

Comprehending Cloud Storage Security

A recent study suggests that 93% of companies are concerned about the security risks associated with the cloud. The technical architects and admins who work directly with cloud storage solutions often face security issues that they don't fully comprehend. With an increasing number of ransomware and phishing attacks, organizations may find themselves skeptical about migrating their data.
So, how does one overcome these doubts and work towards a secure, business-boosting storage infrastructure? The answer is two-part:

External Security – Securing the storage infrastructure itself is largely the vendor's job. For instance, in the case of Amazon S3, AWS takes on the onus of protecting the infrastructure that you trust your data with. Since the vendor manages the cloud storage infrastructure, it makes sense for the vendor to regularly test, audit, and verify the security firewalls of the cloud. Moreover, many data compliance issues rightly fall under the vendor's scope of responsibility, so that you don't have to worry about the administrative regulations for your data storage.

Internal Security – Ensuring security from the inside is where you, as a cloud storage service consumer, share the responsibility. Based on the services you've employed from your cloud storage vendor, you are expected to be fully aware of the sensitivity of your data, the compliance regulations of your organization, and the regulations mandated by the local authorities in your geography. The reason behind these responsibilities is the control you get as a consumer over the data that goes into the cloud storage. While the vendor provides a range of security tools and services, the final choice should be yours, aligned with the sensitivity of your business data.

Thus, in this blog, we will discuss the security services and configurations you can demand from your vendor to ensure that cloud storage is an ally against your competition and not another headache for your business.

Confirm Data Durability

The durability of infrastructure should be among the first prerequisites for storing mission-critical data on the cloud. Redundant storage of data objects across multiple devices ensures reliable data protection. Amazon S3, for instance, uses its PUT and PutObject operations to copy data objects across multiple facilities simultaneously.
These facilities are then vigilantly monitored for any loss so that immediate repairs can be arranged. Some important practices to ensure data durability are:

Versioning – Ensure that the data objects are versioned. This allows recovering older data objects in the face of any internal or external application failure.

Role-Based Access – Setting up individual accounts for each user, with rightful liberties and restrictions, discourages data leakage due to unnecessary access.

Encryption – Server-side and in-transit data encryption modules provide an additional layer of protection, assuring that the data objects aren't harmed during business operations. Amazon S3, for instance, uses Federal Information Processing Standard (FIPS) 140-2 validated cryptographic modules for this purpose.

Machine Learning – Cloud storage vendors also offer machine learning-based data protection modules that recognize the business sensitivity of data objects and alert storage admins about unencrypted data, unnecessary access, and shared sensitive data objects. Amazon Macie is one such tool offered by AWS.

Making the Data Unreadable

In-transit data (going in and out of the cloud storage data centers) is vulnerable to network-based attacks. Measures need to be taken to ensure that this data, even if breached, is of no use to the attacker. The best method to achieve this is data encryption. Encryption protocols like SSL/TLS make sure that the data is unreadable without the proper decryption keys. Cloud storage vendors provide server-side and client-side encryption strategies for the same purpose. In the case of Amazon S3, objects can be encrypted when they are stored and decrypted when they are downloaded.
You, as a client, can manage the encryption keys and choose the tools suitable for your requirements.

Managing the Traffic Mischief

While traffic on the public network is vulnerable to data theft, the private network might often fall prey to internal mismanagement. To avoid both cases, most cloud vendors offer security-sensitive APIs. These help the application operate with transport layer security while working with cloud storage data. TLS 1.2 or above is usually recommended for modern data storage infrastructures, including the cloud. For Amazon S3 in particular, AWS offers VPN and private link connections such as Site-to-Site VPN and Direct Connect to support safe connectivity for on-premises networks. To connect with other resources in the region, S3 uses a Virtual Private Cloud (VPC) endpoint that ensures requests are limited to and from the Amazon S3 bucket and the VPC.

SSL/TLS cipher suites provide the guidelines for secure network operations. A category of such cipher suites supports what is known as Perfect Forward Secrecy, which essentially ensures that session keys are regularly changed, so a compromised long-term key cannot decrypt past traffic. As a client, you should look for cloud storage service providers that support such suites in order to ensure a secure network. Amazon S3, for this purpose, supports DHE (Diffie-Hellman Ephemeral) and ECDHE (Elliptic Curve Diffie-Hellman Ephemeral) key exchanges. Both are highly recommended suites supported by applications running on modern programming paradigms.

Ask Before Access

Admins handling cloud storage operations should follow strict access policies for resource access control. Both resource-based and user-based access policies are offered by the cloud storage provider for the organization to choose from. It is imperative that you choose the right combination of these policies so that the permissions to your cloud storage infrastructure are tightly defined.
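As a small sketch of tightening such permissions, the snippet below builds a bucket policy that denies any request arriving without TLS, using the `aws:SecureTransport` condition key documented by AWS. The bucket name is a placeholder; the same JSON document works against most S3-compatible vendors that honor bucket policies.

```python
import json

def tls_only_policy(bucket):
    """Bucket policy that denies any non-TLS (plain HTTP) request."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{bucket}",      # the bucket itself
                f"arn:aws:s3:::{bucket}/*",    # every object in it
            ],
            # Matches requests made over plain HTTP only.
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }],
    }

# The JSON document would be attached to the bucket, e.g. via
# boto3's put_bucket_policy or the vendor's equivalent API.
print(json.dumps(tls_only_policy("example-bucket"), indent=2))
```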
A handy ally for this purpose in the case of Amazon S3 is the Access Control List (ACL), where access policies are defined per S3 bucket and you can easily choose the combination of your choice.

Watchful Monitoring

Maintained reliability, guaranteed availability, and untroubled performance are all results of dark-knight-level monitoring. For cloud storage, you need a centralized monitoring dashboard of sorts that provides multi-point monitoring data. Check if your cloud vendor provides tools for:

Automated single-metric monitoring – A monitoring system that takes care of a specific metric and immediately flags any deviations from the expected results.

Request trailing – A request triggered by any user or service needs to be trailed for details like source IP, request time, etc., to log the actions taken on the cloud storage data. Server access requests are also logged for this purpose.

Security incident logging – Fault tolerance can only be strengthened if any and every misconduct is logged with associated metrics and the resolutions assigned for it. Such logs also feed automated recommendations for future cloud storage operations.

Conclusion

There have been multiple episodes where companies serving a high-profile customer base faced humiliating attacks that went undetected over a considerable period of time. Such security gaps are not at all conducive to the customer experience we aim to serve. The security practices mentioned above will ensure that the fragile corners of your cloud storage are cemented and toughened against the looming threats of ransomware and phishing attacks.
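The versioning and server-side encryption practices discussed above map directly to two bucket-level settings in the S3 API. The helper below, a sketch rather than a full implementation, builds the two request payloads; with boto3 they would be passed to `put_bucket_versioning` and `put_bucket_encryption`. The key ID parameter is illustrative.

```python
def durability_settings(kms_key_id=None):
    """Build the request payloads for enabling versioning and default
    server-side encryption on a bucket (payload shapes follow the S3 API)."""
    versioning = {"Status": "Enabled"}
    if kms_key_id:
        # SSE-KMS: encrypt with a customer-managed key
        default_sse = {"SSEAlgorithm": "aws:kms",
                       "KMSMasterKeyID": kms_key_id}
    else:
        # SSE-S3: keys managed entirely by the storage service
        default_sse = {"SSEAlgorithm": "AES256"}
    encryption = {
        "Rules": [{"ApplyServerSideEncryptionByDefault": default_sse}]
    }
    return versioning, encryption

# With boto3, the payloads would be applied roughly as:
#   s3.put_bucket_versioning(Bucket=b, VersioningConfiguration=versioning)
#   s3.put_bucket_encryption(Bucket=b,
#                            ServerSideEncryptionConfiguration=encryption)
```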

Aziro Marketing


AI/ML for Archival Storage in Quartz Glass

Data plays a crucial part in modern communication and daily life. As data usage increases exponentially, users and customers are looking for efficient long-term storage mechanisms. It's evident that our existing storage technologies have a limited lifetime, and from the diagram below we can conclude that there is a gap between data generation and data storage. So, the need of the hour is a technology that will store data for a long period of time, at affordable cost and with enhanced performance. Data storage in quartz glass is an upcoming technology that addresses the limitations of the current ones. In this blog, we will look at this new technology in detail.

Data Storage

We can store data on HDDs, SSDs, and tape drives, each having its own pros and cons; we choose based on requirements, cost, performance, and other factors. Based on "temperature," we can categorize data as hot, warm, and cold:

For hot data -> we use SSDs,
For warm data -> we use HDDs, and
For cold data -> we use tape drives.

Archival Storage: Tape Drive

Data archiving is the process of moving data that is no longer actively used to a separate storage device for long-term retention. Archive data consists of older data that remains important to the organization or must be retained for future reference. The need for archival storage: keep the data safe and secure, and pass the information on to future generations.

Because of its low cost and long archival stability, the tape drive has been the best option for archival storage. However, the lifetime of magnetic tape is around five to seven years, so data must be proactively migrated to avoid degradation issues, and that regular data migration drives costs up year after year. A tape drive is long-lasting, but it still can't guarantee data safety over a long period of time, and it has high latency. Because of this, archival storage is a big concern as the amount of data in the world grows.
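The hot/warm/cold mapping above can be sketched as a simple tiering rule. The access-frequency thresholds below are invented for illustration, not industry standards.

```python
def storage_tier(accesses_per_day):
    """Map data 'temperature' (access frequency) to a storage medium.
    Thresholds are illustrative assumptions only."""
    if accesses_per_day >= 100:
        return "SSD"   # hot: frequent, latency-sensitive access
    if accesses_per_day >= 1:
        return "HDD"   # warm: occasional access
    return "tape"      # cold: archival data, rarely read
```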
A solution to overcome this problem, keeping data safe and secure over a long period of time, is a new medium for data storage: quartz glass.

Quartz Glass: Data Storage

Quartz is the most common form of crystalline silica and the second most common mineral on the earth's surface, so it is widely available and inexpensive. It withstands extreme environmental conditions and doesn't need a special environment like energy-intensive air conditioning. Data is written in the glass (not on the glass), which means that even if something happens to the outer surface of the quartz crystal, we can still retrieve the data. In general, we call this WORM: Write Once, Read Many. Quartz glass retains the data even after the glass has been put in boiling water, put in a flame, or scratched on its outer surface. The data persists, even after thousands of years.

Tape and hard disks were designed before the cloud existed, and both have limitations around temperature, humidity, air quality, and lifespan. Quartz glass also allows non-sequential data access, one of its best advantages over a tape drive, where data is accessed sequentially and therefore takes more time to retrieve.

Writing Data in Quartz Glass

Data is stored in quartz glass using ultrafast laser optics and artificial intelligence. Femtosecond lasers, which emit ultrashort optical pulses and are commonly used in LASIK surgery, permanently change the structure of the glass so that the data can be preserved over a long period of time. A laser encodes data in glass by creating layers of three-dimensional nanoscale gratings and deformations at various depths and angles.

Reading Data in Quartz Glass

A special device, a computer-controlled microscope, is used to read the data. A piece of quartz glass is placed in the read head; the microscope first focuses on the layer of interest, and a set of polarization images is taken.
These images are then processed to determine the orientation and size of the voxels, and the process is repeated for the other layers. The images are fused using machine learning. To read the data back, machine learning algorithms decode the patterns created when polarized light shines through the glass. The ML algorithms can quickly zero in on any point within the glass, which reduces the lag time to retrieve information. Below is an image of how quartz glass looks after storing data.

Future of Quartz Glass

Using quartz glass, we are able to store data permanently: lifelong medical data, financial regulation data, legal contracts, and geologic information. With it, we can pass not only data but entire bodies of information on to future generations. At present, a single piece of glass can store up to 360 TB of data. A lot of research is going on to store larger amounts of data, maximize performance, and minimize cost. If this research succeeds and we become able to store data permanently, at low cost, and scale without limits, then quartz glass will be the best archival cloud storage solution and will revamp the entire data storage industry.

Aziro Marketing


High Performance Computing Storage – Hybrid Cloud, Parallel File Systems, Key Challenges, and Top Vendors’ Products

The toughest Terminator, the T-1000, can demonstrate rapid shapeshifting, near-perfect mimicry, and recovery from damage. This is because it is made of a mimetic polyalloy with robust mechanical properties. A T-1000 naturally requires world-class speed, a hi-tech communication system, razor-sharp analytical abilities, and the most powerful connectors and processors. Neural networks are also critical to the functioning of Terminators: they stack an incredible amount of data in nodes, which then communicate with the outer world depending on the input received. We infer one important thing: these Terminators produce an enormous amount of data. Therefore, they must require a sleek data storage system that scales and carries the capability to compute massive datasets. Which rings a bell: just like the Terminators, High Performance Computing (HPC) also requires equally robust storage to maintain compute performance.

HPC has been the nodal force behind path-defining innovations and scientific discoveries, because HPC enables processing data and powering highly complex calculations at extraordinary speed. The rise of AI/ML, deep learning, edge computing, and IoT created a need to store and process incredible amounts of data, and HPC became the key enabler that brought digital technologies within the realm of daily use. In layman's terms, HPC can be referred to as supercomputing.

The Continual Coming of Age of HPC

The first supercomputer, the CDC 6600, reigned for five years from its inception in 1964. The CDC 6600 was paramount to critical operations of the US government and the US military. It was considered 10 times faster than its nearest competitor, the IBM 7030 Stretch, working at a speed of up to 3 million floating-point operations per second (FLOPS). The need for complex computer modeling and simulation never stopped over the decades.
Likewise, we also witnessed the evolution of high-performance computers. These supercomputers were built from core components with more power and vast memories to handle complex workloads and analyze datasets. Any new release of supercomputers would make its predecessors obsolete, just like new robots from the Terminator series. The latest report by Hyperion Research states that iterative simulation workloads and new workloads such as AI and other big data jobs will be driving the adoption of HPC storage.

Understanding Data Storage as an Enabler for HPC

Investing in HPC is exorbitant. Therefore, one must bear in mind that it is essential to have a robust and equally proficient data storage system that runs concurrently with the HPC environment. Furthermore, HPC workloads differ based on use cases. For example, HPC at a government or military secret agency consumes heavier workloads than HPC at a national research facility. This means HPC storage requires heavy customization of the storage architecture, based on its application.

Hybrid Cloud – An Optimal Solution for Data-Intensive HPC Storage

Thinking about just the perfect HPC storage will not help; there has to be an optimal solution that scales based on HPC needs. Ideally, it has to be the right mix of the best of both: traditional on-prem disk drives and cloud-backed tiers of SSDs and HDDs. Complex, data-intensive IOPS can be channeled to SSDs, while sequential streaming data can be handled by disk drives. An efficient combination of hybrid cloud, software-defined storage, and hardware configuration ultimately helps scale performance while eliminating the need for a separate storage tier. The software-defined storage must come with key characteristics: write-back, read-persistence performance statistics, dynamic flush, and an I/O histogram. Finally, the HPC storage should support parallel file systems by handling complex sequential I/O.
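The SSD-versus-HDD routing described above can be sketched as a simple placement rule. The size threshold below is an invented assumption, not a vendor recommendation.

```python
def place_io(size_bytes, is_sequential):
    """Route an I/O request to a tier of a hybrid storage system.
    Small or random requests go to flash; large sequential streams go
    to disk. The 256 KiB cutoff is illustrative only."""
    SMALL = 256 * 1024
    if not is_sequential or size_bytes < SMALL:
        return "ssd"   # IOPS-bound work: flash absorbs random access
    return "hdd"       # bandwidth-bound work: disks stream large data well
```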
Long Term Support (LTS) Lustre for Parallel File Systems

More than 50 percent of global storage architectures prefer Lustre, an open-source parallel file system, to support HPC clusters. For starters, it is free to install. Further, it provides massive data storage capabilities along with unified configuration, centralized management, simple installation, and powerful scalability. It is built on the LTS community release, allowing parallel I/O spanning multiple servers, clients, and storage devices. It offers open APIs for deep integration, throughput of more than 1 terabyte/second, and integrated support for applications built on Hadoop MapReduce.

Challenges of Data Management in Hybrid HPC Storage

Inefficient Data Handling

The key challenge in implementing hybrid HPC storage is inefficient data handling. Dealing with large and complex datasets and accessing them over a WAN is time-consuming and tedious.

Security

Security is another complex affair for HPC storage. The hybrid cloud file system must include built-in data security, so that small files are not vulnerable to external threats. Providing SMBv3 encryption for files moving within the environment could be a great deal. Further, building in snapshot replication can deliver integrated protection to the data in a seamless manner.

Right HPC Product

End users usually find it difficult to choose the right product relevant to their services and industry. Hyperion Research presents an important fact. It states, "Although a large majority (82%) of respondents were relatively satisfied with their current HPC storage vendors, a substantial minority said they are likely to switch storage vendors the next time they upgrade their primary HPC system.
The implication here is that a fair number of HPC storage buyers are scrutinizing vendors for competencies as well as price."

Top HPC Storage Products

Let's briefly look at the top HPC storage products in the market.

ClusterStor E1000 All Flash – By Cray (an HPE company)

The ClusterStor E1000 enables handling of data at exascale speed. Its core is a combination of SSDs and HDDs, and its policy-driven architecture enables you to move data intelligently. The ClusterStor E1000 HDD-based configuration offers up to 50% more performance with the same number of drives than its closest competitors. The all-flash configuration is ideal for mainly small files, random access, and terabytes to single-digit-PB capacity requirements. Source: Cray website

HPE Apollo 2000 System – By HPE

The HPE Apollo 2000 Gen10 system is designed as an enterprise-level, density-optimized, 2U shared-infrastructure chassis for up to four HPE ProLiant Gen10 hot-plug servers with all the traditional data center attributes: standard racks and cabling and rear-aisle serviceability access. A 42U rack fits up to 20 HPE Apollo 2000 system chassis, accommodating up to 80 servers per rack. It delivers the flexibility to tailor the system to the precise needs of your workload with the right compute, flexible I/O, and storage options. The servers can be "mixed and matched" within a single chassis to support different applications, and it can even be deployed with a single server, leaving room to scale as the customer's needs grow. Source: HPE website

PRIMERGY RX2530 M5 – By Fujitsu

The FUJITSU Server PRIMERGY RX2530 M5 is a dual-socket rack server that provides the high performance of the new Intel® Xeon® Processor Scalable Family CPUs, expandability of up to 3 TB of DDR4 memory, the capability to use Intel® Optane™ DC Persistent Memory, and up to 10x 2.5-inch storage devices, all in a 1U space-saving housing.
The system can also be equipped with the new 2nd-generation processors of the Intel® Xeon® Scalable Family (CLX-R), delivering industry-leading frequencies. Accordingly, the PRIMERGY RX2530 M5 is the optimal system for large virtualization and scale-out scenarios, databases, and high-performance computing. Source: Fujitsu website

PowerSwitch Z9332F-ON – By Dell EMC

The Z9332F-ON 100/400GbE fixed switch comprises Dell EMC's latest disaggregated hardware and software data center networking solutions, providing state-of-the-art, high-density 100/400 GbE ports and a broad range of functionality to meet the growing demands of today's data center environment. These innovative, next-generation open networking high-density aggregation switches offer optimum flexibility and cost-effectiveness for web 2.0, enterprise, mid-market, and cloud service providers with demanding compute and storage traffic environments. The compact PowerSwitch Z9332F-ON provides industry-leading density of either 32 ports of 400GbE in QSFP56-DD form factor, or 128 ports of 100GbE, or up to 144 ports of 10/25/50GbE (via breakout), in a 1RU design. Source: Dell EMC website

E5700 – By NetApp

E5700 hybrid-flash storage systems deliver high IOPS with low latency and high bandwidth for your mixed-workload apps. Requiring just 2U of rack space, the E5700 hybrid array combines extreme IOPS, sub-100-microsecond response times, and up to 21 GBps of read bandwidth and 14 GBps of write bandwidth. With fully redundant I/O paths, advanced data protection features, and extensive diagnostic capabilities, the E5700 storage systems enable you to achieve greater than 99.9999% availability and provide data integrity and security. Source: NetApp website

ScaTeFS – By NEC Corporation

The NEC Scalable Technology File System (ScaTeFS) is a distributed and parallel file system designed for large-scale HPC systems requiring large capacity.
To realize load balancing and scale-out, all the typical basic functions of a file system (read/write operations, file/directory creation, etc.) are distributed uniformly across multiple I/O servers, since ScaTeFS does not need a master server, such as a metadata server, to manage the entire file system. Therefore, the throughput of the entire system increases, and parallel I/O processing can be used for large files. Source: NEC website

HPC-X ScalableHPC – By Mellanox

The Mellanox HPC-X ScalableHPC toolkit is a comprehensive software package that includes MPI and SHMEM/PGAS communications libraries. HPC-X ScalableHPC also includes various acceleration packages to improve both the performance and scalability of high-performance computing applications running on top of these libraries, including UCX (Unified Communication X), which accelerates point-to-point operations, and FCA (Fabric Collectives Accelerations), which accelerates collective operations used by MPI/PGAS languages. This full-featured, tested, and packaged toolkit enables MPI and SHMEM/PGAS programs to achieve high performance, scalability, and efficiency, and ensures that the communication libraries are fully optimized for Mellanox interconnect solutions. Source: Mellanox website

Panasas ActiveStor-18 – By Microway

Panasas® is the performance leader in hybrid scale-out NAS for unstructured data, driving industry and research innovation by accelerating workflows and simplifying data management. ActiveStor® appliances leverage the patented PanFS® storage operating system and DirectFlow® protocol to deliver high performance and reliability at scale from an appliance that is as easy to manage as it is fast to deploy. With flash technology speeding small-file and metadata performance, ActiveStor provides significantly improved file system responsiveness while accelerating time-to-results.
Based on a fifth-generation storage blade architecture and the proven Panasas PanFS storage operating system, ActiveStor offers an attractively low total cost of ownership for the energy, government, life sciences, manufacturing, media, and university research markets. Source: Microway website

Future Ahead

Datasets are growing enormously, and there will be no end to it. HPC storage must process data fast enough to keep compute efficiency at peak levels, and it should climb from petascale to exascale. It must have robust built-in security, be fault-tolerant, be modular in design, and, most importantly, scale seamlessly. HPC storage based on hybrid cloud technology is a sensible path ahead; however, effort must be geared toward controlling its components at runtime. Further, focus should also be on dynamic marshaling via applet provisioning and a built-in automation engine. This will improve compute performance and reduce costs.

Aziro Marketing


How to Build Open-Source AWS S3-Compatible Storage on Docker?

AWS S3-compatible storage is one of the emerging technologies in the enterprise storage medium. Initially, the S3 API was used only by Amazon in public cloud environments; today, it is commonly supported by storage and cloud vendors in on-premises and private cloud environments. "S3-compatible storage" offers rich, Amazon S3 API-compliant interfaces.

Use Cases

1. Backup & disaster recovery – S3-compatible storage is suitable for storing and archiving mission-critical data on-premises, providing maximum availability, reliability, and durability.

2. Storing large datasets over the network – S3-compatible storage is ideal when you want to store all kinds of documents and unstructured data (images, materials like PDFs and Excel docs, music, videos, backup files, database dumps, and log files) and serve them with fast performance.

3. File-sharing solutions – S3-compatible storage can also be used as a file-sharing solution or a network drive and be integrated into your environment.

4. Pricing – S3-compatible storage solutions can be installed on industry-standard hardware or VMs at a lower cost than public cloud, and these solutions deliver high value.

5. Security & performance – S3-compatible storage is deployed on industry-standard hardware or VMs in your own data center, with secured data access. It also delivers higher throughput and lower latencies.

Open-Source AWS S3-Compatible Storage Solutions

Below, I am going to explain two open-source AWS S3-compatible storage solutions built on the Docker platform:

1. Scality/s3server
2. MinIO Object Storage

Solution 1: Scality/s3server

About Scality: Scality s3server is an open-source AWS S3-compatible storage solution that provides an S3-compliant interface for IT professionals.
It allows users of S3-compatible storage applications to develop their S3-compliant apps faster by doing testing and integration locally or against any remote S3-compatible cloud.

Quick Start (on a CentOS 7 VM):

[root@localhost ~]# docker run --name AWS_S3 -p 8000:8000 -e SCALITY_ACCESS_KEY_ID=accessKey1 -e SCALITY_SECRET_ACCESS_KEY=verySecretKey1 scality/s3server

[root@localhost ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
bc290f05ad5c        scality/s3server    "/usr/src/app/dock..."   8 hours ago         Up 8 hours          0.0.0.0:8000->8000/tcp   AWS_S3

Testing – Create buckets on the Scality s3server using the Cyberduck UI, then create/upload files in the bucket.

Solution 2: MinIO Object Storage

About MinIO: MinIO is a 100 percent open-source, distributed object storage system. It is software-defined, runs on industry-standard hardware, and is API-compatible with the Amazon S3 cloud storage service.

Quick Start (on a CentOS 7 VM):

[root@localhost ~]# docker run -p 9000:9000 --name S3_minio -e "MINIO_ACCESS_KEY=accessKey1" -e "MINIO_SECRET_KEY=verySecretKey1" minio/minio server /mnt/data

[root@localhost ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS                    NAMES
13f2fc802ec9        minio/minio         "/usr/bin/docker-e..."   About a minute ago   Up About a minute   0.0.0.0:9000->9000/tcp   S3_minio

Testing – Create buckets on MinIO Object Storage using the AWS CLI, then create/upload files in the bucket.

References:
https://min.io/
https://www.scality.com/topics/what-is-s3-compatible-storage/
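Either container can also be exercised programmatically. The sketch below assembles the client settings matching the docker run flags above (endpoint port, access key, secret key); with boto3 installed and a container running, they would be passed straight to boto3.client. The bucket and file names are illustrative.

```python
def s3_client_kwargs(host="localhost", port=9000,
                     access_key="accessKey1", secret_key="verySecretKey1"):
    """Keyword arguments for boto3.client('s3', ...) against a local
    S3-compatible endpoint (port 9000 = MinIO, 8000 = scality/s3server)."""
    return {
        "endpoint_url": f"http://{host}:{port}",
        "aws_access_key_id": access_key,
        "aws_secret_access_key": secret_key,
    }

# With boto3 installed and the container up, usage would look like:
#   import boto3
#   s3 = boto3.client("s3", **s3_client_kwargs())
#   s3.create_bucket(Bucket="test-bucket")
#   s3.upload_file("local.txt", "test-bucket", "remote.txt")
```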

Aziro Marketing


What is the Importance of NVMe and NVMe-oF in Modern Storage?

What is NVMe?

NVMe is a new protocol known as Non-Volatile Memory Express. Let's get a brief idea about volatile and non-volatile memory before moving on to the details of the topic. Volatile memory is a type of memory where the data is lost in case of a power failure; RAM is a good example. In contrast, non-volatile memory retains its data through a power failure without needing any backup power. Flash is non-volatile memory and comes in two types: NAND and NOR flash memory.

NVM Express (NVMe)

NVM Express® (NVMe™) is an optimized, high-performance, scalable host controller interface designed to address the needs of enterprise and client systems that utilize PCI Express®-based solid-state storage (https://nvmexpress.org/wp-content/uploads/NVMe_Overview.pdf). Solid-state drives (SSDs) are storage built from two key components: NAND flash chips and a flash controller. SSDs are faster than traditional hard drives because they have no spinning components.

The Need for PCIe-Based NVMe SSDs

All hot data should be available on flash. Nowadays data is like a gold mine, and faster processing of data can have a great impact on business decisions. Achieving this kind of speed requires flash-based storage devices and a high-speed storage protocol like NVMe. NVMe has made its mark as a high-performance protocol and is expanding due to industry-wide adoption by storage vendors. PCIe-based NVMe SSDs achieve that speed because NVMe supports up to 64K queues with 64K commands per queue, whereas SATA devices support 32 commands in a single queue and SAS devices support up to 256 commands per queue. Hence, NVMe leverages the full potential of flash-based SSDs. This technology has emerged to reduce the gap between fast CPUs and slow storage. Data centers, gaming, and the entertainment industry see great performance benefits from NVMe.
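As a back-of-the-envelope illustration (not from the original article), Little's Law shows why those deeper queues matter: the IOPS a device can sustain is bounded by the number of commands kept in flight divided by the per-command latency.

```python
def max_iops(outstanding_cmds, latency_us):
    """Little's Law: throughput = concurrency / latency.
    Ceiling on IOPS when the host keeps `outstanding_cmds` requests
    in flight, each taking `latency_us` microseconds."""
    return outstanding_cmds / (latency_us * 1e-6)

# One SATA/AHCI queue of depth 32, 100 us per command:
sata = max_iops(32, 100)       # 320,000 IOPS ceiling
# A single NVMe queue of depth 65536 at the same latency:
nvme = max_iops(65536, 100)    # a vastly higher (theoretical) ceiling
```

In practice a drive saturates long before the NVMe ceiling, but the contrast shows why a 32-command queue, not the medium, becomes the bottleneck for flash.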
Peripheral Component Interconnect Express (PCIe) is also evolving to support NVMe.

[Diagram: PCIe NVMe storage I/O stack]

DMA and RDMA

A. DMA

Direct Memory Access (DMA) provides faster data transfer by relieving the CPU of the fetch-decode-execute cycle for each I/O. It enables faster processing because the CPU can work on other operations while the transfer is in progress. A DMA controller is required to carry out the operation.

B. RDMA

Let's split the term to understand what Remote Direct Memory Access means. It is direct memory access from one computer's memory to a remote host's memory without involving either operating system. This yields high-throughput, low-latency networking because RDMA uses zero-copy: data is sent or received directly into application buffers without being copied through the network stack. In addition, RDMA bypasses the kernel; data is transferred from user space. It is used in many markets, among them HPC (high-performance computing), big data, cloud, and FSI (financial services and insurance). Using RDMA requires a network adapter that supports it, with Ethernet or InfiniBand as the link-layer protocol.

NVMe over Fabrics (NVMe-oF)

NVMe over Fabrics is a technology that extends the distance over which PCIe NVMe hosts and storage devices can be connected. The NVMe-oF standard supports multiple storage networking fabrics, for example Ethernet, InfiniBand, and Fibre Channel. On Ethernet, RoCE v2 and iWARP are the leading RDMA fabrics. Mellanox is the leading manufacturer of RoCE-based network adapters, whereas QLogic offers FC-NVMe-ready adapters. The design target is no more than 10 microseconds of added latency between a remote NVMe device and one sitting locally.
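RDMA's zero-copy behavior is a NIC/kernel feature, but the core idea of handing out a reference to a buffer instead of copying it can be sketched in ordinary Python with `memoryview`. This is a loose analogy, not an RDMA API:

```python
buf = bytearray(b"payload-" * 1_000_000)  # ~8 MB receive buffer

# Copying slice: allocates and fills a brand-new object
# (analogous to a kernel-to-user-space copy through the network stack).
copied = bytes(buf)

# Zero-copy view: references the same underlying memory, no data movement.
view = memoryview(buf)

buf[0:4] = b"DATA"                       # producer writes into the buffer
assert view[0:4].tobytes() == b"DATA"    # the view sees it (shared memory)
assert copied[0:4] == b"payl"            # the copy is stale (independent memory)
```

The copy costs time and memory proportional to the buffer size; the view costs neither, which is the property RDMA exploits at the network level.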
An NVMe-oF solution exposes NVMe storage over a high-speed storage network to multiple hosts, increasing throughput at low latency. Most areas of NVMe over Fabrics are the same as the local NVMe protocol, for instance I/O and administrative commands, NVMe namespaces, registers and properties, power states, and reservations. There are differences in identifiers, discovery, queuing, and data transfer. Disaggregating storage from compute, driving higher utilization of SSDs, and freeing the CPU for work other than shuttling data are the key benefits for a cloud infrastructure. NVMe over Fabrics works as a message-based system in which NVMe commands and responses are encapsulated into capsules.

Conclusion

NVMe PCIe SSDs and NVMe over Fabrics will drive the future of the storage industry, adding business value by giving cloud infrastructure and big data analytics fast access to data. Leading storage vendors have shown interest in this area; some have already shipped products built on the specification, while others have products in the design phase.
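The capsule idea above, a command header plus payload packed into one message, can be illustrated with a toy framing scheme. The layout below is invented for illustration; it is not the actual NVMe-oF capsule format:

```python
import struct

# Toy capsule: 1-byte opcode, 4-byte namespace ID, 4-byte payload length, then payload.
# This layout is hypothetical and NOT the NVMe-oF wire format.
HEADER = struct.Struct("<BII")

def encapsulate(opcode, nsid, payload):
    """Wrap a command and its payload into a single message (capsule)."""
    return HEADER.pack(opcode, nsid, len(payload)) + payload

def decapsulate(capsule):
    """Recover the command fields and payload from a capsule."""
    opcode, nsid, length = HEADER.unpack_from(capsule)
    payload = capsule[HEADER.size:HEADER.size + length]
    return opcode, nsid, payload

msg = encapsulate(0x02, 1, b"read-4k-block")   # 0x02 = "read" in our toy scheme
assert decapsulate(msg) == (0x02, 1, b"read-4k-block")
```

The point of the exercise: because command and data travel together as one framed message, the transport underneath (RDMA, FC, TCP) only needs to deliver opaque messages, which is what lets NVMe-oF run over multiple fabrics.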

Aziro Marketing


How to use Log Analytics to Detect Log Anomaly?

INTRODUCTION

We'll focus on the problem of detecting anomalies in application run-time behavior from execution logs.

Log template usage can be broadly classified to determine:
- Log occurrence counts [error, info, debug, and others] from specific software components, packages, or modules.
- The cause of application anomalies, whether a particular software component, an actual hardware resource, or its associated tasks.
- Which software components, packages, or modules are "most utilized" or "least utilized". This helps tune application performance by focusing on the most utilized modules.

This technique helps to:
- Overcome the instrumentation requirements and application-specific assumptions made in prior log mining approaches.
- Improve, by orders of magnitude, the volume of log data the mining process can handle per day.

BENEFITS OF THIS SOLUTION:
- A product engineering team can use this solution across several of its products to monitor and improve functional stability and performance.
- The solution detects application abnormalities in advance and alerts the administrator to take corrective action and prevent an application outage.
- It preserves application logs and anomalies, which improves operational efficiency for system integrators, application administrators, site reliability engineers, and QA ops engineers.

SOLUTION ARCHITECTURE:

The ELK Stack (Elasticsearch, Logstash, and Kibana) is the most popular open source log analysis platform.
ELK is quickly overtaking proprietary solutions and has become the first choice for companies shipping log analysis and management solutions. The stack comprises three separate yet complementary open-source products:
- Elasticsearch, based on Apache Lucene, is a full-text search engine that performs full-text and other complex searches.
- Logstash processes the data before sending it to Elasticsearch for indexing and storage.
- Kibana is the visualization tool that lets you view log messages and create graphs and visualizations.

Filebeat

Installed on each client that pushes logs to Logstash, Filebeat is a log-shipping agent that uses the lumberjack networking protocol to communicate with Logstash. The ELK Stack together with Filebeat preserves application logs for as long as we want. These preserved logs can then be used for log template mining and further triage to find evidence of application malfunction or anomalies.

TECHNOLOGIES:

Python 3.6 with the NumPy, Matplotlib, Plotly, and Pandas modules.

HIGH-LEVEL APPROACH:
- Log Transformation Phase: transform unstructured logs into structured data, categorizing each log message into the dimensions and fact below.
  - Time dimension: year, month, date, hour, minute, second.
  - Application dimension: thread name, class name, log template, dynamic parameters, and their combinations.
  - Fact: the custom log message.
- Log Template Mining Phase: the mining process consumes the custom log message to discover log templates and enable analytics along any or all of the dimensions above.
- Log Template Prediction Phase: beyond discovering template patterns, the mining process also predicts the relevant log template for each incoming custom log message.

LOG TRANSFORMATION PHASE:

Log parsing turns each unstructured record into a structured record, and dimension tables are created to preserve the time and application dimension details.

LOG TEMPLATE MINING PHASE:

Individual log lines are compared against one another to identify common and unique word occurrences. Log template generation follows these steps:
- Log line clustering: cluster log lines that closely match with respect to common words and their ordering.
- Unique word collection: identify and collect the unique words within each cluster.
- Unique word masking: mask the unique words in one randomly selected log line and use the result as the log template.
- Log template validation: apply the template to all clustered log lines, extract the unique words, and verify that they are indeed unique.
- Dynamic parameter extraction: apply the template to all clustered log lines, extracting and persisting the dynamic parameter(s) against each line.

LOG TEMPLATE PREDICTION PHASE:
- Log line cluster identification: identify the common and unique words between the log templates and the incoming custom log message.
- Log template identification: select the most closely matching log template and use it to extract the unique or dynamic parameters.
- Log template generation: trigger the generation process if no existing template matches closely.
- Dynamic parameter extraction: apply the selected template and persist the extracted dynamic parameter(s) against each log line.

Log template persistence covers three cases:
- The incoming real-time log line matches a template already in the template inventory.
- An inventory template is updated based on the incoming real-time log line.
- A new template is created from the incoming real-time log line and added to the inventory.

ANOMALY DETECTION PHASE:

Application anomalies are identified through:
- Detection of spikes in total log records or error records at a particular moment [date & time scale].
- Detection of spikes in processing time, i.e. the time difference between subsequent log records at a particular moment [date & time scale].
- Detection of a few application threads emitting a large number of log records at a particular moment [date & time scale].

An administrator can register with the system to receive asynchronous notifications about anomalies via e-mail or SMS. Anomaly details are persisted in a distributed database such as Cassandra, with aggregated information covering:
- spikes in total and error log record counts at a specific time;
- spikes in processing time at a specific time;
- the application threads that emitted a large number of log records at a specific time.

ANOMALY DETECTION USING LOG COUNTS:
- Plot a line graph on the time scale depicting the number of log line occurrences.
- Generate the same report for error log records.

ANOMALY-SPOT LOG RECORD COUNT:
- A bar graph shows which log templates contribute most to the anomaly.
- The graph is launched by clicking an anomaly point in the log count report.

ROOT CAUSE [ACTUAL RESOURCE, SOFTWARE COMPONENT] FOR AN ANOMALY POINT:
- A report is generated for the selected log template.
- It is launched from the log template occurrence report by clicking the template found to contribute significantly to the anomaly.

ANOMALY DETECTION BASED ON THREADS:
- A line graph shows which threads contribute most to the anomaly.

ANOMALY DETECTION BASED ON PROCESSING TIME BETWEEN LOG RECORD ENTRIES:
- A line graph depicts the cumulative processing time between log lines [regular as well as error logs].

ANOMALY ROOT CAUSE ANALYSIS BY SEARCHING & FILTERING RAW LOG RECORDS:
- The GUI presents the list of unique words [representing the actual resources used by the application] extracted from the log records to construct the log templates.
- Log records can be searched within a specific time frame for a keyword or set of keywords [each must be among the unique words found during the mining phase], combined with AND or OR conditions.
- The search result is a table with the following sortable columns [single or multiple column sorting]: date-time, log sequence ID, thread, and custom log message [with the search keywords highlighted].

CONCLUSION

This solution presents the various steps that can be used collectively to analyze logs, identify application anomalies, and pinpoint the resource(s) causing them. Detection of the following cases can be considered an anomaly for an application:
- Request timeouts or zero request-processing time, i.e. an application hang or deadlock.
- A prolonged, consistent increase in processing time.
- A heavy, constant increase in application memory usage.

DIRECTIONS FOR FUTURE DEVELOPMENT

This solution can be extended to analyze the control flow as a whole using control flow graph mining, which helps detect application anomalies through:
- Deviation from the recorded functional flow.
- The most and least accessed or utilized functions and their associated resources.
- Cumulative processing time per control flow, by associated resource.
- The number of active control flows at a given moment, in real time.
- Control flow graph classification based on cumulative processing time.

REFERENCES
- Animesh Nandi, Atri Mandal, Shubham Atreja, Gargi B. Dasgupta, and Subhrajit Bhattacharya, "Anomaly Detection Using Program Control Flow Graph Mining from Execution Logs," IBM Research / IIT Kanpur, 2016.
- Pinjia He, Jieming Zhu, Shilin He, Jian Li, and Michael R. Lyu, "An Evaluation Study on Log Parsing and Its Use in Log Mining," Department of Computer Science and Engineering, 2016.
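Looping back to the mining phase: the unique-word masking step at its heart can be sketched in a few lines of Python. This is an illustrative reduction (our own toy code, not the production pipeline): tokenize the log lines of a cluster, keep the words they all share, and mask the positions where they differ as dynamic-parameter slots.

```python
def mine_template(lines):
    """Derive a log template from closely matching log lines by
    masking token positions whose values differ (dynamic parameters)."""
    token_rows = [line.split() for line in lines]
    template, param_positions = [], []
    for position, tokens in enumerate(zip(*token_rows)):
        if len(set(tokens)) == 1:          # common word: keep it in the template
            template.append(tokens[0])
        else:                              # unique word: mask it as a parameter slot
            template.append("<*>")
            param_positions.append(position)
    return " ".join(template), param_positions

cluster = [
    "Disk sda1 usage at 81 percent",
    "Disk sdb2 usage at 97 percent",
]
template, params = mine_template(cluster)
print(template)   # Disk <*> usage at <*> percent
print(params)     # [1, 4]
```

The validation step in the text then re-applies the template to every clustered line and checks that only the masked positions vary; the values extracted from those positions are the dynamic parameters persisted against each line.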

Aziro Marketing


Aziro (formerly MSys Technologies) 2019 Tech Predictions: Smart Storage, Cloud’s Bull Run, Ubiquitous DevOps, and Glass-Box AI

2019 brings us to the second-last leg of this decade. For the last few years, IT professionals have repeated the same rhetoric: the technology landscape is seeing revolutionary change. Yet most of those "REVOLUTIONARY" changes have, over time, lost their credibility. Thanks to awe-inspiring technologies like AI, robotics, and the upcoming 5G networks, most tech pundits consider this decade a game changer for the technology sector. As we make headway into 2019, the internet is bombarded with tech prophecies. Aziro (formerly MSys Technologies) presents its 2019 tech predictions, based on our Storage, Cloud, DevOps, and digital transformation expertise.

1. Software Defined Storage (SDS)

2019 definitely looks promising for Software Defined Storage, driven by changes in autonomous storage, object storage, self-managed DRaaS, and NVMe. But SDS will also need to push the envelope to acclimatize and evolve. Let's understand why.

1.1 Autonomous Storage to Garner Momentum

Backed by user demand, we'll witness the growth of self-healing storage in 2019. Artificial intelligence powered by intelligent algorithms will play a pivotal role, as companies strive to ensure uninterrupted application performance around the clock.

1.2 Self-Managed Disaster Recovery as a Service (DRaaS) will be Prominent

Self-managed DRaaS reduces human interference and proactively recovers business-critical data, duplicating it in the cloud. This brings relief during an unforeseen event and, ultimately, cuts costs. In 2019, this will strike a chord with enterprises globally, and we'll see DRaaS gain prominence.

1.3 The Pendulum will Swing Back to Object Storage as a Service (STaaS)

Object storage makes a perfect case for cost-effective storage. Its flat structure creates a scale-out architecture and induces cloud compatibility. It also assigns unique metadata and an ID to each object within storage.
This accelerates data retrieval and recovery. Thus, in 2019, we expect companies to embrace object storage to support their big data needs.

1.4 NVMe Adoption to Register Traction

In 2019, Software Defined Storage will accelerate the adoption of NVMe. It smooths out the glitches associated with traditional storage, ensuring clean data migration while adopting NVMe. With SDS, enterprises need not worry about the 'rip and replace' hardware procedure. We'll see vendors design storage platforms that adhere to the NVMe protocol. In 2019, NVMe growth will mostly be led by FC-NVMe and NVMe-oF.

2. Hyperconverged Infrastructure (HCI)

In 2019, HCI will remain the trump card for creating a multi-layer infrastructure with centralized management. More companies will use HCI to deploy applications quickly, built around a policy-based, data-centric architecture.

3. Hybridconverged Infrastructure will Mark its Footprint

Hybridconverged infrastructure (HCI.2) comes with all the features of its big brother, hyperconverged infrastructure (HCI.1), but one extended capability makes it smarter: unlike HCI.1, it allows connecting to an external host. This will help HCI.2 mark its footprint in 2019.

4. Virtualization

In 2019, virtualization's growth will center on Software Defined Data Centers and containers.

4.1 Containers

Container technology is the ace in the hole for delivering the promises of multi-cloud: cost efficacy, operational simplicity, and team productivity. Per IDC, 76 percent of users leverage containers for mission-critical applications.

4.1.1 Persistent Storage will be a Key Concern

In 2019, container users will look for a cloud-ready persistent storage platform built on flash arrays. They'll expect their storage service providers to implement synchronous mirroring, continuous data protection (CDP), and auto-tiering.

4.1.2 Kubernetes Explosion is Imminent

The upcoming Kubernetes version is rumored to include a pre-defined configuration template.
If true, it'll make Kubernetes easier to deploy and use. This year we also expect more Kubernetes-and-container deployments, which will make Kubernetes security a burgeoning concern. So, in 2019, expect stringent security protocols around Kubernetes deployments, whether multi-step authentication or encryption at the cluster level.

4.1.3 Istio to Ease the Kubernetes Deployment Headache

Istio is an open source service mesh. It addresses microservices deployment challenges like failure recovery, load balancing, rate limiting, A/B testing, and canary testing. In 2019, companies might combine Istio and Kubernetes to facilitate smooth container orchestration, resulting in effortless application and data migration.

4.2 Software Defined Data Centers

More companies will embark on their journey to multi-cloud and hybrid cloud. They'll expect seamless migration of existing applications to a heterogeneous cloud environment. As a result, SDDC will undergo a strategic shift to accommodate the new cloud requirements. In 2019, companies will start combining DevOps and SDDC; the pursuit of DevOps in SDDC will instigate a revamp of COBIT and ITIL practices. Frankly, without wielding DevOps, cloud-based SDDC will remain in a vacuum.

5. DevOps

In 2019, companies will implement a programmatic DevOps approach to accelerate the development and deployment of software products. Per this survey, DevOps enabled 46x more frequent code deployments and 2,556x faster deployment lead times. This year, AI/ML, automation, and FaaS will orchestrate changes in DevOps.

5.1 DevOps Practice Will Experience a Spur with AI/ML

In 2019, AI/ML-centric applications will experience an upsurge. Data science teams will leverage DevOps to unify complex operations across the application lifecycle.
They'll also look to automate the workflow pipeline so that rebuild, retest, and redeploy run concurrently.

5.2 DevOps will Add Value to Functions as a Service (FaaS)

Functions as a Service aims to achieve serverless architecture. It enables hassle-free application development without burdening companies with managing a monolithic REST server. It is something of a panacea moment for developers. So far, though, FaaS hasn't achieved full-fledged status: although it is inherently scalable, selecting the wrong use cases inflates the bill. Thus, in 2019, we'll see companies leverage DevOps to identify productive use cases and bring costs down drastically.

5.3 Automation will be Mainstream in DevOps

Manual DevOps is time-consuming, less efficient, and error-prone. As a result, in 2019, CI/CD automation will become central to DevOps practice, with Infrastructure as Code in the driving seat.

6. Cloud's Bull Run to Continue

In 2019, organizations will reimagine their use of the cloud. A new class of 'born-in-cloud' start-ups will extract more value through intelligent cloud operations, centered on multi-cloud, cloud interoperability, and high performance computing. More companies will look to establish a Cloud Center of Excellence (CoE); per a RightScale survey, 57 percent of enterprises already have one.

6.1 Companies will Drift from the "One-Cloud Approach"

In 2018, companies realized that a 'one-cloud approach' encumbers their competitiveness. In 2019, cloud leadership teams will build on hybrid-cloud architecture; hybrid cloud will be the new normal in cloud computing.

6.2 Cloud Interoperability will be a Major Concern

In 2019, companies will start addressing interoperability by standardizing cloud architecture. The use of Application Programming Interfaces (APIs) will also accelerate.
APIs will be key to instilling language neutrality, which augments system portability.

6.3 High Performance Computing (HPC) will Get its Place in the Cloud

Industries such as finance, deep learning, semiconductors, and genomics face the brunt of competition. They'll look to deliver compute-intensive applications with high performance. To entice such industries, cloud providers will start adding HPC capabilities to their platforms. We'll also witness large-scale automation in the cloud.

7. Artificial Intelligence

In 2019, AI/ML will come out of the research-and-development model to be widely implemented in organizations. Customer engagement, infrastructure optimization, and glass-box AI will be at the forefront.

7.1 AI to Revive Customer Engagement

Businesses, startups and enterprises alike, will leverage AI/ML to enable a rich end-user experience. Per Adobe, the share of enterprises using AI will more than double in 2019. Tech and non-tech companies will strive to offer personalized services leveraging natural language processing. The focus will remain on creating a cognitive customer persona that generates tangible business impact.

7.2 AI for Infrastructure Optimization

In 2019, there will be a spur in the development of AI-embedded monitoring tools. These will help companies create a nimble infrastructure that responds to changing workloads. With such AI-driven machines, they'll aim to cut infrastructure latency, make applications more robust, enhance performance, and amplify output.

7.3 Glass-Box AI will be Crucial in Retail, Finance, and Healthcare

This is where explainable AI plays its role. Glass-box AI surfaces key customer insights along with the underlying methods, errors, or biases. That way, retailers need not follow every suggestion; they can sort out the responses that fit the present scenario. The bottom line is to avoid customer altercations and bring fairness into the process.

Aziro Marketing


Federated Data Services through Storage Virtualization

When one talks about virtualization, the immediate thought is server/host virtualization, as understood from the offerings of VMware, Citrix, Microsoft, and the like. However, there is a not-so-explored, little-known data center technology that can contribute significantly to the modern (future) data center. When we talk of real-time cloud application deployment (access anywhere) with enterprise workloads, the infrastructure must support something more to enable effective consolidation and management of storage and host infrastructure across a data center. This article introduces Storage Virtualization (SV) as a technology and the role it can play in enabling federated data services use cases. Aziro (formerly MSys Technologies) has been a leading virtualization services provider working on this very technology.

The Need for Storage Virtualization

Traditional data centers are largely FC-SAN based, hosting monoliths of huge enterprise storage arrays that are deployed, configured, and managed only with niche expertise. Most of the world's mission-critical applications run in such data centers (DCs). EMC (Dell EMC), NetApp, IBM, and HP (HPE) are a few of the major players in this arena, and the appliances they have built are field-proven for reliability, efficiency, and availability across various workloads. However, the major constraint for a modern IT investor relates to DC/DR manageability and upgradability, particularly in the context of upcoming products built on alternative technologies such as hyper-converged storage, rather than any failing of storage-array-based implementations. With vendor lock-ins and rigid, proprietary storage management APIs and UIs, entertaining the idea of heterogeneous storage arrays from multiple vendors in a DC is a cumbersome process.
It also poses the challenge of finding skilled administrators who are well-versed in all the different product implementations and their management. Before hyper-converged storage, the storage majors ventured to solve this problem with a different idea. This is how Storage Virtualization was born: a way to keep heterogeneous storage arrays in a DC while still seamlessly migrating data and applications between them through a unified management interface. Not just that, the thrust was the bigger picture of extending application continuity to data center business continuity, scaling up the scope of high availability.

What is Storage Virtualization?

Storage virtualization (SV) is the pooling of physical storage from multiple storage arrays or appliances into what appears to be a single storage appliance, managed from a central console or unified storage management application. The SV layer can be an appliance hosted between the host and the target storage, or simply a software VM. Some popular SV SAN solutions on the market are IBM SVC, EMC VPLEX, and NetApp V-Series.

Use Case & Implementation – How does it work?

Let's look at a practical use case of a heterogeneous data center with 9 enterprise storage arrays: 2 Dell EMC VMAX, 1 HPE 3PAR, 1 IBM V7000, and 5 EMC CLARiiON CX300. Say all legacy applications are currently hosted on the EMC CLARiiON arrays, while the mission-critical applications are hosted independently on the EMC, HPE, and IBM arrays. Assume the total data center storage requirement is already met and the current infrastructure can easily support demand for the next 5 years. Assume, too, that just between the HPE, EMC, and IBM arrays there is sufficient storage space to accommodate the legacy applications as well.
However, there is as yet no way to manage such a migration, nor a consolidated management plane for all the different storage devices. Now, let's look at the use case requirements and consolidation challenges a storage consultant should solve:
- Fully phase out the legacy CX300 arrays and migrate all legacy applications to one of the enterprise arrays, say the IBM V7000, with minimum downtime.
- Set up a new data center, DC2, about 15 miles away; move two of the enterprise arrays (the 2 EMC VMAX arrays) to the new site and host it as an active-active data center / disaster recovery site for the former DC (DC1).
- The current site, DC1, should become the DR site for the new DC, DC2, while still actively serving I/O so that business continues (a synchronous use case).
- The management overhead of using products from 3 different vendors should be reduced and simplified.
- The entire change cycle should happen with minimum downtime, except for the physical movement and configuration of the VMAX arrays at the new site.
- The architecture should scale for the data requirements of the next 5 years, so that new storage arrays from existing or new vendors can be added with no downtime or disruption.
- The DC and DR sites should be mutually responsive during an unforeseen disaster and highly available.

Solution Illustration

This is a classic case for a Storage Virtualization solution. An SV solution is typically an appliance, with software and intelligence, sandwiched between the initiators (hosts) and the targets (heterogeneous storage arrays). To the initiator, the SV is the target; to the target, the SV becomes the initiator. All the storage disks from the targets (with or without data) appear as a bunch of unclaimed volumes in the SV, while the hosts appear to the SV as unmapped, unregistered initiators.
Storage-initiator groups are created (registered) in the SV and can be modified on the fly, giving flexible host migration at the time of a server disaster. Different SV solutions are available from vendors, such as EMC VPLEX, that can handle local DC migration as well as migration between sites/DCs. Let's see how the solution unfolds against our use case requirements:
- Once storage from both the legacy array and the new array is configured to reach the hosts through an SV solution, the storage disks/LUNs appear as a pool of storage at the SV interface. The SV solution encapsulates the storage so that data migration between the two arrays happens non-disruptively; vendor-to-vendor replications are otherwise challenging and often disruptive.
- SV solutions are configured in a fully HA configuration, providing fault tolerance at every level (device, storage array, switch, etc.).
- A cross-site SV solution such as EMC VPLEX Metro can perform site-to-site synchronous data mirroring while both sites remain in a fully active-active I/O configuration.
- The entire configuration, done through HA switches, provides the option to scale out with existing or new vendor storage arrays, as well as new hosts/initiators, with zero downtime.
- The entire solution, whether at the local DC level or multi-site, is fully manageable through a common management UI, reducing the dependence on vendor-specific skilled storage administrators.
- An SV solution consolidates the entire storage and host infrastructure onto a common platform, simplifying deployment and management. It also sets a new direction for scaling hyper-converged storage infrastructure across sites.
- An SV solution is agnostic to host and storage, giving a diversity of deployment options, e.g. various host hardware, operating systems, etc.
- All the features of a storage array are complemented to their full potential, along with superior consolidation across storage and sites and additional availability and reliability features.
- Solutions like VMware vMotion do help with site-to-site migration; however, an SV solution provides the infrastructure support for that to happen at the storage device level, and across sites.

Conclusion

It's just a matter of time before we see more efficiently packaged and effectively deployed SV solutions. Perhaps they will be software-defined SV solutions hosted on a VM instead of an appliance. Storage consolidation is a persistent problem, more so today, given the diversity of server virtualization and SDS solutions and the variety of backup and recovery applications available to an IT administrator. There should come a point where the DC becomes truly converged, where the best of every vendor can co-exist in its own space, complementing the others. There is, however, a business problem standing in the way of that wish. For now, we can only explore more of what SV can offer us.
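To make the pooling idea concrete, here is a minimal, hypothetical sketch of an SV layer. The class and method names are ours for illustration, not any vendor's API: back-end arrays surface their LUNs as unclaimed volumes in a single pool, and hosts claim volumes through storage-initiator mappings.

```python
class StorageVirtualizer:
    """Toy model of an SV appliance: target to the hosts, initiator to the arrays."""

    def __init__(self):
        self.pool = {}       # unclaimed volumes discovered from back-end arrays
        self.mappings = {}   # host -> list of claimed volume names

    def discover(self, array, luns):
        """Back-end arrays surface their LUNs as unclaimed volumes in the pool."""
        for lun, size_gb in luns.items():
            self.pool[f"{array}/{lun}"] = size_gb

    def claim(self, host, volume):
        """Map a pooled volume to a host (a storage-initiator group entry)."""
        if volume not in self.pool:
            raise KeyError(f"{volume} not in pool")
        self.mappings.setdefault(host, []).append(volume)

    def capacity_gb(self):
        """Total pooled capacity across all heterogeneous arrays."""
        return sum(self.pool.values())

sv = StorageVirtualizer()
sv.discover("VMAX-1", {"lun0": 500, "lun1": 500})
sv.discover("3PAR-1", {"lun0": 750})
sv.claim("host-a", "3PAR-1/lun0")
print(sv.capacity_gb())        # 1750
print(sv.mappings["host-a"])   # ['3PAR-1/lun0']
```

The point of the sketch is the indirection: because hosts only ever see pool names, remapping a host to a volume on a different vendor's array is a metadata change in the SV layer, which is what makes non-disruptive migration possible.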

Aziro Marketing

