Storage Updates

Uncover our latest and greatest product updates

How to enhance Storage Management Productivity using Selenium-Python Framework

GUI testing is the process of ensuring that the graphical user interface of an application functions properly and conforms to its written specifications. In addition to functionality, GUI testing evaluates design elements such as layout, colors, font sizes, labels, text boxes, text formatting, captions, buttons, lists, icons, links, and content.

Need for GUI Testing

The first thing any user notices is the design and look of an application's GUI and how easy its user interface is to understand. If users are not comfortable with the interface, or find the application too complex to understand, they will not use it again. That is why the GUI deserves careful attention, and proper testing should be carried out to make sure it is free of bugs.

Storage Management Software/GUI

The web-based storage management software is an application designed specifically for monitoring and managing storage products. Through it, we can exercise all the functionality of a storage product: RAID configuration, firmware updates, retrieving the product's system report, and performing background activities (BGAs) such as rebuild and migration, among various other features.

We have a customer with several different storage products. Testing all the basic functionality across multiple browsers, as part of feature testing for each weekly build, is genuinely challenging, especially when done manually. Hence, we decided to find an alternative to this task.

Use Case: Functional Testing of the Product

Consider a storage management GUI screen that presents multiple tabs and various options to users. As part of functional testing, we need to verify all major functionality for every build across multiple browsers. For each browser, it takes an engineer 2-3 days of effort to complete the test.
If we need to test in 4 browsers, it takes around 8-10 days to complete the regression for each build. Now imagine a build arriving every week: we cannot finish testing within the week, and the next build lands before the current test cycle completes. Since this occupies 8-10 days of a tester's time on repetitive tasks, we came up with an automation plan.

Why Is Automation Required?

- Optimizes manual testing time and effort
- Delivers repeatable regression runs and accurate test results
- Improves product stability by identifying bugs early

Approach: Manual to Automation

This implementation requires the following.

Manual:
- Manual QA provides the list of test cases, developed and planned from the PRD, to execute for the release.

Automation:
- Automation QA identifies the automatable test cases
- Understands the test case steps
- Categorizes test cases based on complexity/priority
- Captures web element paths while performing the test case operations manually for the first time
- Writes the automation script for each test case, including verification checkpoints
- Executes the automated test cases

Benefits of Automation

- Easily validates a single test scenario with different sets of inputs
- The framework supports running automated tests against various application builds/releases in the regression cycle
- The tester gets more time to test
- The tester can focus on quality work instead of repetitive tasks

Web Automation Tools – Selenium

Since Selenium has many advantages for GUI automation, we discussed the options and agreed on automating the GUI test cases using Selenium:

- Open source and supports multiple languages
- Allows running automated tests in different browsers such as Firefox, Chrome, IE, etc.
- Supports various OS platforms
- Well-defined libraries for interacting with web applications
- Supports multiple testing frameworks

Automation Framework

A test automation framework is an integrated set of technologies, tools, processes, and patterns that provides logical structure and clarity, simplifying automation and enabling a team to perform it both effectively and efficiently:

- Maintainability
- Reusability
- Scalability
- Configurability
- Auditability

Data Driven

Each page can have scenarios that need to be tested with large data sets, so we write the automation scripts with a focus on test data, i.e., data-driven. The framework is written in Python (hence the Selenium-Python combination), which brings:

- Presence of third-party modules
- Extensive support libraries
- User-friendly data structures
- Productivity and speed
- Better package management

Framework Design

Outcome of the Automation Test Suite

The regression suite now takes only 8 hours to complete the test for each browser. It saves the tester's time drastically: the tester spends only about 30 minutes per browser instead of 2-3 days of manual effort.

Conclusion

In this way, we save the tester's time, and the tester can focus on other important tasks, which increases productivity drastically. We also found that our bug-finding rate increased significantly after introducing the automated regression suite (almost double compared to the previous release cycle!). We presented the results, with all data and facts, to our customer. Needless to say, the customer is very happy with this approach!
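To make the data-driven idea concrete, here is a minimal sketch in Python of how a single test scenario can be driven by multiple data sets. The scenario shown (validating RAID level vs. disk count) and all names in it are illustrative assumptions, not our production framework; the stand-in function takes the place of a real Selenium page interaction. The point is the structure: test data is kept separate from test logic, so the same script validates many inputs.

```python
# A minimal, hypothetical sketch of the data-driven pattern: the test logic is
# written once, and each row of test data exercises it independently.
# validate_raid_config stands in for a real Selenium page interaction and is
# an illustrative assumption, not part of the actual framework.

def validate_raid_config(level, disk_count):
    """Stand-in for a GUI check: does the chosen RAID level accept this many disks?"""
    minimum_disks = {"RAID0": 2, "RAID1": 2, "RAID5": 3, "RAID6": 4}
    if level not in minimum_disks:
        return "FAIL: unknown RAID level"
    if disk_count < minimum_disks[level]:
        return "FAIL: not enough disks"
    return "PASS"

# Test data kept separate from test logic -- the essence of data-driven testing.
test_data = [
    ("RAID0", 2),
    ("RAID5", 2),   # expected to fail: RAID5 needs at least 3 disks
    ("RAID6", 5),
]

def run_suite(data):
    """Run the same scenario against every data set and collect the results."""
    return [(level, disks, validate_raid_config(level, disks))
            for level, disks in data]

if __name__ == "__main__":
    for level, disks, result in run_suite(test_data):
        print(f"{level} with {disks} disks -> {result}")
```

In the real framework, the stand-in function would drive the browser through Selenium and read verification checkpoints from the page; only the data table grows as coverage grows.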

Aziro Marketing


How to Set Up a Bootloader for an Embedded Linux Machine

This is a three-part blog series that explains the complete procedure to cross compile:

- Bootloader
- Kernel/OS
- File system

This will be done for an ARM processor based development platform. In short, this blog series explains how to set up an embedded Linux machine that suits your needs.

Development Environment Prerequisites

- A Linux machine running any flavour of Ubuntu, Fedora, or Arch Linux
- An Internet connection

Hardware Needed

1. An ARM based development board. This is very important, as the build process and the cross compiler we choose depend on the type of processor. For this blog series we are using the BeagleBone Black development board, which is based on the ARMv7 architecture.
2. A 4/8 GB micro SD card.
3. A USB to serial adaptor.

Topics Discussed in This Document

- What is a bootloader?
- Das U-Boot — the Universal Boot Loader
- Stages in boot loading
- Downloading the source
- A brief look at the directories and the functionality they provide
- Cross compiling the bootloader for an ARM based target platform
- Setting up the environment variables
- Starting the build
- Micro SD card booting procedure on the BeagleBone Black

What is a Bootloader?

There are many answers to this question, but at their core all of them involve some kind of initialization. In short, it is the piece of software executed as soon as you turn on a hardware device. The hardware device can be anything: a mobile phone, a router, a microwave oven, a smart TV, all the way up to the world's fastest supercomputer. After all, everything has a beginning, right?

The reason there are so many ways to answer this question is that the use case of each device is different, and the bootloader that initializes the device must be chosen carefully. Considerable research and decision-making time is spent on making sure that only the peripherals that are absolutely needed get initialized.
Everyone likes their devices to boot up fast. In embedded systems, the bootloader is a special piece of software whose main purpose is to load the kernel and hand over control to it. To achieve this, it needs to initialize the peripherals required for the device to carry out its intended functionality. In other words, it initializes only the absolutely needed peripherals and then hands over control to the OS, a.k.a. the kernel.

Das U-Boot — the Universal Boot Loader

U-Boot is the most popular bootloader in Linux based embedded devices. It is released as open source under the GNU GPLv2 license. It supports a wide range of microprocessor architectures such as MIPS, ARM, PPC, Blackfin, AVR32, and x86, and even FPGA based Nios platforms. If your hardware design is based on any of these processors and you are looking for a bootloader, the best bet is to try U-Boot first.

U-Boot also supports different methods of booting, which is very much needed in fallback situations. For example, it can boot from USB, SD card, and NOR and NAND flash (non-volatile memory), and it can boot a Linux kernel from the network using TFTP. The list of filesystems supported by U-Boot is huge. So you are covered in every aspect that is needed from a bootloader, and more.

Last but not least, it has a command line interface that gives you easy access to try many different things before finalizing your design. You can configure U-Boot for various boot methods such as MMC, USB, NFS, or NAND, and it even lets you test the physical RAM for issues. It is then up to the designer to pick the devices needed and use U-Boot to best advantage.

Stages in Boot Loading

For starters, U-Boot is both a first-stage and a second-stage bootloader. When U-Boot is compiled, we get two images: a first-stage image (MLO) and a second-stage image (u-boot.img). The first stage is loaded by the system's ROM code (this code resides inside the SoC and is preprogrammed) from a supported boot device.
The ROM code checks the various bootable devices that are available and starts execution from the first device capable of booting. This can be controlled through jumpers, though some resistor based methods also exist. Since each platform is different, it is advisable to look into the platform's datasheet for details.

The stage 1 bootloader is sometimes called the SPL (Secondary Program Loader). The SPL does the initial hardware configuration and loads the rest of U-Boot, i.e., the second-stage loader. Regardless of whether the SPL is used, U-Boot performs both first-stage and second-stage booting.

In the first stage, U-Boot initializes the memory controller and SDRAM. This is needed because the rest of the code's execution depends on it. Depending on the list of devices supported by the platform, it then initializes the rest. For example, if your platform can boot through USB and has no network connectivity, U-Boot can be programmed to do exactly that. If you are planning to use a Linux kernel, then setting up the memory controller is the only mandatory step the kernel expects; if the memory controller is not initialized properly, the Linux kernel won't be able to boot.

Block Diagram of the Target

The above is the block diagram of the AM335x SoC.

Downloading the Source

U-Boot source code is maintained under the Git revision control system, so we can clone the latest source code from the repository:

kasi@kasi-desktop:~/git$ git clone git://git.denx.de/u-boot.git

A Brief Look at the Directories and the Functionality They Provide

arch –> Contains architecture specific code. This is the code that initializes the CPU and board specific peripheral devices.

board –> Source in both the arch and board directories works in tandem to initialize the memory and other devices.

cmd –> Contains code that adds command line support to carry out different activities, depending on the developer's requirement.
For example, command line utilities are provided to erase NAND flash and reprogram it. We will be using similar commands in the next blog.

configs –> Contains the platform level configuration details. This is very much platform specific; the configs are essentially a static mapping with reference to the platform's datasheet.

drivers –> This directory deserves special mention, as it has support for a lot of devices. Each subdirectory under drivers corresponds to a particular device type, a structure followed in accordance with the Linux kernel. For example, network drivers are all collected inside the net directory:

kasi@kasi-desktop:~/git/u-boot$ ls drivers/net/ -l
total 2448
-rw-rw-r-- 1 kasi kasi 62315 Nov 11 15:05 4xx_enet.c
-rw-rw-r-- 1 kasi kasi  6026 Nov 11 15:05 8390.h

This keeps the code from bloating and makes it much easier to navigate and make the needed changes.

fs –> Contains the code that adds filesystem support. As mentioned earlier, U-Boot has rich filesystem support: both read-only filesystems like cramfs and journalling filesystems like JFFS2, which is used on NAND flash based devices.

include –> A very important directory in U-Boot. It contains not only the header files but also the files that define platform specific information such as supported baud rates, starting RAM address, stack size, default command line arguments, etc.

lib –> Contains library files that provide the helper functions used by U-Boot.

net –> Contains support for networking protocols such as ARP, TFTP, Ping, BOOTP, etc.

scripts and tools –> Contain helper scripts to create images and binaries, including scripts to create a patch file (hopefully with some useful fixes) in the correct format if we plan to send it to the development community.

With the source code available and some understanding of the directory structure, let us do what we actually came here to do: create a bootloader.

Since the target board is based on an ARM processor, we need a cross compiler to create binaries that run on that processor. There are many options here. Linaro provides the latest cross toolchain for ARM based processors, and it is very easy to get; for this reason, we have chosen the toolchain provided by Linaro.

Cross Compiling the Bootloader for an ARM Based Target Platform

For cross compiling, we need to download the toolchain from the Linaro website using the link below:

kasi@kasi-desktop:~/git$ wget https://releases.linaro.org/components/toolchain/binaries/latest/arm-linux-gnueabihf/gcc-linaro-6.1.1-2016.08-x86_64_arm-linux-gnueabihf.tar.xz

The toolchain comes as a compressed tar file, which we can extract using:

kasi@kasi-desktop:~/git$ tar xf gcc-linaro-6.1.1-2016.08-x86_64_arm-linux-gnueabihf.tar.xz

Setting Up the Environment Variables

With the prebuilt toolchain in place, we need to set a few environment variables, such as the path of the toolchain, before compiling U-Boot:

export PATH=<workspace>/gcc-linaro-6.1.1-2016.08-x86_64_arm-linux-gnueabihf/bin:$PATH
export CROSS_COMPILE=arm-linux-gnueabihf-
export ARCH=arm

In our case, <workspace> points to /home/kasi/git/, as this is the workspace we are using. The exact commands on our machine are:

kasi@kasi-desktop:~/git/u-boot$ export PATH=/home/kasi/git/gcc-linaro-6.1.1-2016.08-x86_64_arm-linux-gnueabihf/bin:$PATH
kasi@kasi-desktop:~/git/u-boot$ export CROSS_COMPILE=arm-linux-gnueabihf-
kasi@kasi-desktop:~/git/u-boot$ export ARCH=arm

Please double check the above commands so that they suit your workspace.

Config File

With everything set up, it's time to choose the proper config file and start the compilation. The board we are using is the BeagleBone Black, which is based on TI's AM3358 SoC, so we need to look for a similar name in include/configs.
The file that corresponds to this board is "am335x_evm.h". So from the command line we need to execute:

kasi@kasi-desktop:~/git/u-boot$ make am335x_evm_defconfig
  HOSTCC  scripts/basic/fixdep
  HOSTCC  scripts/kconfig/conf.o
  SHIPPED scripts/kconfig/zconf.tab.c
  SHIPPED scripts/kconfig/zconf.lex.c
  SHIPPED scripts/kconfig/zconf.hash.c
  HOSTCC  scripts/kconfig/zconf.tab.o
  HOSTLD  scripts/kconfig/conf
#
# configuration written to .config
#
kasi@kasi-desktop:~/git/u-boot$

A lot happened in the background when the above command was executed; we won't go much deeper into that here, as it could be another blog altogether! We have now created the config file that U-Boot uses in the build process. For those who want to know more, open the ".config" file and inspect it. Modifications can be made directly to the config file, but we shall discuss that later.

Starting the Build

To start the build we give the most used (and abused) command in an embedded Linux programmer's life: make.

kasi@kasi-desktop:~/git/u-boot$ make
scripts/kconfig/conf  --silentoldconfig Kconfig
  CHK     include/config.h
  UPD     include/config.h
  CC      examples/standalone/hello_world.o
  LD      examples/standalone/hello_world
  OBJCOPY examples/standalone/hello_world.srec
  OBJCOPY examples/standalone/hello_world.bin
  LDS     u-boot.lds
  LD      u-boot
  OBJCOPY u-boot-nodtb.bin
./scripts/dtc-version.sh: line 17: dtc: command not found
./scripts/dtc-version.sh: line 18: dtc: command not found
*** Your dtc is too old, please upgrade to dtc 1.4 or newer
Makefile:1383: recipe for target 'checkdtc' failed
make: *** [checkdtc] Error 1
kasi@kasi-desktop:~/git/u-boot$

If you are compiling U-Boot for the first time, there is a chance you may get the above error.
Since the build machine we are using didn't have the device-tree-compiler package installed, we got the above error.

Dependency Installation (If Any)

kasi@kasi-desktop:~/git/u-boot$ sudo apt-cache search dtc
[sudo] password for kasi:
device-tree-compiler - Device Tree Compiler for Flat Device Trees
kasi@kasi-desktop:~/git/u-boot$ sudo apt install device-tree-compiler

Then run make again:

kasi@kasi-desktop:~/git/u-boot$ make
  CHK     include/config/uboot.release
  CHK     include/generated/version_autogenerated.h

A simple ls -l will show the first-stage and second-stage bootloaders:

kasi@kasi-desktop:~/git/u-boot$ ls -l
total 9192
drwxrwxr-x   2 kasi kasi    4096 Nov 11 15:05 api
drwxrwxr-x  18 kasi kasi    4096 Nov 11 15:05 arch
drwxrwxr-x 220 kasi kasi    4096 Nov 11 15:05 board
drwxrwxr-x   3 kasi kasi   12288 Nov 14 13:02 cmd
drwxrwxr-x   5 kasi kasi   12288 Nov 14 13:02 common
-rw-rw-r--   1 kasi kasi    2260 Nov 11 15:05 config.mk
drwxrwxr-x   2 kasi kasi   65536 Nov 11 15:05 configs
drwxrwxr-x   2 kasi kasi    4096 Nov 14 13:02 disk
drwxrwxr-x   8 kasi kasi   12288 Nov 11 15:05 doc
drwxrwxr-x  51 kasi kasi    4096 Nov 14 13:02 drivers
drwxrwxr-x   2 kasi kasi    4096 Nov 14 13:03 dts
drwxrwxr-x   4 kasi kasi    4096 Nov 11 15:05 examples
drwxrwxr-x  12 kasi kasi    4096 Nov 14 13:03 fs
drwxrwxr-x  29 kasi kasi   12288 Nov 11 18:48 include
-rw-rw-r--   1 kasi kasi    1863 Nov 11 15:05 Kbuild
-rw-rw-r--   1 kasi kasi   12416 Nov 11 15:05 Kconfig
drwxrwxr-x  12 kasi kasi    4096 Nov 14 13:03 lib
drwxrwxr-x   2 kasi kasi    4096 Nov 11 15:05 Licenses
-rw-rw-r--   1 kasi kasi   11799 Nov 11 15:05 MAINTAINERS
-rw-rw-r--   1 kasi kasi   54040 Nov 11 15:05 Makefile
-rw-rw-r--   1 kasi kasi   79808 Nov 14 13:03 MLO
-rw-rw-r--   1 kasi kasi   79808 Nov 14 13:03 MLO.byteswap
drwxrwxr-x   2 kasi kasi    4096 Nov 14 13:03 net
drwxrwxr-x   6 kasi kasi    4096 Nov 11 15:05 post
-rw-rw-r--   1 kasi kasi  223974 Nov 11 15:05 README
drwxrwxr-x   5 kasi kasi    4096 Nov 11 15:05 scripts
-rw-rw-r--   1 kasi kasi      17 Nov 11 15:05 snapshot.commit
drwxrwxr-x  12 kasi kasi    4096 Nov 14 13:03 spl
-rw-rw-r--   1 kasi kasi   75282 Nov 14 13:03 System.map
drwxrwxr-x  10 kasi kasi    4096 Nov 14 13:03 test
drwxrwxr-x  15 kasi kasi    4096 Nov 14 13:02 tools
-rwxrwxr-x   1 kasi kasi 3989228 Nov 14 13:03 u-boot
-rw-rw-r--   1 kasi kasi  466702 Nov 14 13:03 u-boot.bin
-rw-rw-r--   1 kasi kasi       0 Nov 14 13:03 u-boot.cfg.configs
-rw-rw-r--   1 kasi kasi   36854 Nov 14 13:03 u-boot.dtb
-rw-rw-r--   1 kasi kasi  466702 Nov 14 13:03 u-boot-dtb.bin
-rw-rw-r--   1 kasi kasi  628808 Nov 14 13:03 u-boot-dtb.img
-rw-rw-r--   1 kasi kasi  628808 Nov 14 13:03 u-boot.img
-rw-rw-r--   1 kasi kasi    1676 Nov 14 13:03 u-boot.lds
-rw-rw-r--   1 kasi kasi  629983 Nov 14 13:03 u-boot.map
-rwxrwxr-x   1 kasi kasi  429848 Nov 14 13:03 u-boot-nodtb.bin
-rwxrwxr-x   1 kasi kasi 1289666 Nov 14 13:03 u-boot.srec
-rw-rw-r--   1 kasi kasi  147767 Nov 14 13:03 u-boot.sym
kasi@kasi-desktop:~/git/u-boot$

MLO is the first-stage bootloader and u-boot.img is the second-stage bootloader. With the bootloaders available, it's time to partition the micro SD card, load these images, and test them on the target.

Partition

We are using an 8 GB micro SD card and "gparted" (a GUI based partition tool) to partition it; gparted makes it much easier to create the filesystems. We created two partitions:

1. FAT16, 80 MB in size, with the boot flag enabled.
2. EXT4, more than 4 GB in size.

The partition sizes are a matter of availability as well as personal choice. One important thing to note is that the FAT16 partition has the boot flag set; this is needed to boot the device from the micro SD card. Please see the image below for a clear picture of the partitions on the micro SD card.

After creating the partitions, remove the card from the build machine and insert it again.
In most modern distros, the partitions on the micro SD card get auto-mounted, which confirms that the partitions were created correctly and lets us cross-verify them.

Copy the Images

Now it's time to copy the built images onto the micro SD card. When the card was inserted into the build machine, it was automatically mounted under /media/kasi/BOOT:

kasi@kasi-desktop:~/git/u-boot$ mount
/dev/sdc2 on /media/kasi/fs type ext4 (rw,nosuid,nodev,relatime,data=ordered,uhelper=udisks2)
/dev/sdc1 on /media/kasi/BOOT type vfat (rw,nosuid,nodev,relatime,uid=1000,gid=1000,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,showexec,utf8,flush,errors=remount-ro,uhelper=udisks2)

We need to copy just the MLO and u-boot.img files into the BOOT partition of the micro SD card:

kasi@kasi-desktop:~/git/u-boot$ cp MLO /media/kasi/BOOT/
kasi@kasi-desktop:~/git/u-boot$ cp u-boot.img /media/kasi/BOOT/

With the above commands we have loaded both the first-stage and the second-stage bootloader onto the bootable micro SD card.

Micro SD Card Booting Procedure on the BeagleBone Black

Since the target board has both eMMC and a micro SD card slot, on power-up it tries to boot from both. To make sure it boots from the micro SD card, keep the button near the micro SD card slot pressed while applying power to the device. This makes the board see the micro SD card first and load the first-stage and second-stage bootloaders we just copied there. The flowchart above shows the booting procedure of the target.

Serial Header

The diagram above shows a close-up of the target's serial port header.
Connect your USB to TTL serial cable (if you are using one) to these pins on the target to see the log below. This is the serial port output while loading the U-Boot we compiled:

U-Boot SPL 2016.11-rc3-00044-g38cacda-dirty (Nov 14 2016 - 13:02:35)
############################
##### AZIRO Technologies ####
#####   We were here     ####
############################
Trying to boot from MMC1
reading uboot.env
** Unable to read "uboot.env" from mmc0:1 **
Using default environment
reading u-boot.img
reading u-boot.img
reading u-boot.img
reading u-boot.img

U-Boot 2016.11-rc3-00044-g38cacda-dirty (Nov 14 2016 - 13:02:35 +0530)

CPU  : AM335X-GP rev 2.0
Model: TI AM335x BeagleBone Black
DRAM:  512 MiB
NAND:  0 MiB
MMC:   OMAP SD/MMC: 0, OMAP SD/MMC: 1
reading uboot.env
** Unable to read "uboot.env" from mmc0:1 **
Using default environment
not set. Validating first E-fuse MAC
Net:   eth0: ethernet@4a100000
Hit any key to stop autoboot:  0
=>
=>

As you can clearly see, this is the U-Boot we compiled and loaded onto the target (check for the string "AZIRO Technologies" in the banner near the top of the output). The first-stage bootloader loads u-boot.img, the second-stage bootloader, into RAM and hands over control to it. As mentioned before, U-Boot also provides a CLI that can be used to set various parameters such as IP addresses and load addresses, which the developer can use for tweaking and testing purposes.

To the second-stage bootloader we need to provide a proper kernel image to load and proceed with the next step of bootstrapping. We shall discuss this in the next blog.
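Checking a captured serial log for the custom banner, as described above, is easy to script. The sketch below is an illustrative Python snippet, not part of the blog's toolchain: it scans a boot log (here an inline sample mirroring the output above) for markers that confirm our build actually ran on the target.

```python
# Illustrative sketch: scan a captured U-Boot serial log for markers that
# confirm our custom build ran. The sample log is an abbreviated inline copy
# of the output shown above; in practice you would read a saved log file.

SAMPLE_LOG = """\
U-Boot SPL 2016.11-rc3-00044-g38cacda-dirty (Nov 14 2016 - 13:02:35)
##### AZIRO Technologies ####
Trying to boot from MMC1
reading u-boot.img
Model: TI AM335x BeagleBone Black
"""

def check_boot_log(log, markers):
    """Return the markers that are missing from the log (empty list = all good)."""
    return [m for m in markers if m not in log]

if __name__ == "__main__":
    expected = ["AZIRO Technologies", "reading u-boot.img", "BeagleBone Black"]
    missing = check_boot_log(SAMPLE_LOG, expected)
    print("boot log OK" if not missing else f"missing markers: {missing}")
```

A check like this is handy once the build is automated: if the banner string disappears from the log, the target booted a stale image.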

Aziro Marketing


How the Disk Access Path Has Evolved in the Last Decade

Abstract

This blog article discusses the evolution of the disk access path, from bygone years to the currently trending Non-Volatile Memory Express (NVMe). Engineers are well aware of the steep latency climb from a few nanoseconds for internal cache hits, to on the order of a hundred nanoseconds for RAM, and eventually all the way up to several milliseconds for mechanical hard disk access. The latency of external disk access was a severe bottleneck that limited overall performance, until recently.

With the advent of solid-state memory architectures like NAND/NOR flash, access times and power requirements were dramatically cut down. This brought even storage onto the well-known Moore's performance curve. Newer SSD hard disks replaced the storage media, moving from traditional rotating magnetic media to solid-state memories, but kept the disk access protocol the same for backward compatibility. Soon the realization dawned that with solid-state storage media, the bottleneck now lay in these traditional disk access protocols.

In this blog article, let us see how computer designs and disk access protocols have evolved over the years to give us today's high-bandwidth, low-latency disk IO path, called NVMe.

Evolution of Computer Design Toward High-Performance, Low-Latency Disk Access

Let us roll back a decade and look at how computers were designed. A computer would contain a CPU with two external chipsets, the Northbridge and the Southbridge (see Figure 1). The Northbridge chipset, also called the Memory Controller Hub (MCH), connects directly to the CPU and provides high-speed access to external memory and the graphics controller. The Southbridge, also called the IO hub, connects all the low-speed IO peripherals.
It was a given that spinning hard disks were low-performance components, so they were connected to the Southbridge.

Figure 1: Computer design with Northbridge/Southbridge chipsets
Figure 2: Anatomy of Disk Access – source: SNIA

But with each generation of CPU, processors grew faster, so any data access that left the CPU hurt performance more and more because of the ever-increasing IO delay. Larger caches helped to an extent, but it soon became obvious that spinning the CPU to higher clock speeds every generation would not deliver the best performance unless the external disk path scaled with CPU performance. The wide gap between processor performance and disk performance is captured in Figure 2.

As the first step toward addressing the high latency of external storage access, the memory controller was integrated directly into the CPU; in other words, the Northbridge chipset was subsumed entirely within the CPU. That removed one bridge from the IO path to external disks. Still, hard disk access latency hurt the overall performance of the CPU. The capacity and data persistence of hard disks cannot be matched by RAM alone, so disks remained critical components that could not be ignored. Figure 3 captures this performance gap.

Figure 3: Disk Access Performance Gap – Source: SNIA
Figure 4: Typical SAS drive access path

The computer industry saw another significant evolution in embracing serial protocols for high-speed interfaces. Storage access protocols went serial (e.g., SAS), and computer buses followed suit (e.g., PCI Express). AHCI standardized ATA disk access, SAS/FC drives took over from parallel SCSI, and serial protocols began to dominate. Each of these protocols brought higher speeds and other networked storage features, but the drives were still mechanical.
All of these storage protocols needed a dedicated host bus adapter (HBA) connected to the CPU's local bus, translating requests from the CPU (over PCI/PCI-X/PCIe) to the storage protocol (SAS/SATA/FC) and back. As one can see in Figure 4, a SAS disk drive could be reached only through a dedicated HBA.

Computer local buses, not to be left behind, also went serial with PCI Express. Although PCI Express is physically different from the earlier parallel PCI/PCI-X designs, its software interfaces remained the same. Southbridge chipsets carried PCI Express, and it saw mass adoption with clear performance benefits. The high point was the integration of PCI Express directly into the CPU, avoiding any external bridge chipset for interfacing to hard disks. With PCI Express becoming the de facto high-speed peripheral interface straight out of the CPU, the bandwidth and performance of external peripherals could finally scale with the CPU.

Another significant technology improvement delivered solid-state disks. Initial designs only carved out a niche market for SSDs. Backward compatibility was an absolute requirement, so these SSDs carried the same disk access protocols as traditional hard disks, such as SAS and SATA. Early SSDs were expensive, with capacities too limited to really challenge traditional hard disks, but with each generation capacity and durability improved, and it became evident that solid-state disks were here to stay. Figure 5 shows a typical SSD behind legacy disk access protocols like SAS/SATA. With the storage media now solid state rather than mechanical, power requirements and latency dropped dramatically.
But this exposed the inefficiencies that existed in the disk access protocol itself.

Figure 5: Legacy IO Path with Flash
Figure 6: PCIe SSD through IOH

Let us pause a moment to understand these inefficiencies. When the CPU performs a disk access, driver software submits requests to the device over PCIe. The requests are carried over PCIe as payloads and reach the HBA, which decodes the payloads and prepares the same request, only this time signaled over another serial storage protocol (e.g., SAS). Eventually the request reaches the disk controller, which performs the operation on the storage media and responds. The response, now initiated in the storage protocol, is received by the HBA and converted back to PCIe to hand it over to the CPU. The role of the HBA in this topology was seriously questioned.

The full potential of solid-state disks had not yet been realized because of these limitations. The industry responded by removing all the intervening protocol conversions: dropping the HBA and the legacy disk access protocols, and interfacing the drives directly over PCIe using proprietary protocols (refer to Figure 6). Fusion-io PCIe SSD drives were one such successful product that changed the performance profile of disks forever.

Finally, everyone could sense the performance available to the CPU, with solid-state flash storage on Moore's curve and disk IO latency in microseconds instead of the traditional milliseconds. This was a moment of reckoning, and standardization had to happen for it to go mainstream. Thus was born NVMe. NVMe did have competitors initially: SCSI Express, and SATA Express, which provided backward compatibility to existing AHCI based SATA disks.
But NVMe did not have to carry any old baggage: its software stack is lean (though it had to be written from scratch), and it became abundantly clear that its advantages far outweighed the additional effort involved. And thus the ever-diverging CPU vs. disk performance curve has been tamed, for now. We can look forward to several more significant innovations in storage, networking, and processor design that try to tame the disk access latency beast completely.

References:
[1] Southbridge (computing), https://en.wikipedia.org/wiki/Southbridge_(computing)
[2] Northbridge (computing), https://en.wikipedia.org/wiki/Northbridge_(computing)
[3] Flash – Plan for the Disruption, SNIA – Advancing Storage and Information Technology
[4] A High Performance Driver Ecosystem for NVM Express
[5] NVM Express – Delivering Breakthrough PCIe SSD Performance and Scalability, Storage Developer Conference, SNIA 2012.
[6] Stephen, Why Are PCIe SSDs So Fast?, http://blog.fosketts.net/2013/06/12/pcie-ssds-fast/
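To put rough numbers on the HBA argument made above, here is a small illustrative Python calculation. The latency figures are ballpark assumptions for each hop, not measurements from this article, but they show why the protocol-conversion overhead in the HBA was tolerable for a spinning disk and painful for a flash device:

```python
# Ballpark, illustrative latencies in microseconds (assumptions, not measurements).
HBA_CONVERSION_US = 20.0   # PCIe <-> SAS/SATA translation, both directions combined
HDD_MEDIA_US = 5000.0      # ~5 ms seek + rotational delay on a mechanical disk
SSD_MEDIA_US = 100.0       # ~100 us NAND flash read

def overhead_share(media_us, hba_us=HBA_CONVERSION_US):
    """Fraction of total access time spent in HBA protocol conversion."""
    return hba_us / (media_us + hba_us)

if __name__ == "__main__":
    # For an HDD the conversion is noise; for an SSD it becomes a major cost.
    print(f"HDD: HBA conversion is {overhead_share(HDD_MEDIA_US):.1%} of access time")
    print(f"SSD: HBA conversion is {overhead_share(SSD_MEDIA_US):.1%} of access time")
```

With these assumed figures the conversion overhead is under one percent of a mechanical access but a double-digit share of a flash access, which is exactly why bypassing the HBA, and eventually standardizing NVMe, paid off.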

Aziro Marketing


4 Emerging Data Storage Technologies to Watch

Many companies are facing the big data problem: floods of data waiting to be sorted, stored, and managed. For large IT corporations such as Google, Apple, Facebook, and Microsoft, data is always on the rise. Today, the entire digital infrastructure of the world holds over 2.7 zettabytes of data, which is over 2.7 billion terabytes. Such magnitudes of data are stored using magnetic recording technologies on high-density hard drives, SAN, NAS, cloud storage, object storage, and the like. How is this achieved? Which magnetic and optical recording technologies are rising in popularity these days? Let's find out.

The oldest magnetic recording technology still in use is perpendicular magnetic recording (PMR), first demonstrated way back in 1976. It is the widespread recording technology behind most of the hard drives available today; be it Western Digital, HGST, or Seagate, the technology used is PMR. PMR can store data at a density of up to about 1 TB per square inch. But the data keeps flowing in relentlessly, and that is why companies are investing in R&D to come up with higher-density hard drives.

1. Shingled Magnetic Recording

Last year, Seagate announced hard disks using a new magnetic recording technology known as SMR (shingled magnetic recording). It achieves about a 25 percent increase in the data stored per square inch of a hard disk, quite a whopping jump. This, according to Seagate, is achieved by overlapping the data tracks on the drive, much like shingles on a roof. By the first quarter of this year, Seagate was shipping SMR hard drives to select customers, some with around 8 TB of storage capacity. Not only Seagate but other companies such as HGST will be offering SMR drives in the next two years.

2. Heat-Assisted Magnetic Recording (HAMR), aka Thermally Assisted Magnetic Recording (TAMR)

To understand HAMR, one must first know about a phenomenon called the superparamagnetic effect. As hard drives become denser, the magnetic grains that store each bit become so small that thermal fluctuations can flip them, corrupting data. To avoid this corruption, the density of hard drives has to be limited: older longitudinal magnetic recording (LMR) devices (tape drives) have a limit of 100 to 200 Gb per square inch, and PMR drives have a limit of about 1 TB per square inch.

In HAMR, a small laser heats up the part of the disk being written, temporarily overcoming the superparamagnetic effect. This allows recording on much smaller scales, increasing disk densities by ten to a hundred times. For a long time, HAMR was considered a theoretical technology, quite difficult, if not impossible, to realize. However, several companies, including Western Digital, TDK, HGST, and Seagate, are now conducting research on HAMR, and demonstrations of working HAMR hard drives have taken place since 2012. By 2016, you may see several HAMR hard drives in the market.

3. Tunnel Magnetoresistance (TMR)

Using tunnel magnetoresistance, hard disk manufacturers can achieve higher densities with greater signal output, providing a higher signal-to-noise ratio (SNR). The technology works closely with ramp load/unload technology, an improvement over the traditional contact start-stop (CSS) approach used with magnetic write heads. Together they provide benefits such as greater disk density, lower power usage, enhanced shock tolerance, and durability. Several companies, including WD and HGST, will be providing storage devices based on these technologies in the coming days.

4. Holographic Data Storage

Holographic data storage technology has existed since as far back as 2002. However, not much research has been done on this desirable data storage technology.
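The density figures quoted in the sections above can be put side by side with a little arithmetic (1 TB/in² = 8,000 Gb/in²; the SMR and HAMR rows apply the article's ~25 percent and 10x claims to the PMR baseline):

```python
# Areal densities in Gb per square inch, derived from the article's numbers.
DENSITIES_GB_PER_SQIN = {
    "LMR (tape era)": 200,               # upper end of 100-200 Gb/in^2
    "PMR (current)": 8000,               # ~1 TB/in^2 = 8000 Gb/in^2
    "SMR (PMR + 25%)": 10000,            # ~25% over PMR
    "HAMR (projected, 10x PMR)": 80000,  # low end of the 10-100x claim
}

def gain_over(base_key, target_key, table=DENSITIES_GB_PER_SQIN):
    """Multiplicative density gain of one technology over another."""
    return table[target_key] / table[base_key]

for name, density in DENSITIES_GB_PER_SQIN.items():
    print(f"{name}: {density} Gb/in^2")
print(f"SMR vs PMR:  {gain_over('PMR (current)', 'SMR (PMR + 25%)'):.2f}x")
print(f"HAMR vs PMR: {gain_over('PMR (current)', 'HAMR (projected, 10x PMR)'):.0f}x")
```

Even at the low end of the HAMR projection, the jump dwarfs the incremental gain SMR squeezes out of existing PMR media.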
In theory, the advantages of holographic data storage are manifold: hundreds of terabytes of data can be stored in a medium as small as a sugar cube; parallel reading makes data access hundreds of times faster; and data can be stored without corruption for many years. However, the technology is far from perfect. In the coming years, you may see quite a bit of research and development in this area, resulting in high-density storage devices.

Conclusion

Gartner reports that over 4.4 million IT jobs will be created by the big data surge by 2015. A huge number of IT professionals and data storage researchers will be working on technologies to improve storage in the coming years. Without enhancing our storage technologies, it will become difficult to improve the gadgets we have today.

Aziro Marketing

