Automation Updates

Uncover our latest and greatest product updates

Data Security and Privacy in the Age of Automation and AI

Hey there, fellow data enthusiasts! In today’s automated world, let’s dive deep into the murky waters of data security and privacy. As we ride the wave of automation and AI, staying afloat amidst the challenges of managing and protecting our precious data is crucial. So, grab your snorkel and explore the trends shaping data management services!

First, let’s address the elephant in the room: data breaches. Yes, the nightmare of every IT professional. With hackers lurking in the shadows like mischievous gremlins, it’s no wonder we’re all a little paranoid about our data’s safety. But fear not! With the rise of automation and AI, we’ve got some nifty tools to fend off those pesky cyber attackers.

Predictive Analytics

In the dynamic landscape of data security, one trend stands out as a beacon of innovation: predictive analytics. Envision your data security infrastructure as a highly sophisticated crystal ball, adept at preemptively identifying and neutralizing potential threats long before they materialize. It’s akin to having a personal data psychic, minus the mystique of crystal balls and eerie background music. Through predictive analytics, we leverage advanced algorithms to analyze intricate patterns and detect subtle anomalies in real-time data streams. This proactive approach lets us maintain a formidable defense posture, staying one step ahead of cyber adversaries and safeguarding critical assets with precision and efficacy.

Data Privacy

Now, let’s delve into the intricate realm of data privacy, akin to safeguarding precious secrets within a fortress in a bustling neighborhood. With stringent regulations like GDPR and CCPA looming over businesses like watchful sentinels, the imperative to shield users’ privacy has never been greater. Enter encryption, the stalwart guardian of data privacy: it encases your data within an impregnable digital fortress, keeping prying eyes from breaching its sanctity. With AI-driven advancements, automated encryption protocols now operate with unprecedented swiftness and efficiency. Rest assured, as your data traverses the digital landscape, it remains ensconced behind multiple layers of virtual locks and keys, impervious to the probing gaze of potential intruders.

Blockchain

Blockchain technology, a disruptive force in data security and privacy, has garnered significant momentum in recent years. This innovative technology isn’t just about cryptocurrencies; it holds the potential to revolutionize data authentication and integrity. Picture blockchain as a digital ledger in which each data transaction is cryptographically sealed, creating an immutable record akin to a digital fingerprint, but with a distinct aura of sophistication. With blockchain, we transcend traditional data security paradigms, fostering an environment where transparency and trust reign supreme. By leveraging its decentralized architecture, we establish a trust network among participants, ensuring that data transactions remain tamper-proof and verifiable. It’s akin to entrusting your data to a diligent guardian, vigilant in its duty to safeguard against any nefarious activity. Furthermore, blockchain isn’t just about fortifying the perimeter; it’s about instilling confidence in the very fabric of our digital interactions. We forge a path toward accountability and authenticity through blockchain’s immutable records, mitigating the risk of data manipulation or unauthorized access.
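To make the ledger idea concrete, here is a minimal sketch of hash chaining using standard shell tools. The records and the pipe-delimited format are illustrative assumptions, not a real blockchain protocol:

#!/usr/bin/env bash
# Each record's hash covers the previous hash, so altering any earlier
# record changes every hash after it; that is the tamper-evidence.
prev="genesis"
for record in "alice->bob:5" "bob->carol:2" "carol->dave:1"; do
  hash=$(printf '%s|%s' "$prev" "$record" | sha256sum | awk '{print $1}')
  echo "record: $record  hash: $hash"
  prev="$hash"
done

Verifying the chain is just recomputing the hashes in order; a real blockchain adds decentralized consensus on top of this chaining so that no single participant can rewrite history unnoticed.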
The Future of Data Security and Privacy

The possibilities are endless as automation and AI become increasingly integrated into our daily lives. As these technologies evolve, they usher in a wave of transformative advancements poised to revolutionize the landscape of data security and privacy. Consider the following technological innovations and their potential impact:

Intelligent Threat Detection Systems: Utilizing advanced machine learning algorithms, these systems analyze vast volumes of data in real time to identify and preemptively mitigate potential security threats.

Self-healing Security Protocols: Leveraging automation, self-healing security protocols autonomously detect and remediate security vulnerabilities and breaches, ensuring continuous protection of data assets.

Blockchain-based Data Integrity: By leveraging blockchain technology, organizations can establish immutable ledgers to store and authenticate data transactions securely, safeguarding against tampering and unauthorized access.

Quantum Encryption: Quantum encryption techniques leverage the principles of quantum mechanics to create cryptographic keys that are theoretically unbreakable, providing an unprecedented level of security for sensitive data.

Zero Trust Architecture: Zero Trust Architecture (ZTA) redefines traditional security paradigms by adopting a “never trust, always verify” approach, ensuring granular access controls and continuous monitoring to prevent unauthorized access.

But amidst all the technological advancements, let’s not forget the human element. After all, we’re the ones behind the keyboards, making the decisions that shape the digital landscape. So, let’s raise a virtual toast to data security and privacy; may we continue to innovate, adapt, and protect our data for years to come.

Conclusion

Navigating the intricacies of data security and privacy amidst the complexities of automation and AI resembles traversing a labyrinthine digital landscape. However, armed with advanced tools, robust strategies, and a steadfast commitment to technical excellence, we can navigate the challenges with precision and confidence. Therefore, let us persist in our endeavors, leveraging encryption and other formidable security measures to fortify our data defenses and emerge triumphant in the face of adversity.

Aziro Marketing


Propel Efficiency to New Heights with Advanced Infrastructure Automation Services

In today’s fast-paced digital landscape, businesses constantly seek ways to increase efficiency, reduce costs, and deliver exceptional customer service. One area that holds immense potential for organizations is infrastructure automation services. A Gartner survey finds that 85% of infrastructure and operations leaders without full automation expect to increase automation within three years.

Gone are the days when manual configuration and IT infrastructure management were the norm. With the advent of automation technologies, businesses can now streamline their operations, improve productivity, and drive operational excellence. This blog post explores how infrastructure automation services can significantly improve an organization’s efficiency while reducing costs.

What is Infrastructure Automation?

Infrastructure automation refers to automating the configuration, deployment, and management of IT infrastructure using software tools and technologies. This approach eliminates manual intervention in day-to-day operations, freeing valuable resources and enabling IT teams to focus on more strategic initiatives.

Infrastructure automation encompasses various aspects, including server provisioning, network configuration, application deployment, and security policy enforcement. These tasks, which traditionally required manual effort and were prone to errors, can now be automated, increasing speed, accuracy, and reliability.

The Benefits of Infrastructure Automation Services

Infrastructure automation services offer numerous benefits to organizations. Gartner predicts that 70% of organizations will implement infrastructure automation by 2025. They enhance operational efficiency, reduce costs by optimizing resource utilization, and enable the scalability and flexibility businesses need to adapt quickly to changing demands. Together, these advantages empower organizations to achieve operational excellence.

1. Enhanced Efficiency

One of the primary benefits of infrastructure automation services is the significant improvement in operational efficiency. By automating repetitive and time-consuming tasks, organizations can accelerate their processes, reduce human error, and achieve faster time-to-market. Whether deploying new servers, configuring network devices, or scaling applications, automation allows for swift and seamless execution, ultimately improving productivity and customer satisfaction.

2. Cost Reduction

Infrastructure automation also offers substantial cost savings. By eliminating manual interventions and optimizing resource utilization, organizations can reduce labor costs and minimize the risk of human error. Moreover, automation enables better capacity planning, ensuring that resources are allocated effectively, preventing over-provisioning, and avoiding unnecessary expenses. Overall, infrastructure automation streamlines operations, reduces downtime, and optimizes costs, resulting in significant financial benefits.

3. Increased Scalability and Flexibility

Scaling IT infrastructure to meet changing demands can be a complex and time-consuming process. With infrastructure automation services, organizations can seamlessly scale their resources up or down based on real-time requirements. Automated provisioning, configuration management, and workload orchestration enable businesses to adapt quickly to fluctuations in demand, ensuring the availability of resources when needed. This scalability and flexibility allow organizations to optimize infrastructure utilization, avoid underutilization, and respond dynamically to evolving business needs, as the sketch below illustrates.
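As a minimal illustration of that scale-up-or-down loop, the shell sketch below reads a crude utilisation figure and decides whether to add or remove capacity. The two helper functions are stubs standing in for a cloud provider or orchestrator API, and the thresholds are illustrative assumptions:

#!/usr/bin/env bash
# Sketch only: in production the two functions below would call a cloud API;
# here they are stubs so the control logic can be read (and run) end to end.
get_utilisation() { awk '{print int($1*100)}' /proc/loadavg; }  # crude CPU proxy
scale_to()        { echo "would scale fleet to $1 instances"; } # stub
UPPER=80; LOWER=20; count=3
util=$(get_utilisation)
if [ "$util" -gt "$UPPER" ]; then
  scale_to $((count + 1))        # scale out under load
elif [ "$util" -lt "$LOWER" ] && [ "$count" -gt 1 ]; then
  scale_to $((count - 1))        # scale in when idle
else
  echo "utilisation ${util}%: no scaling action"
fi

Run on a schedule, this is essentially what managed auto-scaling does, with cooldown timers added so the fleet does not flap between sizes.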
4. Enhanced Security and Compliance

Security and compliance are critical concerns for businesses in today’s digital landscape. Infrastructure automation services are vital in ensuring robust security measures and regulatory compliance. By automating security policies, organizations can enforce consistent security controls across their infrastructure, reducing the risk of vulnerabilities and unauthorized access. Moreover, automation enables regular compliance checks, ensuring adherence to industry standards and regulations and simplifying audit processes.

5. Improved Collaboration and DevOps Practices

Infrastructure automation promotes collaboration and fosters DevOps practices within organizations. By automating tasks, teams can work together seamlessly, share knowledge, and collaborate on delivering high-quality products and services. Automation tools facilitate version control, automated testing, and continuous integration and delivery (CI/CD), enabling faster and more reliable software releases. Integrating development and operations allows for an agile, iterative approach, reducing time-to-market and enhancing customer satisfaction.

Implementing Infrastructure Automation Services

Successfully implementing infrastructure automation services requires a strategic approach combined with a keen understanding of organizational requirements. Here are some key technical considerations to keep in mind:

Assess Current Infrastructure: Evaluate your existing infrastructure landscape to identify opportunities for automation. Determine which components, processes, and workflows can benefit the most from automation, aligning with specific goals and desired outcomes.

Choose the Right Tools: Select automation tools and technologies that align with your organization’s requirements and objectives. Consider tools such as Ansible, Chef, Puppet, and Terraform, which provide robust capabilities for different aspects of infrastructure automation.

Define Automation Workflows: Design and document automation workflows and processes, including provisioning, configuration management, and application deployment. Define standardized templates, scripts, and policies that reflect best practices and align with industry standards (see the drift-check sketch after this list for the core idea).

Test and Validate: Conduct comprehensive testing and validation of your automation workflows to ensure correct operation, security, and compliance. Iterate, refine, and verify automation processes in staging or test environments before rolling them out to production.

Train and Educate: Provide extensive training and education to your IT teams, ensuring they have the knowledge and skills to use automation tools effectively. Encourage cross-functional collaboration and share best practices to maximize the benefits of infrastructure automation across the organization.

Monitor and Optimize: Establish effective monitoring mechanisms to gather data and insights on the performance and efficiency of your automated workflows. Continuously analyze this data to identify bottlenecks, areas for improvement, and optimization opportunities. Iterate and refine your automation processes to drive ongoing operational excellence.
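The core of most configuration-management workflows is an idempotent loop: compare desired state to live state and act only on drift. Below is a minimal shell sketch of that idea; the paths and the service name are illustrative assumptions, and tools like Ansible, Chef, Puppet, and Terraform generalize this pattern across entire fleets:

#!/usr/bin/env bash
# Apply the desired configuration only if the live copy has drifted from it.
desired=/srv/config/myapp.conf   # assumed: rendered from a reviewed template
live=/etc/myapp/myapp.conf       # assumed: what the server currently runs with
if ! diff -q "$desired" "$live" >/dev/null 2>&1; then
  echo "drift detected: reapplying desired configuration"
  install -m 0644 "$desired" "$live"
  systemctl reload myapp         # hypothetical service name
else
  echo "no drift: nothing to do"  # idempotent, so safe to run on every tick
fi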
Embracing Infrastructure Automation

Automation is revolutionizing the way organizations manage their IT infrastructure. By embracing infrastructure automation services, businesses can streamline operations, enhance efficiency, and reduce costs. The benefits of automation are vast, from accelerated deployment and increased scalability to improved security and collaboration. As organizations strive for operational excellence, infrastructure automation services emerge as a crucial enabler. Embrace automation and pave the way for a more efficient and cost-effective future.

Aziro Marketing


What is Chef Automate?

Introduction to Chef Automate

Chef Automate provides a full suite of enterprise capabilities for workflow, node visibility, and compliance. Chef Automate integrates with the open-source products Chef, InSpec, and Habitat, and comes with comprehensive 24×7 support services for the entire platform, including the open source components. These capabilities include the ability to build, deploy, manage, and collaborate across all aspects of software production: infrastructure, applications, and compliance. Each capability represents a set of collective actions and the resulting artifacts.

Collaborate: As software deployment speed increases across your organization, the need for fast, real-time collaboration becomes critical. Different teams may use different tools to accomplish various tasks, so the ability to integrate a variety of third-party products is necessary to support continuous deployment of infrastructure and applications. Chef Automate provides tools for local development and several integration points, including APIs and SDKs, in addition to deployment pipelines that support a common workflow.

Build: Practicing continuous integration and following proper deployment workflows that methodically test all proposed changes help you build code for production use. Packaging code into a reusable artifact ensures that you are testing, approving, and promoting an atomic change that is consistent across multiple environments, preventing configuration drift.

Deploy: Deployment pipelines increase the speed and efficiency of your software deployments by reducing the number of variables and removing the unpredictable nature of manual steps. Deployment pipelines have a specific beginning, a specific end, and a predictable way of working each time, thereby removing complexity, reducing risk, and improving efficiency. Establishing standard workflows that utilize deployment pipelines gives your operations and development teams a common platform.

Manage: With increased speed comes an increased demand to understand the current state of your underlying software automation. Organizations cannot ship software quickly yet poorly and still manage to outperform their competitors. The ability to visualize fleetwide status and to ensure security and compliance requirements act as risk-mitigation techniques, helping you resolve errors quickly and easily. Removing manual processes and checklist requirements means that shifting management capabilities becomes a key component of moving to continuous automation.

OSS Automation Engines: Chef Automate is powered by three open source engines: Chef, Habitat, and InSpec. Chef is the engine for infrastructure automation. Habitat automates modern applications, such as those that run in containers and are composed of microservices. InSpec lets you specify compliance and security requirements as executable code.

Automate Setup Steps

1. You must have an ACC account.
2. Download OpenVPN (https://chef-vpn.chef.co/?src=connect).
3. Download client.ovpn (after logging in via the link above).
4. Install Docker.
5. Install Docker Compose.
6. Install Vagrant.
7. Install VirtualBox.
8. Download and install the ChefDK. This gives you the Delivery CLI tool, which allows you to clone the Workflow project from delivery.shd.chef.co. Remember to log into the VPN to access this site.
9. Add your SSH key. On the Admin page, add your public SSH key (usually found in ~/.ssh/id_rsa.pub) to your account. This will be necessary in a few minutes.
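If you do not have a key pair yet, one can be generated with ssh-keygen before this step; the email comment below is an illustrative placeholder:

ssh-keygen -t rsa -b 4096 -C "you@example.com"   # accept the default path, ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub                            # paste this value on the Admin page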
10. Set up Delivery:

delivery setup --ent=chef --org=products --user=pawasthi --server=automate.chef.co -f master

11. Set up a token:

delivery token --ent=chef --org=products --user=pawasthi --server=automate.chef.co

12. Copy the token from the browser and validate it.

13. Clone automate via Delivery:

delivery clone automate --ent=chef --org=products --user=pawasthi --server=automate.chef.co

14. Go to the automate directory (cd automate), then run `make`.

Note: Before running `make`, add the direnv hook:
1. `apt-get update`
2. `apt-get install direnv`
3. Run `direnv hook bash` and put what it prints in your `~/.bashrc` file.
4. Then `source ~/.bashrc`.

Note for an unhealthy-cluster error: Check that the cluster was created first with `docker-compose ps -a`, then clean the whole project with `make clean` and run `make` again. Try to avoid `sudo` to minimise errors.

Note for ports: If a port is reported as in use, try to release it. For example, run `netstat -tunlp | grep :port`; if this shows a process running on your required port, kill it with `kill -9 process_id`.

Visibility Web UI

Developing for the Visibility UI follows the same pattern as the Workflow UI: a local file-system watcher builds and syncs changes into the visibility_ui container that Nginx redirects to. Before developing, you will need to get the docker-compose environment at the root of this repository running:

cd .. && docker-compose up

The visibility_ui container should exit 0, indicating the JavaScript bundle was built successfully. You can run some operations locally. Make sure your version of Node matches what is defined in .nvmrc. We recommend you use nvm to install Node if you don’t have it already. To install Node, first install nvm:

curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.1/install.sh | bash

Then install Node by going into the /visibility-web directory and running:

nvm install

To ensure that Node is running with the correct version, compare the output of `node -v` to the file /visibility-web/.nvmrc.

make install – installs the Node modules.
make unit – runs the unit tests locally.
make e2e – runs the end-to-end tests in the Docker Compose test environment with the functional test suite in ../test/functional.sh.
make startdev – starts a watch process that rebuilds the bundle whenever you make changes. (Reload the browser to see them.)
make beforecommit – runs TypeScript linting, Sass linting, and the unit tests.

References: https://learn.chef.io/automate/

Aziro Marketing


How to set up a bootloader for an embedded Linux machine

This is a three-part series of blogs explaining the complete procedure to cross compile:

1. the bootloader,
2. the kernel/OS, and
3. the file system

for an ARM processor based development platform. In short, this blog series explains how to set up an embedded Linux machine that suits your needs.

Development environment prerequisites

A Linux machine running any flavour of Ubuntu, Fedora, or Arch Linux.
An Internet connection.

Hardware needed

1. An ARM based development board. This is very important, as the build process and the cross compiler we choose depend on the type of processor. For this blog series we are using the BeagleBone Black development board, which is based on the ARMv7 architecture.
2. A 4/8 GB Micro SD card.
3. A USB to serial adaptor.

Topics discussed in this document

What is a bootloader?
Das U-Boot, the Universal Boot Loader
Stages in boot loading
Downloading the source
A brief look at the directories and the functionality they provide
Cross compiling the bootloader for an ARM based target platform
Setting up the environment variables
Starting the build
The Micro SD card booting procedure on the BeagleBone Black

What is a bootloader?

There are many ways to answer this question, but at the core, all of the answers involve some kind of initialization. In short, this is the piece of software that is executed as soon as you turn on your hardware device. The hardware device can be anything: your mobile phone, router, microwave oven, smart TV, or the world’s fastest supercomputer. After all, everything has a beginning, right? The reason there are so many ways to answer this question is that the use case of each device is different, and we need to choose the bootloader, which initializes the device, carefully. Much research and decision-making time is spent here to make sure that only what is absolutely needed gets initialized. Everyone likes their devices to boot up fast.

In embedded systems, the bootloader is a special piece of software whose main purpose is to load the kernel and hand over control to it. To achieve this, it needs to initialize the peripherals required for the device to carry out its intended functionality. In other words, it initializes only the absolutely necessary peripherals and hands over control to the OS, a.k.a. the kernel.

Das U-Boot, the Universal Boot Loader

U-Boot is the most popular boot loader in Linux based embedded devices. It is released as open source under the GNU GPLv2 license. It supports a wide range of microprocessors, such as MIPS, ARM, PPC, Blackfin, AVR32, and x86. It even supports FPGA based Nios platforms. If your hardware design is based on any of these processors and you are looking for a bootloader, the best bet is to try U-Boot first. It also supports different methods of booting, which is very much needed in fallback situations. For example, it can boot from USB, SD card, and NOR and NAND flash (non-volatile memory). It also supports booting the Linux kernel over the network using TFTP. The list of filesystems supported by U-Boot is huge. So you are covered in all aspects needed from a bootloader, and more. Last but not least, it has a command line interface which gives you very easy access to try many different things before finalizing your design. You can configure U-Boot for various boot methods such as MMC, USB, NFS, or NAND, and it allows you to test the physical RAM for any issues.
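As a small taste of that command line, here are a few commands commonly available at the U-Boot prompt. The exact set depends on how your U-Boot was configured, and the mtest addresses are illustrative for a board with RAM at 0x80000000:

=> printenv                       # show the environment variables
=> mmc rescan                     # re-detect the SD card
=> mtest 0x82000000 0x82100000    # quick RAM test over a small, safe range
=> help                           # list every command this build supports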
Now it’s up to the designer to pick the device he wants and then use U-Boot to his advantage.

Stages in boot loading

For starters, U-Boot is both a first stage and a second stage bootloader. When U-Boot is compiled we get two images: the first stage (MLO) and second stage (u-boot.img) images. U-Boot is loaded by the system’s ROM code (which resides inside the SoC and comes preprogrammed) from a supported boot device. The ROM code checks the various bootable devices that are available and starts execution from the first device that is capable of booting. This can be controlled through jumpers, though some resistor based methods also exist. Since each platform is different, it is advisable to look into the platform’s datasheet for more details.

The stage 1 bootloader is sometimes called the SPL (Secondary Program Loader). The SPL does the initial hardware configuration and loads the rest of U-Boot, i.e. the second stage loader. Regardless of whether the SPL is used, U-Boot performs both first-stage and second-stage booting. In the first stage, U-Boot initializes the memory controller and SDRAM. This is needed as the rest of the code’s execution depends on it. Depending upon the list of devices supported by the platform, it initializes the rest. For example, if your platform can boot through USB and has no support for network connectivity, then U-Boot can be programmed to do exactly that. If you are planning to use the Linux kernel, then setting up the memory controller is the only thing the kernel mandatorily expects: if the memory controller is not initialized properly, the Linux kernel won’t be able to boot.

Block diagram of the target: the AM335x SoC (figure omitted).

Downloading the source

The U-Boot source code is maintained under git revision control, so we can clone the latest source code from the repo:

kasi@kasi-desktop:~/git$ git clone git://git.denx.de/u-boot.git

A brief look at the directories and the functionality they provide

arch –> Contains architecture specific code. This is the code that initializes the CPU and board specific peripheral devices.
board –> Sources in the arch and board directories work in tandem to initialize the memory and other devices.
cmd –> Contains code that adds command line support for different activities, depending on the developer’s requirements. For example, command line utilities are provided to erase NAND flash and reprogram it. We will be using similar commands in the next blog.
configs –> Contains the platform level configuration details. This is very much platform specific; the configs are much like a static mapping with reference to the platform’s datasheet.
drivers –> This directory deserves a special mention, as it has support for a lot of devices. Each subdirectory under the drivers directory corresponds to a particular device type, a structure followed in accordance with the Linux kernel. For example, network drivers are all gathered inside the net directory:

kasi@kasi-desktop:~/git/u-boot$ ls drivers/net/ -l
total 2448
-rw-rw-r-- 1 kasi kasi 62315 Nov 11 15:05 4xx_enet.c
-rw-rw-r-- 1 kasi kasi  6026 Nov 11 15:05 8390.h

This makes sure the code is not bloated, and it is much easier for us to navigate and make the needed changes.
fs –> Contains the code that adds support for filesystems. As mentioned earlier, U-Boot has rich filesystem support.
It supports both read-only file systems like cramfs and journalling file systems like jffs2, which is used on NAND flash based devices.
include –> A very important directory in U-Boot. It contains not only the header files but also the files that define platform specific information such as supported baud rates, the starting RAM address, the stack size, and default command line arguments.
lib –> Contains support for library files. They provide the helper functions used by U-Boot.
net –> Contains support for networking protocols such as ARP, TFTP, Ping, and Bootp.
scripts and tools –> Contain helper scripts to create images and binaries, including scripts to create a patch file (hopefully with some useful fixes) in the correct format if we are planning to send it to the development community.

With the source code available and some understanding of the directory structure, let us do what we actually want to do, i.e. create a bootloader. Since the target board we are using is based on an ARM processor, we need a cross compiler that helps us create binaries to run on that processor. There are a lot of options for this. Linaro provides the latest cross toolchain for ARM based processors, and it is very easy to get. For this reason, we have chosen the cross toolchain provided by Linaro.

Cross compiling the bootloader for an ARM based target platform

For cross compiling, we need to download the toolchain from the Linaro website:

kasi@kasi-desktop:~/git$ wget https://releases.linaro.org/components/toolchain/binaries/latest/arm-linux-gnueabihf/gcc-linaro-6.1.1-2016.08-x86_64_arm-linux-gnueabihf.tar.xz

The toolchain comes compressed as a tar file, and we can unpack it using the command below:

kasi@kasi-desktop:~/git$ tar xf gcc-linaro-6.1.1-2016.08-x86_64_arm-linux-gnueabihf.tar.xz

Setting up the environment variables

With the prebuilt toolchain in place, we need to set up a few environment variables, such as the path of the toolchain, before proceeding to compile U-Boot. Below are the shell commands to issue:

export PATH=<path-to-toolchain>/gcc-linaro-6.1.1-2016.08-x86_64_arm-linux-gnueabihf/bin:$PATH
export CROSS_COMPILE=arm-linux-gnueabihf-
export ARCH=arm

In our case the workspace is /home/kasi/git/, so the exact commands on our machine are:

kasi@kasi-desktop:~/git/u-boot$ export PATH=/home/kasi/git/gcc-linaro-6.1.1-2016.08-x86_64_arm-linux-gnueabihf/bin:$PATH
kasi@kasi-desktop:~/git/u-boot$ export CROSS_COMPILE=arm-linux-gnueabihf-
kasi@kasi-desktop:~/git/u-boot$ export ARCH=arm

Please double check the above commands so that they suit your workspace.

Config file

With everything set up, it’s time to choose the proper config file and start the compilation. The board we are using is the BeagleBone Black, which is based on TI’s AM3358 SoC, so we need to look for a similar name in include/configs. The file that corresponds to this board is “am335x_evm.h”. So from the command line we need to execute:

kasi@kasi-desktop:~/git/u-boot$ make am335x_evm_defconfig
  HOSTCC  scripts/basic/fixdep
  HOSTCC  scripts/kconfig/conf.o
  SHIPPED scripts/kconfig/zconf.tab.c
  SHIPPED scripts/kconfig/zconf.lex.c
  SHIPPED scripts/kconfig/zconf.hash.c
  HOSTCC  scripts/kconfig/zconf.tab.o
  HOSTLD  scripts/kconfig/conf
#
# configuration written to .config
#
kasi@kasi-desktop:~/git/u-boot$

There are a lot of things that happened in the background when the above command was executed.
We don’t want to go much deeper into that, as that could be another blog altogether! We have created the config file used by U-Boot in the build process. For those who want to know more, please open the “.config” file and check it. Modifications can be made to the config file directly, but we shall discuss that later.

Starting the build

To start the build, we need to give the most used/abused command in the embedded Linux programmer’s life: make.

kasi@kasi-desktop:~/git/u-boot$ make
scripts/kconfig/conf  --silentoldconfig Kconfig
  CHK     include/config.h
  UPD     include/config.h
  CC      examples/standalone/hello_world.o
  LD      examples/standalone/hello_world
  OBJCOPY examples/standalone/hello_world.srec
  OBJCOPY examples/standalone/hello_world.bin
  LDS     u-boot.lds
  LD      u-boot
  OBJCOPY u-boot-nodtb.bin
./scripts/dtc-version.sh: line 17: dtc: command not found
./scripts/dtc-version.sh: line 18: dtc: command not found
*** Your dtc is too old, please upgrade to dtc 1.4 or newer
Makefile:1383: recipe for target 'checkdtc' failed
make: *** [checkdtc] Error 1
kasi@kasi-desktop:~/git/u-boot$

If you are compiling U-Boot for the first time, there is a chance you may hit the above error. Our build machine didn’t have the device-tree-compiler package installed, hence the failure.

Dependency installation (if any)

kasi@kasi-desktop:~/git/u-boot$ sudo apt-cache search dtc
[sudo] password for kasi:
device-tree-compiler - Device Tree Compiler for Flat Device Trees
kasi@kasi-desktop:~/git/u-boot$ sudo apt install device-tree-compiler

Run make again:

kasi@kasi-desktop:~/git/u-boot$ make
  CHK     include/config/uboot.release
  CHK     include/generated/version_autogenerated.h

A simple ls -l will show the first stage bootloader and the second stage bootloader:

kasi@kasi-desktop:~/git/u-boot$ ls -l
total 9192
drwxrwxr-x   2 kasi kasi    4096 Nov 11 15:05 api
drwxrwxr-x  18 kasi kasi    4096 Nov 11 15:05 arch
drwxrwxr-x 220 kasi kasi    4096 Nov 11 15:05 board
drwxrwxr-x   3 kasi kasi   12288 Nov 14 13:02 cmd
drwxrwxr-x   5 kasi kasi   12288 Nov 14 13:02 common
-rw-rw-r--   1 kasi kasi    2260 Nov 11 15:05 config.mk
drwxrwxr-x   2 kasi kasi   65536 Nov 11 15:05 configs
drwxrwxr-x   2 kasi kasi    4096 Nov 14 13:02 disk
drwxrwxr-x   8 kasi kasi   12288 Nov 11 15:05 doc
drwxrwxr-x  51 kasi kasi    4096 Nov 14 13:02 drivers
drwxrwxr-x   2 kasi kasi    4096 Nov 14 13:03 dts
drwxrwxr-x   4 kasi kasi    4096 Nov 11 15:05 examples
drwxrwxr-x  12 kasi kasi    4096 Nov 14 13:03 fs
drwxrwxr-x  29 kasi kasi   12288 Nov 11 18:48 include
-rw-rw-r--   1 kasi kasi    1863 Nov 11 15:05 Kbuild
-rw-rw-r--   1 kasi kasi   12416 Nov 11 15:05 Kconfig
drwxrwxr-x  12 kasi kasi    4096 Nov 14 13:03 lib
drwxrwxr-x   2 kasi kasi    4096 Nov 11 15:05 Licenses
-rw-rw-r--   1 kasi kasi   11799 Nov 11 15:05 MAINTAINERS
-rw-rw-r--   1 kasi kasi   54040 Nov 11 15:05 Makefile
-rw-rw-r--   1 kasi kasi   79808 Nov 14 13:03 MLO
-rw-rw-r--   1 kasi kasi   79808 Nov 14 13:03 MLO.byteswap
drwxrwxr-x   2 kasi kasi    4096 Nov 14 13:03 net
drwxrwxr-x   6 kasi kasi    4096 Nov 11 15:05 post
-rw-rw-r--   1 kasi kasi  223974 Nov 11 15:05 README
drwxrwxr-x   5 kasi kasi    4096 Nov 11 15:05 scripts
-rw-rw-r--   1 kasi kasi      17 Nov 11 15:05 snapshot.commit
drwxrwxr-x  12 kasi kasi    4096 Nov 14 13:03 spl
-rw-rw-r--   1 kasi kasi   75282 Nov 14 13:03 System.map
drwxrwxr-x  10 kasi kasi    4096 Nov 14 13:03 test
drwxrwxr-x  15 kasi kasi    4096 Nov 14 13:02 tools
-rwxrwxr-x   1 kasi kasi 3989228 Nov 14 13:03 u-boot
-rw-rw-r--   1 kasi kasi  466702 Nov 14 13:03 u-boot.bin
-rw-rw-r--   1 kasi kasi       0 Nov 14 13:03 u-boot.cfg.configs
-rw-rw-r--   1 kasi kasi   36854 Nov 14 13:03 u-boot.dtb
-rw-rw-r--   1 kasi kasi  466702 Nov 14 13:03 u-boot-dtb.bin
-rw-rw-r--   1 kasi kasi  628808 Nov 14 13:03 u-boot-dtb.img
-rw-rw-r--   1 kasi kasi  628808 Nov 14 13:03 u-boot.img
-rw-rw-r--   1 kasi kasi    1676 Nov 14 13:03 u-boot.lds
-rw-rw-r--   1 kasi kasi  629983 Nov 14 13:03 u-boot.map
-rwxrwxr-x   1 kasi kasi  429848 Nov 14 13:03 u-boot-nodtb.bin
-rwxrwxr-x   1 kasi kasi 1289666 Nov 14 13:03 u-boot.srec
-rw-rw-r--   1 kasi kasi  147767 Nov 14 13:03 u-boot.sym
kasi@kasi-desktop:~/git/u-boot$

MLO is the first stage bootloader and u-boot.img is the second stage bootloader.
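Optionally, a quick sanity check on the freshly built artifacts can catch an incomplete build early; the file utility’s wording varies by distribution, but both files should be identified and non-empty:

file MLO u-boot.img    # cross-check the sizes against the ls -l output above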
With the bootloader available, it’s time to partition the Micro SD card, load these images, and test them on the target.

Partition

We are using an 8 GB Micro SD card and “gparted” (a GUI based partitioning tool) to partition it; gparted is the easier way to create the partitions and filesystems. We have created two partitions:

1. FAT16, 80 MB in size, with the boot flag enabled.
2. EXT4, more than 4 GB in size.

Choosing the partition sizes is a matter of availability as well as personal choice. One important thing to note here is that the FAT16 partition has the boot flag set; this is needed for us to boot the device from the Micro SD card. After creating the partitions, remove the card from the build machine and insert it again. On most modern distros the partitions on the Micro SD card get auto-mounted, which confirms that the partitions were created correctly and helps us cross-verify them.
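If you prefer the command line to gparted, a roughly equivalent layout can be created with parted and the mkfs tools. This is a sketch; /dev/sdX is a placeholder for your card’s device node, so double check it with lsblk first, because the wrong device will be wiped:

sudo parted /dev/sdX --script \
  mklabel msdos \
  mkpart primary fat16 1MiB 81MiB \
  set 1 boot on \
  mkpart primary ext4 81MiB 100%
sudo mkfs.vfat -F 16 -n BOOT /dev/sdX1   # 80 MB boot partition, boot flag set
sudo mkfs.ext4 -L fs /dev/sdX2           # remainder for the root filesystem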
Copy the images

Now it’s time to copy the built images onto the Micro SD card. When the Micro SD card was inserted into the build machine, it was automatically mounted under /media/kasi/:

kasi@kasi-desktop:~/git/u-boot$ mount
/dev/sdc2 on /media/kasi/fs type ext4 (rw,nosuid,nodev,relatime,data=ordered,uhelper=udisks2)
/dev/sdc1 on /media/kasi/BOOT type vfat (rw,nosuid,nodev,relatime,uid=1000,gid=1000,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,showexec,utf8,flush,errors=remount-ro,uhelper=udisks2)

We need to copy just the MLO and u-boot.img files into the BOOT partition of the Micro SD card:

kasi@kasi-desktop:~/git/u-boot$ cp MLO /media/kasi/BOOT/
kasi@kasi-desktop:~/git/u-boot$ cp u-boot.img /media/kasi/BOOT/

With the above commands we have loaded both the first stage and the second stage bootloader onto the bootable Micro SD card.

Micro SD card booting procedure on the BeagleBone Black

Since the target board has both eMMC and a Micro SD card slot, on power-up it tries to boot from both places. To make sure it boots from the Micro SD card, keep the button near the Micro SD card slot pressed while applying power to the device. This makes sure the board sees the Micro SD card first and loads the first stage and second stage bootloaders we just copied there.

Flowchart: the booting procedure of the target (figure omitted).

Serial header

Close-up of the serial port header details of the target (figure omitted). Connect your USB to TTL serial cable (if you are using one) to these pins on the target to see the log below. This is the output from the serial port while loading the U-Boot we compiled:

U-Boot SPL 2016.11-rc3-00044-g38cacda-dirty (Nov 14 2016 - 13:02:35)
############################
##### AZIRO Technologies ####
#####   We were here     ####
############################
Trying to boot from MMC1
reading uboot.env
** Unable to read "uboot.env" from mmc0:1 **
Using default environment
reading u-boot.img
reading u-boot.img
reading u-boot.img
reading u-boot.img

U-Boot 2016.11-rc3-00044-g38cacda-dirty (Nov 14 2016 - 13:02:35 +0530)

CPU  : AM335X-GP rev 2.0
Model: TI AM335x BeagleBone Black
DRAM:  512 MiB
NAND:  0 MiB
MMC:   OMAP SD/MMC: 0, OMAP SD/MMC: 1
reading uboot.env
** Unable to read "uboot.env" from mmc0:1 **
Using default environment
 not set. Validating first E-fuse MAC
Net:   eth0: ethernet@4a100000
Hit any key to stop autoboot:  0
=>
=>

As you can clearly see, this is the U-Boot we compiled and loaded onto the target (check for the string “AZIRO Technologies” in the banner near the top of the log). The first stage bootloader checks for and loads u-boot.img into RAM and hands over control to it, the second stage bootloader. As mentioned before, U-Boot also provides a CLI that can be used to set up various parameters, such as IP addresses and load addresses, and much more that the developer can use for tweaking and testing. To the second stage bootloader we need to provide a proper kernel image to load, which is the next step of bootstrapping. We shall discuss this in the next blog.

Aziro Marketing
