DevOps Updates

Uncover our latest and greatest product updates
blogImage

Building Package Using Omnibus

Omnibus is a tool for creating full-stack installers for multiple platforms. In general, it simplifies the installation of any software by including all of the dependencies for that piece of software. It was written by the people at Chef, who use it to package Chef. Omnibus consists of two pieces- omnibus and omnibus software. omnibus – the framework, created by Chef Software, by which we create full-stack, cross-platform installers for software. The project is on GitHub at chef/omnibus. omnibus-software – Chef Software’s open source collection of software definitions that are used to build the Chef Client, the Chef Server, and other Chef Software products. The software definitions can be found on GitHub at chef/omnibus-software. Omnibus provides both, a DSL for defining Omnibus projects for your software, as well as a command-line tool for generating installer artifacts from that definition. Omnibus has minimal prerequisites. It requires Ruby 2.0.0+ and Bundler. Getting Started To get started install omnibus > gem install omnibus You can now create an omnibus project inside your current directory using project generator feature > omnibus new demo This will generate a complete project skeleton in the directory as following: create omnibus-demo/Gemfile create omnibus-demo/.gitignore create omnibus-demo/README.md create omnibus-demo/omnibus.rb create omnibus-demo/config/projects/demo.rb create omnibus-demo/config/software/demo-zlib.rb create omnibus-demo/.kitchen.local.yml create omnibus-demo/.kitchen.yml create omnibus-demo/Berksfile create omnibus-demo/package-scripts/demo/preinst chmod omnibus-demo/package-scripts/demo/preinst create omnibus-demo/package-scripts/demo/prerm chmod omnibus-demo/package-scripts/demo/prerm create omnibus-demo/package-scripts/demo/postinst chmod omnibus-demo/package-scripts/demo/postinst create omnibus-demo/package-scripts/demo/postrm chmod omnibus-demo/package-scripts/demo/postrm It creates the omnibus-demo directory inside your current directory and this directory has all omnibus package build related files. It is easy to build an empty project without doing any change run > bundle install --binstubs bundle install installs all Omnibus dependencies bundle install installs all Omnibus dependencies The above command will create the installer inside pkg directory. Omnibus determines the platform for which to build an installer based on the platform it is currently running on. That is, you can only generate a .deb file on a Debian-based system. To alleviate this caveat, the generated project includes a Test Kitchen setup suitable for generating a series of Omnibus projects. Back to the Omnibus DSL. Though bin/omnibus build demo will build the package for you, it will not do anything exciting. For that, you need to use the Omnibus DSL to define the specifics of your application. 1) Config If present, Omnibus will use a top-level configuration file name omnibus.rb at the root of your repository. This file is loaded at runtime and includes number of configurations. For e.g.- omnibus.rb # Build locally (instead of /var) # ------------------------------- base_dir './local' # Disable git caching # ------------------------------ use_git_caching false # Enable S3 asset caching # ------------------------------ use_s3_caching true s3_access_key ENV['S3_ACCESS_KEY'] s3_secret_key ENV['S3_SECRET_KEY'] s3_bucket ENV['S3_BUCKET'] Please see config doc for more information. 
You can use different configuration file by using –config option using command line $ bin/omnibus --config /path/to/config.rb 2) Project DSL When you create an omnibus project, it creates a project DSL file inside config/project with the name which you used for creating project for above example it will create config/project/demo.rb. It provides means to define the dependencies of the project and metadata of the project. We will look at some contents of project DSL file name "demo" maintainer "YOUR NAME" homepage "http://yoursite.com" install_dir "/opt/demo" build_version "0.1.0" # Creates required build directories dependency "preparation" # demo dependencies/components dependency "harvester" ‘install_dir’ option is the location where package will be installed. There are more DSL methods available which you can use in this file. Please see the Project Doc for more information. 3) Software DSL Software DSL defines individual software components that go into making your overall package. The Software DSL provides a way to define where to retrieve the software sources, how to build them, and what dependencies they have. Now let’s edit a config/software/demo.rb name "demo" default_version "1.0.0" dependency "ruby" dependency "rubygems" build do #vendor the gems required by the app bundle “install –path vendor/bundle” end In the above example, consider that we are building a package for Ruby on Rails application, hence we need to include ruby and rubygems dependency. The definition for ruby and rubygems dependency comes from the omnibus-software. Chef has introduced omnibus-software, it is a collection of software definitions used by chef while building their products. To use omnibus-software definitions you need to include the repo path in Gemfile. You can also write your own software definitions. Inside build block you can define how to build your installer. Omnibus provide Build DSL which you can use inside build block to define your build essentials. You can run ruby script and copy and delete files using Build DSL inside build block. Apart from all these DSL file omnibus also created ‘package-script’ directory which consist of pre install and post install script files. You can write a script which you want to run before and after the installation of package and also before and after the removal of the package inside these files. You can use the following references for more examples https://github.com/chef/omnibus https://www.chef.io/blog/2014/06/30/omnibus-a-look-forward/

Aziro Marketing

blogImage

Making DevOps Sensible with Assembly Lines

DevOps heralded an era of cutting edge practices in software development and delivery via Continuous Integration (CI) Pipelines. CI made DevOps an epitome of software development and automation, entailing the finest agile methodologies. But, the need for quicker development, testing, and deployment is a never-ending process. This need is pushing back the CI and creating a space for a sharper automation practice, which runs beyond the usual bits and pieces automation. This concept is known as DevOps Assembly Lines.Borrowing inspiration from Automobile IndustryThe concept of assembly lines was first started at Ford Plant in the early 20th century – the idea improved continuously and today is powered via automation. Initially, the parts of the automobiles were manufactured and assembled manually. This was followed by automation in manufacturing, while the assembly was manual. So, there were gaps to be addressed for efficiency, workflow optimization, and speed. The gaps were addressed by automating the assembly of parts. Something similar is happening in the SDLC via DevOps Assembly Lines.Organizations that implement advanced practices of DevOps follow a standardized and methodological process throughout the teams. As a result, these organization experiences fast-flowing CI pipelines, rapid delivery, and top quality.A silo approach that blurs transparencyFollowing the DevOps scheme empowers employees to deliver their tasks efficiently and contribute to the desired output of their team. Many such teams within a software development process are leveraging automation principles. The only concern is that this teamwork is in silos hindering overall visibility into other teams’ productivity, performance, and quality. Therefore, the end product falls shorts of desired expectations – often leaving teams perplexed and demotivated. This difference in DevOps maturity within different teams in a software development environment calls for a uniform Assembly Line.Assembly Lines – triggering de-silo of fragmented teamsCI pipelines consist of a host of automated activities that are relevant to individual stages in the software lifecycle. Which means there are a number of CI pipelines operating simultaneously; but, it is fragmented within SDLC. Assembly Lines is an automated conflation of such CI pipelines towards accelerating a software product’s development and deployment time. DevOps Assembly Line automates activities like continuous integration in the production environment, configuration management and server patching for infrastructure managers, reusable automation scripts in the testing environment, and code as monitoring scripts for security purposes.Bridging the gap between workflows, tools and platformsDevOps Assembly Lines creates a perfect bridge, finely binding standalone workflows, and automated tools and platforms. This way, it establishes a smoothly integrated chain of deployment pipeline optimized for the efficient delivery of software products. The good part is it creates an island of connected and automated tools and platforms; these platforms belong to different vendors and are that gel together easily. Assembly Lines eliminates the gap between manual and automated tasks. It brings QAs, developers, operations teams, SecOps, release management teams, etc. 
on a single plane to enable a streamlined and uni-directional strategy for product delivery.Managed platform as a service approach for managementDevOps Assembly Lines exhibits an interconnected web of multiple CI pipelines, which entail numerous automated workflows. This makes the management of Assembly Lines a bit tricky. Therefore, Organizations can leverage a managed services portal that streamlines all the activities across the DevOps Assembly Lines.Installing a DevOps platform will centralize the activities of Assembly Lines and streamline a host of workflows. It will offer a unified experience to multiple DevOps teams and also help operate a low cost and fast-paced Assembly Lines. A DevOps platform would also entail different tools from multiple vendors that could work in tandem.The whole idea behind installing Assembly Lines is to establish a collaborative auto-mode within diverse activities of SDLC. A centralized, on-demand platform could help get started with pre-integrated tools, that could manage automated deployment.A team of operators, either in-house or via a support partner, could handle this platform. This way, there will be smooth functioning across groups, and on-demand requests for any issues that could be addressed immediately. The platform will invariably help DevOps architects to concentrate on productive parts – while maintenance is taken care of behind the scenes. Further, it would allow teams to look beyond their core activities (a key goal of Assembly Lines) and absorb the status of overall team productivity. The transparency will give them an idea of existing hindrances, performances, productivity, and expected quality. In accordance, they could take corrective measures.Future AheadCI pipelines are helpful for rapid product development and deployment. But, considering the graph of rising expectation in quality and feature enablement and considering the time-to-market requirement, the CI pipelines do not fit the bill. Further, the issue of configuration management is too complicated for CI pipelines to handle. Therefore, the next logical step is to embrace DevOps Assembly Lines. And the importance of a centralized management platform to drive consistency, scalability, and transparency via Assembly Lines should not be undermined.

Aziro Marketing

blogImage

Chef Knife Plugin for Windows Azure (IAAS)

Chef is an open-source systems management and cloud infrastructure automation framework created by Opscode. It helps in managing your IT infrastructure and applications as code. It gives you a way to automate your infrastructure and processes. Knife is a CLI to create, update, search and delete the entities or manage actions on entities in your infrastructure like node (hosts), cloud resources, metadata (roles, environments) and code for infrastructure (recipes, cookbooks), etc. A Knife plug-in is a set of one (or more) subcommands that can be added to Knife to support additional functionality that is not built-in to the base set of Knife subcommands. The knife azure is a knife plugin which helps you automate virtual machine provisioning in Windows Azure and bootstrapping it. This article talks about using Chef and knife-azure plugin to provision Windows/Linux virtual machines in Windows Azure and bootstrapping the virtual machine. Understanding Windows Azure (IaaS): To deploy a Virtual Machine in a region (or service location) in Azure, all the components shown described above have to be created; A Virtual Machine is associated with a DNS (or cloud service). Multiple Virtual Machines can be associated with a single DNS with load-balancing enabled on certain ports (eg. 80, 443 etc). A Virtual Machine has a storage account associated with it which storages OS and Data disks A X509 certificate is required for password-less SSH authentication on Linux VMs and HTTPS-based WinRM authentication for Windows VMs. A service location is a geographic region in which to create the VMs, Storage accounts etc The Storage Account The storage account holds all the disks (OS as well as data). It is recommended that you create a storage account in a region and use it for the VMs in that region. If you provide the option –azure-storage-account, knife-azure plugin creates a new storage account with that name if it doesnt already exist. It uses this storage account to create your VM. If you do not specify the option, then the plugin checks for an existing storage account in the service location you have mentioned (using option –service-location). If no storage account exists in your location, then it creates a new storage with name prefixed with the azure-dns-name and suffixed with a 10 char random string. Azure Virtual Machine This is also called as Role(specified using option –azure-vm-name). If you do not specify the VM name, the default VM name is taken from the DNS name( specified using option –azure-dns-name). The VM name should be unique within a deployment. An Azure VM is analogous to the Amazon EC2 instance. Like an instance in Amazon is created from an AMI, you can create an Azure VM from the stock images provided by Azure. You can also create your own images and save them against your subscription. Azure DNS This is also called as Hosted Service or Cloud Service. It is a container for your application deployments in Azure( specified using option –azure-dns-name). A cloud service is created for each azure deployment. You can have multiple VMs(Roles) within a deployment with certain ports configured as load-balanced. OS Disk A disk is a VHD that you can boot and mount as a running version of an operating system. After an image is provisioned, it becomes a disk. A disk is always created when you use an image to create a virtual machine. Any VHD that is attached to virtualized hardware and that is running as part of a service is a disk. 
An existing OS Disk can be used (specified using option –azure-os-disk-name ) to create a VM as well. Certificates For SSH login without password, an X509 Certificate needs to be uploaded to the Azure DNS/Hosted service. As an end user, simply specify your private RSA key using –identity-file option and the knife plugin takes care of generating a X509 certificate. The virtual machine which is spawned then contains the required SSH thumbprint. I am text block. Click edit button to change this text. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Ut elit tellus, luctus nec ullamcorper mattis, pulvinar dapibus leo. Gem Install Run the command gem install knife-azure Install from Source Code To get the latest changes in the knife azure plugin, download the source code, build and install the plugin: 1. Uninstall any existing versions $ gem uninstall knife-azure Successfully uninstalled knife-azure-1.2.0 2. Clone the git repo and build the code $ git clone https://github.com/opscode/knife-azure $ cd knife-azure $ gem build knife-azure.gemspec WARNING: description and summary are identical Successfully built RubyGem Name: knife-azure Version: 1.2.0 File: knife-azure-1.2.0.gem 3. Install the gem $ gem install knife-azure-1.2.0.gem Successfully installed knife-azure-1.2.0 1 gem installed Installing ri documentation for knife-azure-1.2.0... Building YARD (yri) index for knife-azure-1.2.0... Installing RDoc documentation for knife-azure-1.2.0... 4. Verify your installation $ gem list | grep azure knife-azure (1.2.0) To provision a VM in Windows Azure and bootstrap using knife, Firstly, create a new windows azure account: at this link and secondly, download the publish settings file fromhttps://manage.windowsazure.com/publishsettings The publish settings file contains certificates used to sign all the HTTP requests (REST APIs). Azure supports two modes to create virtual machines – quick create and advanced. Azure VM Quick Create You can create a server with minimal configuration. On the Azure Management Portal, this corresponds to the “Quick Create – Virtual Machine” workflow. The corresponding sample command for quick create for a small Windows instance is: knife azure server create --azure-publish-settings-file '/path/to/your/cert.publishsettingsfile' --azure-dns-name 'myservice' --azure-source-image 'windows-image-name' --winrm-password 'jetstream@123' --template-file 'windows-chef-client-msi.erb' --azure-service-location "West US" Azure VM Advanced Create You can set various other options in the advanced create including service location or region, storage-account, VM name etc. 
The corresponding command to create a Linux instance with advanced options is: knife azure server create --azure-publish-settings-file "path/to/your/publish/settings/file" --azure-vm-size Medium --azure-dns-name "HelloAzureDNS" --azure-service-location "West US" --azure-vm-name 'myvm01' --azure-source-image "b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-13_04-amd64-server-20130423-en-us-30GB" --azure-storage-account "helloazurestorage1" --ssh-user "helloazure" --identity-file "path/to/your/rsa/pvt/key" To create a VM and connect it to an existing DNS/service, you can use a command as below: knife azure server create --azure-publish-settings-file "path/to/your/publish/settings/file" --azure-connect-to-existing-dns --azure-dns-name 'myservice' --azure-vm-name 'myvm02' --azure-service-location 'West US' --azure-source-image 'source-image-name' --ssh-user 'jetstream' --ssh-password 'jetstream@123' List available Images: knife azure image list List currently available Virtual Machines: knife azure server list Delete and Clean up a Virtual Machine: knife azure server delete --azure-dns-name myvm02 'myservice' --chef-node-name 'myvm02' --purge This post is meant to explain the basics and usage for knife-azure.

Aziro Marketing

blogImage

Jenkins vs. GitLab vs. CircleCI – Which CI/CD Tool Is Right for You?

In today’s fast-evolving software development landscape, delivering high-quality applications quickly and efficiently is a top priority for development teams. The traditional approach of manually building, testing, and deploying code is no longer viable for teams aiming for agility, scalability, and reliability. This is where CI/CD tools come into play, automating critical aspects of the software development lifecycle (SDLC) to ensure faster releases, reduced errors, and seamless collaboration. Top CI/CD Tools: Jenkins, GitLab CI/CD, and CircleCIAmong the vast array of CI/CD tools, Jenkins, GitLab CI/CD, and CircleCI emerge as some of the most popular choices, each catering to different needs and preferences. Jenkins, a widely adopted open-source automation server, is known for its extensive plugin ecosystem and deep customization options. GitLab CI/CD, integrated seamlessly into GitLab’s DevOps platform, offers an all-in-one solution for teams leveraging version control systems. Meanwhile, CircleCI, a cloud-native platform, is designed for speed and simplicity, making it ideal for modern agile development workflows.Choosing the Right CI/CD Tool for Your Development TeamBut with so many variables at play—such as ease of use, scalability, hosting options, and integration support—how do you determine the best fit for your team enabling developers to streamline workflows and ensure automated deployment? Given that continuous deployment relies heavily on robust automation and seamless integrations, selecting the right CI/CD tool becomes crucial for maintaining efficiency and reliability.Understanding CI/CD and Why It MattersContinuous integration (CI) and continuous deployment (CD) streamline the software development process by automating code integration, automated testing, and deployment automation. This ensures faster releases, fewer bugs, and greater efficiency. The right CI/CD tools can significantly impact development workflows, making it essential to choose wisely. Additionally, continuous delivery ensures that every change is deployable anytime, enhancing software delivery.Jenkins: The Customizable PowerhouseJenkins is an open-source automation server that has been a pioneer in CI/CD for years. It supports thousands of plugins and provides flexibility for development workflows and deployment pipelines.ProsHighly Customizable: With over 1,800 plugins, Jenkins can adapt to almost any CI/CD pipeline, offering unparalleled flexibility. This allows development teams to fine-tune their continuous integration tool to meet specific project needs, whether for a simple build or a complex deployment pipeline. However, managing these plugins efficiently requires strong expertise in configuration management and automation tools to avoid conflicts and performance issues.Open-Source and Free: Jenkins is free to use, making it a cost-effective solution for organizations looking to implement CI/CD tools without additional licensing expenses. However, since it is self-hosted, teams must take full responsibility for managing the automation process, which includes infrastructure setup, security patches, and maintenance. This means investing in configuration management tools and skilled personnel to ensure smooth operation, scalability, and security compliance.Strong Community Support: Jenkins has one of the largest and most active CI/CD communities, ensuring comprehensive documentation, forums, and troubleshooting resources. 
This broad support network allows teams to integrate multiple tools seamlessly and resolve challenges quickly when implementing complex workflows. Additionally, frequent community contributions result in continuous updates, improved key features, and a growing ecosystem of third-party integrations.Scalability: Jenkins is designed to accommodate a wide range of users, from multiple developers in small teams to enterprise-scale development and operations teams. Its distributed build system enables continuous integration across various nodes, optimizing the development lifecycle while supporting workloads of varying complexity. By leveraging cloud-based or on-premises infrastructures, Jenkins ensures flexibility in scaling resources dynamically to meet the demands of modern software development.Integration Testing: Jenkins provides built-in support for various testing frameworks, ensuring seamless integration testing within the CI/CD pipeline. Through automated workflows, teams can validate code changes before deployment, reducing the risk of bugs and performance issues in production environments. Additionally, Jenkins supports continuous testing, allowing for early detection of errors, improving software quality, and enabling a more reliable software development process.ConsSteep Learning Curve: Jenkins requires significant configuration and maintenance, making it challenging for teams unfamiliar with configuration management and CI/CD best practices. Setting up pipelines often involves scripting in Groovy, managing dependencies, and troubleshooting complex integrations. Without prior experience, teams may struggle with optimizing performance, security, and development workflows, increasing the time required for adoption.Complex Workflows: Setting up a Jenkins pipeline requires scripting expertise, making it difficult for teams without experience in development lifecycle automation. Defining deployment pipelines and integrating with external automation tools often involves writing custom scripts and managing multiple configurations. This complexity can lead to longer setup times, increased maintenance efforts, and a higher risk of misconfigurations affecting the software development process.Performance Overhead: As a self-hosted solution, Jenkins demands high system resources and careful tuning to ensure optimal performance. Large-scale deployments may experience bottlenecks without proper configuration management tools, especially when running parallel builds. Teams must monitor resource utilization, optimize worker nodes, and scale efficiently to avoid performance degradation in modern development environments.Requires Configuration Management Tools: Jenkins relies on configuration management tools to simplify administration and enhance automation efficiency. Teams often need tools like Ansible, Puppet, or Chef to manage infrastructure, handle deployment automation, and ensure consistency across environments. While these tools add flexibility, they also increase operational complexity, requiring additional expertise to maintain a stable and scalable CI/CD environment.Best ForJenkins is ideal for development and operations teams that need deep customization and flexibility in their CI/CD pipelines. Its extensive plugin ecosystem allows teams to tailor workflows, but managing complex workflows requires strong scripting and configuration management expertise. 
Organizations with skilled DevOps professionals can leverage Jenkins to optimize software development lifecycle automation while maintaining complete control over their infrastructure.GitLab CI/CD: A Unified DevOps PlatformGitLab CI/CD is a built-in feature of GitLab, providing seamless integration with version control systems and DevOps functionalities. It enables automated code integration, making the development cycle more efficient.ProsIntegrated with GitLab: GitLab CI/CD seamlessly integrates with GitLab’s version control systems, eliminating the need for third-party tools and simplifying development workflows. Developers can manage code integration, automated testing, and continuous deployment all within a single platform. This streamlined approach reduces overhead, improves efficiency, and ensures a smooth software development process without additional dependencies.Auto DevOps: GitLab’s feature provides pre-configured CI/CD pipelines, significantly reducing setup time and manual configuration efforts. It automates key stages like code commit, continuous integration, security scanning, and deployment, allowing teams to focus on feature development rather than infrastructure management. This built-in automation simplifies the development cycle, making it easier for teams to implement DevOps best practices without deep expertise.Enhanced Security: GitLab CI/CD includes secure software scanning, compliance tools, and integration testing to identify vulnerabilities before deployment. These features help enforce security policies, reduce human errors, and prevent breaches throughout the software development lifecycle. By integrating security checks into the CI/CD pipeline, teams can achieve enhanced security without slowing down software delivery.Cloud and Self-Hosted Options: GitLab CI/CD provides both cloud environments and self-hosted deployment options, offering flexibility based on an organization’s infrastructure and security requirements. Teams can choose cloud providers for scalability and ease of management or opt for self-hosting to maintain greater control over sensitive data. This adaptability makes GitLab suitable for businesses of all sizes, ensuring compatibility with different environments and compliance needs.Advanced Features: GitLab CI/CD includes continuous delivery pipeline tools that streamline deployments without requiring additional automation tools. Built-in deployment automation ensures smooth releases by reducing manual intervention and minimizing deployment failures. These advanced features make GitLab a powerful option for teams looking to automate and optimize their software development lifecycle.Supports Multiple Languages: GitLab CI/CD supports multiple programming languages, ensuring flexibility for teams working on diverse software projects. Whether developers use Python, Java, Go, or other languages, GitLab’s native integration with various build and testing frameworks enables smooth code integration. This broad compatibility makes it a valuable tool for organizations with multiple developers working across different stacks.ConsLearning Curve for New Users: GitLab CI/CD offers a comprehensive set of DevOps features, but its wide range of functionalities can overwhelm beginners. New users, especially those unfamiliar with CI/CD automation, may require additional training to utilize the platform’s capabilities fully. 
This learning curve can slow down adoption for operations teams and increase the initial setup time.Limited Plugin Ecosystem: Unlike Jenkins, GitLab CI/CD lacks a vast plugin ecosystem, restricting developers from integrating custom development tools. While it offers built-in CI/CD pipelines and essential integrations, advanced customization may be challenging for teams with unique requirements. This limitation makes it less flexible for organizations that rely on third-party automation tools to optimize their workflows.Higher Cost for Premium Features: While GitLab CI/CD provides a free tier, many advanced key features, such as compliance tools, high concurrency builds, and advanced security scanning, require a paid plan. These premium plans can be costly for startups and small teams that need enterprise-grade functionalities but have budget constraints. The pricing model may make GitLab less attractive for companies seeking a fully-featured CI/CD solution without high recurring costs.Best ForGitLab CI/CD is a powerful option for development teams that already rely on GitLab’s version control systems, enabling seamless code integration and deployment. Its CI/CD tools are built directly into the platform, eliminating the need for external third-party tools and simplifying the software development lifecycle. With deployment automation and built-in continuous delivery pipelines, GitLab CI/CD ensures efficient, secure, and scalable software releases across different environments.CircleCI:Speed and SimplicityCircleCI is a cloud-native CI/CD tool known for its ease of use and fast execution times. It supports automated builds and continuous testing in different environments.ProsQuick Setup: Minimal configuration is required to get started, making it an excellent choice for teams wanting to implement CI/CD tools quickly. With pre-configured settings and easy onboarding, teams can focus on software development rather than complex initial setups. This reduces downtime and accelerates the development cycle, allowing faster deployment of new features.Optimized for Speed: Parallelism and caching significantly speed up the build and code integration process, reducing bottlenecks. Multiple developers can push changes without long delays by running tests and builds simultaneously. This efficiency enhances continuous integration workflows, ensuring a smooth and rapid software development process.Managed Service: Since CircleCI is a cloud-native solution, no infrastructure maintenance is needed, reducing operational overhead. This allows development teams to concentrate on writing and testing new code rather than handling server configurations. As a result, organizations benefit from automated scalability and reduced maintenance costs.Good Integration with Cloud Services: CircleCI works seamlessly with cloud providers like AWS, Google Cloud Build, and Azure, ensuring flexible and scalable deployments. Its compatibility with different environments allows smooth migration and deployment automation across platforms. This makes it easier for teams to leverage cloud environments for efficient software delivery.Native Integration: CircleCI supports multiple tools and development workflows, making it a versatile option for diverse software projects. Its seamless integration with repositories, databases, and monitoring tools ensures a well-connected CI/CD pipeline. 
This built-in connectivity reduces reliance on third-party tools, enhancing productivity and reliability.ConsLimited Self-Hosting Options: Although CircleCI offers a self-hosted solution, it lacks the full feature set available in its cloud-based version. This limitation makes it less suitable for enterprises requiring complete control over their CI/CD tools and infrastructure. Organizations with strict security and compliance needs may find their self-hosted capabilities insufficient for their deployment automation strategies.Pricing Based on Usage: CircleCI’s pricing is based on build frequency and concurrency levels, which can lead to unpredictable costs. Expenses can rise significantly as development teams scale their projects and increase pipeline runs. This makes budgeting challenging, particularly for startups or teams with fluctuating software development demands.Fewer Customization Options: Unlike Jenkins, which provides extensive plugin support, CircleCI offers fewer customization features. This restriction can be a drawback for teams needing tailored continuous integration workflows and specialized automation tools. Organizations with highly complex workflows may find CircleCI’s configurability lacking compared to other CI/CD tools.Manual Processes in Some Cases: While CircleCI automates many aspects of the software development process, some configurations still require manual setup. This can slow down deployment pipelines, especially for teams aiming for fully automated continuous deployment. Without built-in automation for all scenarios, developers may need to perform additional scripting or integration work.Best ForCircleCI is an excellent choice for startups and modern development teams that need a fast and efficient CI/CD solution with minimal setup. Its cloud-native approach and automated workflows enable rapid code integration and continuous deployment without extensive manual configuration. While it may not offer deep customization like Jenkins, its ease of use and optimized performance make it ideal for teams focused on speed and scalability in their software development lifecycle.Head-to-Head ComparisonWhen selecting the right CI/CD tool, it’s essential to compare them across key aspects that impact software development workflows. Below is a detailed head-to-head comparison of Jenkins, GitLab CI/CD, and CircleCI, focusing on their capabilities in different areas.1. Ease of Setup and ConfigurationJenkins requires significant manual setup, including installing and configuring plugins to tailor the pipeline to project needs. While this provides flexibility, it also demands deep technical expertise to ensure a smooth setup.GitLab CI/CD offers built-in CI/CD functionality with predefined pipeline templates, making it easier to get started without extensive configuration. However, advanced customization may require YAML scripting knowledge.CircleCI is cloud-native, requiring minimal setup, making it the easiest to configure out of the three. It provides pre-built Docker images and automated environment provisioning to accelerate onboarding.2. Customization and FlexibilityJenkins is the most customizable CI/CD tool, with over 1,800 plugins available to integrate with various tools, testing frameworks, and deployment environments. However, maintaining these plugins can add operational overhead.GitLab CI/CD provides moderate customization, allowing users to define their pipelines with YAML configurations. 
It integrates tightly with GitLab repositories, limiting flexibility for external version control systems.CircleCI offers fewer customization options compared to Jenkins but provides native support for popular DevOps tools, cloud services, and container-based builds, making it ideal for agile teams that prioritize speed over complexity.3. Performance and ScalabilityJenkins is highly scalable but requires careful infrastructure management to handle large workloads. Performance tuning is necessary to optimize self-hosted environments and avoid slow pipeline execution.GitLab CI/CD provides auto-scaling capabilities in its cloud-based offering, making it efficient for teams that need dynamic resource allocation. However, scaling on self-hosted instances requires additional configuration.CircleCI is designed for high-speed execution, leveraging parallelism and caching to reduce build times. While it scales well for cloud deployments, self-hosted options have limited flexibility for large enterprise needs.4. Integration with Development and Deployment ToolsJenkins supports a vast range of integrations, but most require plugins to connect with cloud services, deployment platforms, and monitoring tools. This makes it versatile but dependent on third-party support.GitLab CI/CD is built into GitLab, making it the best choice for teams using GitLab repositories. It lacks an extensive plugin marketplace, limiting external tool integrations outside of GitLab’s ecosystem.CircleCI offers native integrations with popular cloud providers like AWS, Google Cloud, and Azure, making it ideal for cloud-first teams. It also supports Docker and Kubernetes, streamlining container-based deployments.5. Cost and Maintenance OverheadJenkins is free and open source, but organizations must account for infrastructure costs, plugin maintenance, and dedicated personnel to manage configuration and security updates.GitLab CI/CD offers a free tier, but advanced features, such as security scanning and premium support, require a paid plan. Self-hosting requires additional server management costs.CircleCI follows a usage-based pricing model, making it cost-effective for small teams but potentially expensive for high-frequency builds. The fully managed service reduces maintenance costs but requires budgeting for concurrent jobs.Choosing the Right CI/CD Tool for Your NeedsThe right CI/CD tools depend on your team’s requirements:Jenkins: Maximum Customization and Full ControlChoose Jenkins if your team requires extensive customization and complete control over your CI/CD pipelines. With its vast plugin ecosystem and support for configuration management tools, Jenkins allows fine-tuning automation processes to fit complex development workflows. However, it requires manual setup and maintenance, making it ideal for teams with the technical expertise to manage self-hosted environments and optimize performance overhead.GitLab CI/CD: Seamless Integration with Version ControlOpt for GitLab CI/CD if you want a fully integrated CI/CD solution that works natively with your version control system. As a part of GitLab’s DevOps platform, it simplifies software development workflows by eliminating the need for third-party CI/CD tools while offering Auto DevOps for streamlined automation. 
This makes it an excellent choice for teams looking for end-to-end automation, enhanced security, and a unified development experience.CircleCI: Fast, Cloud-Native Deployment with Minimal HassleGo with CircleCI if your priority is speed, simplicity, and cloud-native flexibility. With optimized parallel execution, caching mechanisms, and pre-configured cloud integrations, it accelerates build and deployment processes. Ideal for startups and agile teams, CircleCI offers minimal setup requirements, enabling rapid adoption without the complexity of manual infrastructure management.ConclusionEach tool excels in different areas, so your project’s complexity, development lifecycle, and budget are the best choices. No matter which tool you choose, integrating a solid CI/CD pipeline will enhance your software development lifecycle, ensuring efficient and reliable software delivery. Which CI/CD tool is your favorite? Let us know in the comments!

Aziro Marketing

blogImage

Know about Libstorage – Storage Framework for Docker Engine (Demo)

This article captures the need for storage framework within Docker engine. It further details our libstorage framework integration to Docker engine and its provision for a clean, pluggable storage management framework for Docker engines. Libstorage design is loosely modeled on libnetwork for Docker engine. Libstorage framework and current functionality are discussed in detail. Finally, future extensions and considerations are suggested. As of today, Docker has acquired Infinit https://blog.docker.com/2016/12/docker-acquires-infinit/ to overcome this shortcoming. So I wish to see most of this gap being addressed in forthcoming docker engine releases.1 IntroductionDocker engine is the opensource tool that provides container lifecycle management. The tool has been great and helps everyone understand, appreciate and deploy applications over containers a breeze. While working with Docker engine, we found shortcomings, especially with volume management. The communities major concern with Docker engine had always been provisioning volumes for containers. Volume lifecycle management for containers seemed to have not been thought through well with various proposals that were floated over. We believe there is more to it, and thus was born Libstorage. Currently docker expects application deployers to choose the volume driver. This is plain ugly. It is the cluster administrator who decides which volume drivers are deployed. Application developers need just storage and should never know and neither do they care for the underlying storage stack.2 Libstorage StackLibstorage as a framework works on defining a complete Volume lifecycle management methods for Containers. Docker daemon will interact with Volume Manager to complete the volume management functionality. Libstorage standardizes the necessary interfaces to be made available from any Volume Manager. There can be only one Volume Manager active in the cluster. Libstorage is integrated with distributed key-value store to ensure volume configuration is synced across the cluster. So any Node part of the cluster, shall know about all volumes and its various states.Volume Controller is the glue from any storage stack to docker engine. There can be many Volume Controllers that can be enabled under top level Volume Manager. Libstorage Volume Manager shall directly interact with either Volume Controller or with Volumes to complete the intended functionality.3 Cluster SetupSaltstack forms the underlying technology for bringing up the whole cluster setup. Customized flavor of VMs with necessary dependencies were pre-baked as (Amazon Machine Images) AMI and DigitalOcean Snapshots previously. Terraform scripts bootstrap the cloud cluster with few parameters and targeted cloud provider or private hosting along with needed credentials to kick start the process.Ceph cloud storage (Rados block devices) provisioning and management was married to Docker engine volume management framework. It can be extended easily to other cloud storage solutions like GlusterFs and CephFS easily. Ceph Rados Gateway and Amazon S3 was used for object archival and data migration seamlessly.4 Volume ManagerVolume Manager is the top level module from Libstorage that directly interacts with Docker daemon and external distributed keyvalue store. Volume Manager ensures Volume Configuration is consistent across all Nodes in the cluster. 
Volume Manager defines a consistent interface for Volume management for both Docker daemon to connect to it, and the many Volume Controllers within Libstorage that can be enabled in the cluster. A standard set of policies are also defined that Volume Controllers can expose.4.1 Pluggable Volume ManagerPluggable Volume Manager is an implementation of the interface and needed functionality. The top level volume manager is by itself a pluggable module to docker engine.5 Volume ControllersVolume Controllers are pluggable modules to Volume Managers. Each Volume Controller exports one more policy that it supports and users target Volume Controller by exported policies. For example, if policy is distributed, then volume is available at any Node in the cluster. If policy is local, although the volume is available on any node in the cluster, volume data is held local on the host filesystem. Volume Controllers can use any storage stack underneath and provide a standard view of volume management through toplevel Volume Manager.5.1 Pluggable Volume ControllerDolphinstor implements Ceph, Gluster, Local and RAM volume controllers. Upon volume creation, the volumes are visible across all the nodes in the cluster. Whether the volume is available for containers to mount (because of sharing properties configured during volume creation), or whether the volume data is available from other Nodes (only if volume is distributed) are controllable attributes during volume creation. Ceph Volume Controller implements distributed policy, guaranteeing any volume created with it, shall be available across any Node in the cluster. Local Volume Controller implements local policy, which guarantees that volume data are present only on host machines on which the container is scheduled. Containers scheduled on any host shall see the volume, but is held as a local copy. And RAM Volume Controller defines two policies, ram and secure. Volume data is held on RAM and so is volatile. A secure policy volume cannot be shared even across containers in the same host.6 CLI ExtensionsBelow are the list of CLI extensions provided and managed by Libstorage.docker dsvolume create [-z|--size=MB] [-p|--policy=distributed|distributed-fs|local|ram|secure] [-s|--shared=true|false] [-m|--automigrate=true|false] [-f|--fstype=raw,ext2,ext3,btrfs,xfs] [-o|--opt=[]] VOLUME }If volumes have backing block device, they are mounted within volume as well. Specifying raw for fstype during volume creation does not format the volume for any filesystem. The volume is presented as a raw block device for containers to use.• docker dsvolume create [-z|--size=MB] [-p|--policy=distributed|distributed-fs|local|ram|secure] [-s|--shared=true|false] [-m|--automigrate=true|false] [-f|--fstype=raw,ext2,ext3,btrfs,xfs] [-o|--opt=[]] VOLUME If volumes have backing block device, they are mounted within volume as well. Specifying raw for fstype during volume creation does not format the volume for any filesystem. The volume is presented as a raw block device for containers to use. • docker dsvolume rm VOLUME • docker dsvolume info VOLUME [VOLUME...] • docker dsvolume ls • docker dsvolume usage VOLUME [VOLUME...] • docker dsvolume rollback VOLUME@SNAPSHOT • docker dsvolume snapshot create -v|--volume=VOLUME SNAPSHOT • docker dsvolume snapshot rm VOLUME@SNAP • docker dsvolume snapshot ls [-v|--volume=VOLUME] • docker dsvolume snapshot info VOLUME@SNAPSHOT [VOLUME@SNAPSHOT...] 
• docker dsvolume snapshot clone srcVOLUME@SNAPSHOT NEWVOL- UME • docker dsvolume qos {create|edit} [--read-iops=100] [--read-bw=10000] [--write-iops=100] [--write-bw=10000] [--weight=500] PROFILE • docker dsvolume qos rm PROFILE • docker dsvolume qos ls • docker dsvolume qos info PROFILE [PROFILE...] • docker dsvolume qos {enable|disable} [-g|--global] VOLUME [VOLUME...] • docker dsvolume qos apply -p=PROFILE VOLUME [VOLUME...]7 Console Logs[lns@dolphinhost3 bins]$ ./dolphindocker dsvolume ls NAME Created Type/Fs Policy Size(MB) Shared Inuse Path [lns@dolphinhost3 bins]$ [lns@dolphinhost3 bins]$ [lns@dolphinhost3 bins]$ [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume create --help Usage: ./dolphindocker dsvolume create [OPTIONS] VOLUME-NAME Creates a new dsvolume with a name specified by the user -f, --filesys=xfs       volume size --help=false            Print usage -z, --size=             volume size [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume create -f=ext4 \ -z=100 -m -p=distributed demovol1 2015/10/08 02:30:23 VolumeCreate(demovol1) with opts map[name:demovol1 policy:distributed m dsvolume create acked response {"Name":"demovol1","Err":""} [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume create -z=100 -p=local demolocal1 2015/10/08 02:30:53 VolumeCreate(demolocal1) with opts map[shared:true fstype:xfs automigra dsvolume create acked response {"Name":"demolocal1","Err":""} [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume create -z=100 -p=ram demoram 2015/10/08 02:31:04 VolumeCreate(demoram) with opts map[shared:true fstype:xfs automigrate: dsvolume create acked response {"Name":"demoram","Err":""} [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume create -z=100 -p=secure demosecure 2015/10/08 02:31:17 VolumeCreate(demosecure) with opts map[name:demosecure policy:secure mb dsvolume create acked response {"Name":"demosecure","Err":""} [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume ls NAME Created Type/Fs Policy Size(MB) Shared Inuse Path demosecure dolphinhost3 ds-ram/tmpfs secure 100 false - demovol1 dolphinhost3 ds-ceph/ext4 distributed 100 true - demolocal1 dolphinhost3 ds-local/ local 0 true - demoram dolphinhost3 ds-ram/tmpfs ram 100 true - [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume info demosecure demolocal1 volume info on demosecure [ { "Name": "demosecure", "Voltype": "ds-ram", "CreatedBy": "dolphinhost3", "CreatedAt": "Thu Oct 8 02:31:17 EDT 2015", "Policy": "secure", "Fstype": "tmpfs", "MBSize": 100, "AutoMigrate": false, "Shared": false, "Mountpoint": "", "Inuse": null, "Containers": null, "LastAccessTimestamp": "Mon Jan 1 00:00:00 UTC 0001", "IsClone": false, "ParentSnapshot": "", "QoSState": false, "QoSProfile": "" }, { "Name": "demolocal1", "Voltype": "ds-local", "CreatedBy": "dolphinhost3", "CreatedAt": "Thu Oct 8 02:30:53 EDT 2015", "Policy": "local", "Fstype": "", "MBSize": 0, "AutoMigrate": false, "Shared": true, "Mountpoint": "", "Inuse": null, "Containers": null, "LastAccessTimestamp": "Mon Jan 1 00:00:00 UTC 0001", "IsClone": false, "ParentSnapshot": "", "QoSState": false, "QoSProfile": "" } ] [lns@dolphinhost3 bins]$ [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume usage demovol1 volume usage on demovol1 [ { "Name": "demovol1", "Usage": [ { "file": "/var/lib/docker/volumes/_dolphinstor/demovol1/lost+found", "size": "12K" }, { "file": "/var/lib/docker/volumes/_dolphinstor/demovol1", "size": "15K" }, { "file": "total", "size": "15K" } ], "Size": "100M", "Err": "" } ] [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume usage -s=false demovol1 volume 
usage on demovol1 [ { "Name": "demovol1", "Usage": [ { "file": "/var/lib/docker/volumes/_dolphinstor/demovol1/hosts", "size": "1.0K" }, { "file": "/var/lib/docker/volumes/_dolphinstor/demovol1/lost+found", "size": "12K" }, { "file": "/var/lib/docker/volumes/_dolphinstor/demovol1/1", "size": "0" }, { "file": "/var/lib/docker/volumes/_dolphinstor/demovol1/hostname", "size": "1.0K" }, { "file": "/var/lib/docker/volumes/_dolphinstor/demovol1", "size": "15K" }, { "file": "total", "size": "15K" } ], "Size": "100M", "Err": "" } ] [lns@dolphinhost3 bins]$ [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume info demovol1 volume info on demovol1 [ { "Name": "demovol1", "Voltype": "ds-ceph", "CreatedBy": "dolphinhost3", "CreatedAt": "Thu Oct 8 02:30:23 EDT 2015", "Policy": "distributed", "Fstype": "ext4", "MBSize": 100, "AutoMigrate": true, "Shared": true, "Mountpoint": "", "Inuse": [ "dolphinhost3" ], "Containers": [ "5000b791e0c78e7c8f3b43b72b42206d0eaed3150a825e1f055637b31676a77f@dolphinhost1" "0c8a9d483a63402441185203b0262f7f3b8d761a8a58145ed55c93835ba83538@dolphinhost2" ], "LastAccessTimestamp": "Thu Oct 8 03:46:51 EDT 2015", "IsClone": false, "ParentSnapshot": "", "QoSState": false, "QoSProfile": "" } ] [lns@dolphinhost3 bins]$ [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume qos ls Global QoS Enabled Name             ReadIOPS ReadBW WriteIOPS WriteBW Weight default          200 20000 100 10000 600 demoprofile      256 20000 100 10000 555 myprofile        200 10000 100 10000 555 newprofile       200 2000 100 1000 777 dsvolume qos list acked response [lns@dolphinhost3 bins]$ ./dolphindocker run -it -v demovol1:/opt/demo ubuntu:latest bash root@1dba3c87ca04:/# dd if=/dev/rbd0 of=/dev/null bs=1M count=1 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0625218 s, 16.8 MB/s root@1dba3c87ca04:/# exit [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume info demovol1 volume info on demovol1 [ { "Name": "demovol1", "Voltype": "ds-ceph", "CreatedBy": "dolphinhost3", "CreatedAt": "Thu Oct 8 02:30:23 EDT 2015", "Policy": "distributed", "Fstype": "ext4", "MBSize": 100, "AutoMigrate": true, "Shared": true, "Mountpoint": "", "Inuse": [], "Containers": [ "5000b791e0c78e7c8f3b43b72b42206d0eaed3150a825e1f055637b31676a77f@dolphinhost3" "0c8a9d483a63402441185203b0262f7f3b8d761a8a58145ed55c93835ba83538@dolphinhost3" "87c7a2462879103fd3376be4aae352568e5e36659820b92d567829c0b8375255@dolphinhost3" "f3feb1f15ed614618c02321e7739e0476f23891aa7bb1b2d5211ba1e2641c643@dolphinhost3" "76ab5182082ac30545725c843177fa07d06e3ec76a2af41b1e8e1dee42670759@dolphinhost3" "c6226469aa036f277f237643141d4d168856692134cea91f724455753c632533@dolphinhost3" "426b57492c7c05220b75d05a13ad144742b92fa696611465562169e1cb74ea6b@dolphinhost3" "2419534dd70ba2775ca1880fb71d196d31a167579d0ee85d5203be3cc0ff574e@dolphinhost3" "c3afeac73b389a69a856eeccf3098e778d1b0087a7a543705d6bfbba4f5c6803@dolphinhost3" "7bd28eed915c450459bd1a27d49325548d0791cbbaac670dcdae1f8d97596c7e@dolphinhost3" "0fc0217b6cda2f02ef27dca9d6dd3913bda7a871012d1073f29a864ae77bc61f@dolphinhost3" ], "LastAccessTimestamp": "Thu Oct 8 05:16:26 EDT 2015", "IsClone": false, "ParentSnapshot": "", "QoSState": false, "QoSProfile": "" } ] [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume qos apply -p=newprofile demovol1 2015/10/08 05:17:04 QoSApply(demovol1) with opts {Name:newprofile Opts:map[name:newprofile dsvolume QoS apply response [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume info demovol1             volume info [ { "Name": "demovol1", "Voltype": "ds-ceph", "CreatedBy": 
"dolphinhost3", "CreatedAt": "Thu Oct 8 02:30:23 EDT 2015", "Policy": "distributed", "Fstype": "ext4", "MBSize": 100, "AutoMigrate": true, "2419534dd70ba2775ca1880fb71d196d31a167579d0ee85d5203be3cc0ff574e@dolphinhost3" "c3afeac73b389a69a856eeccf3098e778d1b0087a7a543705d6bfbba4f5c6803@dolphinhost3" "7bd28eed915c450459bd1a27d49325548d0791cbbaac670dcdae1f8d97596c7e@dolphinhost3" "0fc0217b6cda2f02ef27dca9d6dd3913bda7a871012d1073f29a864ae77bc61f@dolphinhost3" ], "LastAccessTimestamp": "Thu Oct 8 05:16:26 EDT 2015", "IsClone": false, "ParentSnapshot": "", "QoSState": false, "QoSProfile": "newprofile" } ] [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume qos enable -g demovol1 2015/10/08 05:17:22 QoSEnable with opts {Name: Opts:map[global:true volume:demovol1]} dsvolume QoS enable response [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume qos ls Global QoS Enabled Name             ReadIOPS ReadBW WriteIOPS WriteBW Weight default          200 20000 100 10000 600 demoprofile      256 20000 100 10000 555 myprofile        200 10000 100 10000 555 newprofile       200 2000 100 1000 777 dsvolume qos list acked response [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume info demovol1 volume info on demovol1 [ { "Name": "demovol1", "Voltype": "ds-ceph", "CreatedBy": "dolphinhost3", "CreatedAt": "Thu Oct 8 02:30:23 EDT 2015", "Policy": "distributed", "Fstype": "ext4", "MBSize": 100, "AutoMigrate": true, "2419534dd70ba2775ca1880fb71d196d31a167579d0ee85d5203be3cc0ff574e@dolphinhost3" "c3afeac73b389a69a856eeccf3098e778d1b0087a7a543705d6bfbba4f5c6803@dolphinhost3" "7bd28eed915c450459bd1a27d49325548d0791cbbaac670dcdae1f8d97596c7e@dolphinhost3" "0fc0217b6cda2f02ef27dca9d6dd3913bda7a871012d1073f29a864ae77bc61f@dolphinhost3" ], "LastAccessTimestamp": "Thu Oct 8 05:16:26 EDT 2015", "IsClone": false, "ParentSnapshot": "", "QoSState": true, "QoSProfile": "newprofile" } ] [lns@dolphinhost3 bins]$ [lns@dolphinhost3 bins]$ ./dolphindocker run -it -v demovol1:/opt/demo ubuntu:latest bash root@9048672839d6:/# dd if=/dev/rbd0 of=/dev/null count=1 bs=1M 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 522.243 s, 2.0 kB/s root@9048672839d6:/# exit [lns@dolphinhost3 bins]$ [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume create -z=100 newvolume 2015/10/08 05:48:13 VolumeCreate(newvolume) with opts map[name:newvolume policy:distributed dsvolume create acked response {"Name":"newvolume","Err":""} [lns@dolphinhost3 bins]$ ./dolphindocker run -it -v newvolume:/opt/vol ubuntu:latest bash root@2b1e11bc2d45:/# cd /opt/vol/ root@2b1e11bc2d45:/opt/vol# touch 1 root@2b1e11bc2d45:/opt/vol# cp /etc/hosts . root@2b1e11bc2d45:/opt/vol# cp /etc/hostname . 
root@2b1e11bc2d45:/opt/vol# ls 1 hostname hosts root@2b1e11bc2d45:/opt/vol# exit [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume snapshot ls Volume@Snapshot CreatedBy Size NumChildren demovol1@demosnap1 dolphinhost3 104857600 [0] [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume snapshot create -v=newvolume newsnap 2015/10/08 05:49:09 SnapshotCreate(newsnap) with opts {Name:newsnap Volume:newvolume Type:d dsvolume snapshot create response {"Name":"newsnap","Volume":"newvolume","Err":""} [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume snapshot ls Volume@Snapshot CreatedBy Size NumChildren demovol1@demosnap1 dolphinhost3 104857600 [0] newvolume@newsnap dolphinhost3 104857600 [0] [lns@dolphinhost3 bins]$ ./dolphindocker run -it -v newvolume:/opt/vol ubuntu:latest bash root@f54ec93290c0:/# root@f54ec93290c0:/# root@f54ec93290c0:/# root@f54ec93290c0:/# cd /opt/vol/ root@f54ec93290c0:/opt/vol# ls 1 hostname hosts root@f54ec93290c0:/opt/vol# rm 1 hostname hosts root@f54ec93290c0:/opt/vol# touch 2 root@f54ec93290c0:/opt/vol# cp /var/log/alternatives.log . root@f54ec93290c0:/opt/vol# exit [lns@dolphinhost3 bins]$ [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume snapshot ls Volume@Snapshot CreatedBy Size NumChildren demovol1@demosnap1 dolphinhost3 104857600 [0] newvolume@newsnap dolphinhost3 104857600 [0] [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume snapshot newvolume@newsnap firstclone Usage: ./dolphindocker dsvolume snapshot [OPTIONS] COMMAND [OPTIONS] [arg...] Commands: create               Create a volume snapshot rm                   Remove a volume snapshot ls                   List all volume snapshots info                 Display information of a volume snapshot clone                clone snapshot to create a volume rollback             rollback volume to a snapshot Run ’./dolphindocker dsvolume snapshot COMMAND --help’ for more information on a command. 
--help=false    Print usage invalid command : [newvolume@newsnap firstclone] [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume snapshot clone --help Usage: ./dolphindocker dsvolume snapshot clone [OPTIONS] VOLUME@SNAPSHOT CLONEVOLUME clones a dsvolume snapshot and creates a new volume with a name specified by the user --help=false    Print usage -o, --opt=map[]  Other driver options for volume snapshot [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume snapshot clone newvolume@newsnap firstclo 2015/10/08 05:56:37 clone source: newvolume@newsnap, dest: firstclone 2015/10/08 05:56:37 clone source: volume newvolume, snapshot newsnap 2015/10/08 05:56:37 CloneCreate(newvolume@newsnap) with opts {Name:newsnap Volume:newvolume dsvolume snapshot clone response {"Name":"newsnap","Volume":"","Err":""} [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume ls NAME Created Type/Fs Policy Size(MB) Shared Inuse Path demosecure dolphinhost3 ds-ram/tmpfs secure 100 false - demovol1 dolphinhost3 ds-ceph/ext4 distributed 100 true - newvolume dolphinhost3 ds-ceph/xfs distributed 100 true - firstclone dolphinhost3 ds-ceph/xfs distributed 100 true - demolocal1 dolphinhost3 ds-local/ local 0 true - demoram dolphinhost3 ds-ram/tmpfs ram 100 true - [lns@dolphinhost3 bins]$ ./dolphindocker run -it -v firstclone:/opt/clone ubuntu:latest bas root@3970a269caa5:/# cd /opt/clone/ root@3970a269caa5:/opt/clone# ls 1 hostname hosts root@3970a269caa5:/opt/clone# exit [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume info firstclone volume info on firstclone [ { "Name": "firstclone", "Voltype": "ds-ceph", "CreatedBy": "dolphinhost3", "CreatedAt": "Thu Oct 8 05:56:37 EDT 2015", "Policy": "distributed", "Fstype": "xfs", "MBSize": 100, "AutoMigrate": false, "Shared": true, "Mountpoint": "", "Inuse": [], "Containers": [], "LastAccessTimestamp": "Thu Oct 8 05:59:04 EDT 2015", "IsClone": true, "ParentSnapshot": "newvolume@newsnap", "QoSState": false, "QoSProfile": "" } ] [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume snapshot info newvolume@newsnap 2015/10/08 05:59:33 Get snapshots info newvolume - newsnap [ { "Name": "newsnap", "Volume": "newvolume", "Type": "default", "CreatedBy": "dolphinhost3", "CreatedAt": "Thu Oct 8 05:49:10 EDT 2015", "Size": 104857600, "Children": [ "firstclone" ] } ] volume snapshot info on newvolume@newsnap [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume snapshot rm -v=newvolume newsnap 2015/10/08 05:59:47 snapshot rm {Name:newsnap Volume:newvolume Type: Opts:map[]} Error response from daemon: {"Name":"newsnap","Volume":"newvolume","Err":"Volume snapshot i [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume rm newvolume Error response from daemon: {"Name":"newvolume","Err":"exit status 39"} [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume rollback newvolume@newsnap 2015/10/08 06:00:22 SnapshotRollback(newvolume@newsnap) with opts {Name:newsnap Volume:newv dsvolume rollback response {"Name":"newsnap","Volume":"newvolume","Err":""} [lns@dolphinhost3 bins]$ ./dolphindocker run -it -v newvolume:/opt/rollback ubuntu:latest b root@1545fee295af:/# cd /opt/rollback/ root@1545fee295af:/opt/rollback# ls 1 hostname hosts root@1545fee295af:/opt/rollback# exit [lns@dolphinhost3 bins]$ }8 Libstorage Events[lns@dolphinhost3 bins]$ ./dolphindocker events 2015-10-08T05:47:16.675882847-04:00 demovol1: (from libstorage) Snapshot[demovol1@demosnap1 create success 2015-10-08T05:48:14.413457724-04:00 newvolume: (from libstorage) Volume create success 2015-10-08T05:48:37.341001897-04:00 
2b1e11bc2d45fe26b1b3082ce1a1123bd65ef1ebb61b1a0a0244a10 (from ubuntu:latest) create 2015-10-08T05:48:37.447786698-04:00 2b1e11bc2d45fe26b1b3082ce1a1123bd65ef1ebb61b1a0a0244a10 (from ubuntu:latest) attach 2015-10-08T05:48:38.118070084-04:00 newvolume: (from libstorage) Mount success 2015-10-08T05:48:38.118897857-04:00 2b1e11bc2d45fe26b1b3082ce1a1123bd65ef1ebb61b1a0a0244a10 (from ubuntu:latest) start 2015-10-08T05:48:38.235199874-04:00 2b1e11bc2d45fe26b1b3082ce1a1123bd65ef1ebb61b1a0a0244a10 (from ubuntu:latest) resize 2015-10-08T05:48:50.463620278-04:00 2b1e11bc2d45fe26b1b3082ce1a1123bd65ef1ebb61b1a0a0244a10 (from ubuntu:latest) die 2015-10-08T05:48:50.723378247-04:00 newvolume: (from libstorage) Unmount[newvolume] container 2b1e11bc success 2015-10-08T05:49:10.341208906-04:00 newvolume: (from libstorage) Snapshot[newvolume@newsnap create success 2015-10-08T05:49:22.165250102-04:00 ef49217deb4f6b121b09d6ee714d7546dad5875129b20719a36df82 (from ubuntu:latest) create 2015-10-08T05:49:22.177473380-04:00 ef49217deb4f6b121b09d6ee714d7546dad5875129b20719a36df82 (from ubuntu:latest) attach 2015-10-08T05:49:22.861275198-04:00 newvolume: (from libstorage) Mount success 2015-10-08T05:49:22.862213412-04:00 ef49217deb4f6b121b09d6ee714d7546dad5875129b20719a36df82 (from ubuntu:latest) start 2015-10-08T05:49:23.036122376-04:00 newvolume: (from libstorage) Unmount[newvolume] container ef49217d success 2015-10-08T05:49:23.439618024-04:00 newvolume: (from libstorage) Unmount[newvolume] failed exit status 32 2015-10-08T05:49:23.439675043-04:00 ef49217deb4f6b121b09d6ee714d7546dad5875129b20719a36df82 (from ubuntu:latest) die 2015-10-08T05:49:25.223243216-04:00 f54ec93290c0a714a79007d928788e4aa96fed504a39890b3f9a308 (from ubuntu:latest) create 2015-10-08T05:49:25.327953586-04:00 f54ec93290c0a714a79007d928788e4aa96fed504a39890b3f9a308 (from ubuntu:latest) attach 2015-10-08T05:49:25.504156400-04:00 newvolume: (from libstorage) Mount success 2015-10-08T05:49:25.504872335-04:00 f54ec93290c0a714a79007d928788e4aa96fed504a39890b3f9a308 (from ubuntu:latest) start 2015-10-08T05:49:25.622608684-04:00 f54ec93290c0a714a79007d928788e4aa96fed504a39890b3f9a308 (from ubuntu:latest) resize 2015-10-08T05:50:26.119006635-04:00 f54ec93290c0a714a79007d928788e4aa96fed504a39890b3f9a308 (from ubuntu:latest) die 2015-10-08T05:50:26.380619881-04:00 newvolume: (from libstorage) Unmount[newvolume] container f54ec932 success 2015-10-08T05:56:37.285999505-04:00 firstclone: (from libstorage) Clone volume newvolume@ne success 2015-10-08T05:58:58.731584155-04:00 3970a269caa59a2e64d665702946ce269f534764b5c25a396f7c2df (from ubuntu:latest) create 2015-10-08T05:58:58.837915799-04:00 3970a269caa59a2e64d665702946ce269f534764b5c25a396f7c2df (from ubuntu:latest) attach 2015-10-08T05:59:00.094099907-04:00 firstclone: (from libstorage) Mount success 2015-10-08T05:59:00.095190081-04:00 3970a269caa59a2e64d665702946ce269f534764b5c25a396f7c2df (from ubuntu:latest) start 2015-10-08T05:59:00.238547428-04:00 3970a269caa59a2e64d665702946ce269f534764b5c25a396f7c2df (from ubuntu:latest) resize 2015-10-08T05:59:04.432485014-04:00 3970a269caa59a2e64d665702946ce269f534764b5c25a396f7c2df (from ubuntu:latest) die 2015-10-08T05:59:04.772842691-04:00 firstclone: (from libstorage) Unmount[firstclone] container 3970a269 success 2015-10-08T05:59:47.016443142-04:00 newvolume: (from libstorage) Snapshot[newvolume@newsnap delete failed Volume snapshot inuse 2015-10-08T06:00:03.254380587-04:00 newvolume: (from libstorage) Volume destroy failed exit 
2015-10-08T06:00:22.505840283-04:00 newvolume: (from libstorage) VolumeRollback newvolume@newsnap success 2015-10-08T06:00:43.861918486-04:00 1545fee295afac7fd8e743a2811b3c3f8ad0e027e9ca482695e77ce (from ubuntu:latest) create 2015-10-08T06:00:43.968121844-04:00 1545fee295afac7fd8e743a2811b3c3f8ad0e027e9ca482695e77ce (from ubuntu:latest) attach 2015-10-08T06:00:47.125238229-04:00 newvolume: (from libstorage) Mount success 2015-10-08T06:00:47.126041470-04:00 1545fee295afac7fd8e743a2811b3c3f8ad0e027e9ca482695e77ce (from ubuntu:latest) start 2015-10-08T06:00:47.237933994-04:00 1545fee295afac7fd8e743a2811b3c3f8ad0e027e9ca482695e77ce (from ubuntu:latest) resize 2015-10-08T06:00:52.135643720-04:00 1545fee295afac7fd8e743a2811b3c3f8ad0e027e9ca482695e77ce (from ubuntu:latest) die 2015-10-08T06:00:52.873037212-04:00 newvolume: (from libstorage) Unmount[newvolume] container 1545fee2 success

9 Work in Progress
A new volume controller for GlusterFS is being integrated.
Migration is being worked on: docker dsvolume migrate {--tofar|--tonear} -v|--volume=VOLUME S3OBJECT
Local volumes need to use thin pools on device mapper; refer to Convoy: https://github.com/rancher/convoy/blob/master/docs/devicemapper.md

10 Related Technologies
This section describes and tracks related technologies for cloud container management.

10.1 Kubernetes vs Docker Compose
Kubernetes, in short, is awesome. Its design is built on solid fundamentals drawn from Google's decade-long experience with container management. Docker Compose is comparatively primitive: it understands container lifecycles well, but Kubernetes understands the application lifecycle over containers better, and we deploy applications, not containers.

10.2 Mesos
Kubernetes connects and understands only containers so far. There are other workloads, such as MapReduce, batch processing, and MPI cloud applications, that do not necessarily fit the container framework, and Mesos is great for this class of applications. It presents pluggable frameworks for extending Mesos to any kind of application, and it natively understands the Docker containerizer. For managing a datacenter or cloud that runs varied application types, Mesos is a strong fit.

10.3 Mesos + Docker Engine + Swarm + Docker Compose vs Mesos + Docker Engine + Kubernetes
Swarm is Docker's way of making the Docker Engine cluster aware. Kubernetes already does this well over the plain Docker Engine, and, as already mentioned, Docker Compose is primitive and no match for the flexibility of Kubernetes. Mesos + Docker Engine + Kubernetes is essentially the Mesosphere stack, whose theme is to provide a consistent, Kubernetes-like interface to schedule and manage any class of application workload over a cluster.

11 Conclusion
Libstorage fundamentals are strong. It can be integrated with the Docker Engine as it stands today, and its functionality will definitely enhance the Docker Engine's capabilities; it may be needed with Mesos as well. The community and Mesosphere are driving a complete ecosystem over Kubernetes, which understands the cluster and brings in the needed functionality, including volume management. The basic architecture treats the Docker engine as per-node functionality, while Kubernetes works over a cluster. But Docker is extending libnetwork and has Swarm, which makes the Docker engine cluster aware, so Libstorage is better suited within the Docker framework than elsewhere.

Aziro Marketing


Kubernetes – Bridging the Gap between 5G and Intelligent Edge Computing

Prologue
In the era of digital transformation, the 5G network is a leap forward. But frankly, the tall promises of the 5G network are pushing edge computing technology to democratize data at a granular level. To add to the woes, 5G also demands that edge computing improve performance and latency while slashing cost. Kubernetes, an open-source container-orchestration system, is the dealmaker between 5G and edge computing.

In this blog, you will read:
– A decade defined by the cloud
– The legend of cloud-native Containers
– The rise of Container Network Functions (CNFs)
– Edge computing must reinvent the wheel
– Kubernetes – powering 5G at the edge
– KubeEdge – giving an edge to Kubernetes

A decade defined by the cloud
What oil is to the automobile industry, the cloud is to the Information Technology (IT) industry. The cloud revolutionized the tech space by making data available at your fingertips. Amazon's Elastic Compute Cloud (EC2) planted the seed of the cloud in the early 2000s, followed by Google Cloud and Microsoft Azure. However, the real growth of cloud technology skyrocketed only after 2010-2012.

Numbers underlining the future trends:
– Per Cisco, cloud computing will process more than 90 percent of workloads in 2021
– Per RightScale, businesses run around 41 percent of workloads in the private cloud and 38 percent in the public cloud
– Per Cisco, 75 percent of all compute instances and cloud workloads will be SaaS by the end of 2021

The legend of cloud-native Containers
The advent of cloud-native is a hallmark of evolutionary development in the cloud ecosystem. The fundamental trait of cloud-native architecture is the abstraction of multiple layers of the infrastructure, which means a cloud-native architect has to define those layers in code. And when coding, one gets the chance to include the functionality that maximizes business value. Cloud-native also empowers coders to write scripts for infrastructure scalability.

Cloud-native container technology is making a noteworthy contribution to the future growth of the cloud-native ecosystem, and it plays an increasingly significant role in enabling the capabilities of the 5G architecture in real time. With container-focused web services, 5G network companies can achieve resource isolation and reproducibility to drive resiliency and faster deployment. Containers make deployment less intricate, which lets the 5G infrastructure scale data requirements across cloud networks. Organizations can leverage containers to process and compute data on a massive scale.

A conflation of containers and DevOps works magic for 5G. Bringing these loosely coupled services together will help 5G providers automate application deployment, receive feedback swiftly, eliminate bottlenecks, and achieve a self-paced continuous improvement mechanism. They can provision resources on demand with unified management across a hybrid cloud.

The fire of cloud-native has been ignited in the telecom sector. The coming decade, 2021-2030, will witness it spread like wildfire.

The rise of Container Network Functions (CNFs)
We witnessed the rise of Container Network Functions (CNFs) while network providers were using containers alongside VMware and virtual network functions (VNFs). CNFs are network functions that can run on Kubernetes across multi-cloud and/or hybrid cloud infrastructure. CNFs are ultra-lightweight compared to VNFs, which traditionally operate in a VMware environment. This makes CNFs highly portable and scalable.
But the underlying factor in the CNF architecture is that it is deployable on a bare metal server, which brings down the cost dramatically.

5G, the next wave in the telecom sector, promises to offer next-gen services entailing automation, elasticity, and transparency. Given the requirement for micro-segmented architectures, VNFs (VMware environment) would not be an ideal choice for 5G providers, so the adoption of CNFs is a natural step forward. Of course, doing away entirely with VMware is not on the board anytime soon; therefore, a hybrid model of VNF and CNF sounds good.

Recently, Intel, in collaboration with Red Hat, created a cloud-based onboarding service and test bed to conflate CNF (containerized environment) and VNF (VMware environment). The test bed is expected to enhance compatibility between CNF and VNF and slash deployment time. The architecture looks like the image below.

Edge computing must reinvent the wheel
Multiple devices generate a massive amount of data concurrently, and enabling cloud data centers to process all of it is a herculean task. Edge computing architecture puts infrastructure close to the data-generating devices within a distributed environment, which results in faster response times and lower latency. Edge computing's local processing of data simplifies the pipeline and reduces overall costs. Edge computing has been working as a catalyst for the telecommunication industry to date; however, with 5G in the picture, the boundaries are all set to be pushed.

The rising popularity of the 5G network is putting a thrust on intuitive, real-time experiences. 5G increases broadband speed by up to 10x and pushes supported device density to around a million devices per sq. km. For this, 5G requires ultra-low latency, which can be delivered by a digital infrastructure powered by edge computing.

Honestly, edge computing must start flapping its wings for the success of the 5G network. It must ensure:
– Better device management
– Lower resource utilization
– More lightweight capabilities
– Ultra-low latency
– An increased security blanket and reliable data transfer

Kubernetes – powering 5G at the edge
"Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery." – Kubernetes.io

Kubernetes streamlines the underlying compute across a distributed environment and imparts consistency at the edge. It helps network providers maximize the value of containers at the edge through automation and swift deployment with a broader security blanket. Kubernetes for edge computing will eliminate most of the labor-intensive workloads, thereby driving better productivity and quality.

Kubernetes has an unquestionable role to play in unleashing the commercial value of 5G, at least for now. The only alternative to Kubernetes is VMware, which does not make the cut due to space and cost issues. Kubernetes architecture has proved to accelerate the automation of mission-critical workloads and reduce the overall cost of 5G deployment.

A microservices architecture is required to support the non-real-time components of 5G. Kubernetes can create a self-controlled closed loop, which ensures the required number of microservices are hosted and controlled at the desired level.
Further, the Horizontal Pod Autoscaler in Kubernetes can release new container instances depending on the workload at the edge.

Last year, AT&T signed an eight-figure, multi-year deal with Mirantis to roll out 5G leveraging OpenStack and Kubernetes. Ryan Van Wyk, AT&T Associate VP of the Network, said, "There really isn't much of an alternative. Your alternative is VMware. We've done the assessments, and VMware doesn't check boxes we need."

KubeEdge – giving an edge to Kubernetes
KubeEdge is an open-source project built on Kubernetes. The latest version, KubeEdge v1.3, hones the capabilities of Kubernetes to power intelligent orchestration of containerized applications at the edge. KubeEdge streamlines communication between the edge and the cloud data center with infrastructure support for networking, application deployment, and metadata. The best part is that it allows coders to create customized logic scripts to enable resource-constrained device communication at the edge.

Future ahead
Gartner notes, "Around 10 percent of enterprise-generated data is created and processed outside a traditional centralized data center or cloud. By 2025, this figure will reach 75 percent."

The proliferation of devices due to IoT, Big Data, and AI will generate a mammoth amount of data. For 5G to succeed, it is essential that edge computing handles these complex workloads and maintains data elasticity. Kubernetes will therefore be the functional backbone of edge computing, imparting resiliency in orchestrating containerized applications.
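To make the autoscaling point above concrete, here is a minimal sketch of a Kubernetes HorizontalPodAutoscaler manifest. The deployment name, namespace, and thresholds are hypothetical placeholders chosen for illustration, not values taken from any 5G reference architecture.

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: edge-app-hpa                   # hypothetical name
  namespace: edge                      # hypothetical namespace for the edge site
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: edge-app                     # the containerized edge workload to scale
  minReplicas: 2                       # keep a small footprint on constrained edge nodes
  maxReplicas: 10                      # cap growth so the edge site is not overcommitted
  targetCPUUtilizationPercentage: 70   # add replicas when average CPU crosses 70 percent

Applied with kubectl apply -f hpa.yaml, this asks Kubernetes to add or remove pod replicas for the edge workload as load changes, which is the "release new container instances depending on the workload" behaviour described above.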

Aziro Marketing


Kubernetes storage validation by Ansible test automation framework

Ansible is mainly used as a software provisioning, configuration management, and application-deployment tool. We have used it to develop a test automation framework for validating Kubernetes storage, and this post explains how.

Why we used Ansible
Kubernetes is a clustered environment with two or more worker nodes and one or more master nodes. We have to create CSI driver volumes in it to validate our storage box, so the test environment consists of multiple hosts. A volume may be mounted in a pod created on any of the worker nodes, so we need to be able to validate any worker node dynamically. If we used a general-purpose programming or scripting language, we would have to handle remote code execution ourselves; we have worked on a couple of automation projects using PowerShell and Python, and the remote execution libraries need a lot of work. In Ansible, the heavy lifting of remote execution is taken care of for us, so we can concentrate on the core test validation logic.

How Ansible is used
As part of Kubernetes storage validation, there are many features to be validated, such as volume group, snapshot group, volume mutator, and volume resize, and each feature has many test cases. For each feature we created a role, each test case is covered in a tasks file under the role, and main.yml in the role includes all the test task files.

Structure of the Ansible automation framework roles:
roles
  Feature_test
    volumegroup_provision
      Tasks
        Test1.yml
        Test2.yml
        Main.yml
    volumesnapshot_provision
    volume_resize
    basic_volume_workflow
  Lib
    resources (library files for SC, PVC, Pod and IO inside the Pod)
volgroup_play.yml
volsnaphost_play.yml
volresize_play.yml
basic_volume_play.yml
Hosts

In the above framework, test1.yml and test2.yml are tasks files where the test cases are written. Each feature has its own play file, for example volgroup_play.yml; executing volgroup_play.yml runs the tests in test1.yml and test2.yml. The command below executes the play:

ansible-playbook -i hosts volgroup_play.yml -vv

Challenges

Problem: In Ansible, if a task fails, execution stops. So if a feature has 10 test cases and the second test fails, the remaining 8 test cases will not be executed.

Solution: Each test case is written inside a block/rescue pair. When a test fails, it is handled by the rescue block, where we clean up the test bed so that the next test case runs without any issues.

Sample test file:

- block:
    - include: test_header
      vars:
        Test_file: 'test1.yml'
        Test_description: 'volume group provision basic workflow'
    # < creation of SC, PVC and Pod, and validation logic >
    - include: test_footer
      vars:
        Test_file: 'test1.yml'
        Test_description: 'volume group provision basic workflow'
        Test_result: 'Pass'
  rescue:
    - include: test_footer
      vars:
        Test_file: 'test1.yml'
        Test_description: 'volume group provision basic workflow'
        Test_result: 'Fail'
    # < cleanup logic >

Problem: Some tasks that are easy in a programming language are tough in Ansible.

Solution: Write a custom Ansible module in Python.

Pros of using Ansible as an automation framework:
– Ansible is very simple to implement.
– It takes care of the heavy lifting of remote code execution.
– For a clustered environment, the speed of automation development is considerably higher.

Cons:
– Though it is simple, Ansible is still not a programming language.
– Writing straightforward commands is easy, but when we write logic, a few lines of a programming language can do what 100 lines of Ansible do.
– When multiple tasks need to be executed in nested loops, it is very hard to implement in Ansible (we have to use the 'include' module with loops, and then use 'include' again inside it; it is not very intuitive).

Conclusion
Ansible can be used as a test automation framework for Kubernetes storage validation. Wherever heavy programming logic is required, it is better to write a custom Ansible module in Python, which will make life easier.
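The framework above refers to per-feature play files such as volgroup_play.yml but does not show one, so here is a minimal sketch of what such a play could look like, assuming the volumegroup_provision role is resolvable from the roles directory; the k8s_master host group is a hypothetical entry in the hosts inventory file.

---
# volgroup_play.yml: runs every volume group provisioning test in the role
- hosts: k8s_master            # hypothetical inventory group from the hosts file
  gather_facts: yes
  roles:
    - volumegroup_provision    # main.yml in this role includes test1.yml, test2.yml, ...

It is then executed as described above with ansible-playbook -i hosts volgroup_play.yml -vv; adding a new test case is just a matter of dropping another tasks file into the role and including it from main.yml.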

Aziro Marketing


Learn How to Orchestrate Your Infrastructure Fleet with Chef Provisioning

Chef Provisioning is a relatively new member of the Chef family. It can be used to build infrastructure topologies using the new machine resource, and this blog post shows how that is done.

You bring up and configure individual nodes with Chef all the time. Your standard workflow would be to bootstrap a node, register it to a Chef server, and then run Chef client to install software and configure the node. You rinse and repeat this step for every node that you want in your fleet. Maybe you have even written a nice wrapper over Chef and Knife to manage your clusters. Until recently, though, Chef did not have any way to understand the concept of a cluster or fleet.

Say you are running a web application with some decent traffic. There would be a bunch of cookbooks and recipes to install and configure: web servers, a DB server, a background processor, a load balancer, and so on. Sometimes you might have additional nodes for Redis or RabbitMQ. So let us say your cluster consists of three web servers, one DB server, one server that does all the background processing (generating PDFs, sending emails, etc.), and one load balancer for the three web servers. Now, if you wanted to bring up such a cluster for multiple environments, say "testing", "staging," and "production," you would have to repeat the steps for each environment; not to mention, your environments could be powered by different providers, with production and staging on AWS, Azure, etc., while testing could possibly be on local infrastructure, maybe in VMs. This is not difficult, but it definitely makes you wonder whether you could do it better, if only you could describe your infrastructure as code that comes up with just one command. That is exactly what Chef Provisioning does. Chef Provisioning was introduced in Chef version 12. It helps you describe your cluster as code and build it at will, as many times as you want, on various types of clouds, virtual machines, or even bare metal.

The Concepts
Chef Provisioning depends on two main pillars: the machine resource and drivers.

Machine Resource
"machine" is an abstract concept of a node from your infrastructure topology. It could be an AWS EC2 instance or a node on some other cloud provider. It could be a Vagrant-based virtual machine, a Linux container, or a Docker instance. It could even be a real, physical bare-metal machine. "machine" and other related resources (like machine_batch, machine_image, etc.) can be used to describe your cluster infrastructure. Each "machine" resource describes whatever it does using standard Chef recipes. The general convention is to describe your fleet and its topologies using "machine" and other resources in a separate file. We will see this in detail soon, but for now here is how a machine is described.

#setup-cluster.rb
machine 'server' do
  recipe 'nginx'
end

machine 'db' do
  recipe 'mysql'
end

A recipe is one of a "machine" resource's attributes. Later we will see a few more of these along with their examples.

Drivers
As mentioned earlier, with Chef Provisioning you can describe your clusters and their topologies and then deploy them across a variety of clouds, VMs, bare metal, etc. For each such cloud or machine that you would like to provision, there are drivers that do the actual heavy lifting. Drivers convert the abstract "machine" descriptions into physical reality. Drivers are responsible for acquiring the node data, connecting to the nodes via the required protocol, bootstrapping them with Chef, and running the recipes described in the "machine" resource.
Provisioning drivers need to be installed separately as gems. The following shows how to install and use the AWS driver via environment variables on your system.

$ gem install chef-provisioning-aws
$ export CHEF_DRIVER=aws

Running chef-client on the above recipe will create two instances in your AWS account, referenced by your settings in "~/.aws/config". We will see an example run later in the post.

The driver can be set in your knife.rb if you so prefer. Here, we set the chef-provisioning-fog driver for AWS.

driver 'fog:AWS'

It is also possible to set the driver inline in the cluster recipe code.

require 'chef/provisioning/aws_driver'
with_driver 'aws'

machine 'server' do
  recipe 'web-server-app'
end

In the following example, the Vagrant driver is given via the driver attribute with a driver URL as the value. "/opt/vagrantfiles" will be searched for Vagrantfiles in this case.

machine 'server' do
  driver 'vagrant:/opt/vagrantfiles'
  recipe 'web-server-app'
end

It is a good practice to keep driver details and cluster code separate, as it lets you use the same cluster descriptions with different provisioners by just changing the driver in the environment. It is possible to write your own custom provisioning drivers, but that is beyond the scope of this blog post.

The Provisioner Node
An interesting concept you need to understand is that Chef Provisioning needs a "provisioner node" to provision all machines. This node could be a node in your infrastructure or simply your workstation. chef-client (or chef-solo / chef-zero) runs on this provisioner node against a recipe that defines your cluster. Chef Provisioning then takes care of acquiring a node in your infrastructure, bootstrapping it with Chef, and running the required recipes on it. Thus, you will see that chef-client runs twice: once on the provisioner node and then on the node that is being provisioned.

The Real Thing
Let us dig a bit deeper now and first bring up a simple cluster. Using Chef knife you can upload your cookbooks to the Chef server (you could do it with chef-zero as well). Here I have put all my required recipes in a cookbook called "cluster", uploaded it to a Chef server, and set the "chef_server_url" in my "client.rb" and "knife.rb". You can find all the examples here.

Machine
#recipes/webapp.rb
require 'chef/provisioning'

machine 'db' do
  recipe 'database-server'
end

machine 'webapp' do
  recipe 'web-app-stack'
end

To run the above recipe:

sudo CHEF_DRIVER=aws chef-client -r "recipe[cluster::webapp]"

This should bring up two nodes in your infrastructure: a DB server and a web application server as defined by the web-app-stack recipe. The above command assumes that you have uploaded the cluster cookbook, consisting of the required recipes, to the Chef server.

More Machine Goodies
Like any other Chef resource, machine can have multiple actions and attributes that can be used to achieve different results. A "machine" can have a "chef_server" attribute, which means different machines can talk to different Chef servers. The "from_image" attribute can be used to set a machine image from which the machine is created. You can read more about the machine resource here.

Parallelisation Using machine_batch
Now, if you would like to have more than one web application instance in your cluster and you need more web app servers, say five instances, what do you do? Run a loop over your machine resource.
1.upto(5) do |i|
  machine "webapp-#{i}" do
    recipe 'web-app-stack'
  end
end

The above code snippet, when run, should bring up and configure five instances in parallel. The "machine" resource parallelizes by default: if you describe multiple "machine" resources consecutively with the same actions, Chef Provisioning combines them into a single resource ("machine_batch", more about this later) and runs them in parallel. This is great because it saves a lot of time. The following will not parallelize because the actions are different.

machine 'webapp' do
  action :setup
end

machine 'db' do
  action :destroy
end

Note: if you put other resources between "machine" resources, the automatic parallelization does not happen.

machine 'webapp' do
  action :setup
end

remote_file 'somefile.tar.gz' do
  source 'https://example.com/somefile.tar.gz'
end

machine 'db' do
  action :setup
end

Also, you can explicitly turn off parallelization by setting "auto_batch_machines = false" in your Chef config (knife.rb or client.rb). Using "machine_batch" explicitly, we can parallelize and speed up provisioning for multiple machines.

machine_batch do
  action :setup
  machines 'web-app-stack', 'db'
end

Machine Image
It is even possible to define machine images using a "machine_image" resource, which can then be used by the "machine" resource to build machines.

machine_image 'web_stack_image' do
  recipe 'web-app-stack'
end

The above code will launch a machine using your chosen driver, install and configure the node as per the given recipes, create an image from this machine, and finally destroy the machine. This is quite similar to how the Packer tool launches a node, configures it, and then freezes it as an image before destroying the node.

machine 'web-app-stack' do
  from_image 'web_stack_image'
end

Here a machine "web-app-stack", when launched, will already have everything in the recipe "web-app-stack". This saves a lot of time when you want to spin up machines that share common base recipes. Think of a situation where team members need machines with some common stuff installed, and different people install their own specific things as per requirement. In such a case, one could create an image with the basic packages, e.g., build-essential, ruby, vim, etc., and that base image could be used as the source machine image for further work.

Load Balancer
A very common scenario is to put a bunch of machines, say web application servers, behind a load balancer, thus achieving redundancy. Chef Provisioning has a resource specifically for load balancers, aptly called "load_balancer". All you need to do is create the machine nodes and then pass the machines to a "load_balancer" as below.

1.upto(2) do |node_id|
  machine "web-app-stack-#{node_id}"
end

load_balancer "web-app-load-balancer" do
  machines %w(web-app-stack-1 web-app-stack-2)
end

The above code will bring up two nodes, web-app-stack-1 and web-app-stack-2, and put a load balancer in front of them.

Final Thoughts
If you are using the AWS driver, you can set machine_options as below. This is important if you want to use customized AMIs, users, security groups, etc.

with_machine_options :ssh_username => '',
  :bootstrap_options => {
    :key_name => '',
    :image_id => '',
    :instance_type => '',
    :security_group_ids => ''
  }

If you don't provide the AMI ID, the AWS driver defaults to a certain AMI per region. Whatever AMI you use, you have to use the correct ssh username for the respective AMI. [3]

One very important thing to note is that there also exists a Fog driver (chef-provisioning-fog) for various cloud services, including EC2.
So, there are often different names for the parameters that you might want to use. For example, the chef-provisioning-aws driver, which depends on the AWS Ruby SDK, uses "instance_type", whereas the Fog driver uses "flavor_id". Security groups use the key "security_group_ids" in the AWS driver and take IDs as values, but the Fog driver uses "groups" and takes security group names as values. This can at times lead to confusion if you are moving from one driver to another. At the time of writing this article, I had to lean on the documentation of the various drivers; the best way to understand them is to check the examples provided, run them, and learn from them, and maybe even read the source code of the drivers to understand how they work. Chef Provisioning recently got bumped to 1.0.0. I would highly recommend keeping an eye on the GitHub issues in case you face some trouble.

References
[1] https://docs.chef.io/provisioning.html
[2] https://github.com/pradeepto/chef-provisioning-playground
[3] http://alestic.com/2014/01/ec2-ssh-username
[4] https://github.com/chef/chef-provisioning/issues

Aziro Marketing
