DevOps Updates

Uncover our latest and greatest product updates

How to Secure CI/CD Pipelines with These 5 Key DevSecOps Practices

While we understand the importance of 'Continuous Everything' and invest heavily in CI/CD pipelines, we must also pay heed to their security requirements. Hidden vulnerabilities in our code often hamper the operations and testing phases of the lifecycle, and on top of that, vulnerabilities imported with third-party libraries via open-source software (OSS) can make things worse. While we build CI/CD pipelines, developers are committing a large volume of code, and all of it needs a thorough checking mechanism. Checking it all manually is impossible; this is where DevSecOps comes in.

Continuous Everything and DevSecOps work in tandem. For the environment to stay continuous, there must be no lurking threats; any serious threat can bring Continuous Everything crashing down. Practicing Continuous Everything culminates in continuous delivery pipelines that vet the code committed each day. It therefore makes sense to build security checks into these pipelines and run them automatically, so that unseen vulnerabilities are nipped in the bud. Let's look at the five key DevSecOps steps to ensure security in CI/CD pipelines.

1. Pre Source Code Commitment Analysis

The DevSecOps team must check code thoroughly before it is submitted to the source code repository. The team can leverage SAST (Static Application Security Testing) tools to analyze the code, detect deviations from coding best practices, and prevent the import of insecure third-party libraries. After the check, the team can fix recurring security issues before the code reaches the repository. This way, manual tasks are automated and productivity is boosted. However, the team must ensure that the SAST tool supports the programming language in use; incompatibility between the two hampers overall productivity.
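To make the pre-commit idea concrete, here is a minimal sketch of a commit gate. The two rules and the file-content interface are purely illustrative assumptions; a real SAST tool such as those discussed above ships hundreds of language-aware checks and would not be replaced by a script like this.

```python
import re

# Two hypothetical rules only, for illustration; a real SAST tool ships
# hundreds of language-aware checks.
RULES = {
    r"password\s*=\s*['\"].+['\"]": "possible hardcoded credential",
    r"\beval\s*\(": "eval() on potentially untrusted input",
}

def scan_source(text):
    """Return (line_number, description) findings for one file's contents."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern, description in RULES.items():
            if re.search(pattern, line):
                findings.append((lineno, description))
    return findings

def commit_allowed(staged_files):
    """Gate the commit: True only if no staged file produces findings."""
    return all(not scan_source(content) for content in staged_files.values())
```

Wired into a git pre-commit hook, a gate like this would reject the commit whenever `commit_allowed` returns False, which mirrors the "fix issues before they reach the repository" behavior described above.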
2. Source Code Commitment Analysis

These checks apply to any change a developer pushes to the source code repository. This is generally an automated security test that gives a quick idea of the changes required. Implementing source code commitment analysis helps create strategically defined processes that guarantee security checks, and it assists DevSecOps teams in debugging issues that might otherwise introduce unnecessary risk into projects. Here too, you can use a SAST tool with rules tailored to your application. You can also identify the top vulnerabilities for your applications, such as cross-site scripting (XSS) or SQL injection, and check for them automatically. Developers can additionally perform extended unit testing; the unit-test cases will differ according to the application and its features. Lastly, developers must study the results of the automated tests and adjust their coding styles accordingly.

3. Advanced Security Test – Post Source Commitment Analysis

Once the preceding steps are complete, the DevSecOps team must run an advanced, automatically triggered check. This step is necessary in case a unit test fails, or the SAST test is not helping because of a programming-language compatibility issue. Vulnerabilities are then detected, and any threat of a grave nature must be resolved. Automated post source commitment analysis typically includes open-source threat detection, risk-detection security tests, PGP-signed releases, and storing artifacts in repositories.

4. Staging Environment Code Analysis

The staging environment is the last stage before an application moves to production, so security analysis of every build from the repository becomes essential. Here, apart from SAST, the security team must also execute DAST (Dynamic Application Security Testing), performance, and integration checks. The advanced rule sets in SAST and DAST must be aligned with the OWASP checklist.
DAST assists security teams in testing sub-components of an application for vulnerabilities before deploying it, and it can likewise examine an application already in the operational state; DAST scanners are also independent of the programming language. Testing third-party and open-source components, including logging, web frameworks, and XML or JSON parsing, is equally significant. Any vulnerabilities found here must be properly addressed before moving to production.

5. Pre-Production Environment Code Analysis

In this step, the DevSecOps team must ensure that an application deployed to production has zero errors. This is done post-deployment. An optimal way to conduct this check is to trigger continuous checks automatically once the preceding steps are complete. The team can then identify vulnerabilities that possibly went unnoticed in earlier steps. Continuous security checks also offer real-time insight into application performance and flag users with unauthorized access.

Conclusion

As DevOps grows as a culture and CI/CD is implemented as a result, security requirements will only tighten. The impact of any vulnerability grows from coding and testing through deployment to production, so it is important to make security a core part of DevOps right from the start. It is equally crucial to break the silo approach and embrace DevSecOps. Security teams that implement DevSecOps as the methodical process listed below make it easier to integrate processes and bring consistency to cybersecurity:

a. Pre Source Code Commitment Analysis
b. Source Code Commitment Analysis
c. Advanced Security Test – Post Source Commitment Analysis
d. Staging Environment Code Analysis
e. Pre-Production Environment Code Analysis

Aziro Marketing


5 DevSecOps Best Practices for Your Security Team

Pamela, Product Head at an ISV, envisions transforming her team's Dev and Ops processes. She establishes a DevOps team to facilitate 'continuous everything,' intending to achieve unmatched product quality, process automation, and a risk-averse digital infrastructure. Six months down the line, her team has a faster development cycle. But Pamela isn't satisfied: in those six months, a couple of security incidents were reported, and investigation traced the cause to undetected bugs that had been present right from the coding environment. Pamela and her team aren't the only ones to suffer. Per the 2019 Sonatype DevSecOps survey, one in four companies experienced a breach in 2018-2019.

DevOps Mantra – Make Security its Core and not just a Preservative

It is impressive how DevOps automates the development, production, testing, and deployment environments. However, the automation chain often ignores essential security protocols, so data left unencrypted in the development environment becomes an easy target for breaches. The key is to integrate security at an early stage. When practicing DevOps, code changes many times in very little time; the speed often outpaces the security team's efforts and leaves them flat-footed. This poor alignment between teams results in lapses in security discipline: unplanned vulnerabilities, less robust code, and insecure passwords, to name a few. The Sonatype survey states that 48 percent of respondents cited lack of time as the reason for not practicing security at an early stage of the SDLC, a number that hasn't gone down since 2018. DevSecOps completes the DevOps lifecycle by injecting security into its core. It helps companies move under a broader security blanket with source code analysis, vulnerability testing, penetration testing, and access management, among others.
However, putting a DevSecOps guide in place has been a matter of concern. Let us analyze the top two challenges organizations experience in implementing DevSecOps.

People

Neutralizing corporate mindsets to accept change is like untying an intricate knot. You need to bring the team onto one page and show them the bigger picture; make them realize the long-term benefits of practicing security from inception. The Sonatype survey says that only one in four respondents believe that security and quality go hand in hand.

Expertise

A 2018-2019 DevOps survey showed that 58 percent of tech leaders think a lack of skills hinders the embedding of security and testing within the SDLC. Lack of expertise makes the complete DevSecOps plan vulnerable: knowing what to do is essential, but knowing how to do it is the key. Organizations often lack the skills to design an effective DevSecOps plan with defined milestones, clear operating procedures, deliverables, and project owners. Mapping the DevSecOps process flow within an organization and ensuring its success requires the right mix of tools, policies, methodologies, and practices. The bottom line remains smooth synchronization between the Dev, Ops, and Infosec teams. With that, let us look at a five-point security checklist that can serve as DevSecOps best practices.

1. Embrace Automation

The standard requirement for continuous testing and continuous integration is speed, which makes automation fundamental, along with essential security controls and trigger points. Per the Sonatype 2019 survey, 63 percent of respondents said they have automated their security practices. It is also vital to have mindful automation in place. For example, your source code scan need not cover the whole application daily; it can be confined to the code committed that day.
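Confining the daily scan to that day's commits can be sketched in a few lines. The `origin/main` base branch and the suffix list are assumptions for illustration; the changed-file list would normally come from your VCS, as shown with `git diff --name-only`.

```python
import subprocess

# File types our hypothetical SAST tool understands; adjust to your stack.
SCANNABLE_SUFFIXES = (".py", ".js", ".java")

def changed_files(base="origin/main"):
    """Ask git for files touched since `base` (sketch; requires a git repo)."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.splitlines()

def files_to_scan(changed):
    """Narrow the daily scan to changed files the tool can parse."""
    return [f for f in changed if f.endswith(SCANNABLE_SUFFIXES)]
```

Feeding only `files_to_scan(changed_files())` to the scanner keeps scan times proportional to the day's work rather than to the size of the whole codebase.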
The key is also to complement static application security testing with dynamic application security testing, which ensures vulnerability scanning in real time. It is equally important to have a relevant, optimal set of tools that bring automation to your configuration management, code analysis, patching, and access management.

2. Risk Management of Third-Party Tools and Technologies

The use of open-source technologies for application development is on the rise. Per the 2019 Red Hat report, 69 percent of respondents believe that open-source technology is crucial. However, there are security concerns around the use of open-source technologies that must be addressed. The Red Hat report cites: "Security is still cited as an open-source concern. Some of that fear likely stems from general security concerns since hacks and data breaches seem to be daily news. This concern may also reflect how unmanaged open source code—found across the web or brought in through dependencies—can introduce vulnerabilities in both open source and proprietary solutions." Developers are too busy to review open-source code, which can bring unidentified vulnerabilities and other security issues into the codebase. Therefore, code dependency testing is necessary. An OWASP dependency check will help ensure that there is no vulnerability in code that depends on open-source components.

3. Uniform Security Management Process

The security team will usually post bug reports in different bug repositories. Developers don't have the bandwidth to check all the reports, and on top of that, multiple priorities mean functional testing takes precedence over security issues. It is therefore fundamental to DevSecOps to have a uniform security application management system in place. That way, any modification in code is reflected in one place, and the security team is immediately notified to execute the authentication-testing protocol.
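The "one place for all findings" idea behind a uniform security management process can be sketched by normalizing results from different scanners into a single ranked worklist. The field names, severity labels, and example rules below are illustrative assumptions, not any particular scanner's output format.

```python
from collections import namedtuple

# One normalized record per finding, whatever tool reported it. The field
# names are illustrative, not any particular scanner's output format.
Finding = namedtuple("Finding", "tool rule severity location")

SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def unify(*tool_reports):
    """Merge per-tool finding lists into one worklist, worst first."""
    merged = [f for report in tool_reports for f in report]
    return sorted(merged, key=lambda f: SEVERITY_RANK[f.severity])
```

With every SAST, DAST, and dependency-check result funneled through one `unify` step, developers triage a single queue instead of checking several bug repositories.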
Another critical point is to follow the 'secure by design' principle via the automation of security tasks. This helps create and maintain collective software and security elements such as correct authorization, control mechanisms, audit management, and safety protocols. The result: a transparent security culture.

4. Integrating the Application Security System with the Bug Tracker

The application security system should be integrated with your task management system. This will automatically create a list of bug tasks that the infosec team can execute, along with actionable details such as the nature of each bug, its severity, and the treatment required. The security team is thus empowered to fix issues before they land in the production and deployment environments.

5. Threat Modeling – The Last Key

The SANS Institute advocates risk assessment before implementing a DevSecOps methodology. Threat modeling produces a risk-gap analysis, helping you identify the software components under threat, the level of each threat, and possible countermeasures. With threat modeling, the development team is equipped to locate fundamental flaws in the architecture and make the necessary changes to application designs.

Conclusion

The ferocious rise in competition demands a reduction in applications' time to market, supplemented with superior quality, so DevOps as a practice is only expected to grow. Having rendered DevSecOps services for a while now, we have realized that imbibing security from the early stages is the key to maintaining zero deployment downtime. Organizations must be thoughtful while shifting to Dev + Security + Operations and should follow the idea of People > Process > Technology. While doing so, the above five DevSecOps best practices will lay the foundation.
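To make the threat-modeling practice (point 5 above) concrete, ranking threats can be sketched with a simple scoring function. The five factors follow the well-known DREAD-style scheme (Damage, Reproducibility, Exploitability, Affected users, Discoverability); the particular threats and ratings below are illustrative, not from a real assessment.

```python
# Each threat gets 1-10 ratings on five DREAD-style factors:
# Damage, Reproducibility, Exploitability, Affected users, Discoverability.
# The numbers used in any example are illustrative only.

def risk_score(damage, repro, exploit, affected, discover):
    """Average the five factor ratings into one risk score."""
    return (damage + repro + exploit + affected + discover) / 5

def rank_threats(threats):
    """threats: {name: (d, r, e, a, d)}; return names, riskiest first."""
    return sorted(threats, key=lambda n: risk_score(*threats[n]), reverse=True)
```

A ranking like this is what turns a threat model into an actionable risk-gap analysis: the team works down the list from the riskiest architectural flaw.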

Aziro Marketing


4 Things Aziro (formerly MSys Technologies) Will Do at KubeCon 2019

The Kubernetes and cloud native communities have grown at a tremendous pace in the last couple of years; the buzz and general vibe before and after each KubeCon is a testimony to this. As the storage and cloud industry veers toward cloud native technologies, events like KubeCon are the perfect place to educate, brainstorm, and reflect on the further advancement of cloud native computing. This blog details the technologies that our cloud-native DNA digs at these events. KubeCon and CloudNativeCon are havens for technocrats, and as an active participant in the Digital Transformation epoch, you should check them out too. We have also enumerated key events you should attend at KubeCon 2019.

1. Cloud Native Technologies for Enterprises

Today's volatile markets expect high-quality applications that are fast and agile. Enterprises need to shorten their time to market by developing agile capabilities that can disarm the competition and cater to the market. While critical business drivers vary for every enterprise, criteria such as time to market, cost reduction, and easier manageability are usually reckoned important. Containers are emerging as the default for applications across these use cases, and Kubernetes is the right choice to orchestrate those containers. With this in mind, KubeCon is the ideal venue for enterprises to learn and network with solution providers and strategize their cloud native roadmap.

2. The Cloud Native Solution to Cloud Security Risks

Data security is always a key concern for enterprises, and the dynamic nature of containers exponentially increases security threats. It is therefore important that cloud native-centric security products focus specifically on the security needs of the cloud ecosystem.
KubeCon 2019 has a host of talks and sessions that focus on the growing need for Kubernetes security, some of them being:

The Devil in the Details: Kubernetes' First Security Assessment – Aaron Small, Google & Jay Beale, InGuardians [Tuesday, November 19 • 10:55am – 11:30am]
Securing Communication Between Meshes and Beyond with SPIFFE Federation – Evan Gilman, Scytale & Oliver Liu, Google [Thursday, November 21 • 2:25pm – 3:00pm]
How Kubernetes Components Communicate Securely in Your Cluster – Maya Kaczorowski, Google [Thursday, November 21 • 11:50am – 12:25pm]
How Yelp Moved Security From the App to the Mesh with Envoy and OPA – Daniel Popescu & Ben Plotnick, Yelp [Thursday, November 21 • 10:55am – 11:30am]

3. Kubernetes: The Door to a Multi-Cloud World

Today's businesses are unfulfilled by applications that adhere strictly to one-track environments; enterprises profit from applications that are versatile and can move between environments. Kubernetes and containers let enterprises run applications across environments, whether on-premise VMs, public cloud, or multiple clouds, fostering portability and agility, and they have helped many IT leaders bridge on-premise and public cloud environments. Their widespread adoption into mainstream production environments is driving innovation. Kubernetes has helped companies turn the idea of multi-cloud into a reality: by running the same container images across multiple cloud platforms, IT teams maintain control over their IT and security. Even so, businesses need to assess their cloud prowess time and again, and so require assistance to reevaluate existing strategy and chart a new one wherever applicable.
If you need to assess your serverless infrastructure or are looking to customize solutions for your business, here are some talks you should attend:

Serverless Platform for Large Scale Mini-Apps: From Knative to Production – Yitao Dong & Ke Wang, Ant Financial [Wednesday, November 20 • 5:20pm – 5:55pm]
KubeFlow's Serverless Component: 10x Faster, a 1/10 of the Effort – Orit Nissan-Messing, Iguazio [Tuesday, November 19 • 4:25pm – 5:00pm]
Kubernetes Storage Cheat Sheet for VM Administrators – Manu Batra & Jing Xu, Google [Wednesday, November 20 • 4:25pm – 5:00pm]
Only Slightly Bent: Uber's Kubernetes Migration Journey for Microservices – Yunpeng Liu, Uber [Tuesday, November 19 • 10:55am – 11:30am]
Growth and Design Patterns in the Extensions Ecosystem – Eric Tune, Google [Wednesday, November 20 • 11:50am – 12:25pm]

4. Application Support and Community

Kubernetes is one of the most agile technologies, supporting a wide spectrum of workloads, users, and use cases. It supports multiple programming languages and frameworks, enabling stateless, stateful, and data-processing workloads. Kubernetes' growth, support, and broad adoption justify its popularity among container solutions: the project has gained a very large, active open-source community of users and developers, as well as the support of global enterprises, IT market leaders, and major cloud providers. You can connect with some of the best minds in the business by participating in any of the social events at KubeCon USA 2019:

Taco Tuesday Welcome Reception + Sponsor Booth Crawl, sponsored by SAIC [Tuesday, November 19 • 6:40pm – 8:40pm]
Diversity Lunch + Hack – sponsored by Google Cloud [Wednesday, November 20 • 12:30pm – 2:15pm]
All-Attendee Block Party (Name Badge Required to Attend) [Wednesday, November 20 • 6:00pm – 9:00pm]
Meet Aziro (formerly MSys Technologies)' architects [Click here to know how]

These are some of the reasons why we eagerly look forward to this cloud native event.
Are you as excited as us to attend KubeCon 2019? See you soon!

Aziro Marketing


Beginner's Guide to a Career in DevOps

ABSTRACT

Software development lifecycles have moved from waterfall to agile models. These improvements are now moving into IT operations with the evolution of DevOps. DevOps primarily focuses on collaboration, communication, and integration between developers and operations.

AGILE EVOLUTION TO DEVOPS

The waterfall model was based on a rigid sequence of stages, starting with requirements. This approach is inflexible and monolithic. In the agile process, verification and validation execute at the same time. As developers become more productive, the business becomes more agile and responds to customer requests more quickly and efficiently.

WHAT IS DEVOPS

DevOps is a software development strategy that bridges the gap between developers and IT staff. It includes continuous development, continuous testing, continuous integration, continuous deployment, and continuous monitoring throughout the development lifecycle.

WHY DEVOPS IS IMPORTANT

1. Short development cycles, faster innovation
2. Reduced deployment failures, rollbacks, and time to recover
3. Improved communication
4. Increased efficiency
5. Reduced costs

WHAT ARE THE TECHNOLOGIES BEHIND DEVOPS?

Collaboration, code planning, code repository, configuration management, continuous integration, test automation, issue tracking, security, and monitoring.

HOW DOES DEVOPS WORK

DevOps uses a CAMS approach: C = Culture, A = Automation, M = Measurement, S = Sharing.

TOP DEVOPS TESTING TOOLS IN 2019

1. Tricentis 2. Zephyr 3. Ranorex 4. Jenkins 5. Bamboo 6. JMeter 7. Selenium 8. Appium 9. SoapUI 10. CruiseControl 11. Vagrant 12. PagerDuty 13. Snort 14. Docker 15. Stackify Retrace 16. Puppet Enterprise 17. UpGuard 18. AppVerify

DEVOPS JOB ROLES AND RESPONSIBILITIES

DevOps Evangelist – the leader responsible for implementing DevOps
Release Manager – the one releasing new features and ensuring post-release product stability
Automation Expert – the person responsible for achieving automation and orchestration of tools
Software Developer/Tester – the one who develops the code and tests it
Quality Assurance – the one who ensures the quality of the product conforms to its requirements
Security Engineer – the one always monitoring the product's security and health

DEVOPS CERTIFICATION

Red Hat offers five courses with exams: Developing Containerized Applications, OpenShift Enterprise Administration, Cloud Automation with Ansible, Managing Docker Containers with RHEL Atomic Host, and Configuration Management with Puppet. Amazon Web Services offers the AWS Certified DevOps Engineer certification.

SKILLS THAT EVERY DEVOPS ENGINEER NEEDS FOR SUCCESS

1. Soft skills
2. Broad understanding of tools and technologies
2.1 Source control (Git, Bitbucket, SVN, VSTS, etc.)
2.2 Continuous integration (Jenkins, Bamboo, VSTS)
2.3 Infrastructure automation (Puppet, Chef, Ansible)
2.4 Deployment automation and orchestration (Jenkins, VSTS, Octopus Deploy)
2.5 Container concepts (LXD, Docker)
2.6 Orchestration (Kubernetes, Mesos, Swarm)
2.7 Cloud (AWS, Azure, Google Cloud, OpenStack)
3. Security testing
4. Experience with infrastructure automation tools
5. Testing
6. Customer-first mindset
7. Collaboration
8. Flexibility
9. Network awareness
10. Big-picture thinking on technologies

LINKS:

https://www.quora.com/How-are-DevOps-and-Agile-different
https://www.altencalsoftlabs.com/blog/2017/07/understanding-continuous-devops-lifecycle/
https://jenkins.io/download/
https://www.atlassian.com/software/bamboo
http://jmeter.apache.org/download_jmeter.cgi
http://www.seleniumhq.org/download/
http://appium.io/
https://www.soapui.org/downloads/download-soapui-pro-trial.html
http://cruisecontrol.sourceforge.net/download.html
https://www.vagrantup.com/downloads.html
https://www.pagerduty.com/
https://www.snort.org/downloads
https://store.docker.com/editions/enterprise/docker-ee-trial
https://saltstack.com/saltstack-downloads/
https://puppet.com/download-puppet-enterprise
https://www.upguard.com/demo
https://www.nrgglobal.com/regression-testing-appverify-download

Aziro Marketing


5 Ways How DevOps Becomes a Dealmaker in Digital Transformation

The culture of DevOps is a triumph for companies: it has dismantled the inefficiencies of the traditional model of software product release. But there is a catch. Companies must unlock DevOps' true tenacity by wiring it to its primary stakeholders, people and process. A recent survey shows that most teams don't have a flair for DevOps implementation, and another study reveals that around 78 percent of organizations fail to implement DevOps. So, what makes the difference?

Companies must recognize and acclimatize to the cultural shift that erupts with DevOps. This culture is predominantly driven by automation to empower resilience, reduce costs, and accelerate innovation, and the atoms that make up its ecosystem are people and processes. Funnily enough, most companies that dream of being digitally savvy still carry primitive mindsets. Some companies have recognized this change; the question remains whether they are adept at pulling things together.

Are You Still in the Pre-DevOps Era?

It is archaic. Collaboration and innovation, for the most part, remain theoretical. Technological proliferation coupled with cut-throat competition has put your company in a hotspot. You feel crippled embracing the disruptive wave of the digital renaissance, and threatened by a maverick Independent Software Vendor new to the software sector. If the factors above seem relevant, it is time to move away from the legacy approach. The idea is simple: streamline and automate your software production across the enterprise. It is similar to creating assembly lines that operate in parallel, continuously, and in real time. In manufacturing, this concept is more than 150 years old; in the software space, we have only just realized the idea.

Where It All Started

The IT industry experienced a radical change due to rapid consumerization and technological disruption.
This created a need for companies to be more agile, intuitive, and transparent in their service offerings. Digital transformation initiatives continually push the boundaries to deliver convergent experiences that are insightful, social, and informative. Further, millennials, who now form more than 50 percent of IT decision makers globally, are non-receptive to inefficient technologies and slow processes. They want to work in an innovative business environment with augmented collaboration and intelligent operations. It is essential for organizations to follow an integrated approach to driving digital transformation, integrating cross-functionality, and enabling IT agility.

DevOps enables enterprises to design, create, deploy, and manage applications with new-age software delivery principles. It helps create unmatched competencies for delivering high-quality applications faster and more easily while accelerating innovation. With DevOps, organizations can break down silos, facilitating collaboration, communication, and automation with better quality and reduced risk and cost. Below are the five key DevOps factors to implement for improving efficiency and accelerating innovation.

1. Automating Continuous Integration/Continuous Delivery

DevOps is not confined to your departments, nor is it just the deployment of some five-star tools; DevOps is a journey to transform your organization. It is essential to implement and assess a DevOps strategy to realize the dream of software automation. Breaking silos, connecting isolated teams, and wielding a robust interface can be taxing, more so for larger companies. The initial focus must remain on integrating people into the DevOps model: neutralize resistance, infuse confidence, and empower collaboration. Once these ideas become reality, automation becomes the protagonist. The question remains: how will automation be the game changer?
This brings the lens onto Continuous Integration/Continuous Delivery (CI/CD), which works as a catalyst in channeling automation throughout your organization. Historically, software development and delivery have been teeth-grinding. Even traditional DevOps entails a manual cycle of writing code, conducting tests, and deploying code, which brings several pitfalls: multiple touchpoints, non-singular monitoring, increased dependencies on various tools, and so on.

How to Automate the CI/CD Pipeline?

Select an automation server that provides numerous tools and interfaces for automation
Select a version control and software development platform to commit code
Pull the code into the build phase via the automation server
Compile the code in the build phase for various tasks
Execute a series of tests against the compiled code
Release the code to the staging environment
Deploy the code from the staging server via Docker

An automated CI/CD pipeline mitigates the caveats of traditional DevOps. It results in a single, centralized view of project status across stages and drastically reduces human intervention, moving you toward zero errors. But is it all that simple? Definitely not; it has its own set of challenges. Companies maneuvering from waterfall to DevOps often end up automating the wrong processes. How can teams avoid this? Keep the following checklist handy:

The frequency of process/workflow repetitions
The duration of the process
Dependencies on people, tools, and technologies
Delays resulting from those dependencies
Errors in the process if it is not automated

This checklist will surface the bottlenecks and help prioritize and automate critical tasks, from code compilation and testing through deployment.

2. The Holy Nexus of Cloud and DevOps

You don't buy a superbike to ride it in city traffic; you would prefer wide, open roads to unleash its true speed. Then why do Cloud without DevOps?
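The fail-fast stage sequence described in the CI/CD automation steps above can be sketched as follows. This is a toy illustration of the control flow only, where each stage is assumed to be a callable reporting success or failure; real pipelines delegate this loop to a CI server.

```python
# Toy fail-fast pipeline: each stage is a (name, callable) pair where the
# callable returns True on success. This only illustrates the control flow
# a CI server implements for you.

def run_pipeline(stages):
    """Run stages in order; stop at the first failure.

    Returns (completed_stage_names, failed_stage_name_or_None)."""
    completed = []
    for name, stage in stages:
        if not stage():
            return completed, name
        completed.append(name)
    return completed, None
```

Because later stages never run once one fails, a broken build or failing test halts the line before anything reaches staging or deployment, which is exactly the single-view, fail-fast behavior an automated pipeline provides.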
The combination of Cloud and DevOps is magical, and IT managers often don't realize it. Becoming a cloud-first company is not possible without a DevOps-first approach; it is a case of the sum being greater than the parts. What is the point of implementing DevOps correctly when the deployment platform is inefficient? Similarly, a scalable deployment platform loses its charm without fast, continuous software development. Cloud creates a single ecosystem that gives DevOps its natural playground. The centralized platform offered by the cloud enables continuous production, testing, and deployment, and most cloud platforms come with the DevOps capabilities of Continuous Integration and Continuous Delivery, reducing the cost of DevOps relative to an on-premise environment.

Consider the case of Equifax, a consumer credit reporting company that stores its data in the cloud and in in-house data centers. In 2018, it released a document on the cyber-attack that hit it in September 2017, in which hackers collected the personally identifiable information (PII) of around 2.4 million customers. The company had to announce that it would provide credit file monitoring services to affected customers at no cost. Isn't that damaging, monetarily and morally? But what gave hackers access to such sensitive customer information? Per the company's website, the Apache Struts vulnerability CVE-2017-5638 was exploited to steal the data. Although the company patched this vulnerability in March 2017, preventing the breach required deeper expertise and a smarter process regime. With a DevOps strategy to redeploy software with continuous penetration testing more frequently, the cyber-attack could have been averted.

It is a genuine concern for any CIO to derive the value of cost, agility, security, and automation from their cloud investment. The most common hurdle is incompatible IT processes, and there are other significant challenges too.
Per a recent survey by RightScale, around 58 percent of cloud users consider cost saving their top priority, approximately 73 percent of respondents believe that a lack of skills and expertise is a significant challenge, and more than 70 percent said that governance and quality are issues. The report also outlines integration as a challenge when moving from a legacy application to the cloud. DevOps can standardize these processes and set the right course for leveraging the cloud: DevOps in the backend and Cloud in the frontend gives a competitive edge. The cloud works well when your Infrastructure as Code (IaC) is successful; IT teams must write the right scripts and configure them in the application. Manually writing infrastructure scripts can be daunting, and DevOps can automate them to align IT processes with the cloud.

3. Microservices – The Modern Architecture

Microservices without DevOps? Think again. The sea change in consumer preferences has altered companies' approach to delivering applications: consumers want results in real time, unique to their needs. Perhaps this is why companies such as Netflix and Amazon have lauded the benefits of microservices. Microservices instill application scalability and drive product release speed, and companies also leverage them to stay nimble and boost their product features. The main aim of microservices is to move away from monolithic application delivery by breaking application components into standalone services. These services then undergo development, testing, and deployment in different environments; their number can run into the hundreds or thousands, and teams can use different tools for each service. The result is a mammoth set of tasks and an exponential burden on operations; the process complexity and time pressure can be a nightmare. Leveraging microservices with a waterfall approach will not extract their real benefits.
You must de-couple the silo approach to incubate the gems of DevOps: People > Process > Automation. Microservices without DevOps would severely jolt teams’ productivity. The Quality Assurance teams would experience back-breaking pressure due to untested code. They would become bottlenecks, hampering process efficiency. DevOps, with its capability to trigger continuity, stitches every workflow together through automation.

4. Containers – Without DevOps?

Consider companies of the size and nature of Netflix that need to update data in real time and on an ongoing basis. They must keep their customers updated with new features and capabilities. This isn’t feasible without Cloud. And on top of that, releasing multiple changes daily would be dreadful. Thus, for smooth product operations, a Container architecture is a must. In such a case, they must update their container services daily, multiple times. That entails website maintenance, releasing new services (in different locations), and responding to security threats. Even if you are a small-to-medium Independent Software Vendor operating in the upper echelons of the technology world, your software product requires daily upkeep. Your developers will always be on their toes for daily security and patching updates. This is a daunting task, isn’t it? DevOps is the savior. DevOps will hold the fort for your applications that are built in the Cloud. It sets a continuous course of monitoring through automation and eases the pressure of monitoring on developers. Without DevOps, a Container architecture won’t sustain the pressure.

5. Marrying DevOps, Lean IT, and Agile

The right mix of DevOps, Lean, and Agile amplifies business performance. Agile emphasizes greater collaboration for developing software. Lean focuses on eliminating waste. DevOps aims to align software development with software delivery. The three work as positives; combining them only augments the outcome. 
However, there persists a contradiction in perceptions of adopting these three principles. When Agile took strides, teams said they already did Lean IT. When DevOps took strides, teams said they already did Agile. But the three principles strive to achieve similar things in different areas of the software lifecycle. Combining DevOps, Lean, and Agile can be an uphill task, especially for leaders who carry a traditional mindset. Organizations must revise their leadership style to align with modern business practices. The aim must be to move towards a collaborative environment for delivering value to customers. Companies must focus on implementing a modern communication strategy at the workplace. It is necessary that they address the gaps between IT and the rest of the groups within the organization. They must be proactive in initiating mindful cross-functional relationships, backed by streamlined communication. The software development teams will then work as protagonists in embracing DevOps, Lean, and Agile to survive the onslaught of competition. It is also essential to champion each of the above concepts; this ensures that the combination profits from each component. Organizational leadership must relentlessly work to create a seamless workflow while removing bottlenecks, cutting delays, and eliminating rework. Companies haven’t yet fathomed the true benefits of the DevOps-Agile-Lean combination. It takes time and a team of experts to capitalize on these three principles. Additionally, companies shy away from exploiting the agility and responsiveness of modern delivery architectures, Microservices in particular. This becomes a hindrance to reaping the full potential of the combination. The crux of driving the DevOps-Agile-Lean combination is a business-driven approach. Continual feedback backed by the right analytics also plays a crucial role: it facilitates failing fast, thereby creating a loop of continuous improvement. 
Agile offers a robust platform to design software that is tuned to market demands. DevOps stitches together process, people, and technology, ensuring efficient software delivery.

Final Thoughts

Adopting DevOps is a promising move. Above, we have shown in five ways how DevOps is your digital transformation dealmaker. However, it can be nerve-racking: it takes patience, expertise, and experience to embody it in its purest form. A half-baked DevOps strategy might give you a few immediate results, but in the long run it will deride your teams’ efforts. Automation is the best way to sail through it.

Aziro Marketing


DevOps Paradigm: Where Collaboration is the Key!

The growing popularity of DevOps as a strategic decision calls for an inside look at this practice. While DevOps has become a buzzword in the IT space, it comes with its own set of myths that need to be demystified. To put it short and straight, DevOps is an inclusive approach between the two most important teams in the IT industry: software development and IT operations. Let’s understand this further.

According to Wikipedia: DevOps (a portmanteau of development and operations) is a software development method that stresses communication, collaboration and integration between software developers and Information Technology (IT) professionals. DevOps is a response to the interdependence of software development and IT operations. It aims to help an organization rapidly produce software products and services.

DevOps aims at breaking the silos to bring some cohesiveness between Development and Operations. But the question is: why is this balance needed? Well, in business, the answer pretty much boils down to efficiency. Adopting DevOps as a common practice merits the strong value-addition it brings to the table. DevOps sits between two important workflows and exposes the gaps so that CIOs can understand the bigger picture. We all know that the entire Software Development Lifecycle comes with a set of elaborate procedures. A lot of time goes to waste going to and fro between the Development and Operations teams. By following the agile methodology of DevOps, an organization can build a process driven by a set of guidelines that puts the onus on everyone. Because it makes everyone accountable, DevOps leads to efficiency and a much better outcome.

Myth No. 1: DevOps is a ready solution

No. DevOps is not a tailored solution to your problems. It is a philosophy, a set of guidelines, that allows Development and Operations teams to work in a collaborative culture. 
It is a process that works around a continuous feedback loop, so that solutions are delivered pretty much in real time without causing time spillovers.

Myth No. 2: DevOps is automation

DevOps does involve making use of tools that automate processes. However, it looks not just at automating IT processes but also at streamlining people processes. People drive every system, and bringing a degree of seamless interaction to this segment is also an important underlying aspect of a successful DevOps strategy.

Myth No. 3: DevOps is the new engineering skill

Not really. A DevOps engineer has walked in the Developer’s shoes and understands the problems on the Operations side. Getting the perspective of both sides is definitely a skill, and one that can’t be learnt from textbooks. DevOps might look like the next big thing in IT, but it is an equally demanding space to be in: if you must fail, you need to fail fast and correct course in time so that continuous delivery is not hampered. At the heart of DevOps lies a collaborative, cross-functional environment that is driven by people. The organization needs to adopt this cultural shift to bring in transformational change.

Business Advantage
• Saves time
• Saves tons of money, which is the biggest value addition
• Saves employee man-hours, which results in better resource utilization

In short, we are talking about ROI. By following DevOps, organizations are opening doors to a higher Return on Investment. DevOps enables Development and Operations teams to focus on their competencies. It avoids pitfalls that can be caused by a lack of timely communication between these two important functions. 
It works on the principle of keeping the system flow uninterrupted and optimizing workflow at all times.

Why organizations should adopt DevOps
• Accelerate time to market
• Produce better and more stable solutions
• Reuse existing resources in a better way
• Enhance ROI

Finding the right alignment between Development and Operations can be a real challenge, and that’s what DevOps addresses. By integrating people, product, and processes, DevOps can deliver continuous value to the customer. It is a philosophy that needs to be embraced, not imposed. Keep it simple!

What’s next?

While adoption of DevOps is gaining momentum, we are already looking at NoOps and Serverless Computing. To be continued…

Aziro Marketing


Know about Libstorage – Storage Framework for Docker Engine (Demo)

This article captures the need for a storage framework within the Docker engine. It further details our libstorage framework integration with the Docker engine and its provision of a clean, pluggable storage management framework for Docker engines. The libstorage design is loosely modeled on libnetwork for the Docker engine. The libstorage framework and its current functionality are discussed in detail. Finally, future extensions and considerations are suggested. As of today, Docker has acquired Infinit (https://blog.docker.com/2016/12/docker-acquires-infinit/) to overcome this shortcoming, so I wish to see most of this gap being addressed in forthcoming Docker engine releases.

1 Introduction

The Docker engine is the open-source tool that provides container lifecycle management. The tool has been great and makes it a breeze for everyone to understand, appreciate, and deploy applications over containers. While working with the Docker engine, we found shortcomings, especially with volume management. The community’s major concern with the Docker engine has always been provisioning volumes for containers. Volume lifecycle management for containers seems not to have been thought through well, despite the various proposals floated over time. We believe there is more to it, and thus libstorage was born. Currently, Docker expects application deployers to choose the volume driver. This is plain ugly. It is the cluster administrator who decides which volume drivers are deployed. Application developers just need storage; they should never know, nor do they care about, the underlying storage stack.

2 Libstorage Stack

Libstorage as a framework defines complete volume lifecycle management methods for containers. The Docker daemon interacts with the Volume Manager to complete the volume management functionality. Libstorage standardizes the interfaces that any Volume Manager must make available. There can be only one Volume Manager active in the cluster. 
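The single-Volume-Manager, many-controllers design described here can be sketched as a toy Go program. This is a minimal illustration of the idea, not the actual libstorage API: the interface names, the `ramController` type, and the `ds-ram` label are hypothetical stand-ins chosen to mirror the terminology used in this article.

```go
package main

import (
	"errors"
	"fmt"
)

// VolumeController is a hypothetical sketch of the pluggable-controller
// contract described in the text; the real libstorage interfaces differ.
type VolumeController interface {
	Name() string
	// Policies lists the policies this controller exports,
	// e.g. "distributed", "local", "ram", "secure".
	Policies() []string
	Create(name string, sizeMB int) error
}

// VolumeManager plays the single top-level manager: it routes each
// volume request to whichever controller exports the requested policy.
type VolumeManager struct {
	byPolicy map[string]VolumeController
}

func NewVolumeManager(controllers ...VolumeController) *VolumeManager {
	m := &VolumeManager{byPolicy: map[string]VolumeController{}}
	for _, c := range controllers {
		for _, p := range c.Policies() {
			m.byPolicy[p] = c
		}
	}
	return m
}

// Create returns the name of the controller that handled the request.
func (m *VolumeManager) Create(name, policy string, sizeMB int) (string, error) {
	c, ok := m.byPolicy[policy]
	if !ok {
		return "", errors.New("no controller exports policy " + policy)
	}
	return c.Name(), c.Create(name, sizeMB)
}

// ramController is a toy controller exporting the ram and secure policies.
type ramController struct{ volumes map[string]int }

func (r *ramController) Name() string       { return "ds-ram" }
func (r *ramController) Policies() []string { return []string{"ram", "secure"} }
func (r *ramController) Create(name string, sizeMB int) error {
	r.volumes[name] = sizeMB
	return nil
}

func main() {
	mgr := NewVolumeManager(&ramController{volumes: map[string]int{}})
	ctrl, err := mgr.Create("demoram", "ram", 100)
	fmt.Println(ctrl, err) // prints: ds-ram <nil>
}
```

In the real stack the manager would also persist volume state to the distributed key-value store so that every node sees it; the sketch shows only the policy-based routing.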
Libstorage is integrated with a distributed key-value store to ensure volume configuration is synced across the cluster, so any node that is part of the cluster knows about all volumes and their various states. The Volume Controller is the glue from any storage stack to the Docker engine. There can be many Volume Controllers enabled under the top-level Volume Manager. The libstorage Volume Manager interacts directly with either a Volume Controller or with volumes to complete the intended functionality.

3 Cluster Setup

SaltStack forms the underlying technology for bringing up the whole cluster. Customized flavors of VMs with the necessary dependencies were pre-baked as Amazon Machine Images (AMIs) and DigitalOcean snapshots beforehand. Terraform scripts bootstrap the cloud cluster with a few parameters and the targeted cloud provider or private hosting, along with the credentials needed to kick-start the process. Ceph cloud storage (Rados block device) provisioning and management was married to the Docker engine volume management framework. It can be extended easily to other cloud storage solutions such as GlusterFS and CephFS. Ceph Rados Gateway and Amazon S3 were used for seamless object archival and data migration.

4 Volume Manager

The Volume Manager is the top-level module of libstorage that directly interacts with the Docker daemon and the external distributed key-value store. The Volume Manager ensures volume configuration is consistent across all nodes in the cluster. It defines a consistent interface for volume management, both for the Docker daemon to connect to and for the many Volume Controllers within libstorage that can be enabled in the cluster. A standard set of policies that Volume Controllers can expose is also defined.

4.1 Pluggable Volume Manager

The Pluggable Volume Manager is an implementation of the interface and the needed functionality. 
The top-level Volume Manager is itself a pluggable module to the Docker engine.

5 Volume Controllers

Volume Controllers are pluggable modules under Volume Managers. Each Volume Controller exports one or more policies that it supports, and users target a Volume Controller by its exported policies. For example, if the policy is distributed, the volume is available at any node in the cluster. If the policy is local, the volume is still visible on any node in the cluster, but its data is held locally on the host filesystem. Volume Controllers can use any storage stack underneath and provide a standard view of volume management through the top-level Volume Manager.

5.1 Pluggable Volume Controller

Dolphinstor implements Ceph, Gluster, local, and RAM volume controllers. Upon creation, volumes are visible across all the nodes in the cluster. Whether a volume is available for containers to mount (because of sharing properties configured during volume creation), and whether the volume data is available from other nodes (only if the volume is distributed), are controllable attributes at volume creation time. The Ceph Volume Controller implements the distributed policy, guaranteeing that any volume created with it is available on any node in the cluster. The Local Volume Controller implements the local policy, which guarantees that volume data is present only on the host machine on which the container is scheduled. Containers scheduled on any host see the volume, but it is held as a local copy. The RAM Volume Controller defines two policies, ram and secure. Volume data is held in RAM and so is volatile. 
A secure-policy volume cannot be shared even across containers on the same host.

6 CLI Extensions

Below is the list of CLI extensions provided and managed by libstorage.

• docker dsvolume create [-z|--size=MB] [-p|--policy=distributed|distributed-fs|local|ram|secure] [-s|--shared=true|false] [-m|--automigrate=true|false] [-f|--fstype=raw,ext2,ext3,btrfs,xfs] [-o|--opt=[]] VOLUME

If volumes have a backing block device, they are mounted within the volume as well. Specifying raw for fstype during volume creation does not format the volume with any filesystem; the volume is presented as a raw block device for containers to use.

• docker dsvolume rm VOLUME
• docker dsvolume info VOLUME [VOLUME...]
• docker dsvolume ls
• docker dsvolume usage VOLUME [VOLUME...]
• docker dsvolume rollback VOLUME@SNAPSHOT
• docker dsvolume snapshot create -v|--volume=VOLUME SNAPSHOT
• docker dsvolume snapshot rm VOLUME@SNAP
• docker dsvolume snapshot ls [-v|--volume=VOLUME]
• docker dsvolume snapshot info VOLUME@SNAPSHOT [VOLUME@SNAPSHOT...]
• docker dsvolume snapshot clone srcVOLUME@SNAPSHOT NEWVOLUME
• docker dsvolume qos {create|edit} [--read-iops=100] [--read-bw=10000] [--write-iops=100] [--write-bw=10000] [--weight=500] PROFILE
• docker dsvolume qos rm PROFILE
• docker dsvolume qos ls
• docker dsvolume qos info PROFILE [PROFILE...]
• docker dsvolume qos {enable|disable} [-g|--global] VOLUME [VOLUME...] 
• docker dsvolume qos apply -p=PROFILE VOLUME [VOLUME...]

7 Console Logs

[lns@dolphinhost3 bins]$ ./dolphindocker dsvolume ls
NAME Created Type/Fs Policy Size(MB) Shared Inuse Path
[lns@dolphinhost3 bins]$ ./dolphindocker dsvolume create --help
Usage: ./dolphindocker dsvolume create [OPTIONS] VOLUME-NAME
Creates a new dsvolume with a name specified by the user
  -f, --filesys=xfs       volume size
  --help=false            Print usage
  -z, --size=             volume size
[lns@dolphinhost3 bins]$ ./dolphindocker dsvolume create -f=ext4 \
  -z=100 -m -p=distributed demovol1
2015/10/08 02:30:23 VolumeCreate(demovol1) with opts map[name:demovol1 policy:distributed m
dsvolume create acked response {"Name":"demovol1","Err":""}
[lns@dolphinhost3 bins]$ ./dolphindocker dsvolume create -z=100 -p=local demolocal1
2015/10/08 02:30:53 VolumeCreate(demolocal1) with opts map[shared:true fstype:xfs automigra
dsvolume create acked response {"Name":"demolocal1","Err":""}
[lns@dolphinhost3 bins]$ ./dolphindocker dsvolume create -z=100 -p=ram demoram
2015/10/08 02:31:04 VolumeCreate(demoram) with opts map[shared:true fstype:xfs automigrate:
dsvolume create acked response {"Name":"demoram","Err":""}
[lns@dolphinhost3 bins]$ ./dolphindocker dsvolume create -z=100 -p=secure demosecure
2015/10/08 02:31:17 VolumeCreate(demosecure) with opts map[name:demosecure policy:secure mb
dsvolume create acked response {"Name":"demosecure","Err":""}
[lns@dolphinhost3 bins]$ ./dolphindocker dsvolume ls
NAME Created Type/Fs Policy Size(MB) Shared Inuse Path
demosecure dolphinhost3 ds-ram/tmpfs secure 100 false -
demovol1 dolphinhost3 ds-ceph/ext4 distributed 100 true -
demolocal1 dolphinhost3 ds-local/ local 0 true -
demoram dolphinhost3 ds-ram/tmpfs ram 100 true -
[lns@dolphinhost3 bins]$ ./dolphindocker dsvolume info demosecure demolocal1
volume info on demosecure
[ { "Name": "demosecure", "Voltype": "ds-ram", "CreatedBy": 
"dolphinhost3", "CreatedAt": "Thu Oct 8 02:31:17 EDT 2015", "Policy": "secure", "Fstype": "tmpfs", "MBSize": 100, "AutoMigrate": false, "Shared": false, "Mountpoint": "", "Inuse": null, "Containers": null, "LastAccessTimestamp": "Mon Jan 1 00:00:00 UTC 0001", "IsClone": false, "ParentSnapshot": "", "QoSState": false, "QoSProfile": "" }, { "Name": "demolocal1", "Voltype": "ds-local", "CreatedBy": "dolphinhost3", "CreatedAt": "Thu Oct 8 02:30:53 EDT 2015", "Policy": "local", "Fstype": "", "MBSize": 0, "AutoMigrate": false, "Shared": true, "Mountpoint": "", "Inuse": null, "Containers": null, "LastAccessTimestamp": "Mon Jan 1 00:00:00 UTC 0001", "IsClone": false, "ParentSnapshot": "", "QoSState": false, "QoSProfile": "" } ] [lns@dolphinhost3 bins]$ [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume usage demovol1 volume usage on demovol1 [ { "Name": "demovol1", "Usage": [ { "file": "/var/lib/docker/volumes/_dolphinstor/demovol1/lost+found", "size": "12K" }, { "file": "/var/lib/docker/volumes/_dolphinstor/demovol1", "size": "15K" }, { "file": "total", "size": "15K" } ], "Size": "100M", "Err": "" } ] [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume usage -s=false demovol1 volume usage on demovol1 [ { "Name": "demovol1", "Usage": [ { "file": "/var/lib/docker/volumes/_dolphinstor/demovol1/hosts", "size": "1.0K" }, { "file": "/var/lib/docker/volumes/_dolphinstor/demovol1/lost+found", "size": "12K" }, { "file": "/var/lib/docker/volumes/_dolphinstor/demovol1/1", "size": "0" }, { "file": "/var/lib/docker/volumes/_dolphinstor/demovol1/hostname", "size": "1.0K" }, { "file": "/var/lib/docker/volumes/_dolphinstor/demovol1", "size": "15K" }, { "file": "total", "size": "15K" } ], "Size": "100M", "Err": "" } ] [lns@dolphinhost3 bins]$ [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume info demovol1 volume info on demovol1 [ { "Name": "demovol1", "Voltype": "ds-ceph", "CreatedBy": "dolphinhost3", "CreatedAt": "Thu Oct 8 02:30:23 EDT 2015", "Policy": "distributed", "Fstype": "ext4", 
"MBSize": 100, "AutoMigrate": true, "Shared": true, "Mountpoint": "", "Inuse": [ "dolphinhost3" ], "Containers": [ "5000b791e0c78e7c8f3b43b72b42206d0eaed3150a825e1f055637b31676a77f@dolphinhost1" "0c8a9d483a63402441185203b0262f7f3b8d761a8a58145ed55c93835ba83538@dolphinhost2" ], "LastAccessTimestamp": "Thu Oct 8 03:46:51 EDT 2015", "IsClone": false, "ParentSnapshot": "", "QoSState": false, "QoSProfile": "" } ] [lns@dolphinhost3 bins]$ [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume qos ls Global QoS Enabled Name             ReadIOPS ReadBW WriteIOPS WriteBW Weight default          200 20000 100 10000 600 demoprofile      256 20000 100 10000 555 myprofile        200 10000 100 10000 555 newprofile       200 2000 100 1000 777 dsvolume qos list acked response [lns@dolphinhost3 bins]$ ./dolphindocker run -it -v demovol1:/opt/demo ubuntu:latest bash root@1dba3c87ca04:/# dd if=/dev/rbd0 of=/dev/null bs=1M count=1 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0625218 s, 16.8 MB/s root@1dba3c87ca04:/# exit [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume info demovol1 volume info on demovol1 [ { "Name": "demovol1", "Voltype": "ds-ceph", "CreatedBy": "dolphinhost3", "CreatedAt": "Thu Oct 8 02:30:23 EDT 2015", "Policy": "distributed", "Fstype": "ext4", "MBSize": 100, "AutoMigrate": true, "Shared": true, "Mountpoint": "", "Inuse": [], "Containers": [ "5000b791e0c78e7c8f3b43b72b42206d0eaed3150a825e1f055637b31676a77f@dolphinhost3" "0c8a9d483a63402441185203b0262f7f3b8d761a8a58145ed55c93835ba83538@dolphinhost3" "87c7a2462879103fd3376be4aae352568e5e36659820b92d567829c0b8375255@dolphinhost3" "f3feb1f15ed614618c02321e7739e0476f23891aa7bb1b2d5211ba1e2641c643@dolphinhost3" "76ab5182082ac30545725c843177fa07d06e3ec76a2af41b1e8e1dee42670759@dolphinhost3" "c6226469aa036f277f237643141d4d168856692134cea91f724455753c632533@dolphinhost3" "426b57492c7c05220b75d05a13ad144742b92fa696611465562169e1cb74ea6b@dolphinhost3" 
"2419534dd70ba2775ca1880fb71d196d31a167579d0ee85d5203be3cc0ff574e@dolphinhost3" "c3afeac73b389a69a856eeccf3098e778d1b0087a7a543705d6bfbba4f5c6803@dolphinhost3" "7bd28eed915c450459bd1a27d49325548d0791cbbaac670dcdae1f8d97596c7e@dolphinhost3" "0fc0217b6cda2f02ef27dca9d6dd3913bda7a871012d1073f29a864ae77bc61f@dolphinhost3" ], "LastAccessTimestamp": "Thu Oct 8 05:16:26 EDT 2015", "IsClone": false, "ParentSnapshot": "", "QoSState": false, "QoSProfile": "" } ] [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume qos apply -p=newprofile demovol1 2015/10/08 05:17:04 QoSApply(demovol1) with opts {Name:newprofile Opts:map[name:newprofile dsvolume QoS apply response [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume info demovol1             volume info [ { "Name": "demovol1", "Voltype": "ds-ceph", "CreatedBy": "dolphinhost3", "CreatedAt": "Thu Oct 8 02:30:23 EDT 2015", "Policy": "distributed", "Fstype": "ext4", "MBSize": 100, "AutoMigrate": true, "2419534dd70ba2775ca1880fb71d196d31a167579d0ee85d5203be3cc0ff574e@dolphinhost3" "c3afeac73b389a69a856eeccf3098e778d1b0087a7a543705d6bfbba4f5c6803@dolphinhost3" "7bd28eed915c450459bd1a27d49325548d0791cbbaac670dcdae1f8d97596c7e@dolphinhost3" "0fc0217b6cda2f02ef27dca9d6dd3913bda7a871012d1073f29a864ae77bc61f@dolphinhost3" ], "LastAccessTimestamp": "Thu Oct 8 05:16:26 EDT 2015", "IsClone": false, "ParentSnapshot": "", "QoSState": false, "QoSProfile": "newprofile" } ] [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume qos enable -g demovol1 2015/10/08 05:17:22 QoSEnable with opts {Name: Opts:map[global:true volume:demovol1]} dsvolume QoS enable response [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume qos ls Global QoS Enabled Name             ReadIOPS ReadBW WriteIOPS WriteBW Weight default          200 20000 100 10000 600 demoprofile      256 20000 100 10000 555 myprofile        200 10000 100 10000 555 newprofile       200 2000 100 1000 777 dsvolume qos list acked response [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume info demovol1 volume 
info on demovol1 [ { "Name": "demovol1", "Voltype": "ds-ceph", "CreatedBy": "dolphinhost3", "CreatedAt": "Thu Oct 8 02:30:23 EDT 2015", "Policy": "distributed", "Fstype": "ext4", "MBSize": 100, "AutoMigrate": true, "2419534dd70ba2775ca1880fb71d196d31a167579d0ee85d5203be3cc0ff574e@dolphinhost3" "c3afeac73b389a69a856eeccf3098e778d1b0087a7a543705d6bfbba4f5c6803@dolphinhost3" "7bd28eed915c450459bd1a27d49325548d0791cbbaac670dcdae1f8d97596c7e@dolphinhost3" "0fc0217b6cda2f02ef27dca9d6dd3913bda7a871012d1073f29a864ae77bc61f@dolphinhost3" ], "LastAccessTimestamp": "Thu Oct 8 05:16:26 EDT 2015", "IsClone": false, "ParentSnapshot": "", "QoSState": true, "QoSProfile": "newprofile" } ] [lns@dolphinhost3 bins]$ [lns@dolphinhost3 bins]$ ./dolphindocker run -it -v demovol1:/opt/demo ubuntu:latest bash root@9048672839d6:/# dd if=/dev/rbd0 of=/dev/null count=1 bs=1M 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 522.243 s, 2.0 kB/s root@9048672839d6:/# exit [lns@dolphinhost3 bins]$ [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume create -z=100 newvolume 2015/10/08 05:48:13 VolumeCreate(newvolume) with opts map[name:newvolume policy:distributed dsvolume create acked response {"Name":"newvolume","Err":""} [lns@dolphinhost3 bins]$ ./dolphindocker run -it -v newvolume:/opt/vol ubuntu:latest bash root@2b1e11bc2d45:/# cd /opt/vol/ root@2b1e11bc2d45:/opt/vol# touch 1 root@2b1e11bc2d45:/opt/vol# cp /etc/hosts . root@2b1e11bc2d45:/opt/vol# cp /etc/hostname . 
root@2b1e11bc2d45:/opt/vol# ls 1 hostname hosts root@2b1e11bc2d45:/opt/vol# exit [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume snapshot ls Volume@Snapshot CreatedBy Size NumChildren demovol1@demosnap1 dolphinhost3 104857600 [0] [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume snapshot create -v=newvolume newsnap 2015/10/08 05:49:09 SnapshotCreate(newsnap) with opts {Name:newsnap Volume:newvolume Type:d dsvolume snapshot create response {"Name":"newsnap","Volume":"newvolume","Err":""} [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume snapshot ls Volume@Snapshot CreatedBy Size NumChildren demovol1@demosnap1 dolphinhost3 104857600 [0] newvolume@newsnap dolphinhost3 104857600 [0] [lns@dolphinhost3 bins]$ ./dolphindocker run -it -v newvolume:/opt/vol ubuntu:latest bash root@f54ec93290c0:/# root@f54ec93290c0:/# root@f54ec93290c0:/# root@f54ec93290c0:/# cd /opt/vol/ root@f54ec93290c0:/opt/vol# ls 1 hostname hosts root@f54ec93290c0:/opt/vol# rm 1 hostname hosts root@f54ec93290c0:/opt/vol# touch 2 root@f54ec93290c0:/opt/vol# cp /var/log/alternatives.log . root@f54ec93290c0:/opt/vol# exit [lns@dolphinhost3 bins]$ [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume snapshot ls Volume@Snapshot CreatedBy Size NumChildren demovol1@demosnap1 dolphinhost3 104857600 [0] newvolume@newsnap dolphinhost3 104857600 [0] [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume snapshot newvolume@newsnap firstclone Usage: ./dolphindocker dsvolume snapshot [OPTIONS] COMMAND [OPTIONS] [arg...] Commands: create               Create a volume snapshot rm                   Remove a volume snapshot ls                   List all volume snapshots info                 Display information of a volume snapshot clone                clone snapshot to create a volume rollback             rollback volume to a snapshot Run ’./dolphindocker dsvolume snapshot COMMAND --help’ for more information on a command. 
--help=false    Print usage invalid command : [newvolume@newsnap firstclone] [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume snapshot clone --help Usage: ./dolphindocker dsvolume snapshot clone [OPTIONS] VOLUME@SNAPSHOT CLONEVOLUME clones a dsvolume snapshot and creates a new volume with a name specified by the user --help=false    Print usage -o, --opt=map[]  Other driver options for volume snapshot [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume snapshot clone newvolume@newsnap firstclo 2015/10/08 05:56:37 clone source: newvolume@newsnap, dest: firstclone 2015/10/08 05:56:37 clone source: volume newvolume, snapshot newsnap 2015/10/08 05:56:37 CloneCreate(newvolume@newsnap) with opts {Name:newsnap Volume:newvolume dsvolume snapshot clone response {"Name":"newsnap","Volume":"","Err":""} [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume ls NAME Created Type/Fs Policy Size(MB) Shared Inuse Path demosecure dolphinhost3 ds-ram/tmpfs secure 100 false - demovol1 dolphinhost3 ds-ceph/ext4 distributed 100 true - newvolume dolphinhost3 ds-ceph/xfs distributed 100 true - firstclone dolphinhost3 ds-ceph/xfs distributed 100 true - demolocal1 dolphinhost3 ds-local/ local 0 true - demoram dolphinhost3 ds-ram/tmpfs ram 100 true - [lns@dolphinhost3 bins]$ ./dolphindocker run -it -v firstclone:/opt/clone ubuntu:latest bas root@3970a269caa5:/# cd /opt/clone/ root@3970a269caa5:/opt/clone# ls 1 hostname hosts root@3970a269caa5:/opt/clone# exit [lns@dolphinhost3 bins]$ ./dolphindocker dsvolume info firstclone volume info on firstclone [ { "Name": "firstclone", "Voltype": "ds-ceph", "CreatedBy": "dolphinhost3", "CreatedAt": "Thu Oct 8 05:56:37 EDT 2015", "Policy": "distributed", "Fstype": "xfs", "MBSize": 100, "AutoMigrate": false, "Shared": true, "Mountpoint": "", "Inuse": [], "Containers": [], "LastAccessTimestamp": "Thu Oct 8 05:59:04 EDT 2015", "IsClone": true, "ParentSnapshot": "newvolume@newsnap", "QoSState": false, "QoSProfile": "" } ] [lns@dolphinhost3 bins]$ ./dolphindocker 
dsvolume snapshot info newvolume@newsnap
2015/10/08 05:59:33 Get snapshots info newvolume - newsnap
[ { "Name": "newsnap", "Volume": "newvolume", "Type": "default", "CreatedBy": "dolphinhost3", "CreatedAt": "Thu Oct 8 05:49:10 EDT 2015", "Size": 104857600, "Children": [ "firstclone" ] } ]
volume snapshot info on newvolume@newsnap
[lns@dolphinhost3 bins]$ ./dolphindocker dsvolume snapshot rm -v=newvolume newsnap
2015/10/08 05:59:47 snapshot rm {Name:newsnap Volume:newvolume Type: Opts:map[]}
Error response from daemon: {"Name":"newsnap","Volume":"newvolume","Err":"Volume snapshot i
[lns@dolphinhost3 bins]$ ./dolphindocker dsvolume rm newvolume
Error response from daemon: {"Name":"newvolume","Err":"exit status 39"}
[lns@dolphinhost3 bins]$ ./dolphindocker dsvolume rollback newvolume@newsnap
2015/10/08 06:00:22 SnapshotRollback(newvolume@newsnap) with opts {Name:newsnap Volume:newv
dsvolume rollback response {"Name":"newsnap","Volume":"newvolume","Err":""}
[lns@dolphinhost3 bins]$ ./dolphindocker run -it -v newvolume:/opt/rollback ubuntu:latest b
root@1545fee295af:/# cd /opt/rollback/
root@1545fee295af:/opt/rollback# ls
1 hostname hosts
root@1545fee295af:/opt/rollback# exit
[lns@dolphinhost3 bins]$

8 Libstorage Events

[lns@dolphinhost3 bins]$ ./dolphindocker events
2015-10-08T05:47:16.675882847-04:00 demovol1: (from libstorage) Snapshot[demovol1@demosnap1 create success
2015-10-08T05:48:14.413457724-04:00 newvolume: (from libstorage) Volume create success
2015-10-08T05:48:37.341001897-04:00 2b1e11bc2d45fe26b1b3082ce1a1123bd65ef1ebb61b1a0a0244a10 (from ubuntu:latest) create
2015-10-08T05:48:37.447786698-04:00 2b1e11bc2d45fe26b1b3082ce1a1123bd65ef1ebb61b1a0a0244a10 (from ubuntu:latest) attach
2015-10-08T05:48:38.118070084-04:00 newvolume: (from libstorage) Mount success
2015-10-08T05:48:38.118897857-04:00 2b1e11bc2d45fe26b1b3082ce1a1123bd65ef1ebb61b1a0a0244a10 (from ubuntu:latest) start
2015-10-08T05:48:38.235199874-04:00 
2b1e11bc2d45fe26b1b3082ce1a1123bd65ef1ebb61b1a0a0244a10 (from ubuntu:latest) resize
2015-10-08T05:48:50.463620278-04:00 2b1e11bc2d45fe26b1b3082ce1a1123bd65ef1ebb61b1a0a0244a10 (from ubuntu:latest) die
2015-10-08T05:48:50.723378247-04:00 newvolume: (from libstorage) Unmount[newvolume] container 2b1e11bc success
2015-10-08T05:49:10.341208906-04:00 newvolume: (from libstorage) Snapshot[newvolume@newsnap create success
2015-10-08T05:49:22.165250102-04:00 ef49217deb4f6b121b09d6ee714d7546dad5875129b20719a36df82 (from ubuntu:latest) create
2015-10-08T05:49:22.177473380-04:00 ef49217deb4f6b121b09d6ee714d7546dad5875129b20719a36df82 (from ubuntu:latest) attach
2015-10-08T05:49:22.861275198-04:00 newvolume: (from libstorage) Mount success
2015-10-08T05:49:22.862213412-04:00 ef49217deb4f6b121b09d6ee714d7546dad5875129b20719a36df82 (from ubuntu:latest) start
2015-10-08T05:49:23.036122376-04:00 newvolume: (from libstorage) Unmount[newvolume] container ef49217d success
2015-10-08T05:49:23.439618024-04:00 newvolume: (from libstorage) Unmount[newvolume] failed exit status 32
2015-10-08T05:49:23.439675043-04:00 ef49217deb4f6b121b09d6ee714d7546dad5875129b20719a36df82 (from ubuntu:latest) die
2015-10-08T05:49:25.223243216-04:00 f54ec93290c0a714a79007d928788e4aa96fed504a39890b3f9a308 (from ubuntu:latest) create
2015-10-08T05:49:25.327953586-04:00 f54ec93290c0a714a79007d928788e4aa96fed504a39890b3f9a308 (from ubuntu:latest) attach
2015-10-08T05:49:25.504156400-04:00 newvolume: (from libstorage) Mount success
2015-10-08T05:49:25.504872335-04:00 f54ec93290c0a714a79007d928788e4aa96fed504a39890b3f9a308 (from ubuntu:latest) start
2015-10-08T05:49:25.622608684-04:00 f54ec93290c0a714a79007d928788e4aa96fed504a39890b3f9a308 (from ubuntu:latest) resize
2015-10-08T05:50:26.119006635-04:00 f54ec93290c0a714a79007d928788e4aa96fed504a39890b3f9a308 (from ubuntu:latest) die
2015-10-08T05:50:26.380619881-04:00 newvolume: (from libstorage) Unmount[newvolume] container f54ec932 success 
2015-10-08T05:56:37.285999505-04:00 firstclone: (from libstorage) Clone volume newvolume@ne success
2015-10-08T05:58:58.731584155-04:00 3970a269caa59a2e64d665702946ce269f534764b5c25a396f7c2df (from ubuntu:latest) create
2015-10-08T05:58:58.837915799-04:00 3970a269caa59a2e64d665702946ce269f534764b5c25a396f7c2df (from ubuntu:latest) attach
2015-10-08T05:59:00.094099907-04:00 firstclone: (from libstorage) Mount success
2015-10-08T05:59:00.095190081-04:00 3970a269caa59a2e64d665702946ce269f534764b5c25a396f7c2df (from ubuntu:latest) start
2015-10-08T05:59:00.238547428-04:00 3970a269caa59a2e64d665702946ce269f534764b5c25a396f7c2df (from ubuntu:latest) resize
2015-10-08T05:59:04.432485014-04:00 3970a269caa59a2e64d665702946ce269f534764b5c25a396f7c2df (from ubuntu:latest) die
2015-10-08T05:59:04.772842691-04:00 firstclone: (from libstorage) Unmount[firstclone] container 3970a269 success
2015-10-08T05:59:47.016443142-04:00 newvolume: (from libstorage) Snapshot[newvolume@newsnap delete failed Volume snapshot inuse
2015-10-08T06:00:03.254380587-04:00 newvolume: (from libstorage) Volume destroy failed exit
2015-10-08T06:00:22.505840283-04:00 newvolume: (from libstorage) VolumeRollback newvolume@newsnap success
2015-10-08T06:00:43.861918486-04:00 1545fee295afac7fd8e743a2811b3c3f8ad0e027e9ca482695e77ce (from ubuntu:latest) create
2015-10-08T06:00:43.968121844-04:00 1545fee295afac7fd8e743a2811b3c3f8ad0e027e9ca482695e77ce (from ubuntu:latest) attach
2015-10-08T06:00:47.125238229-04:00 newvolume: (from libstorage) Mount success
2015-10-08T06:00:47.126041470-04:00 1545fee295afac7fd8e743a2811b3c3f8ad0e027e9ca482695e77ce (from ubuntu:latest) start
2015-10-08T06:00:47.237933994-04:00 1545fee295afac7fd8e743a2811b3c3f8ad0e027e9ca482695e77ce (from ubuntu:latest) resize
2015-10-08T06:00:52.135643720-04:00 1545fee295afac7fd8e743a2811b3c3f8ad0e027e9ca482695e77ce (from ubuntu:latest) die
2015-10-08T06:00:52.873037212-04:00 newvolume: (from libstorage) Unmount[newvolume] container 1545fee2 success

9 Work in Progress

A new Volume Controller for GlusterFS is being integrated.
Migration is being worked on: docker dsvolume migrate {--tofar|--tonear} -v|--volume=VOLUME S3OBJECT
Local volumes need to use thinpools on dm. Refer to convoy: https://github.com/rancher/convoy/blob/master/docs/devicemapper.md

10 Related Technologies

This section describes and tracks technologies related to cloud container management.

10.1 Kubernetes vs Docker Compose

Kubernetes, in short, is awesome. Its design rests on strong fundamentals drawn from Google's decade-long container management experience. Docker Compose is very primitive: it understands container lifecycles well, but Kubernetes understands application lifecycles over containers better. And we deploy applications, not containers.

10.2 Mesos

Kubernetes connects to and understands only containers so far. But other workloads, such as mapreduce, batch processing and MPI cloud applications, do not necessarily fit the container framework. Mesos is great for this class of applications: it presents a pluggable Frameworks mechanism for extending Mesos to any kind of application, and it natively understands the Docker containerizer. So for managing a datacenter or cloud that serves varied application types, Mesos is great.

10.3 Mesos + Docker Engine + Swarm + Docker Compose vs Mesos + Docker Engine + Kubernetes

Swarm is Docker's way of extending the Docker Engine to be cluster aware. Kubernetes does this well over the plain Docker Engine, and, as already mentioned, Docker Compose is very primitive and no match for the flexibility of Kubernetes. Mesos + Docker Engine + Kubernetes is Mesosphere. The Mesosphere theme is to provide a consistent, Kubernetes-like interface to schedule and manage any class of application workloads over a cluster.

11 Conclusion

Libstorage fundamentals are strong. It can be integrated with the Docker Engine as is today.
Its functionality will definitely enhance Docker Engine capabilities and may be needed with Mesos as well. The community and Mesosphere are driving a complete ecosystem over Kubernetes, which understands the cluster and brings in the needed functionality, inclusive of volume management. The basic architecture treats the Docker engine as per-node functionality, while Kubernetes works over a cluster. But Docker is extending Libnetwork and has Swarm, which extends the Docker engine to be cluster aware. So Libstorage is better suited within the Docker framework than elsewhere.
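As a practical aside on the event stream in section 8: the output of ./dolphindocker events is line-oriented, so failures (the unmount that exited with status 32, the snapshot delete refused while in use) can be pulled out with standard text tools. A minimal sketch; the sample lines are copied from the capture above, and the log path and redirection are illustrative, not part of the original session:

```shell
# Filter a captured libstorage event log for failures. The sample lines
# below come from the event stream in section 8; a live capture would be
# something like `./dolphindocker events > /tmp/events.log` (illustrative).
cat > /tmp/events.log <<'EOF'
2015-10-08T05:48:38.118070084-04:00 newvolume: (from libstorage) Mount success
2015-10-08T05:49:23.439618024-04:00 newvolume: (from libstorage) Unmount[newvolume] failed exit status 32
2015-10-08T05:59:47.016443142-04:00 newvolume: (from libstorage) Snapshot[newvolume@newsnap delete failed Volume snapshot inuse
2015-10-08T06:00:22.505840283-04:00 newvolume: (from libstorage) VolumeRollback newvolume@newsnap success
EOF

# Keep only libstorage events whose lines do not end in "success".
failures=$(grep 'from libstorage' /tmp/events.log | grep -v 'success$')
echo "$failures"
```

The same one-liner works on the full stream; because every event carries a timestamp and a "(from libstorage)" tag, failed mounts, unmounts and snapshot operations surface immediately without any daemon-side support.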

Aziro Marketing


DevOps Essentials: Toolchain, Advanced State and Maturity Model

DevOps, to me, is concisely the seamless integration and automation of development and operations activities, towards accelerated delivery of the software or service throughout its life. In simple practical terms, it is CI – continuous integration, CD – continuous deployment, CQ – continuous quality and CO – continuous operations. It can be seen as a philosophy, a practice or a culture. Whether you follow ITIL, Agile or something else, DevOps will help you accelerate throughput, and in turn increase productivity and quality in less time.

Some of the most popular tools in the DevOps space are Chef, Puppet and Ansible, which primarily help automate the deployment and configuration of your software. The DevOps chain starts at unit testing with JUnit and NUnit, and SCM tools such as svn, ClearCase and git. These are integrated with a build server such as Jenkins. QA frameworks such as Selenium, AngularJS and Robot automate the testing, which makes it possible to run the test cycles repeatedly as needed to ensure quality. On passing the quality tests, the build is deployed to the desired target environments – test, UAT, staging or even production.

Illustration 1: Example DevOps Tools Chain

In its primitive scope, the ops part of DevOps comprises the traditional build and release practice of the software. In its advanced form, it can be taken to the cloud with Highly-Available, Scalable, Resilient and Self-Healing capabilities.

Illustration 2: Advanced State DevOps

We have a team of DevOps champions helping our customers achieve their DevOps goals and attain DevOps maturity.

Illustration 3: DevOps Maturity Model
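The toolchain flow described above, from SCM checkout through build, automated tests and gated deployment, reduces to a simple control flow. A minimal sketch in shell, where every command is a placeholder standing in for the real tools (svn/git checkout, a Jenkins build step, JUnit/Selenium suites, a deployment script); none of these names come from the original post:

```shell
# Minimal CI/CD gate: check out, build, test, and deploy only when the
# tests pass. All functions below are placeholders for the real toolchain.
set -e

checkout() { echo "checkout: pulling sources from SCM"; }
build()    { echo "build: compiling and packaging artifacts"; }
run_tests() {
    echo "test: running automated suites"
    true   # stands in for the JUnit/Selenium exit status
}
deploy()   { echo "deploy: promoting build to target environment"; }

checkout
build
if run_tests; then
    deploy
else
    echo "tests failed; deployment blocked" >&2
    exit 1
fi
```

Swapping the placeholders for real commands preserves the essential property of the chain: the deploy step never executes unless the test step exits cleanly, which is exactly the quality gate the article describes between CI and CD.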

Aziro Marketing

