Tag Archive

Below you'll find a list of all posts that have been tagged as "kubernetes"

4 Things Aziro (formerly MSys Technologies) Will Do at KubeCon 2019

The Kubernetes and cloud native communities have grown at a tremendous pace over the last couple of years. The buzz before and after every KubeCon is a testimony to this. As the storage and cloud industry veers towards cloud native technologies, events like KubeCon are the perfect place to educate, brainstorm, and reflect on the further advancement of cloud native computing. This blog details the technologies that draw our cloud-native DNA to these events. KubeCon and CloudNativeCon are havens for technocrats, and as an active participant in the digital transformation epoch, you should check them out too. We have also listed the key sessions you should attend at KubeCon 2019.

1. Cloud Native Technologies for Enterprises

Today's volatile markets expect high-quality applications that are fast and agile. Enterprises need to shorten their time to market by developing agile capabilities that can disarm competition and cater to the market. While critical business drivers vary from enterprise to enterprise, criteria such as time to market, cost reduction, and easier manageability are usually reckoned important. Containers are emerging as the default for applications across these use cases, and Kubernetes is the right choice to orchestrate those containers. With this in mind, KubeCon is the ideal venue for enterprises to learn, network with solution providers, and strategize their cloud native roadmap.

2. The Cloud Native Solution to Cloud Security Risks

Data security is always a key concern for enterprises, and the dynamic nature of containers exponentially increases the threats they face. It is therefore important that cloud native-centric security products focus specifically on the security needs of the cloud ecosystem. KubeCon 2019 has a host of talks and sessions that address the growing need for Kubernetes security, among them:

The Devil in the Details: Kubernetes' First Security Assessment – Aaron Small, Google & Jay Beale, InGuardians [Tuesday, November 19 • 10:55am – 11:30am]
Securing Communication Between Meshes and Beyond with SPIFFE Federation – Evan Gilman, Scytale & Oliver Liu, Google [Thursday, November 21 • 2:25pm – 3:00pm]
How Kubernetes Components Communicate Securely in Your Cluster – Maya Kaczorowski, Google [Thursday, November 21 • 11:50am – 12:25pm]
How Yelp Moved Security From the App to the Mesh with Envoy and OPA – Daniel Popescu & Ben Plotnick, Yelp [Thursday, November 21 • 10:55am – 11:30am]

3. Kubernetes: The Door to a Multi-Cloud World

Today's businesses are no longer satisfied with applications that are tied to a single environment. Enterprises profit from applications that are versatile and can move between environments. Kubernetes and containers let enterprises run applications across environments (on-premise VMs, public cloud, or multiple clouds), fostering portability and agility, and they have helped many IT leaders bridge on-premise and public cloud environments. The widespread adoption of Kubernetes and containers in mainstream production environments is driving innovation. Kubernetes has helped companies turn the idea of multi-cloud into a reality: by running the same container images across multiple cloud platforms, IT teams can maintain control over their IT and security.
Despite this, businesses need to assess their cloud prowess time and again, and often require assistance to reevaluate their existing strategy and chart a new one where applicable. If you need to assess your serverless infrastructure or are looking to customize solutions for your business, here are some talks you should attend:

Serverless Platform for Large Scale Mini-Apps: From Knative to Production – Yitao Dong & Ke Wang, Ant Financial [Wednesday, November 20 • 5:20pm – 5:55pm]
KubeFlow's Serverless Component: 10x Faster, a 1/10 of the Effort – Orit Nissan-Messing, Iguazio [Tuesday, November 19 • 4:25pm – 5:00pm]
Kubernetes Storage Cheat Sheet for VM Administrators – Manu Batra & Jing Xu, Google [Wednesday, November 20 • 4:25pm – 5:00pm]
Only Slightly Bent: Uber's Kubernetes Migration Journey for Microservices – Yunpeng Liu, Uber [Tuesday, November 19 • 10:55am – 11:30am]
Growth and Design Patterns in the Extensions Ecosystem – Eric Tune, Google [Wednesday, November 20 • 11:50am – 12:25pm]

4. Application Support and Community

Kubernetes is one of the most agile technologies around, supporting a wide spectrum of workloads, users, and use cases. It works with multiple programming languages and frameworks and can run stateless, stateful, and data-processing workloads. Kubernetes' growth, support, and broad adoption justify its popularity over other container orchestration solutions. The project has gained a very large and active open source community of users and developers, as well as the backing of global enterprises, IT market leaders, and major cloud providers. You can connect with some of the best minds in the business by participating in any of the social events at KubeCon USA 2019:

Taco Tuesday Welcome Reception + Sponsor Booth Crawl, sponsored by SAIC [Tuesday, November 19 • 6:40pm – 8:40pm]
Diversity Lunch + Hack, sponsored by Google Cloud [Wednesday, November 20 • 12:30pm – 2:15pm]
All-Attendee Block Party (Name Badge Required to Attend) [Wednesday, November 20 • 6:00pm – 9:00pm]
Meet Aziro (formerly MSys Technologies)' architects [Click here to know how]

These are some of the reasons why we eagerly look forward to this cloud native event. Are you as excited as we are to attend KubeCon 2019? See you soon!

Aziro Marketing


Kubernetes – Bridging the Gap between 5G and Intelligent Edge Computing

Prologue

In the era of digital transformation, the 5G network is a leap forward. But frankly, the tall promises of the 5G network are pushing edge computing to democratize data at a granular level. To add to the woes, 5G also demands that edge computing enhance performance and latency while slashing cost. Kubernetes, an open-source container orchestrator, is the dealmaker between 5G and edge computing.

In this blog, you will read about:
A decade defined by the cloud
The legend of cloud-native containers
The rise of Container Network Functions (CNFs)
Edge computing must reinvent the wheel
Kubernetes – powering 5G at the edge
KubeEdge – giving an edge to Kubernetes

A decade defined by the cloud

What oil is to the automobile industry, the cloud is to the Information Technology (IT) industry. The cloud revolutionized the tech space by making data available at your fingertips. Amazon's Elastic Compute Cloud (EC2) planted the seed of the cloud in the early 2000s, followed by Google Cloud and Microsoft Azure. However, the real growth of cloud technology skyrocketed only after 2010-2012.

Numbers underlining the future trends:
– Per Cisco, cloud computing will process more than 90 percent of workloads in 2021
– Per RightScale, businesses run around 41 percent of workloads in the private cloud and 38 percent in the public cloud
– Per Cisco, 75 percent of all compute instances and cloud workloads will be SaaS by the end of 2021

The legend of cloud-native containers

The advent of cloud-native is a hallmark of evolutionary development in the cloud ecosystem. The fundamental nature of cloud-native architecture is the abstraction of multiple layers of infrastructure. This means a cloud-native architect has to define those layers in code, and when coding, one gets the chance to include the functionality that maximizes business value. Cloud-native also empowers coders to script infrastructure scalability.

Cloud-native container technology is making a noteworthy contribution to the future growth of the cloud-native ecosystem. It plays a significant role in enabling the capabilities of the 5G architecture in real time. With container-focused web services, 5G network companies can achieve resource isolation and reproducibility to drive resiliency and faster deployment. Containers make deployment less intricate, which allows 5G infrastructure to scale data requirements spanning cloud networks. Organizations can leverage containers to process and compute data on a massive scale.

A conflation of containers and DevOps works magic for 5G. Bringing these loosely coupled services together helps 5G providers automate application deployment, receive feedback swiftly, eliminate bottlenecks, and achieve a self-paced continuous improvement mechanism. They can provision resources on demand with unified management across a hybrid cloud.

The fire of cloud-native has been ignited in the telecom sector. The coming decade, 2021-2030, will see it spread like wildfire.

The rise of Container Network Functions (CNFs)

We witnessed the rise of Container Network Functions (CNFs) while network providers were still running VMware-based virtual network functions (VNFs). CNFs are network functions that can run on Kubernetes across multi-cloud and/or hybrid cloud infrastructure. CNFs are ultra-lightweight compared to VNFs, which traditionally operate in a VMware environment, and this makes CNFs highly portable and scalable.
But the defining factor of the CNF architecture is that it can be deployed on bare metal servers, which brings down cost dramatically.

5G, the next wave in the telecom sector, promises next-gen services entailing automation, elasticity, and transparency. Given the requirement for micro-segmented architectures, VNFs running in a VMware environment would not be an ideal choice for 5G providers, so the adoption of CNFs is the natural step forward. Of course, doing away with VMware entirely is not on the cards anytime soon; a hybrid model of VNFs and CNFs therefore sounds sensible.

Recently, Intel, in collaboration with Red Hat, created a cloud-based onboarding service and test bed to bring together CNFs (containerized environment) and VNFs (VMware environment). The test bed is expected to enhance compatibility between CNFs and VNFs and slash deployment time.

Edge computing must reinvent the wheel

Multiple devices generate massive amounts of data concurrently, and processing all of it in central cloud data centers is a herculean task. Edge computing puts infrastructure close to the data-generating devices within a distributed environment, which results in faster response times and lower latency. Edge computing's local processing of data simplifies the pipeline and reduces overall costs. Edge computing has been working as a catalyst for the telecommunication industry to date; with 5G in the picture, the boundaries are set to be pushed further.

The rising popularity of the 5G network is putting a thrust on intuitive, real-time experiences. 5G increases broadband speed by up to 10x and raises supported device density to around a million devices per square kilometer. For this, 5G requires ultra-low latency, which can be delivered by a digital infrastructure powered by edge computing.

Honestly, edge computing must start flapping its wings for the success of the 5G network. It must ensure:
– Better device management
– Lower resource utilization
– More lightweight capabilities
– Ultra-low latency
– Increased security and data transfer reliability

Kubernetes – powering 5G at the edge

"Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery." – Kubernetes.io

Kubernetes streamlines the underlying compute spanning distributed environments and imparts consistency at the edge. It helps network providers maximize the value of containers at the edge through automation and swift deployment with a broader security blanket. Kubernetes for edge computing will eliminate most labor-intensive workloads, driving better productivity and quality.

Kubernetes has an unquestionable role to play in unleashing the commercial value of 5G, at least for now. The only real alternative to Kubernetes is VMware, which does not make the cut due to space and cost concerns. Kubernetes architecture has proved to accelerate the automation of mission-critical workloads and reduce the overall cost of 5G deployment.

A microservices architecture is required to support the non-real-time components of 5G. Kubernetes can create a self-controlled closed loop, which ensures the required number of microservices are hosted and controlled at the desired level. Further, the Kubernetes Horizontal Pod Autoscaler can release new container instances depending on the workload at the edge.
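To make this concrete, here is a minimal sketch of a HorizontalPodAutoscaler that scales a hypothetical edge Deployment on CPU utilization; the names, replica bounds, and threshold below are illustrative assumptions rather than part of any particular 5G rollout:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: edge-app-hpa                      # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: edge-app                        # hypothetical Deployment running the edge workload
  minReplicas: 2                          # baseline pods kept at the edge site
  maxReplicas: 10                         # cap growth to the capacity of the edge node pool
  targetCPUUtilizationPercentage: 70      # add pods when average CPU crosses 70 percent

Applied with kubectl apply -f, the autoscaler grows or shrinks the Deployment between 2 and 10 replicas as the load at the edge changes.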
Last year, AT&T signed an eight-figure, multi-year deal with Mirantis to roll out 5G leveraging OpenStack and Kubernetes. Ryan Van Wyk, AT&T Associate VP of Network, was quoted as saying, "There really isn't much of an alternative. Your alternative is VMware. We've done the assessments, and VMware doesn't check the boxes we need."

KubeEdge – giving an edge to Kubernetes

KubeEdge is an open-source project built on Kubernetes. The latest version, KubeEdge v1.3, hones the capabilities of Kubernetes to power intelligent orchestration of containerized applications at the edge. KubeEdge streamlines communication between the edge and the cloud data center with infrastructure support for networking, application deployment, and metadata synchronization. The best part is that it lets developers write custom logic to enable resource-constrained device communication at the edge.

Future ahead

Gartner notes, "Around 10 percent of enterprise-generated data is created and processed outside a traditional centralized data center or cloud. By 2025, this figure will reach 75 percent."

The proliferation of devices driven by IoT, Big Data, and AI will generate a mammoth amount of data. For 5G to succeed, edge computing must handle these complex workloads and maintain data elasticity. Kubernetes will therefore be the functional backbone of edge computing, imparting resiliency in orchestrating containerized applications.

Aziro Marketing


Kubernetes Day India 2019: Overview and Pro Tips for Attendees

Cloud, Containers, and Kubernetes: Where it all began

Many companies today are adopting an all-in cloud strategy to tap the agility and speed the cloud offers. Businesses are turning to Kubernetes cloud providers to help them adopt a robust cloud strategy; a great strategy lets them connect to advanced capabilities such as blockchain and AI in real time.

To run applications effectively in the cloud, a cloud-native approach is the best bet. Cloud-native means building and running applications that take full advantage of the cloud computing delivery model; the apps live in the public cloud, as opposed to an on-premise datacenter. Containers facilitate this transition from on-premise to cloud by helping developers quickly spin up new, cloud-native workloads.

The Cloud Native Computing Foundation (CNCF) is an open source software foundation dedicated to making cloud native computing universal and sustainable. The CNCF promotes container and container orchestration technologies, with a strong emphasis on Kubernetes.

Kubernetes is an open-source container orchestration platform designed to automate the deployment, scaling, and management of containerized applications, and it makes it easy to deploy and operate applications built on a microservice architecture.

Going by the trend of the past few years, Kubernetes enjoys unprecedented popularity as the most preferred cloud-native platform. (According to a survey conducted as part of a recent SDxCentral report on container and cloud orchestration, 64 percent of respondents said they were using Kubernetes, 36 percent Docker Swarm, and 18 percent Apache Mesos.) This makes Kubernetes the flagship project of the CNCF, backed by tech giants like Google, AWS, Microsoft, IBM, Intel, Cisco, and Red Hat.

Kubernetes Day, India

Kubernetes Day India is the first ever CNCF event in India and is being hosted at the Infosys premises in Bengaluru. Kubernetes Day, a one-day, single-track event, brings together local and international experts for developers of all levels interested in Kubernetes and related cloud-native technologies. With this event, the CNCF aims to reach the large number of developers who might not necessarily travel to KubeCon + CloudNativeCon events in Europe, China, and North America.

Talks range from introductory to advanced, given by speakers from diverse companies driving the technologies and from the end users deploying them. The speakers are an eclectic blend of local and international tech experts, some of them being:

Keynote from Liz Rice, Technology Evangelist, Aqua Security @lizrice
Noobernetes 101: Top 10 Questions We Get from New K8s Users – Neependra Khare, CloudYuga Technologies & Karthik Gaekwad, Oracle
First Steps to Becoming a Kubernetes Certified Application Developer – Ben Hall, Katacoda
How to Secure Your Kubernetes Clusters – Cindy Blake, GitLab
Using Kubernetes API Effectively with Golang – Vishal Biyani, Infracloud.io
Making Cloud-native Computing Universal and Sustainable – Dee Kumar, CNCF
Kubernetes for Java Developers – Arun Gupta, Amazon Web Services
Building a PaaS for Robotics with Kubernetes – Dhananajay Sathe, Rapyuta Robotics
How to Contribute to Kubernetes – Nikhita Raghunath, Loodse

How You Can Benefit by Attending Kubernetes Day

There are many reasons why engineers will attend the conference.
You can benefit if you're attending for any (or all) of the following reasons:
You're interested in the topic at hand and have questions that can be answered by subject matter experts
You're looking for jobs in the relevant domain, or you wish to enhance your technical skillset
You want to sell services of a similar nature
You are looking to recruit people with the skillsets being discussed

Whatever your motive for attending, Kubernetes Day meets all of the above criteria. Your attendance means that you can network with peers from the industry, or learn in person from experts you have only heard of!

In Case You're a Speaker

First of all, congratulations! To be speaking at Kubernetes Day, you have likely beaten great odds and belong to the small percentage of speakers who get this privilege. Here are some tips that can boost your confidence for D-day:

You don't need to be an expert in presentation design. It is challenging to follow busy slides while listening to the speaker, so keep them comprehensible and simple.
An engaging story with the right amount of excitement, imagination, and humor keeps the audience focused, so try infusing relevant stories into your talk wherever possible.
Practice, practice, and then practice some more; the most exciting speakers rehearse their presentations, so don't skip it!
Technical glitches can be your nightmare come true, so have backup plans to deal with technical snafus.

Don't Shy Away from Networking

Networking is not a cakewalk. Kubernetes Day may or may not be your first tech conference, but getting yourself to talk to total strangers is never easy. However, it's not that hard either, if you've done a little prep before you attend. You can be sure that every other attendee is as nervous as you and may have similar qualms about interaction, and that's exactly what can work for you. As peers from the same community, you share at least one common interest: get talking about it! Ask them about their experience working with Kubernetes, what piqued their interest in this technology, or what they think the future holds for cloud-native. Once you've made a real connection, the next step is to complement it with a virtual one: connect with them on LinkedIn or Twitter.

Network with Aziro (formerly MSys Technologies) at Kubernetes Day

Two of our senior engineers, Sundarlal and Narendra Kumar, are attending and would love to meet up and compare notes on their Kubernetes experience. If you want to meet either of them, you can share your details here, and we'll make it happen! Or you can tweet us directly @MSys_Tech with the hashtag #MSysatK8sBLR, and that'll work too.

Aziro Marketing


Kubernetes storage validation with an Ansible test automation framework

Ansible is mainly used as a software provisioning, configuration management, and application deployment tool. We have used it to develop a test automation framework that validates Kubernetes storage. This post explains how we used Ansible as a test automation tool to validate Kubernetes storage.

Why we used Ansible

Kubernetes is a clustered environment with two or more worker nodes and one or more master nodes. To validate our storage box, we have to create CSI driver volumes in the cluster, so the test environment consists of multiple hosts. A volume may be mounted on a pod created on any of the worker nodes, so we need to validate any worker node dynamically. If we used a general programming or scripting language, we would have to handle remote code execution ourselves. We have worked on a couple of automation projects using PowerShell and Python, and the remote execution libraries need a lot of work. In Ansible, the heavy lifting of remote execution is taken care of for us, so we can concentrate only on the core test validation logic.

How Ansible is used

As part of Kubernetes storage validation, there are many features to be validated, such as volume group, snapshot group, volume mutator, and volume resize, and each feature has many test cases. For each feature, we created a role; each test is covered in a tasks file under that role, and main.yml in the role calls all the test tasks files.

Structure of the Ansible automation framework roles:

roles/
  feature_test/
    volumegroup_provision/
      tasks/
        test1.yml
        test2.yml
        main.yml
    volumesnapshot_provision/
    volume_resize/
    basic_volume_workflow/
lib/
  resources/          (library files: SC, PVC, pod, and IO inside the pod)
volgroup_play.yml
volsnaphost_play.yml
volresize_play.yml
basic_volume_play.yml
hosts

In the above framework, test1.yml and test2.yml are tasks files where the test cases are written. Each feature has its own play file, for example volgroup_play.yml (a minimal sketch of such a play file is shown at the end of this post). If we execute volgroup_play.yml, the tests that reside in test1.yml and test2.yml are executed. The following command runs the play:

ansible-playbook -i hosts volgroup_play.yml -vv

Challenges

Problem: In Ansible, if a task fails, execution stops. So if a feature has 10 test cases and the second test fails, the remaining 8 test cases are not executed.

Solution: Each test case is written inside a block/rescue pair. When a test fails, it is handled by the rescue block, where we clean up the testbed so that the next test case can run without any issues.

Sample test file:

- block:
    - include: test_header
      vars:
        test_file: 'test1.yml'
        test_description: 'volume group provision basic workflow'
    # < creation of SC, PVC and POD and validation logic >
    - include: test_footer
      vars:
        test_file: 'test1.yml'
        test_description: 'volume group provision basic workflow'
        test_result: 'Pass'
  rescue:
    - include: test_footer
      vars:
        test_file: 'test1.yml'
        test_description: 'volume group provision basic workflow'
        test_result: 'Fail'
    # < cleanup logic >

Problem: Some tasks that are easy in a programming language are tough in Ansible.

Solution: Write a custom Ansible module in Python.

Pros of using Ansible as an automation framework:
Ansible is very simple to implement.
It takes care of the heavy lifting of remote code execution.
For a clustered environment, the speed of automation development is considerably higher.

Cons:
Though it is simple, Ansible is still not a programming language.
Straightforward commands are easy to express, but when we write logic, a few lines of a programming language can do what 100 lines of Ansible do.
When multiple tasks need to be executed in nested loops, it is very hard to implement in Ansible (we have to use the 'include' module with loops and then use 'include' modules again; it is not very intuitive).

Conclusion

Ansible can be used as a test automation framework for Kubernetes storage validation. Wherever heavy programming logic is required, it is better to write a custom Ansible module in Python, which will make life easier.
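For reference, here is a minimal sketch of what a feature play file such as volgroup_play.yml could look like. The play name, host group, and flat role reference below are assumptions for illustration; in practice they must match the framework's hosts inventory and roles_path:

- name: Volume group provisioning tests            # hypothetical play name
  hosts: all                                       # assumed group from the hosts inventory file
  gather_facts: yes
  roles:
    - volumegroup_provision                        # role whose tasks/main.yml includes test1.yml and test2.yml

Keeping one play file per feature keeps a failed feature run isolated from the other features.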

Aziro Marketing


Decoding the Self-Healing Kubernetes: Step by Step

Prologue

A business application that fails to operate 24/7 would be considered inefficient in the market. The idea is that applications run uninterrupted, irrespective of a technical glitch, a feature update, or a natural disaster. In today's heterogeneous environment, where infrastructure is intricately layered, a continuous application workflow is possible via self-healing.

Kubernetes, a container orchestration tool, facilitates the smooth working of applications by abstracting the physical machines underneath. Moreover, the pods and containers in Kubernetes can self-heal.

Captain America asked Bruce Banner in Avengers to get angry and transform into 'The Hulk'. Bruce replied, "That's my secret, Captain. I'm always angry."

You must have understood the analogy: Kubernetes will self-heal organically whenever the system is affected.

Kubernetes' self-healing property ensures that clusters always function in their optimal state. Kubernetes tracks two types of status objects, podStatus and containerStatus. Its orchestration capabilities can monitor and replace unhealthy containers as per the desired configuration. Likewise, Kubernetes can fix pods, the smallest units, which encompass one or more containers.

The three container states

1. Waiting – the container is created but not running. A container in the Waiting state is still performing operations such as pulling images or applying secrets. To check the pod status, use the command:

kubectl describe pod [POD_NAME]

Along with the state, a message and a reason are displayed to provide more information:

...
  State:          Waiting
    Reason:       ErrImagePull
...

2. Running – the container is running without issues. The postStart hook, if defined, is executed as the container starts. A Running container also displays the time at which it entered the state:

...
  State:          Running
    Started:      Wed, 30 Jan 2019 16:46:38 +0530
...

3. Terminated – a container that fails or completes its execution is Terminated. The preStop hook, if defined, is executed before the container is terminated. A Terminated container displays when it started and finished:

...
  State:          Terminated
    Reason:       Completed
    Exit Code:    0
    Started:      Wed, 30 Jan 2019 11:45:26 +0530
    Finished:     Wed, 30 Jan 2019 11:45:26 +0530
...

Kubernetes' self-healing concepts: the pod phase, probes, and the restart policy

The pod phase in Kubernetes offers insight into where the pod stands. A pod can be:
Pending – created but not yet running
Running – running all of its containers
Succeeded – all containers completed their lifecycle successfully
Failed – at least one container failed and all containers have terminated
Unknown – the state of the pod cannot be determined

Kubernetes executes liveness and readiness probes for the pods to check whether they function as per the desired state. The liveness probe checks a container for its running status; if a container fails the probe, Kubernetes terminates it and creates a new container in accordance with the restart policy. The readiness probe checks whether a container can serve requests; if a container fails the probe, Kubernetes removes the pod's IP address from the endpoints of the matching Services.
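As an illustration of that readiness check, here is a minimal sketch of a readiness probe on a hypothetical nginx pod; the pod name, path, and timing values are assumptions chosen for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: readiness-http               # hypothetical pod name
spec:
  containers:
  - name: app
    image: nginx:1.14.2
    readinessProbe:
      httpGet:
        path: /                      # root path used as a simple readiness endpoint
        port: 80
      initialDelaySeconds: 5         # wait before the first readiness check
      periodSeconds: 10              # probe every 10 seconds

Until the probe succeeds, the pod's IP is kept out of the matching Service endpoints, so no traffic reaches it.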
Liveness probe example:

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - args:
    - /server
    image: k8s.gcr.io/liveness
    livenessProbe:
      httpGet:
        # when "host" is not defined, "PodIP" will be used
        # host: my-host
        # when "scheme" is not defined, "HTTP" scheme will be used. Only "HTTP" and "HTTPS" are allowed
        # scheme: HTTPS
        path: /healthz
        port: 8080
        httpHeaders:
        - name: X-Custom-Header
          value: Awesome
      initialDelaySeconds: 15
      timeoutSeconds: 1
    name: liveness

The probe handlers include:
ExecAction – executes a command inside the container.
TCPSocketAction – performs a TCP check against the container's IP address.
HTTPGetAction – performs an HTTP GET check against the container's IP address.

Each probe gives one of three results:
Success: the container passed the diagnostic.
Failure: the container failed the diagnostic.
Unknown: the diagnostic itself failed, so no action should be taken.

Demo of self-healing Kubernetes – Example 1

We set the replica count to trigger the self-healing capability of Kubernetes. Consider the following Nginx deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-sample
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

In the above manifest, the total number of pods across the cluster must be 4. Let's deploy the file:

kubectl apply -f nginx-deployment-sample.yaml

Now list the pods:

kubectl get pods -l app=nginx

Here is the output:

NAME                                    READY   STATUS    RESTARTS   AGE
nginx-deployment-test-83586599-r299i    1/1     Running   0          5s
nginx-deployment-test-83586599-f299h    1/1     Running   0          5s
nginx-deployment-test-83586599-a534k    1/1     Running   0          5s
nginx-deployment-test-83586599-v389d    1/1     Running   0          5s

As you can see, 4 pods have been created. Let's delete one of them:

kubectl delete pod nginx-deployment-test-83586599-r299i

The pod is now deleted.
We get the following output:

pod "nginx-deployment-test-83586599-r299i" deleted

Now list the pods again:

kubectl get pods -l app=nginx

We get the following output:

NAME                                    READY   STATUS    RESTARTS   AGE
nginx-deployment-test-83586599-u992j    1/1     Running   0          5s
nginx-deployment-test-83586599-f299h    1/1     Running   0          5s
nginx-deployment-test-83586599-a534k    1/1     Running   0          5s
nginx-deployment-test-83586599-v389d    1/1     Running   0          5s

We have 4 pods again, despite deleting one. Kubernetes has self-healed by creating a new pod to maintain the count of 4.

Demo of self-healing Kubernetes – Example 2

Get pod details:
$ kubectl get pods -o wide

Get the first nginx pod and delete it; one of the nginx pods should go into 'Terminating' status:
$ NGINX_POD=$(kubectl get pods -l app=nginx --output=jsonpath="{.items[0].metadata.name}")
$ kubectl delete pod $NGINX_POD; kubectl get pods -l app=nginx -o wide
$ sleep 10

Get pod details; one nginx pod should be freshly started:
$ kubectl get pods -l app=nginx -o wide

Get deployment details and check the events for recent changes:
$ kubectl describe deployment nginx-deployment

Halt one of the nodes (node2):
$ vagrant halt node2
$ sleep 30

Get node details; node2 shows Status=NotReady:
$ kubectl get nodes

Get pod details; everything still looks fine, but you need to wait 5 minutes:
$ kubectl get pods -o wide

The pod will not be evicted until it is 5 minutes old (see Tolerations in 'describe pod'). This prevents Kubernetes from spinning up new containers when it is not necessary:
$ NGINX_POD=$(kubectl get pods -l app=nginx --output=jsonpath="{.items[0].metadata.name}")
$ kubectl describe pod $NGINX_POD | grep -A1 Tolerations

Sleep for 5 minutes:
$ sleep 300

Get pod details; the pod on the lost node shows Status=Unknown/NodeLost and a new container was started:
$ kubectl get pods -o wide

Get deployment details; AVAILABLE is back to 3/3:
$ kubectl get deployments -o wide

Power the node2 node back on:
$ vagrant up node2
$ sleep 70

Get node details; node2 should be Ready again:
$ kubectl get nodes

Get pod details; the 'Unknown' pods were removed:
$ kubectl get pods -o wide

Source: GitHub. Author: Petr Ruzicka

Conclusion

Kubernetes can self-heal applications and containers, but what about healing itself when the nodes are down? For Kubernetes to continue self-healing, it needs a dedicated set of infrastructure with access to self-healing nodes at all times. The infrastructure must be driven by automation and powered by predictive analytics to preempt and fix issues beforehand. The bottom line is that, at any given point in time, the infrastructure nodes should maintain the required count for uninterrupted services.

Reference: kubernetes.io, GitHub

Aziro Marketing

EXPLORE ALL TAGS
2019 dockercon
Advanced analytics
Agentic AI
agile
AI
AI ML
AIOps
Amazon Aws
Amazon EC2
Analytics
Analytics tools
AndroidThings
Anomaly Detection
Anomaly monitor
Ansible Test Automation
apache
apache8
Apache Spark RDD
app containerization
application containerization
applications
Application Security
application testing
artificial intelligence
asynchronous replication
automate
automation
automation testing
Autonomous Storage
AWS Lambda
Aziro
Aziro Technologies
big data
Big Data Analytics
big data pipeline
Big Data QA
Big Data Tester
Big Data Testing
bitcoin
blockchain
blog
bluetooth
buildroot
business intelligence
busybox
chef
ci/cd
CI/CD security
cloud
Cloud Analytics
cloud computing
Cloud Cost Optimization
cloud devops
Cloud Infrastructure
Cloud Interoperability
Cloud Native Solution
Cloud Security
cloudstack
cloud storage
Cloud Storage Data
Cloud Storage Security
Codeless Automation
Cognitive analytics
Configuration Management
connected homes
container
Containers
container world 2019
container world conference
continuous-delivery
continuous deployment
continuous integration
Coronavirus
Covid-19
cryptocurrency
cyber security
data-analytics
data backup and recovery
datacenter
data protection
data replication
data-security
data-storage
deep learning
demo
Descriptive analytics
Descriptive analytics tools
development
devops
devops agile
devops automation
DEVOPS CERTIFICATION
devops monitoring
DevOps QA
DevOps Security
DevOps testing
DevSecOps
Digital Transformation
disaster recovery
DMA
docker
dockercon
dockercon 2019
dockercon 2019 san francisco
dockercon usa 2019
docker swarm
DRaaS
edge computing
Embedded AI
embedded-systems
end-to-end-test-automation
FaaS
finance
fintech
FIrebase
flash memory
flash memory summit
FMS2017
GDPR faqs
Glass-Box AI
golang
GraphQL
graphql vs rest
gui testing
habitat
hadoop
hardware-providers
healthcare
Heartfullness
High Performance Computing
Holistic Life
HPC
Hybrid-Cloud
hyper-converged
hyper-v
IaaS
IaaS Security
icinga
icinga for monitoring
Image Recognition 2024
infographic
InSpec
internet-of-things
investing
iot
iot application
iot testing
java 8 streams
javascript
jenkins
KubeCon
kubernetes
kubernetesday
kubernetesday bangalore
libstorage
linux
litecoin
log analytics
Log mining
Low-Code
Low-Code No-Code Platforms
Loyalty
machine-learning
Meditation
Microservices
migration
Mindfulness
ML
mobile-application-testing
mobile-automation-testing
monitoring tools
Mutli-Cloud
network
network file storage
new features
NFS
NVMe
NVMEof
NVMes
Online Education
opensource
openstack
opscode-2
OSS
others
Paas
PDLC
Positivty
predictive analytics
Predictive analytics tools
prescriptive analysis
private-cloud
product sustenance
programming language
public cloud
qa
qa automation
quality-assurance
Rapid Application Development
raspberry pi
RDMA
real time analytics
realtime analytics platforms
Real-time data analytics
Recovery
Recovery as a service
recovery as service
rsa
rsa 2019
rsa 2019 san francisco
rsac 2018
rsa conference
rsa conference 2019
rsa usa 2019
SaaS Security
san francisco
SDC India 2019
SDDC
security
Security Monitoring
Selenium Test Automation
selenium testng
serverless
Serverless Computing
Site Reliability Engineering
smart homes
smart mirror
SNIA
snia india 2019
SNIA SDC 2019
SNIA SDC INDIA
SNIA SDC USA
software
software defined storage
software-testing
software testing trends
software testing trends 2019
SRE
STaaS
storage
storage events
storage replication
Storage Trends 2018
storage virtualization
support
Synchronous Replication
technology
tech support
test-automation
Testing
testing automation tools
thought leadership articles
trends
tutorials
ui automation testing
ui testing
ui testing automation
vCenter Operations Manager
vCOPS
virtualization
VMware
vmworld
VMworld 2019
vmworld 2019 san francisco
VMworld 2019 US
vROM
Web Automation Testing
web test automation
WFH
