Tag Archive

Below you'll find a list of all posts that have been tagged as "cloud computing"

10 Steps to Setup and Manage a Hadoop Cluster Using Ironfan

Recently, we faced an interesting challenge: setting up DevOps and management for a relatively complex Hadoop cluster on the Amazon EC2 cloud. The obvious choice was to use a configuration management tool. Having extensively used Opscode's Chef, and given the flexibility and extensibility Chef provides, it was an easy pick. While looking around for best practices for managing a Hadoop cluster with Chef, we stumbled upon Ironfan.

What is Ironfan? In short, Ironfan, open-sourced by InfoChimps, provides an abstraction on top of Chef that lets users easily provision, deploy, and manage a cluster of servers, be it a simple web application or a complex Hadoop cluster. After a few experiments, we were convinced that Ironfan was the right tool to use: it simplifies a lot of configuration and avoids repetition while retaining the goodness of Chef. This post shows how easy it is to set up and manage a Hadoop cluster using Ironfan.

Prerequisites:
- A Chef account (hosted or private) with knife.rb set up correctly on your client machine.
- Ruby installed (via RVM or otherwise).

Installation: Install Ironfan on your machine using the steps mentioned here. Once you have all the packages set up correctly, perform these sanity checks:
- The environment variable CHEF_USERNAME is your Chef Server username (unless your USER environment variable is the same as your Chef username).
- The environment variable CHEF_HOMEBASE points to the location that contains the expanded knife.rb.
- ~/.chef is a symbolic link to the knife directory in CHEF_HOMEBASE.
- Your knife/knife.rb file is not modified.
- Your Chef user PEM file is in knife/credentials/{username}.pem.
- Your organization's Chef validator PEM file is in knife/credentials/{organization}-validator.pem.
- Your knife/credentials/knife-{organization}.rb file contains:
  - your Chef organization
  - the chef_server_url
  - the validation_client_name
  - the path to the validation_key
  - the aws_access_key_id and aws_secret_access_key
  - the AMI ID of an AMI you'd like to be able to boot, in ec2_image_info

Finally, in the homebase, rename the example_clusters directory to clusters. These are sample clusters that come with Ironfan. Run the knife cluster list command:

```
$ knife cluster list
Cluster Path: /.../homebase/clusters
+----------------+-------------------------------------------------------+
| cluster        | path                                                  |
+----------------+-------------------------------------------------------+
| big_hadoop     | /.../homebase/clusters/big_hadoop.rb                  |
| burninator     | /.../homebase/clusters/burninator.rb                  |
...
```

Defining a Cluster: Now let's define a cluster. A cluster in Ironfan is described by a single file that holds all the configuration essential to it. You can customize your cluster spec as follows:
- Define cloud provider settings.
- Define base roles.
- Define the various facets.
- Define facet-specific roles and recipes.
- Override properties of a particular facet server instance.

Defining cloud provider settings: Ironfan currently supports the AWS and Rackspace cloud providers. Taking AWS as an example, you can provide configuration information such as:
- the region in which the servers will be deployed
- the availability zone to be used
- EBS-backed or instance-store-backed servers
- the base images (AMIs) to be used to spawn servers
- the security group with the allowed port range

Defining base roles: You can define global roles for a cluster. These roles will be applied to all servers unless explicitly overridden for a particular facet or server.
All the available roles are defined in the $CHEF_HOMEBASE/roles directory. You can create a custom role and use it in your cluster config.

Defining the environment: Environments in Chef provide a mechanism for managing different environments such as production, staging, development, and testing with one Chef setup (or one organization on Hosted Chef). With environments, you can specify per-environment run lists in roles, per-environment cookbook versions, and environment attributes. The available environments can be found in the $CHEF_HOMEBASE/environments directory. Custom environments can be created and used.

```ruby
Ironfan.cluster 'my_first_cluster' do
  # Environment under which chef nodes will be placed
  environment :dev

  # Global roles for all servers
  role :systemwide
  role :ssh

  # Global ec2 cloud settings
  cloud(:ec2) do
    permanent true
    region 'us-east-1'
    availability_zones ['us-east-1c', 'us-east-1d']
    flavor 't1.micro'
    backing 'ebs'
    image_name 'ironfan-natty'
    chef_client_script 'client.rb'
    security_group(:ssh).authorize_port_range(22..22)
    mount_ephemerals
  end

  # facet definitions also go inside this block
end
```

Defining facets: Facets are groups of servers within a cluster that share common attributes and roles. For example, if your cluster has 2 app servers and 2 database servers, you can group the app servers under an app_server facet and the database servers under a database facet.

Defining facet-specific roles and recipes: You can define roles and recipes particular to a facet. Even the global cloud settings can be overridden for a particular facet.

```ruby
facet :master do
  instances 1
  recipe 'nginx'

  cloud(:ec2) do
    flavor 'm1.small'
    security_group(:web) do
      authorize_port_range(80..80)
      authorize_port_range(443..443)
    end
  end

  role :hadoop_namenode
  role :hadoop_secondarynn
  role :hadoop_jobtracker
  role :hadoop_datanode
  role :hadoop_tasktracker
end

facet :worker do
  instances 2
  role :hadoop_datanode
  role :hadoop_tasktracker
end
```

In the above example we have defined one facet for the Hadoop master node and one for the worker nodes.
The number of master instances is set to 1 and the number of worker instances to 2, and each facet has been assigned a set of roles. For the master facet we have overridden the EC2 flavor setting to m1.small, and the master node's security group is set to accept incoming traffic on ports 80 and 443.

Cluster management: Now that the cluster configuration is ready, let's get hands-on with cluster management. All cluster configuration files are placed under the $CHEF_HOMEBASE/clusters directory; we will save our new config as hadoop_job001_cluster.rb. The new cluster should now appear in the cluster list.

List clusters:

```
$ knife cluster list
Cluster Path: /.../homebase/clusters
+---------------+--------------------------------------------+
| cluster       | path                                       |
+---------------+--------------------------------------------+
| hadoop_job001 | HOMEBASE/clusters/hadoop_job001_cluster.rb |
+---------------+--------------------------------------------+
```

Show the cluster configuration:

```
$ knife cluster show hadoop_job001
Inventorying servers in hadoop_job001 cluster, all facets, all servers
hadoop_job001: Loading chef
hadoop_job001: Loading ec2
hadoop_job001: Reconciling DSL and provider information
+------------------------+-------+-------------+----------+------------+-----+
| Name                   | Chef? | State       | Flavor   | AZ         | Env |
+------------------------+-------+-------------+----------+------------+-----+
| hadoop_job001-master-0 | no    | not running | m1.small | us-east-1c | dev |
| hadoop_job001-worker-0 | no    | not running | t1.micro | us-east-1c | dev |
| hadoop_job001-worker-1 | no    | not running | t1.micro | us-east-1c | dev |
+------------------------+-------+-------------+----------+------------+-----+
```

Launch the whole cluster:

```
$ knife cluster launch hadoop_job001
Loaded information for 3 computer(s) in cluster hadoop_job001
+------------------------+-------+---------+----------+------------+-----+------------+---------------+--------------+------------+
| Name                   | Chef? | State   | Flavor   | AZ         | Env | MachineID  | Public IP     | Private IP   | Created On |
+------------------------+-------+---------+----------+------------+-----+------------+---------------+--------------+------------+
| hadoop_job001-master-0 | yes   | running | m1.small | us-east-1c | dev | i-c9e117b5 | 101.23.157.51 | 10.106.57.77 | 2012-12-10 |
| hadoop_job001-worker-0 | yes   | running | t1.micro | us-east-1c | dev | i-cfe117b3 | 101.23.157.52 | 10.106.57.78 | 2012-12-10 |
| hadoop_job001-worker-1 | yes   | running | t1.micro | us-east-1c | dev | i-cbe117b7 | 101.23.157.53 | 10.106.57.79 | 2012-12-10 |
+------------------------+-------+---------+----------+------------+-----+------------+---------------+--------------+------------+
```

Launch a single instance of a facet:

```
$ knife cluster launch hadoop_job001 master 0
```

Launch all instances of a facet:

```
$ knife cluster launch hadoop_job001 worker
```

Stop the whole cluster:

```
$ knife cluster stop hadoop_job001
```

Stop a single instance of a facet:

```
$ knife cluster stop hadoop_job001 master 0
```

Stop all instances of a facet:

```
$ knife cluster stop hadoop_job001 worker
```

Setting up and managing a Hadoop cluster cannot get easier than this! To recap: Ironfan, open-sourced by InfoChimps, is a systems provisioning and deployment tool that automates whole-system configuration to enable the entire Big Data stack, including tools for data ingestion, scraping, storage, computation, and monitoring. There is another tool we are exploring for Hadoop cluster management, Apache Ambari. We will post our findings and comparisons soon; stay tuned!

Aziro Marketing


3-Way Multi Cloud Infrastructure Management With Terraform HCL

Stronger digital expertise demands better data authority. Data plays a major role in many aspects of business, especially since the rise of cloud computing technologies. Traditional storage systems are steadily losing their charm, while cloud storage infrastructures are being explored and supported with innovative advances. However, cloud infrastructure can become painful very quickly if one isn't properly equipped to manage it. It is therefore worth discussing cloud computing technologies, their key service providers, and, most importantly, the right means of managing cloud infrastructure.

Peeping Into the Wonders of Cloud Computing

Cloud computing, as is well known by now, is the delivery of computing services, including servers, storage, databases, networking, software, analytics, and intelligence, over the Internet ("the cloud"). During the disruptive reality of the last two years, the cloud provided us with not only business continuity but also faster innovation, flexible resources, and economies of scale. Some of the major ways in which the cloud has changed the digital landscape for good are:

- Economy: you pay only for the cloud services you use.
- Better ROI: lower opex and capex for even better service quality.
- Automation: from infrastructure management to regular deployments, everything is more efficient and automation-friendly.
- High scalability: as the business grows its clientele, the entire system can scale in no time.

It is also well known that several major players have already established themselves as cloud infrastructure experts.
Depending on the popularity and business merits of these cloud service providers, their market shares vary. Given the varying benefits and service feasibility of the cloud vendors, businesses find it more economical to adopt multiple cloud infrastructures and invest in better expertise and resources to manage them all. One important tool that helps with this task is Terraform.

Terraform: HCL and Multi-Cloud Infrastructure Management

Terraform is a popular infrastructure-as-code (IaC) tool from HashiCorp that helps with building, changing, and managing infrastructure. For managing multi-cloud environments it uses the HashiCorp Configuration Language (HCL), which codifies cloud APIs into declarative configuration files. Terraform reads these configuration files and produces an execution plan of changes, which can be reviewed, applied, and provisioned appropriately. To understand this better, let's look at the different aspects of Terraform's working that come together to manage our multi-cloud infrastructures.

1. Terraform providers: A provider is a plugin that Terraform uses to create and manage our resources. It interacts with cloud platforms and other services via their application programming interfaces (APIs). There are more than 1,000 providers from HashiCorp and the Terraform community for managing resources on Amazon Web Services (AWS), Azure, Google Cloud Platform (GCP), Kubernetes, Helm, GitHub, Splunk, DataDog, and more; providers for many other platforms and services can be found in the Terraform Registry.

2. Terraform workflow: The Terraform workflow consists of three stages:
- Write: define the resources.
- Plan: preview the changes.
- Apply: make the planned changes.

2.1 Write: We can define resources across multiple cloud providers and services.
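A minimal main.tf for this Write stage might look like the following sketch. The provider version, region, AMI ID, and tag are illustrative values (taken from HashiCorp's learn-terraform-aws-instance tutorial, whose directory name also appears later in this post); substitute values valid for your own account and region.

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }
}

provider "aws" {
  region = "us-west-2"
}

resource "aws_instance" "app_server" {
  # Example AMI ID; replace with one available in your region.
  ami           = "ami-830c94e3"
  instance_type = "t2.micro"

  tags = {
    Name = "ExampleAppServerInstance"
  }
}
```

Running terraform plan against a file like this previews exactly one instance to be created; terraform apply then provisions it.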
For example, we can create a configuration to deploy an application on virtual machines in a Virtual Private Cloud (VPC) network with security groups and a load balancer.

2.2 Plan: Terraform creates an execution plan describing the infrastructure it will create, update, or destroy, based on the existing infrastructure and our configuration.

2.3 Apply: On our approval, Terraform performs the proposed operations in the correct order, respecting any resource dependencies. For example, if we update the properties of a VPC and change the number of virtual machines in that VPC, Terraform will recreate the VPC before scaling the virtual machines.

3. Terraform cloud infrastructure management

3.1 Installing Terraform (CentOS/RHEL)

Install yum-config-manager to manage your repositories:

```
sudo yum install -y yum-utils
```

Use yum-config-manager to add the HashiCorp Linux repository:

```
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo
```

Install Terraform:

```
sudo yum -y install terraform
```

3.2 Building infrastructure

Configure the AWS CLI from your terminal:

```
aws configure
```

Keep a separate working directory for each Terraform configuration:

```
mkdir learn-terraform-aws-instance
```

Change into the directory:

```
cd learn-terraform-aws-instance
```

Create a file to define your infrastructure:

```
touch main.tf
```

Complete the configuration and deploy it with Terraform.

3.3 Changing infrastructure

Place the configuration into the file named main.tf inside the learn-terraform-aws-instance directory, then initialize it:

```
$ terraform init
```

Apply the configuration (answer "yes" at the confirmation prompt to proceed):

```
$ terraform apply
```

To update the AMI of your instance, edit the aws_instance.app_server resource below the provider block in main.tf, replacing the current AMI ID with a new one. Finally, after the configuration change, run terraform apply again to see the change applied to the existing resources.

3.4 Destroying infrastructure

The terraform destroy command terminates the resources managed by our Terraform project, destroying the resources we created:

```
$ terraform destroy
```

In this way we can build, change, and destroy various cloud infrastructures (AWS, Azure, GCP, etc.) using Terraform HCL.

Conclusion

Managing a single cloud infrastructure for private and public business purposes may be feasible, but juggling multiple cloud vendors by hand is humanly impossible. External help in the form of Terraform is therefore highly valuable for businesses that want to preserve their bandwidth for consistent innovation. The three-stage process of write, plan, and apply enables efficient multi-cloud infrastructure management and easily makes Terraform an essential weapon in our digital arsenal.

Aziro Marketing


5 Key Motives to Adopt a Cloud-Native Approach

While some may argue that cloud-native history has been building for a while, it was companies like Amazon, Netflix, Apple, Google, and Facebook that heralded the underrated act of simplifying IT environments for application development. The last decade saw a wave of highly innovative, dynamic, ready-to-deliver, scaled-at-speed applications overtake businesses that were stuck in complex, monolithic environments and failed to deliver equally compelling applications. What dictated this change of track was the complexity and incompetence of traditional IT environments. These companies had already proven their competitive edge with their knack for identifying and adopting futuristic technology, but this time they went back and uncomplicated matters, attesting that cloud-native is "the" factor that simplifies app development in an era of data overload. Their success was amplified by their ability to harness the elasticity of the cloud by redirecting app development into cloud-native environments.

Why Is Cloud-Native Gaining Importance?

Application development has rapidly evolved into a hyper-seamless, almost invisible change woven into users' minds. We are now in an era where releases are a non-event: Google, Facebook, and Amazon update their software every few minutes without downtime, and that is where the industry is headed. The need to deploy applications and subsequent changes without disrupting the user experience has propelled software makers into harnessing the optimal advantages of the cloud. By building applications directly in the cloud, through microservice architectures, organizations can rapidly innovate and achieve unprecedented business agility that is otherwise unimaginable.

Key Drivers for Organizations to Go Native

1. Nurtures innovation

With cloud-native, developers have access to functionally rich platforms and infinite computing resources at the infrastructure level.
Organizations can leverage off-the-shelf SaaS applications rather than developing apps from scratch. With less time spent on building from the ground up, developers can spend more time innovating and creating value with the time and resources at hand. Cloud platforms also allow new ideas to be trialled at lower cost, through low-code environments and viable platforms that cut back the cost of infrastructure setup.

2. Enhances agility and scalability

Monolithic application architectures make responding in real time tedious; even the smallest tweak in functionality necessitates re-testing and redeployment of the whole application. Organizations simply cannot afford to invest time in such a lengthy process. As microservice architectures are made of loosely coupled independent elements, it is much easier to modify or append functionality without disrupting the existing application. This process is much faster and more responsive to market demand. Additionally, microservice architectures are ideal for absorbing fluctuations in user demand: thanks to their simplicity, you only need to deploy additional capacity where it is needed (on an individual container) rather than for the entire application. With the cloud, you can truly scale existing resources to meet real-time demand.

3. Minimizes time to market

Organizations are heavily involved in time-consuming processes in traditional infrastructure management, be it provisioning, configuring, or managing resources. The complex entanglement between IT and dev teams often adds to the delay in decision making, obstructing real-time response to market needs. Going cloud-native allows most processes to be automated. Tedious and bureaucratic operations that took up to 5-6 weeks in a traditional setup can be cut to less than two weeks in cloud-native environments. Automating on-premise applications can get complicated and time-consuming.
Cloud-based app development overcomes this by providing developers with cloud-specific tools. Containers and microservice architectures play an essential part in helping developers write and release software sooner.

4. Fosters cloud economics

Most businesses are believed to spend the majority of their IT budget simply keeping the lights on. In a scenario where a chunk of data center capacity sits idle at any given point in time, cost-effective methodologies are in demand. Automation-centric features like scalability, elastic computing, and pay-per-use models allow organizations to move away from costly expenditures and redirect budget toward new feature development. In simple words, with a cloud-native approach you bring expenses down to exactly what you use.

5. Improves management and security

Cloud infrastructure can be managed with a cluster of options such as API management tools, container management tools, and cloud management tools. These tools lend holistic visibility, helping detect problems at the onset and optimize performance. When talking of the cloud, concerns about compliance and security are never far off. The threat landscape of IT is constantly evolving, and when moving to the cloud, businesses often evolve their IT security to meet new challenges. This includes having architectures robust enough to support change without risking prevailing operations. The loosely coupled microservices of cloud-native architectures can significantly reduce the operational and security risk of massive failures.

Adopting Cloud-Native for Your Business

Migrating to cloud-native is a paradigm shift in how technology is designed, developed, and deployed. By reducing the complexity of integration, cloud-native provides a tremendous opportunity for enterprises: they can drive growth by leveraging cloud-native environments to develop innovative applications without elaborate setups.
Organizations are looking for a long-term means of creating continuously scalable products with frequent releases, coupled with reduced complexity and opex. Cloud and cloud-native technologies signify the building of resilient, efficient IT infrastructure, minus the complications, for the future. By selecting the right cloud-native solution provider, organizations can develop and deliver applications faster without compromising on quality.

Conclusion

In an era of limitless choices, applications that quickly deliver on promises can provide a superior customer experience. Organizations can achieve this through faster product development, iterative quality testing, and continuous delivery. Cloud-native applications help organizations be more responsive, with the ability to reshape products and test new ideas quickly and repeatedly.

Aziro Marketing


An Introduction to Serverless and FaaS (Functions as a Service)

Evolution of Serverless Computing

We started by building monolithic applications: installing and configuring the OS, then installing the application code on every machine. Moving from physical servers to VMs to meet user demand simplified the deployment and management of servers. Data center providers started offering virtual machines, but these still required a lot of configuration and setup before application code could be deployed.

After a few years, containers came to the rescue. Docker made its mark in the era of containers, making applications easier to deploy by providing a simpler interface for shipping code directly into production. Containers also made it possible for platform providers to get creative: platforms could improve the scalability of users' applications. But what if developers could focus on even less? That is possible with serverless computing.

What exactly is "Serverless"?

Serverless computing is a cloud computing model that aims to abstract server management and low-level infrastructure decisions away from developers. In this model, the allocation of resources is managed by the cloud provider instead of the application architect, which brings some serious benefits. In other words, serverless aims to do exactly what it sounds like: allow applications to be developed without concern for implementing, tweaking, or scaling a server.

The idea is that you wrap your business logic inside functions, and these functions execute on the cloud in response to events. All the heavy lifting, such as authentication, databases, file storage, reporting, and scaling, is handled by your serverless platform, for example AWS Lambda or Apache OpenWhisk.

When we say "serverless computing," does it mean no servers are involved? The answer is no. Let's switch our mindset completely: think about using only functions, no more managing servers.
As the developer, you care only about the business logic and leave the rest to the platform.

Functions as a Service (FaaS)

FaaS is a concept based on serverless computing. It provides the means to achieve the serverless dream, allowing developers to execute code in response to events without building out or maintaining complex infrastructure. This means you can simply upload modular chunks of functionality into the cloud and have them executed independently. Sounds simple, right? Well, it is. If you've ever written a REST API, you'll feel right at home: all the services and endpoints you would usually keep in one place are now sliced up into a bunch of tiny snippets, microservices. The goal is to completely abstract servers away from the developer and bill only based on the number of times the functions have been invoked.

Key components of FaaS:
- Function: the independent unit of deployment, e.g. processing a file or performing a scheduled task.
- Events: anything that triggers the execution of the function, e.g. publishing a message or uploading a file.
- Resources: the infrastructure or components used by the function, e.g. database or file system services.

Qualities of FaaS:
- Executes logic in response to events.
In this context, all logic (including multiple functions or methods) is grouped into a deployable unit known as a "function."
- Handles packaging, deployment, and scaling transparently.
- Scales your functions automatically and independently with usage.
- More time focused on writing code and app-specific logic: higher developer velocity.
- Built-in availability and fault tolerance.
- Pay only for the resources used.

Use cases for FaaS:
- Web/mobile applications.
- Multimedia processing: functions that run a transformation in response to a file upload.
- Database changes or change data capture: auditing changes or ensuring they meet quality standards.
- IoT sensor input messages: responding to messages and scaling in response.
- Stream processing at scale: processing data within a potentially infinite stream of messages.
- Chatbots: scaling automatically for peak demand.
- Batch jobs and scheduled tasks: jobs that require intense parallel computation, IO, or network access.

Some serverless platforms

Introduction to AWS Lambda (an event-driven, serverless computing platform): Introduced in November 2014, Lambda is provided by Amazon as part of Amazon Web Services. It is a computing service that runs code in response to events and automatically manages the computing resources required by that code. Some of its features:
- Runs stateless, request-driven code called Lambda functions, in Java, NodeJS, and Python.
- Triggered by events (state transitions) in other AWS services.
- Pay only for the requests served and the compute time used.
- Lets you focus on business logic, not infrastructure.
- Handles capacity, scaling, monitoring and logging, fault tolerance, and security patching for your code.

Writing your first Lambda function: the sample is a simple cron job written in NodeJS that makes an HTTP POST request every minute to an external service. For a detailed tutorial, see https://parall.ax/blog/view/3202/tutorial-serverless-scheduled-tasks

Output: makes a POST call every minute.
The function firing the POST request actually runs on AWS Lambda (a serverless platform).

Conclusion: Serverless platforms today are most useful for high-throughput tasks rather than very low-latency ones, and for completing individual requests within a relatively short time window. The road to serverless can get challenging depending on the use case, and, like any new technology, serverless architectures will continue to evolve toward a well-established standard.

References:
https://blog.cloudability.com/serverless-computing-101/
https://www.doc.ic.ac.uk/~rbc/papers/fse-serverless-17.pdf
https://blog.g2crowd.com/blog/trends/digital-platforms/2018-dp/serverless-computing/
https://www.manning.com/books/serverless-applications-with-node-js

Aziro Marketing


Cloud Computing for Enterprises vs. SMBs: Key Differentiators

With the growing demand and hype around cloud computing and its related services, it is no surprise that boards have endless heated discussions on whether it is time to switch to cloud computing services and leave traditional enterprise software behind. However, the "me too" attitude is only good as long as the environment is as favorable to you as it is to your next-door neighbor. What many organizations, SMEs and enterprises alike, tend to overlook is whether it is the right time, or the right reason, to jump to cloud computing services. The answer to this question needs to be weighed and assessed heavily; a marketing pitch alone does not suffice. Contrary to popular belief, not every cloud computing service is right for every business. A careful examination and comparison is a must, and do not limit this to different cloud providers alone: before you embark on the cloud, ensure that moving off your enterprise software is in your best interest. Though cost effectiveness is a great advantage for SMEs, it does not change much for enterprises, and the same goes for overall operating costs; enterprises cannot rely on benefits like these as reasons to move to the cloud. For an enterprise reaching out to cloud computing service providers, things need to be looked at from a slightly different perspective: security and performance risks should be the focus when sifting through cloud service providers. Seemingly minor security breaches, or regular outages and downtime, can cause irreversible damage to enterprises, and, as with any enterprise, being unable to cater to clients in real time can result in huge losses. Though cloud service providers are quick to scale and expand to adjust to your ongoing needs, have you analyzed their capacity? You need to evaluate their scale-out plans with regard to infrastructure. According to research, many cloud-based ERP systems are reported to lack the infrastructure to accommodate interoperability with existing applications.
Cloud-based services may limit extensive customization of the system. With users spread across various departments, it becomes imperative to adapt the system to individual teams; constraints on doing so result in inadequate performance and procedural rigidity. In another study, by Forrester, enterprises voiced concerns about stability and about dedicating a team to the maintenance of the system. These are some of the factors that enterprises need to think through before availing themselves of cloud services. Going by the heavily marketed notions of lower costs and perceived scalability will only prove to be of diminishing value in the future.

Aziro Marketing


IaaS vs. PaaS: Everything You Need To Know

PaaS and IaaS are two of the earliest and most widely used cloud computing services. They are similar in some ways, yet fundamentally different types of platforms. In simple words, PaaS is the combination of IaaS plus the operating system, middleware, and runtime. Enterprises must understand these differences to choose the right type of cloud service for a given use case. Infrastructure-as-a-Service (IaaS) offers added control and flexibility over cloud infrastructure but is more complex to manage and optimize. In contrast, Platform-as-a-Service (PaaS) solutions offer both the tools and the infrastructure required to expedite deployment; however, security, integration, and vendor lock-in are issues to watch with PaaS. This blog explains the definitions of IaaS and PaaS, their benefits and drawbacks, and a few examples of each.

IaaS vs. PaaS: Definitions

Infrastructure as a Service (IaaS) offers on-demand access to virtualized IT infrastructure over the internet. Most IaaS offerings expose only the core infrastructure components: compute, networking, and storage. Users install and manage the software they want to run on their cloud-based infrastructure.

Platform as a Service (PaaS) offers the infrastructure to host applications as well as the software tools to help clients build and deploy them. PaaS simplifies the entire setup and management of both hardware and software. Comparatively, PaaS is less flexible than IaaS and mainly caters to a narrower set of application development and deployment approaches; PaaS offerings are not general-purpose replacements for an enterprise's complete IT infrastructure and software development workflow.

IaaS vs. PaaS: Benefits

Infrastructure-as-a-Service solutions offer the networking, storage, servers, operating systems, and other resources required to run workloads. The infrastructure is made available through virtualization technology and can be used on a pay-as-you-go model.
Benefits of IaaS solutions:
- Fast scalability, with the capacity to quickly provision or release computing resources as needed.
- Lower costs, as companies pay only for the infrastructure they use.
- Better use of IT investments, since there is no need for over-provisioning.
- Higher agility, giving enterprises the ability to move quickly and take advantage of business opportunities.

Platform-as-a-Service solutions offer cloud-based environments for developing, testing, running, and managing web and cloud applications. Companies get a state-of-the-art development environment without the need to buy, build, or manage the underlying infrastructure.

Benefits of PaaS solutions:
- Rapid results, with less time spent coding, as PaaS solutions typically include pre-coded, built-in components.
- More straightforward collaboration: a development environment hosted in the cloud makes it easier for distributed teams to work together.
- Better performance, with support for the entire web application lifecycle inside a single integrated environment.
- Lower costs due to agile development at scale.

IaaS vs. PaaS - Disadvantages

Disadvantages of IaaS:
- Legacy applications can run on cloud infrastructure, but that infrastructure may not be designed to secure legacy controls.
- Some internal resources are still required to manage business tasks.
- Training is often required.
- Clients are responsible for business continuity, backup, and data security.

Disadvantages of PaaS:
- Data residing on cloud servers is controlled by a third party.
- It can be challenging to connect the services with data stored in onsite data centers.
- System migration can be difficult to manage if the vendor does not offer migration policies.
- Though PaaS services usually offer a wide range of customization and integration features, customizing legacy systems can become a big concern with Platform as a Service.
PaaS limitations can also be tied to particular services and applications, as PaaS does not support every language users may want to work with.

IaaS vs. PaaS - Examples

Examples of IaaS:
- Amazon Web Services (AWS)
- Microsoft Azure
- Google Cloud
- DigitalOcean
- Alibaba Cloud

Examples of PaaS:
- Heroku
- AWS Elastic Beanstalk
- Engine Yard
- Red Hat OpenShift

Conclusion

IaaS and PaaS are two of the most influential technologies in cloud computing today. Both have their own benefits and disadvantages, and understanding the details above can help identify which of these services will be most beneficial for you. The choice depends on the requirements of specific workloads. To keep up with modern standards of modernization, enterprises must invest in cloud computing: it helps you serve your customers better, helps your business grow, and removes the complexities and limitations that traditional IT infrastructures pose. Once you have decided that, choose IaaS or PaaS depending on how you want to run your cloud-based applications.

Aziro Marketing


My Interesting Cloud Q&A Session from 2009

It was interesting to revisit this Q&A on cloud computing from almost 3 years ago. These were some of the questions posed to me during the cloud computing panel discussion at CSI Annual Convention 2009. It is interesting to see that Clogeny's strategic bet on the cloud has paid off.

Q: Each one of you has a different view (PaaS, services, testing, startup, management) in the domain. A 5-minute warmer on your take on cloud computing based on your current work will be great. This will set the stage nicely for the discussion.

There are many "definitions" of cloud computing, but for me "Cloud Computing is the fifth generation of computing after Mainframe, Personal Computer, Client-Server and the Web." It's not often that we get a whole new platform and delivery model to create businesses on. And what's more, it's a new business model as well – using 1000 servers for 1 hour costs the same as using 1 server for 1000 hours. No upfront costs, completely pay as you go!

How has cloud computing suddenly crept up on us and become technologically and economically viable? For three reasons:

First, the use of commodity hardware, together with the software sophistication needed to manage redundancy on such hardware. Prime examples of such software are virtualization, MapReduce, the Google File System, Amazon's Dynamo, etc.

Second, economies of scale. In a medium-sized data center, storage costs around $2.2/GB/month, while in a large data center it costs around $0.40/GB/month. That is a cost saving of roughly 5.5 times, which cloud computing vendors have been able to pass on to customers. In general, cloud infrastructure players see a 5 to 7 times decrease in cost.

The third, and in my view most important, reason: many organizations had the need to scale but not the ability to scale. As the world became data intensive, players realized that unless scalable computing, scalable storage, and scalable software were available, their business models would not scale. Consider analytics as an example.
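The "1000 servers for 1 hour" claim is pure arithmetic under a linear pay-as-you-go price. A minimal sketch makes it concrete; the $0.10 per instance-hour rate here is purely illustrative, not a quote for any current cloud offering:

```python
def on_demand_cost(servers, hours, rate_per_server_hour=0.10):
    """Pay-as-you-go pricing: cost depends only on total server-hours."""
    return servers * hours * rate_per_server_hour

burst = on_demand_cost(1000, 1)   # 1000 servers for 1 hour
steady = on_demand_cost(1, 1000)  # 1 server for 1000 hours
print(burst, steady)
# 100.0 100.0
```

Because cost is a function of server-hours alone, bursting wide for a short time costs no more than running one machine for a long time – the economic point the panel answer makes.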
Some years back it was possible for mid-sized companies to mine the data in their own data centers, but with data doubling every year they have been unable to keep up. They have decided to scale out to the cloud. Amazon and Google realized this from their own needs very early, and here we are, eating their dog food!

Developers with new ideas for innovative internet services no longer require large capital investments in hardware to deploy their service. They can potentially go from 1 customer to 100k customers in a matter of days. Over-provisioning or under-provisioning is no longer a factor if your product is hosted on cloud computing platforms. This lets small companies focus on their core competency rather than worrying about infrastructure, and enables a much quicker go-to-market strategy.

Another advantage is that clouds are available in various forms. Amazon EC2 is as good as a physical machine and you can control the entire software stack. Google App Engine and salesforce.com are platforms that are highly restrictive but good for quick development, and they let the platform itself handle the scaling complexity. Microsoft Azure sits at an intermediate point between the two. So depending on your needs, you can choose the right cloud! As I said earlier, it's a new development environment and there is a lot of scope for innovation, which is what my company "Clogeny" is focusing on.

Q: Cloud computing is not just about "compute" – it is also storage, content distribution and a new way of visualizing and using unlimited storage. How has storage progressed from multi-million dollar arrays and tapes to S3 and Azure and Google Apps?

I remember that when I started writing filesystems, I needed to check for an error indicating that the filesystem was full. It just struck me that I have no need for such error checking when using cloud storage. So yes, it's actually possible to have potentially infinite storage.
Storage: Storage arrays have grown in capacity and complexity over the years to satisfy the ever-increasing demand for size and speed. But cloud storage is pretty solid as well. Amazon, Microsoft, and most other cloud vendors keep 3 copies of data, and at least 1 copy is kept at a separate geographical location. When you factor this into the costs, cloud storage is quite cheap. Having said that, cloud storage is not going to replace local storage; fast and expensive arrays will still be needed for IOPS- and latency-hungry applications. But the market for such arrays may taper off.

Content distribution: A content delivery network is a system of nodes in multiple locations which cooperate to satisfy requests for content efficiently. These nodes move the content around so that the node nearest to the user serves the request. All the cloud providers offer content distribution services, improving reach and performance since requests can be served around the world from the nearest available server. This makes distribution extremely scalable and cost efficient. The fun part is that the integration between cloud and CDN is seamless and can be done through simple APIs.

Visualizing storage: Storage models for the cloud have changed compared to the POSIX model and relational databases we are used to. The POSIX model has given way to a more scalable flat key-value store in which a (bucket-name, object-name) tuple points to a piece of data. There is no concept of the folders and files we are used to, although for ease of use a folder-file hierarchy can be emulated. Amazon provides SimpleDB, a non-traditional database which is again easier to scale, but your data organization and modeling will need to change when migrating to SimpleDB. MapReduce is a framework to operate on very large data sets in highly parallel environments, and it can work on structured or unstructured data.
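The flat key-value model and the emulated folder hierarchy mentioned above can be illustrated with a toy in-memory store (this is a sketch of the idea, not any vendor's API): objects live under (bucket, key) pairs, and a "folder" listing is produced by filtering keys on a prefix and collapsing anything past the next delimiter:

```python
class FlatObjectStore:
    """Toy flat key-value store: a (bucket, key) tuple maps to bytes."""

    def __init__(self):
        self._objects = {}  # (bucket, key) -> data

    def put(self, bucket, key, data):
        self._objects[(bucket, key)] = data

    def get(self, bucket, key):
        return self._objects[(bucket, key)]

    def list_prefix(self, bucket, prefix="", delimiter="/"):
        """Emulate a folder listing: direct children of `prefix` only.

        Keys with a delimiter past the prefix are collapsed into a
        single "subfolder" entry, like a hierarchical directory view.
        """
        files, folders = set(), set()
        for (b, key) in self._objects:
            if b != bucket or not key.startswith(prefix):
                continue
            rest = key[len(prefix):]
            if delimiter in rest:
                folders.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
            else:
                files.add(key)
        return sorted(folders) + sorted(files)


store = FlatObjectStore()
store.put("photos", "2009/panel/slide1.jpg", b"...")
store.put("photos", "2009/panel/slide2.jpg", b"...")
store.put("photos", "2009/readme.txt", b"notes")
print(store.list_prefix("photos", "2009/"))
# ['2009/panel/', '2009/readme.txt']
```

The store itself never records a folder; the hierarchy exists only in how keys are named and listed, which is what lets flat object stores scale so easily.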
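MapReduce itself is easy to sketch in miniature. The toy below (plain Python, no framework) shows the two phases: a map step emits key-value pairs from each input record, and a reduce step folds all values for a key into a result. Real frameworks add partitioning, distribution, and fault tolerance on top of this shape:

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    """Shuffle + reduce: group pairs by key and sum the counts."""
    groups = defaultdict(int)
    for key, value in pairs:
        groups[key] += value
    return dict(groups)

docs = ["the cloud scales", "the cloud is elastic"]
counts = reduce_phase(map_phase(docs))
print(counts["the"], counts["cloud"])
# 2 2
```

Because each map call touches one record and each reduce call touches one key, both phases parallelize trivially across machines, which is why the model suits very large structured or unstructured data sets.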
Consider SmugMug as an example: the online photo sharing company estimates that it has saved $500,000 in storage expenditures and cut its disk storage array costs in half by using Amazon S3.

Q: CC breaks the traditional models of scalability and infrastructure investment, especially for startups. A 1-person startup can easily compare with an IBM or Google on infrastructure availability if the revenue model is in place. What are the implications and an example of how?

Definitely. Startups need only focus on their revenue model and on implementing their differentiators. The infrastructure, management, and scaling are inherently available in a pay-as-you-go manner, so ups and downs in traffic can be sustained. For example, some sites are hit by very high traffic in the first few weeks and need costly infrastructure to service it, but then the load tapers off and the infrastructure lies unused. This is where the pay-as-you-go model works very well. So yes, cloud computing is a leveller, fostering many start-ups.

Many businesses also use cloud computing for scale-out, whereby their in-house data center handles load up to a certain point, beyond which they avail themselves of the cloud. Such hybrid computing is sometimes more economically viable. Xignite employs Amazon EC2 and S3 to deliver financial market data to enterprise applications, portals, and websites for clients such as Forbes, Citi, and Starbucks. This data needs to be delivered in real time and needs rapid scale-up and scale-down.

Q: What do you see when you gaze into the crystal ball?

Security is a concern for many customers, but consider that the most paranoid customer of all – the US government – has started a cloud computing initiative called "Apps.gov", providing SaaS applications for federal use. Even where there are issues, they are being surmounted as we speak.
Cloud computing has now reached critical mass and the ecosystem will continue to grow. In terms of technology, I believe some application software will run on-premise with another piece running in the cloud for scaling out. The client part can provide service during disconnected operation and, importantly, can help resolve latency issues. Most cloud computing applications will have built-in billing systems, based either on a standard or on software that both the vendor and the customer trust. I would love to see some standards emerge in this space, since that will help accelerate acceptance. "Over the long term, absent other barriers, economics always wins!" The economics of cloud computing are too strong to be ignored.

Aziro Marketing

EXPLORE ALL TAGS
2019 dockercon
Advanced analytics
Agentic AI
agile
AI
AI ML
AIOps
Amazon Aws
Amazon EC2
Analytics
Analytics tools
AndroidThings
Anomaly Detection
Anomaly monitor
Ansible Test Automation
apache
apache8
Apache Spark RDD
app containerization
application containerization
applications
Application Security
application testing
artificial intelligence
asynchronous replication
automate
automation
automation testing
Autonomous Storage
AWS Lambda
Aziro
Aziro Technologies
big data
Big Data Analytics
big data pipeline
Big Data QA
Big Data Tester
Big Data Testing
bitcoin
blockchain
blog
bluetooth
buildroot
business intelligence
busybox
chef
ci/cd
CI/CD security
cloud
Cloud Analytics
cloud computing
Cloud Cost Optimization
cloud devops
Cloud Infrastructure
Cloud Interoperability
Cloud Native Solution
Cloud Security
cloudstack
cloud storage
Cloud Storage Data
Cloud Storage Security
Codeless Automation
Cognitive analytics
Configuration Management
connected homes
container
Containers
container world 2019
container world conference
continuous-delivery
continuous deployment
continuous integration
Coronavirus
Covid-19
cryptocurrency
cyber security
data-analytics
data backup and recovery
datacenter
data protection
data replication
data-security
data-storage
deep learning
demo
Descriptive analytics
Descriptive analytics tools
development
devops
devops agile
devops automation
DEVOPS CERTIFICATION
devops monitoring
DevOps QA
DevOps Security
DevOps testing
DevSecOps
Digital Transformation
disaster recovery
DMA
docker
dockercon
dockercon 2019
dockercon 2019 san francisco
dockercon usa 2019
docker swarm
DRaaS
edge computing
Embedded AI
embedded-systems
end-to-end-test-automation
FaaS
finance
fintech
FIrebase
flash memory
flash memory summit
FMS2017
GDPR faqs
Glass-Box AI
golang
GraphQL
graphql vs rest
gui testing
habitat
hadoop
hardware-providers
healthcare
Heartfullness
High Performance Computing
Holistic Life
HPC
Hybrid-Cloud
hyper-converged
hyper-v
IaaS
IaaS Security
icinga
icinga for monitoring
Image Recognition 2024
infographic
InSpec
internet-of-things
investing
iot
iot application
iot testing
java 8 streams
javascript
jenkins
KubeCon
kubernetes
kubernetesday
kubernetesday bangalore
libstorage
linux
litecoin
log analytics
Log mining
Low-Code
Low-Code No-Code Platforms
Loyalty
machine-learning
Meditation
Microservices
migration
Mindfulness
ML
mobile-application-testing
mobile-automation-testing
monitoring tools
Mutli-Cloud
network
network file storage
new features
NFS
NVMe
NVMEof
NVMes
Online Education
opensource
openstack
opscode-2
OSS
others
Paas
PDLC
Positivty
predictive analytics
Predictive analytics tools
prescriptive analysis
private-cloud
product sustenance
programming language
public cloud
qa
qa automation
quality-assurance
Rapid Application Development
raspberry pi
RDMA
real time analytics
realtime analytics platforms
Real-time data analytics
Recovery
Recovery as a service
recovery as service
rsa
rsa 2019
rsa 2019 san francisco
rsac 2018
rsa conference
rsa conference 2019
rsa usa 2019
SaaS Security
san francisco
SDC India 2019
SDDC
security
Security Monitoring
Selenium Test Automation
selenium testng
serverless
Serverless Computing
Site Reliability Engineering
smart homes
smart mirror
SNIA
snia india 2019
SNIA SDC 2019
SNIA SDC INDIA
SNIA SDC USA
software
software defined storage
software-testing
software testing trends
software testing trends 2019
SRE
STaaS
storage
storage events
storage replication
Storage Trends 2018
storage virtualization
support
Synchronous Replication
technology
tech support
test-automation
Testing
testing automation tools
thought leadership articles
trends
tutorials
ui automation testing
ui testing
ui testing automation
vCenter Operations Manager
vCOPS
virtualization
VMware
vmworld
VMworld 2019
vmworld 2019 san francisco
VMworld 2019 US
vROM
Web Automation Testing
web test automation
WFH
