Leveraging a Platform as a Service (PaaS) is a great way to quickly build, innovate on, and deploy a new product or service. By offloading servers, load balancing, application scaling, and third-party integrations to a PaaS vendor, your engineering team can focus on building customer-facing features that add value to your business.
There comes a point, however, where many organizations outgrow their one-size-fits-all PaaS. Some common reasons include:
Performance-to-Cost Efficiency
Control and Customization
Platform Limitations and Available Technologies
At Fairwinds, we’re often asked, “How do we move off Heroku while keeping the features we love most about the platform?”
It’s a great question, and it cuts to the heart of some of Heroku’s strengths:
Easy, Standardized Rolling Deployment Methodology
Automated SSL Certificate Management and Infrastructure Scaling
Separate Testing Environments
Easy Access to Application Log Outputs
These features matter a great deal for team velocity, and they come up in almost every conversation we have with clients considering the switch.
Goals in Migrating
Let’s dive into how Kubernetes can help with some of these features as well as address some of the limitations of working in a one-size-fits-all PaaS. For this example, we’ll leverage Google Cloud Platform (GCP), as some of these features need a cloud platform backing to be achieved. At Fairwinds, we specialize in cloud migrations, as well as integrations to help you get the most bang for your buck when moving your infrastructure.
Standardized Deployments and Secret Management
When you engage Fairwinds’ ClusterOps service, we set you up with a standards-based set of Google Kubernetes Engine (GKE) clusters on which to schedule your containerized workloads. Our onboarding often includes helping you containerize your application so it plays nicely in the Kubernetes ecosystem.
Standard deployments are backed with tooling like Helm and Reckoner to support deploying both your workloads, as well as core infrastructure tooling within Kubernetes.
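As a rough sketch of what that tooling looks like in practice, Reckoner reads a declarative “course” file listing the Helm charts to install. Everything below (chart name, repository URL, values) is illustrative only, and the exact schema may vary between Reckoner versions, so treat this as an assumption rather than a reference:

```yaml
# course.yml - declarative list of Helm releases for Reckoner (illustrative)
namespace: default            # default namespace for releases below
repositories:
  stable:
    url: https://charts.helm.sh/stable   # example chart repository
charts:
  nginx-ingress:              # release name; chart and values are examples
    repository: stable
    namespace: ingress
    values:
      controller:
        replicaCount: 2       # run two ingress controller replicas
```

Keeping both applications and core infrastructure tooling declared this way makes cluster contents reviewable in version control.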
Kubernetes natively supports rolling deployments and provides sound defaults from which to customize your desired rollout strategy. It also supports twelve-factor applications: secrets and configuration can be mounted as environment variables (or files if needed), so you can manage your application configuration as part of your deployment resources. This core tooling, combined with GitOps, can move you toward a CI/CD workflow tailored to your business’s delivery needs.
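A minimal Deployment sketch shows both ideas at once: a rolling update strategy with explicit surge settings, and secrets and configuration injected as environment variables. The names (`web`, `web-config`, `web-secrets`) and the image path are placeholders, not anything from a real project:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1           # bring up one new Pod at a time
      maxUnavailable: 0     # never drop below desired capacity mid-rollout
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: gcr.io/my-project/web:v1.2.3   # placeholder image
          envFrom:
            - configMapRef:
                name: web-config    # non-sensitive configuration
            - secretRef:
                name: web-secrets   # credentials, API keys, etc.
```

With `maxUnavailable: 0`, a rollout only replaces Pods once their replacements pass readiness checks, which is close to the zero-downtime behavior Heroku users expect from a release.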
Kubernetes’ robust RBAC can also be leveraged to keep your secrets and credentials safe while they live in the cluster. RBAC can be complex, but another Fairwinds tool, RBAC Manager, helps alleviate those complexities. Our ClusterOps onboarding also establishes a core access strategy you can expand to grant Kubernetes access to your developers, as well as to automated systems like CI/CD.
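As a sketch of how RBAC Manager reduces boilerplate, a single RBACDefinition resource can fan out into several role bindings. The group, namespace, and service account names below are placeholders, and the layout follows the `rbacmanager.reflect.io/v1beta1` API as we understand it:

```yaml
apiVersion: rbacmanager.reflect.io/v1beta1
kind: RBACDefinition
metadata:
  name: team-access
rbacBindings:
  - name: developers
    subjects:
      - kind: Group
        name: dev-team          # placeholder identity group
    roleBindings:
      - clusterRole: edit       # full edit rights, but only in staging
        namespace: staging
  - name: ci-cd
    subjects:
      - kind: ServiceAccount
        name: deployer          # placeholder CI/CD service account
        namespace: ci
    clusterRoleBindings:
      - clusterRole: view       # read-only visibility across the cluster
```

One definition like this replaces a handful of hand-written RoleBindings and keeps your access policy reviewable in one place.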
Infrastructure Management and SSL Certificate Management
As you may know, when your application starts to get heavy usage, you need to start growing ancillary components like load balancers and databases. Kubernetes on GKE has deep integrations that can auto-provision cloud load balancers for individual services, as well as unify management under “ingress controllers,” so you can slice up traffic to your applications as needed to accommodate growth.
Nginx-ingress-controller is one of our most commonly used tools; it routes traffic to different containers in your cluster. This can be especially helpful if you’re splitting up a monolith or testing ideas for new services.
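For illustration, here is a sketch of an Ingress that splits traffic by path between a legacy monolith and a newly carved-out API service. The hostname and service names are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-routing
spec:
  ingressClassName: nginx          # served by the nginx ingress controller
  rules:
    - host: app.example.com        # placeholder hostname
      http:
        paths:
          - path: /api             # the new service takes over the API
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
          - path: /                # everything else still hits the monolith
            pathType: Prefix
            backend:
              service:
                name: monolith
                port:
                  number: 80
```

Because routing lives in a Kubernetes resource rather than in load balancer console settings, carving off the next service is just another small, reviewable change.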
Hand in hand with web-server load balancing come security and encryption. If you’ve worked in operations, you may have experienced the panic of an expiring client-facing certificate: scrambling to update an unknown number of servers with the right credentials so your clients don’t hit the “site not secure” message in their browsers. The Kubernetes ecosystem has robust tooling that can automatically provision certificates from Let’s Encrypt and rotate them before they expire, so you don’t have to worry about it. This eases operational burden while keeping your traffic secure.
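One common way to get this, sketched here, is cert-manager with a Let’s Encrypt ClusterIssuer. The email address and resource names are placeholders:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@example.com            # placeholder contact for expiry notices
    privateKeySecretRef:
      name: letsencrypt-account-key   # where the ACME account key is stored
    solvers:
      - http01:
          ingress:
            class: nginx              # answer HTTP-01 challenges via nginx
```

From there, annotating an Ingress with `cert-manager.io/cluster-issuer: letsencrypt-prod` and giving it a `tls` section is typically enough for certificates to be issued and renewed without human intervention.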
Whether you’re currently running microservices, an event bus, or a monolith (or all three!), Kubernetes can help you reduce burden by putting sound defaults in place to lean on as you grow and change.
Separate Testing Environments
Kubernetes has lovely features and capabilities that enable you to separate workloads and provide service-discovery contexts for your applications. Our most common use case is housing a development and a staging/pre-production environment side by side in one cluster, enabled by Kubernetes Namespaces. A Namespace is a logical boundary of grouped permissions and discovery that can reduce configuration splay in your application configurations and provide security separation between workloads. Some of our ClusterOps clients also have custom solutions for feature branches, deploying each Pull Request to an application in its own namespace.
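As a sketch, two environment namespaces created side by side. Within each namespace, a Service named `web` resolves as plain `web`, while cross-namespace lookups use the fully qualified form (e.g. `web.staging.svc.cluster.local`), which is what keeps per-environment configuration splay low:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
  labels:
    env: development   # handy selector for NetworkPolicies and quotas
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    env: staging
```

ResourceQuotas and NetworkPolicies can then be attached per namespace, so one environment can’t starve or reach into the other.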
Access to Logs and Output
One of the biggest concerns when transitioning platforms is how you’ll access the application data you need to debug production. No one wants to move to a new system and feel like they’ve lost visibility into the applications serving client traffic.
Kubernetes on GKE helps by automatically integrating with Stackdriver, Google’s log-aggregation tooling, so you don’t miss any of those messages. Stackdriver also supports structured logs, giving you better indexing and searchability.
All these logs are also available via the Kubernetes API, so you can live-stream logs from a multitude of containers at the same time, from one endpoint.
Once you find your stride in Kubernetes, it’s common to start thinking about extras like auto-scaling your workloads as well as your cluster. Luckily, Kubernetes on GKE enables both cluster node scaling and workload scaling! HorizontalPodAutoscaling can scale your application based on the aggregate load of all your containers, and once your cluster reaches its CPU or memory capacity, the cluster autoscaler can add extra compute nodes, keeping your application scaling within its bounds without wasting nodes while system load is low.
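A minimal HorizontalPodAutoscaler sketch using the `autoscaling/v2` API, targeting a hypothetical `web` Deployment and scaling on average CPU utilization:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                       # placeholder workload to scale
  minReplicas: 2                    # floor while load is low
  maxReplicas: 10                   # ceiling during traffic spikes
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70    # add Pods above 70% average CPU
```

When the HPA scales up past what existing nodes can schedule, GKE’s cluster autoscaler adds nodes; when load drops, both layers scale back down.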
What About Costs?
Much of what we just outlined above sounds like it could be expensive or carry hidden costs. The reality is that some of your processes and procedures may change, since moving to Kubernetes is a migration, not a drop-in replacement. Considerations include learning Infrastructure as Code practices with tooling like Terraform, as well as building confidence in your new automated delivery pipelines to reliably deliver code. Fairwinds ClusterOps helps alleviate some of that pressure by serving as a solid foundation and backstop, leveraging industry veterans and their experience to help you keep up your pace and growth.
Additional cost savings are realized when you compare, one to one, the resource costs of your PaaS against cloud-provider-managed resources from GCP (and others). We often see recurring monthly savings of 2x to 5x! Coupled with more flexibility and customization options, that makes a compelling story for our clients.
Cloud providers can also help you avoid operational burden: leverage Cloud SQL for databases or Memorystore for Redis. This enables you to focus on growing customers and data sets instead of on database management.
We hope this post has provided a stepping stone toward easing your concerns about migrating off a PaaS and onto Kubernetes.
We’re always available to talk more at firstname.lastname@example.org or on Twitter (@fairwindsops). At the end of the day, we all need sound, repeatable technical solutions that drive the business without adding overhead. Those are the solutions we support and create with our clients.