
Build Ultra-Scalable Backends with Pekko and Kubernetes


Update May 21, 2024: 

Lightbend, the company behind Akka, changed the license from Apache 2.0 to the Business Source License (BSL), which makes Akka far less attractive. The new license applies from Akka 2.7 onwards.

Pekko is a fork of Akka 2.6.x with the same functionality; it is maintained and further developed by the Apache Software Foundation and is available under the Apache License 2.0.

Ractor is written in Rust, offers network communication, message passing, named actors, and a cluster mode, and is available under the MIT license.

------------------------------

Building a backend that can handle massive traffic and keep your users happy is tough. We're not talking about a simple website with a few hundred visitors a day; we're talking about high-performance systems that can handle spikes in traffic, process huge amounts of data, and keep chugging along even when things go wrong. I have had great experiences running Akka as microservices in Kubernetes: we use k8s to scale our IoT platform across different regions and hybrid setups.

Running Akka in k8s is, from my point of view, a great match. The two complement each other perfectly, allowing you to build backend systems that are not only scalable and resilient but also incredibly efficient and easy to manage.

Why Akka?

If you're not familiar with Akka, it's a toolkit and runtime for building highly concurrent, distributed, and resilient message-driven applications on the JVM. In plain English, this means it's a framework that helps you build apps that can handle a ton of stuff happening at once, spread the load across multiple machines, and keep running even when things go wrong.

The secret to Akka's power is the Actor Model. Actors are like little independent workers that can communicate with each other by sending messages. This makes it super easy to build complex systems that can scale up or down as needed.

Think of it like this: instead of having one overworked employee trying to juggle a thousand tasks, you have a team of specialists, each handling their own piece of the puzzle. This not only makes things more efficient, but it also makes it easier to recover from failures. If one actor crashes, the rest of the system can keep on running.

Why Kubernetes?

Now, let's talk about Kubernetes, or k8s for short. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications, initially developed by Google based on its experience running containerized workloads at massive scale.

What are containers? In a nutshell, containers are a way to package up an application and all its dependencies so that it runs reliably across different computing environments. There are multiple container orchestration platforms out there, like Docker Swarm, OpenShift, or Kubernetes. I personally like k8s, since it's easier to scale and distribute, and you can manage multiple installations more cleanly, for example by using Namespaces to separate applications. Kubernetes handles everything from scheduling containers on different machines to load balancing traffic to ensuring that your applications are always available.
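
To illustrate the Namespace point, a minimal sketch; the namespace name here is hypothetical, not from a real setup:

apiVersion: v1
kind: Namespace
metadata:
  name: iot-backend-eu  # hypothetical namespace for one installation

Every resource that belongs to this installation then sets metadata.namespace: iot-backend-eu, so several installations can live side by side in the same cluster.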

Run Akka with k8s

There are several advantages to running Akka in k8s. First, Akka actors are naturally suited to a microservices architecture: each actor can be thought of as an independent service, handling specific tasks and communicating with other actors through messages. Kubernetes provides the perfect platform for deploying and managing these microservices.

Second, Kubernetes excels at scaling applications up or down based on demand. With Akka actors running as microservices, you can easily add or remove actor instances to meet changing workloads, so your backend can handle traffic spikes without breaking (given the cluster is set to autoscale). With a load balancer in front of the actors, k8s takes care of starting new pods at a given load (scale-out) and terminating pods when the load drops again (scale-in).
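
To make the autoscaling part concrete, here is a minimal HorizontalPodAutoscaler sketch. It assumes the Deployment named my-akka-actor from the example further down; the replica bounds and CPU threshold are illustrative values, not recommendations:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-akka-actor-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-akka-actor  # the Deployment defined below
  minReplicas: 3         # baseline capacity
  maxReplicas: 10        # illustrative upper bound
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70  # scale out above ~70% average CPU

With this in place, k8s adds pods when the average CPU crosses the threshold and removes them again once the load falls.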

Third, Akka's built-in fault-tolerance mechanisms, combined with Kubernetes' self-healing capabilities, give you a backend that can withstand failures. If an actor system crashes, Kubernetes can automatically restart its pod, on a different node if necessary, ensuring uninterrupted operation.
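
For Kubernetes to notice a crashed or hung actor system, the pod needs health endpoints wired into probes. A minimal sketch for the container spec, assuming the application exposes the Akka Management health checks (/alive and /ready on port 8558 are the Akka Management defaults; adjust to your setup):

        livenessProbe:
          httpGet:
            path: /alive   # restart the pod if this check fails
            port: 8558
          initialDelaySeconds: 20
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready   # take the pod out of the Service until it is ready
            port: 8558
          initialDelaySeconds: 10
          periodSeconds: 5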

Here's a simplified ASCII diagram to illustrate the concept:

                 +-------------------------+
                 |      LoadBalancer       |
                 | (my-akka-actor-service) |
                 +------------+------------+
                              |
           +------------------+------------------+
           |                  |                  |
    +------v------+    +------v------+    +------v------+
    |    Pod 1    |    |    Pod 2    |    |    Pod 3    |
    | Akka actors |    | Akka actors |    | Akka actors |
    +-------------+    +-------------+    +-------------+
        node A             node B             node C


A YAML file for a simple setup looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-akka-actor
spec:
  replicas: 3 
  selector:
    matchLabels:
      app: my-akka-actor
  template:
    metadata:
      labels:
        app: my-akka-actor
    spec:
      containers:
      - name: my-akka-actor
        image: my-akka-actor-image:latest
        ports:
        - containerPort: 8080 

---
apiVersion: v1
kind: Service
metadata:
  name: my-akka-actor-service
spec:
  selector:
    app: my-akka-actor
  ports:
    - protocol: TCP
      port: 80  # External port (can be different from containerPort)
      targetPort: 8080 
  type: LoadBalancer

This YAML tells Kubernetes to create three instances of your Akka actor system, each running in its own container. Kubernetes automatically manages these instances, spreading them across different nodes for high availability. The load balancer distributes the traffic; for persistent sessions, add a sticky mechanism, ideally based on a session token.
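
The simplest k8s-native sticky mechanism is client-IP session affinity on the Service; a token- or cookie-based variant typically requires an Ingress controller on top. A sketch of the Service-level option, extending the Service from above:

apiVersion: v1
kind: Service
metadata:
  name: my-akka-actor-service
spec:
  selector:
    app: my-akka-actor
  sessionAffinity: ClientIP  # keep a given client on the same pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 3600   # affinity window, illustrative value
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: LoadBalancer

Note that with an external load balancer the original client IP must actually reach the Service for this to work; depending on the provider this may require externalTrafficPolicy: Local.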

If you want to build a backend that can handle anything you throw at it, Akka and Kubernetes can be your new best friends. They offer a scalable, resilient, and efficient solution that can help you deliver amazing user experiences.

