Knative Series: Knative Serving and Eventing

Manish Sharma
11 min read · Jan 24, 2023


Knative Serving is a set of components for building and deploying serverless workloads on Kubernetes. It provides abstractions for deploying and managing containerized applications, and it is designed primarily for stateless, request-driven workloads.

The main components of Knative Serving are:

  1. Kubernetes: Knative Serving is built on top of Kubernetes and leverages its primitives, such as pods and services, to deploy and manage serverless applications.
  2. Activator: The Activator sits on the request path when a revision has no ready replicas, for example when it has been scaled to zero. It buffers incoming requests, reports the pending load to the Autoscaler so that new replicas can be started, and forwards the buffered requests once the revision is ready to serve them.
  3. Controller: The controller is responsible for managing the lifecycle of the serverless workloads. It watches for changes in the Kubernetes resources and updates the state of the workloads accordingly.
  4. Autoscaler: The autoscaler is responsible for scaling the serverless workloads based on incoming traffic. It monitors request metrics and adjusts the number of replicas to match demand, including scaling all the way down to zero when there is no traffic. Its behavior can be tuned with annotations, as shown in the sketch after this list.
  5. Queue Proxy: The Queue Proxy is a sidecar container that runs alongside each application container. It enforces the revision’s concurrency limit, buffers requests when the application is at capacity so that it is not overwhelmed by traffic spikes, and reports request metrics to the Autoscaler.
  6. Route: The Route (route.serving.knative.dev) is responsible for routing incoming traffic to the appropriate serverless workloads. It maps a network endpoint to one or more revisions and splits traffic between them according to the configured percentages.
  7. Certificate Manager: The certificate manager is responsible for managing the SSL/TLS certificates for the serverless workloads. It automatically generates and renews the certificates and makes them available to the router.
  8. Revision: The Revision (revision.serving.knative.dev) is an immutable snapshot of a specific version of the serverless application. Revisions can be addressed individually (for example, through a tagged route), rolled back to, and garbage-collected independently of one another.
  9. Configuration: The Configuration (configuration.serving.knative.dev) defines the desired state of the serverless application, including the container image, environment variables, and other configuration options.
  10. Service: The Service (service.serving.knative.dev) is the Kubernetes resource that represents the serverless application. It is responsible for exposing the application to the outside world and providing a stable endpoint for the incoming traffic.
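As a concrete illustration of how the Autoscaler is tuned in practice, here is a minimal sketch of a Service whose revision template carries autoscaling annotations. The service name and image are placeholders:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
spec:
  template:
    metadata:
      annotations:
        # Target number of concurrent requests per replica for the Autoscaler
        autoscaling.knative.dev/target: "100"
        # Keep at least one replica running (disables scale-to-zero)
        autoscaling.knative.dev/min-scale: "1"
        # Never scale beyond five replicas
        autoscaling.knative.dev/max-scale: "5"
    spec:
      containers:
        - image: my-registry/my-image:latest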

Overall, Knative Serving provides a simplified way to deploy and run request-driven, container-based applications on Kubernetes, with auto-scaling, routing, and other features out of the box. It enables developers to focus on writing code rather than on scaling, monitoring, and other operational concerns.

Note that Istio is not a component of Knative Serving. Istio is an open-source service mesh that provides traffic management, service discovery, and security features for microservices-based applications.

However, Knative Serving needs a networking layer to route traffic, and Istio is one of the supported options. You can install Istio on your Kubernetes cluster and configure Knative to use it, combining Knative’s serverless deployment model with Istio’s traffic management and security features.
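For example, assuming the default installation namespaces and an Istio networking layer already installed on the cluster, pointing Knative Serving at Istio is a matter of setting the ingress class in the config-network ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-network
  namespace: knative-serving
data:
  # Tell Knative Serving to program Istio as its ingress layer
  ingress-class: "istio.ingress.networking.knative.dev"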

How Reconciliation Process Works in Knative Serving

In Knative Serving, the reconciliation process is responsible for ensuring that the desired state of the system matches the actual state. It does this by constantly comparing the desired state, as specified in the Kubernetes manifests, with the actual state, as reported by the Kubernetes API. If there is a discrepancy, the reconciliation process will take action to bring the actual state in line with the desired state.

The reconciliation process in Knative Serving is implemented as a series of controllers, each responsible for a specific part of the system. These controllers are responsible for monitoring resources such as Services, Routes, and Configurations and taking action when necessary.

When a change is made to a resource, such as updating the configuration of a Service, the controller for that resource is notified. It then retrieves the current state of the resource from the Kubernetes API, compares it to the desired state, and takes whatever action is needed to resolve any discrepancy.

For example, if a change is made to the configuration of a Service, the controller may need to scale up or down the number of replicas, or update the configuration of the pod running the service.
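You can observe this convergence in the resource’s status block, which the controllers write back as they reconcile. Here is a trimmed, illustrative status for a Service whose new revision is still rolling out (the revision names are placeholders):

status:
  # The newest revision the controller has created (the desired state, observed)
  latestCreatedRevisionName: my-service-00002
  # The newest revision actually ready to serve traffic (the current state)
  latestReadyRevisionName: my-service-00001
  conditions:
    - type: ConfigurationsReady
      status: "Unknown"  # the new revision is still being reconciled
    - type: RoutesReady
      status: "True"
    - type: Ready
      status: "Unknown"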

The Knative controllers are designed to be highly available: they can run with multiple replicas, using leader election, so that reconciliation keeps working if one replica fails. They are also designed to be idempotent, meaning that the same reconciliation can run multiple times without causing any negative effects.

Overall, the reconciliation process in Knative Serving is a key feature that ensures that the desired state of the system is always in line with the actual state, and that the system is always running in a stable and consistent state.

Knative Serving Examples

Example 1:

Here’s an example of a Kubernetes manifest file that can be used to deploy a Knative Service:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
spec:
  template:
    spec:
      containerConcurrency: 80
      containers:
        - image: my-registry/my-image:latest
          env:
            - name: MY_ENV_VAR
              value: "my-value"
  traffic:
    - percent: 100
      latestRevision: true

This manifest file defines a Knative Service called “my-service”, based on the container image “my-registry/my-image:latest” with an environment variable “MY_ENV_VAR” set to “my-value”. The containerConcurrency field (which lives on the revision template, not inside the container) limits each replica to at most 80 concurrent requests. The traffic section directs all traffic to the latest ready revision of the service.

This is just a basic example; in a real-world scenario, you might also want to configure things like scaling, resource limits, traffic splitting, and security. Resource requests and limits, for instance, are set on the container just as in plain Kubernetes, as sketched below.
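A minimal sketch, with arbitrary placeholder values:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
spec:
  template:
    spec:
      containers:
        - image: my-registry/my-image:latest
          resources:
            requests:
              cpu: 100m      # the scheduler guarantees this much CPU
              memory: 128Mi
            limits:
              cpu: 500m      # the container is throttled above this
              memory: 256Mi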

Example 2:

Here’s an example of a Kubernetes manifest file that can be used to deploy a Knative Service with traffic splitting:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
spec:
  template:
    spec:
      containerConcurrency: 80
      containers:
        - image: my-registry/my-image:latest
          env:
            - name: MY_ENV_VAR
              value: "my-value"
  traffic:
    - percent: 70
      latestRevision: true
    - percent: 30
      revisionName: my-service-canary

The traffic section defines that 70% of the traffic should be directed to the latest revision of the service, while 30% should be directed to a revision named “my-service-canary”. This allows you to test a new version of the service with a small percentage of real-world traffic before rolling it out to all users. If you also want a stable URL that always points at the canary revision, you can give it a tag, as sketched below.
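Here is the same traffic block with a tag added (only the spec fragment is shown; the tag name “canary” is an arbitrary choice):

spec:
  traffic:
    - percent: 70
      latestRevision: true
    - percent: 30
      revisionName: my-service-canary
      # The tag gives this revision its own dedicated URL, so it can be
      # tested directly, independent of the percentage split
      tag: canary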

Knative Serving Use Cases

Some of the use cases of Knative Serving include:

  1. Serverless Web Applications: Knative Serving can be used to deploy web applications that only run when they receive incoming requests. This can help to reduce costs and improve scalability.
  2. Event-Driven Applications: Knative Serving can be used to deploy event-driven applications that are triggered by specific events, such as changes in a database or incoming HTTP requests.
  3. Microservices: Knative Serving can be used to deploy microservices that are part of a larger application. It can help to simplify the deployment and management of these services.
  4. Machine Learning: Knative Serving can be used to deploy machine learning models as serverless functions. This allows for the easy scaling and management of these models.
  5. IoT: Knative Serving can be used to deploy serverless functions that process data coming from IoT devices in real time, scaling up and down with the data flow.
  6. Blue/Green Deployment: Knative Serving can route traffic between different revisions of a service. In a blue/green deployment, a new (green) revision is deployed alongside the current (blue) one and verified before traffic is switched over all at once (see the sketch after this list).
  7. Canary Deployment: Using the same traffic-splitting mechanism, a canary deployment sends a small percentage of traffic to the new revision and gradually increases it as confidence grows, before rolling it out to all users.
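For example, a blue/green switch can be expressed directly in the Service’s traffic block. This is a minimal sketch; the revision names are placeholders for whatever revisions exist in your cluster:

spec:
  traffic:
    # "Blue": the current revision, still receiving all user traffic
    - revisionName: my-service-00001
      percent: 100
    # "Green": the new revision, receiving no traffic yet but reachable
    # for testing through its tag URL
    - revisionName: my-service-00002
      percent: 0
      tag: green

Once the green revision has been verified via its tag URL, the percentages are swapped in a single update to cut traffic over.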

Knative Eventing

Knative Eventing is a set of Kubernetes-based components for building and deploying event-driven applications. It provides an abstraction layer on top of Kubernetes that makes it easy to build, deploy, and manage event-driven workloads.

Knative Eventing uses standard HTTP POST requests to send and receive events between event producers and sinks. These events conform to the CloudEvents specification, which enables creating, parsing, sending, and receiving events in any programming language.

The main components of Knative Eventing are:

  1. Controller: The controller is responsible for managing the lifecycle of the event-driven workloads. It watches for changes in the Kubernetes resources and updates the state of the workloads accordingly.
  2. Broker: The broker is responsible for receiving and forwarding events between the different event-driven workloads. It acts as a central hub for the events and routes them to the appropriate event-driven workloads.
  3. Trigger: The trigger subscribes to events from a broker, optionally filtering on event attributes such as the event type or source, and delivers matching events to a subscriber, such as a Knative Service or any other addressable endpoint, which can then act on them.
  4. Event Source: The event source is responsible for generating events and forwarding them to the broker. It can be configured to generate events based on a schedule, changes in a database, or incoming HTTP requests.
  5. Event Sink: The event sink is responsible for handling events that are sent to it. It can be configured to perform specific actions in response to events, such as sending a message to a message queue or updating a database.
  6. Channel: The Channel is a Kubernetes resource that represents a communication channel for events. It can be used to buffer and manage the events and can be shared by multiple event-driven applications.
  7. Subscription: The Subscription is a Kubernetes resource that connects a Channel to a subscriber, configuring how events flowing through the channel are delivered to the event-driven application (see the sketch after this list).
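To illustrate the last two components, here is a minimal sketch that wires a Channel to a Service through a Subscription. It uses the in-memory channel implementation, which ships with Knative Eventing but is intended for development rather than production:

apiVersion: messaging.knative.dev/v1
kind: InMemoryChannel
metadata:
  name: my-channel
---
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: my-subscription
spec:
  # The channel to read events from
  channel:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
    name: my-channel
  # The event sink that receives each event
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service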

Overall, Knative Eventing provides a simplified way to deploy and run event-driven, container-based applications on Kubernetes, with event routing and handling features out of the box. It enables developers to focus on writing code, and not worry about event routing, handling, and other operational concerns.

Knative Eventing Examples

Example 1:

Here’s an example of a Kubernetes manifest file that can be used to configure a Knative Eventing Broker:

apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: my-broker
spec:
  delivery:
    retry: 3
    backoffPolicy: exponential
    backoffDelay: PT2S

This manifest file defines a Knative Eventing Broker called “my-broker” whose delivery spec retries failed event deliveries up to three times, using an exponential backoff that starts at two seconds (backoffDelay takes an ISO-8601 duration such as PT2S).

Note that filtering and routing of events to sinks is not configured on the Broker itself; that is the job of Triggers, shown in Example 2 below.

You can also configure other things on the Broker, such as the channel implementation that backs it or a dead-letter destination for events that exhaust their retries. This is just an example; in a real-world scenario, you might wire up a dead-letter sink as sketched below.
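A minimal sketch of the same delivery spec with a dead-letter sink added; “my-dead-letter-handler” is a hypothetical service you would provide to record undeliverable events:

apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: my-broker
spec:
  delivery:
    retry: 3
    backoffPolicy: exponential
    backoffDelay: PT2S
    deadLetterSink:
      ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: my-dead-letter-handler  # hypothetical handler for undeliverable events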

Example 2:

Here’s an example of a Kubernetes manifest file that can be used to configure a Knative Eventing Trigger:

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-trigger
spec:
  broker: my-broker
  filter:
    attributes:
      type: my-type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service

This manifest file defines a Knative Eventing Trigger called “my-trigger”, which subscribes to events on a broker named “my-broker” and only accepts events whose type attribute is “my-type”. The trigger forwards matching events to a Knative Service named “my-service”, which handles them.
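To complete the producer-to-consumer chain, an event source can point its sink at the broker. Here is a minimal sketch using the built-in PingSource, which emits a CloudEvent on a cron schedule. Note that PingSource events carry the type dev.knative.sources.ping, so a Trigger meant to receive them would filter on that type rather than “my-type”:

apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: my-ping-source
spec:
  # Emit one event every five minutes (standard cron syntax)
  schedule: "*/5 * * * *"
  contentType: "application/json"
  data: '{"message": "hello from PingSource"}'
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: my-broker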

Knative Eventing Use Cases

Some of the use cases of Knative Eventing include:

  1. Event-Driven Microservices: Knative Eventing can be used to build event-driven microservices that are triggered by specific events, such as changes in a database or incoming HTTP requests. This allows for better decoupling of services and improved scalability.
  2. Stream Processing: Knative Eventing can be used to process streams of data in real-time, such as sensor data, log data, or social media feeds. It can be used to route the data to different services for processing and storage.
  3. Asynchronous Workflows: Knative Eventing can be used to build asynchronous workflows that are triggered by specific events. This can help to improve the scalability and fault-tolerance of these workflows.
  4. Cloud-Native Event-Driven Architecture: Knative Eventing can be used to build a cloud-native event-driven architecture that is based on Kubernetes. This allows for better scalability, reliability, and security of the overall architecture.
  5. Event-Driven Integration: Knative Eventing can be used to integrate different services and systems using events. This allows for better decoupling of services and improved scalability.
  6. Event-Driven Autoscaling: Knative Eventing can be used to automatically scale services based on the incoming events. It can be used to ensure that the services are always able to handle the load.
  7. Event-Driven Triggers: Knative Eventing can be used to trigger different actions based on the incoming events. This can be used to perform automatic scaling, send notifications, or update databases.

Tips for Learners on Knative

Here are some tips for learners who are just getting started with Knative:

  1. Start with the basics: Make sure you understand the basics of Kubernetes and how it works before diving into Knative. Knative is built on top of Kubernetes, so it’s important to have a solid understanding of Kubernetes first.
  2. Try it out: The best way to learn about Knative is to try it out for yourself. You can start by deploying a simple application and experimenting with different configurations.
  3. Learn by example: Look at examples of real-world applications that have been built using Knative. This will give you a better understanding of how Knative can be used in practice.
  4. Understand the concepts: Take the time to understand the different concepts and components of Knative. This will help you to better understand how the different pieces fit together.
  5. Join the community: Join the Knative community and participate in discussions, ask questions, and share your experiences. This will help you to learn from others and stay up-to-date with the latest developments.
  6. Start with Knative Serving and Eventing: These are the core components of Knative, and learning them first will give you a good understanding of the overall architecture.
  7. Understand the use cases: Understand the use cases of Knative and how it can solve real-world problems; this will give you a good idea of how to apply Knative in your own environment.
  8. Read the official documentation: The official Knative documentation provides detailed information about the different components and how to use them.

Overall, the key to learning Knative is to start small and experiment with different configurations; as you gain more experience, you can dive deeper and explore more advanced features.

Thanks for reading! If you liked this article, we’d love for you to subscribe to our blog and stay informed about our latest content.

Written by Manish Sharma

I am a technology geek and keep pushing myself to learn new skills. I am certified as an AWS Solutions Architect (Associate and Professional) and hold the Terraform Associate certification.