We are delighted to announce the new Strimzi 0.7.0 release with many awesome new features!
Exposing Kafka cluster to the outside
The Strimzi community has been asking for it for a long time and finally it’s here! The Kafka cluster is now accessible from outside of the Kubernetes/OpenShift cluster where it’s deployed. Your application no longer needs to be containerized and running in the same Kubernetes/OpenShift cluster in order to connect to the Kafka cluster; your client application can be any client running outside of it.
There are three different ways of doing that:
- using OpenShift Routes (so OpenShift only)
- using LoadBalancer services
- using NodePort services
In order to do that, the new Kafka.spec.kafka.listeners.external listener configures whether and how the Kafka cluster is exposed. It can be of three different types: route, loadbalancer, or nodeport.
Here’s an example snippet that exposes the Kafka cluster using OpenShift routes:
```yaml
apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    listeners:
      external:
        type: route
    # ...
```
When the Kafka cluster is exposed by using OpenShift Routes, a dedicated Route is created for every Kafka broker pod, and an additional Route is created to serve as a Kafka bootstrap address. Kafka clients can use these Routes to connect to Kafka on port 443.
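For example, a client outside the cluster can point its bootstrap configuration at the hostname of the bootstrap Route and connect over TLS on port 443. A minimal sketch of the client properties; the hostname, truststore path, and password below are hypothetical placeholders:

```properties
# Hypothetical Route hostname; look up the real one with
# `oc get routes` in the namespace where the cluster is deployed.
bootstrap.servers=my-cluster-kafka-bootstrap-myproject.example.com:443
# Connections from the outside are currently TLS-only
security.protocol=SSL
# Truststore holding the cluster CA certificate (path and password are assumptions)
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=changeit
```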
When the Kafka cluster is exposed by using load balancers, a new Service of type LoadBalancer is created for every Kafka broker pod, and an additional LoadBalancer is created to serve as a Kafka bootstrap address. Load balancers listen for connections on port 9094.
When the Kafka cluster is exposed by using node ports, Services of type NodePort are created. Kafka clients connect directly to the OpenShift or Kubernetes cluster nodes, so you must enable access to the relevant ports on these nodes for each client (for example, in firewalls or security groups). Each Kafka broker pod is then accessible on a separate port, and an additional Service is created to serve as a Kafka bootstrap address.
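Analogously to the route example above, the other two exposure types differ only in the type value. A minimal sketch exposing the cluster through node ports (swap nodeport for loadbalancer to use load balancers instead):

```yaml
apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    listeners:
      external:
        type: nodeport
    # ...
```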
It’s important to highlight that the connection from outside is currently only possible through TLS.
Of course, as with the other listeners, you can configure the type of authentication you want for the external listener from those currently supported: TLS and SASL SCRAM-SHA-512 (new with the 0.7.0 release).
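For instance, a sketch combining an external listener with TLS client authentication, assuming the same Kafka resource layout as the earlier examples (only the listeners fragment is shown):

```yaml
listeners:
  external:
    type: route
    authentication:
      type: tls
```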
Support for SASL SCRAM-SHA-512 authentication
The previous 0.6.0 release provided only TLS client authentication as the authentication method for listeners in a Kafka resource.
The new release adds support for SASL authentication using the SCRAM-SHA-512 mechanism.
SCRAM (Salted Challenge Response Authentication Mechanism) is an authentication protocol that can establish mutual authentication using passwords.
SCRAM-SHA is recommended for authenticating Kafka clients when:
- The client supports authentication using SCRAM-SHA-512
- It is necessary to use passwords rather than TLS certificates
- Authentication is needed for unencrypted communication
Here’s an example snippet showing how to enable SCRAM-SHA-512 authentication on the plain listener:
```yaml
apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  # ...
  kafka:
    listeners:
      plain:
        authentication:
          type: scram-sha-512
  # ...
```
When SCRAM-SHA authentication is enabled on the Kafka brokers, the related credentials need to be set on the client side as well. You have to create a KafkaUser resource enabling the scram-sha-512 authentication type. With the User Operator running, it will create a related Secret containing an auto-generated password. The username comes from the name of the KafkaUser resource.
Here’s an example snippet of a
KafkaUser resource with SCRAM-SHA-512 authentication:
```yaml
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaUser
metadata:
  name: my-user
spec:
  authentication:
    type: scram-sha-512
  # ...
```
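On the client side, the generated credentials can then be used through the standard Kafka SASL/SCRAM client settings. A minimal sketch of client properties, assuming the password has been read from the my-user Secret (the placeholder below is not a real value):

```properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
# The username is the KafkaUser name; replace the password placeholder
# with the auto-generated value stored in the Secret.
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="my-user" \
  password="<password-from-secret>";
```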
Another kind of “client” connecting to the deployed Kafka cluster is the Kafka Connect worker node.
In this case, the SCRAM-SHA-512 authentication needs to be enabled on the created KafkaConnect resource. To use authentication with the SCRAM-SHA-512 SASL mechanism, set the authentication type property to the value scram-sha-512. Specify the username in the username property. Specify the password as a reference to a Secret in the passwordSecret property, which has to specify the name of the Secret containing the password and the name of the key under which the password is stored inside the Secret.
Here’s an example snippet of a KafkaConnect resource with SCRAM-SHA-512 authentication:

```yaml
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  # ...
  authentication:
    type: scram-sha-512
    username: my-connect-user
    passwordSecret:
      secretName: my-connect-user
      password: password
  # ...
```
… and many more
Other features were included in this new release, the most important ones:
- Using less wide RBAC permissions (ClusterRoleBindings were converted to RoleBindings where possible).
- Added network policies for managing access to Zookeeper ports and Kafka replication ports.
- Using OwnerReference and the Kubernetes garbage collection feature to delete resources and to track ownership.
This release represents another huge milestone for this open source project. You can refer to the release changelog for more information.
What are you waiting for? Engage with the community and help us to improve the Strimzi project for running your Kafka cluster on Kubernetes/OpenShift!