Strimzi can be upgraded to version 0.23.0 to take advantage of new features and enhancements, performance improvements, and security options.
As part of the upgrade, you upgrade Kafka to the latest supported version.
Each Kafka release introduces new features, improvements, and bug fixes to your Strimzi deployment.
Strimzi can be downgraded to the previous version if you encounter issues with the newer version.
Upgrade paths
Two upgrade paths are possible:
- Incremental: Upgrading Strimzi from the previous minor version to version 0.23.0.
- Multi-version: Upgrading Strimzi from an older version to version 0.23.0 within a single upgrade (skipping one or more intermediate versions).
  For example, upgrading from Strimzi 0.20.0 directly to Strimzi 0.23.0.
Kafka version support
You can review supported Kafka versions in the Supported versions table.
- The Operators column lists all released Strimzi versions (the Strimzi version is often called the "Operator version").
- The Kafka versions column lists the supported Kafka versions for each Strimzi version.
Decide which Kafka version to upgrade to before beginning the Strimzi upgrade process.
Note
|
You can upgrade to a higher Kafka version as long as it is supported by your version of Strimzi.
In some cases, you can also downgrade to a previous supported Kafka version.
|
Downtime and availability
If topics are configured for high availability, upgrading Strimzi should not cause any downtime for consumers and producers that publish and read data from those topics.
Highly available topics have a replication factor of at least 3 and partitions distributed evenly among the brokers.
Upgrading Strimzi triggers rolling updates, where all brokers are restarted in turn, at different stages of the process.
During rolling updates, not all brokers are online, so overall cluster availability is temporarily reduced.
A reduction in cluster availability increases the chance that a broker failure will result in lost messages.
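For example, a topic that meets these availability criteria might be declared as follows (the names and values are illustrative, not a recommendation):
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 12
  replicas: 3                  # replication factor of at least 3
  config:
    min.insync.replicas: 2     # lets one broker restart without blocking producers that use acks=all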
Upgrading Strimzi is a three-stage process.
To upgrade brokers and clients without downtime, you must complete the upgrade procedures in the following order:
1. Update your Cluster Operator to a new Strimzi version.
   - If you deployed the Cluster Operator using the installation YAML files, perform your upgrade by modifying the Operator installation files, as described in Upgrading the Cluster Operator.
   - If you deployed the Cluster Operator from OperatorHub.io, use the Operator Lifecycle Manager (OLM) to change the update channel for the Strimzi Operators to a new Strimzi version.
     Depending on your chosen upgrade strategy, after updating the channel, either an automatic upgrade is initiated or a manual upgrade requires approval before installation begins.
   - If you deployed the Cluster Operator using a Helm chart, use helm upgrade (see the sketch after this list).
     The helm upgrade command does not upgrade the Custom Resource Definitions for Helm.
     Install the new CRDs manually after upgrading the Cluster Operator.
     You can access the CRDs from GitHub or find them in the crd subdirectory inside the Helm Chart.
2. Upgrade all Kafka brokers and client applications to the latest supported Kafka version.
3. If applicable, perform the following tasks:
   - Update existing custom resources to handle deprecated custom resource properties
   - Update listeners to use the GenericKafkaListener schema
Optional: incremental cooperative rebalance upgrade
Consider upgrading consumers and Kafka Streams applications to use the incremental cooperative rebalance protocol for partition rebalances.
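Returning to the Helm path in stage 1, a rough sketch of the commands might look like this (the release name, chart reference, and local chart path are assumptions; adjust them to your deployment):
# Upgrade the Cluster Operator release; this does not upgrade the CRDs
helm upgrade my-strimzi-release strimzi/strimzi-kafka-operator --version 0.23.0
# Install the new CRDs manually, for example from the crd subdirectory of the chart
kubectl apply -f ./strimzi-kafka-operator/crd/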
Kafka’s log message format version and inter-broker protocol version specify, respectively, the log format version appended to messages and the version of the Kafka protocol used in a cluster.
To ensure the correct versions are used, the upgrade process involves making configuration changes to existing Kafka brokers and code changes to client applications (consumers and producers).
The following table shows the differences between Kafka versions:

| Kafka version | Inter-broker protocol version | Log message format version | ZooKeeper version |
| ------------- | ----------------------------- | -------------------------- | ----------------- |
| 2.6.0         | 2.6                           | 2.6                        | 3.5.8             |
| 2.6.1         | 2.6                           | 2.6                        | 3.5.8             |
| 2.6.2         | 2.6                           | 2.6                        | 3.5.9             |
| 2.7.0         | 2.7                           | 2.7                        | 3.5.8             |
| 2.8.0         | 2.8                           | 2.8                        | 3.5.9             |
Inter-broker protocol version
In Kafka, the network protocol used for inter-broker communication is called the inter-broker protocol.
Each version of Kafka has a compatible version of the inter-broker protocol.
The minor version of the protocol typically increases to match the minor version of Kafka, as shown in the preceding table.
The inter-broker protocol version is set cluster wide in the Kafka resource.
To change it, you edit the inter.broker.protocol.version property in Kafka.spec.kafka.config.
Log message format version
When a producer sends a message to a Kafka broker, the message is encoded using a specific format.
The format can change between Kafka releases, so messages specify which version of the format they were encoded with. You can configure a Kafka broker to convert messages from newer format versions to a given older format version before the broker appends the message to the log.
In Kafka, there are two different methods for setting the message format version: the message.format.version property set at the topic level, and the log.message.format.version property set at the broker level.
The default value of message.format.version for a topic is defined by the log.message.format.version that is set on the Kafka broker. You can manually set the message.format.version of a topic by modifying its topic configuration.
The upgrade tasks in this section assume that the message format version is defined by the log.message.format.version.
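For reference, a topic-level override on a Strimzi-managed topic might look like this (topic and cluster names are examples):
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3
  config:
    message.format.version: "2.7"   # overrides the broker-level log.message.format.version for this topic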
The steps to upgrade your Cluster Operator deployment to use Strimzi 0.23.0 are described in this section.
Follow this procedure if you deployed the Cluster Operator using the installation YAML files rather than OperatorHub.io.
The availability of Kafka clusters managed by the Cluster Operator is not affected by the upgrade operation.
Note
|
Refer to the documentation supporting a specific version of Strimzi for information on how to upgrade to that version.
|
This procedure describes how to upgrade a Cluster Operator deployment to use Strimzi 0.23.0.
Procedure
-
Take note of any configuration changes made to the existing Cluster Operator resources (in the /install/cluster-operator
directory).
Any changes will be overwritten by the new version of the Cluster Operator.
-
Update your custom resources to reflect the supported configuration options available for Strimzi version 0.23.0.
-
Update the Cluster Operator.
-
Modify the installation files for the new Cluster Operator version according to the namespace the Cluster Operator is running in.
On Linux, use:
sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
On MacOS, use:
sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
-
If you modified one or more environment variables in your existing Cluster Operator Deployment, edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to use those environment variables.
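For example, if you had previously changed the reconciliation interval, you would reapply that change to the new file (the value shown is only an illustration):
# install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml (excerpt)
env:
  - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS
    value: "180000"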
-
When you have an updated configuration, deploy it along with the rest of the installation resources:
kubectl replace -f install/cluster-operator
Wait for the rolling updates to complete.
-
If the new Operator version no longer supports the Kafka version you are upgrading from, the Cluster Operator returns an error message indicating that the version is not supported, for example:
"Version 2.4.0 is not supported. Supported versions are: 2.6.0, 2.6.1, 2.7.0."
Otherwise, no error message is returned.
-
If the error message is returned, upgrade to a Kafka version that is supported by the new Cluster Operator version:
-
Edit the Kafka custom resource.
-
Change the spec.kafka.version property to a supported Kafka version.
-
If the error message is not returned, go to the next step.
You will upgrade the Kafka version later.
-
Get the image for the Kafka pod to ensure the upgrade was successful:
kubectl get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'
The image tag shows the new Operator version. For example:
quay.io/strimzi/kafka:0.23.0-kafka-2.8.0
Your Cluster Operator is now upgraded to version 0.23.0, but the version of Kafka running in the cluster it manages is unchanged.
Following the Cluster Operator upgrade, you must perform a Kafka upgrade.
After you have upgraded your Cluster Operator to 0.23.0, the next step is to upgrade all Kafka brokers to the latest supported version of Kafka.
Kafka upgrades are performed by the Cluster Operator through rolling updates of the Kafka brokers.
The Cluster Operator initiates rolling updates based on the Kafka cluster configuration.
| If Kafka.spec.kafka.config contains… | The Cluster Operator initiates… |
| ------------------------------------ | ------------------------------- |
| Both the inter.broker.protocol.version and the log.message.format.version | A single rolling update. After the update, the inter.broker.protocol.version must be updated manually, followed by the log.message.format.version. Changing each will trigger a further rolling update. |
| Either the inter.broker.protocol.version or the log.message.format.version | Two rolling updates. |
| No configuration for the inter.broker.protocol.version or the log.message.format.version | Two rolling updates. |
As part of the Kafka upgrade, the Cluster Operator initiates rolling updates for ZooKeeper.
When upgrading Kafka, consider your settings for the STRIMZI_KAFKA_IMAGES environment variable and the Kafka.spec.kafka.version property.
- Each Kafka resource can be configured with a Kafka.spec.kafka.version.
- The Cluster Operator’s STRIMZI_KAFKA_IMAGES environment variable provides a mapping between the Kafka version and the image to be used when that version is requested in a given Kafka resource (see the example after this list).
- If Kafka.spec.kafka.image is not configured, the default image for the given version is used.
- If Kafka.spec.kafka.image is configured, the default image is overridden.
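For illustration, the mapping supplied by STRIMZI_KAFKA_IMAGES in the Cluster Operator Deployment has the following shape (the versions and image names here are examples, not a complete list):
env:
  - name: STRIMZI_KAFKA_IMAGES
    value: |
      2.7.0=quay.io/strimzi/kafka:0.23.0-kafka-2.7.0
      2.8.0=quay.io/strimzi/kafka:0.23.0-kafka-2.8.0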
Warning
|
The Cluster Operator cannot validate that an image actually contains a Kafka broker of the expected version.
Take care to ensure that the given image corresponds to the given Kafka version.
|
This procedure describes how to upgrade a Strimzi Kafka cluster to the latest supported Kafka version.
Compared to your current Kafka version, the new version might support a higher log message format version or inter-broker protocol version, or both.
Follow the steps to upgrade these versions, if required.
For more information, see Kafka versions.
Prerequisites
For the Kafka resource to be upgraded, check that:
- The Cluster Operator, which supports both versions of Kafka, is up and running.
- The Kafka.spec.kafka.config does not contain options that are not supported in the new Kafka version.
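For example, you can review the current broker configuration before the upgrade with a command such as the following (the cluster name my-cluster is an example):
kubectl get kafka my-cluster -o jsonpath='{.spec.kafka.config}'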
Procedure
1. Update the Kafka cluster configuration:
kubectl edit kafka my-cluster
2. If configured, ensure that Kafka.spec.kafka.config has the log.message.format.version and inter.broker.protocol.version set to the defaults for the current Kafka version.
For example, if upgrading from Kafka version 2.7.0 to 2.8.0:
kind: Kafka
spec:
# ...
kafka:
version: 2.7.0
config:
log.message.format.version: "2.7"
inter.broker.protocol.version: "2.7"
# ...
If log.message.format.version and inter.broker.protocol.version are not configured, Strimzi automatically updates these versions to the current defaults after the update to the Kafka version in the next step.
Note
|
The value of log.message.format.version and inter.broker.protocol.version must be strings to prevent them from being interpreted as floating point numbers.
|
3. Change the Kafka.spec.kafka.version to specify the new Kafka version; leave the log.message.format.version and inter.broker.protocol.version at the defaults for the current Kafka version.
Note
|
Changing the kafka.version ensures that all brokers in the cluster will be upgraded to start using the new broker binaries.
During this process, some brokers are using the old binaries while others have already upgraded to the new ones.
Leaving the inter.broker.protocol.version unchanged ensures that the brokers can continue to communicate with each other throughout the upgrade.
|
For example, if upgrading from Kafka 2.7.0 to 2.8.0:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
# ...
kafka:
version: 2.8.0 (1)
config:
log.message.format.version: "2.7" (2)
inter.broker.protocol.version: "2.7" (3)
# ...
(1) Kafka version is changed to the new version.
(2) Message format version is unchanged.
(3) Inter-broker protocol version is unchanged.
Warning
|
You cannot downgrade Kafka if the inter.broker.protocol.version for the new Kafka version changes. The inter-broker protocol version determines the schemas used for persistent metadata stored by the broker, including messages written to __consumer_offsets . The downgraded cluster will not understand the messages.
|
4. If the image for the Kafka cluster is defined in the Kafka custom resource, in Kafka.spec.kafka.image, update the image to point to a container image with the new Kafka version.
5. Save and exit the editor, then wait for rolling updates to complete.
Check the progress of the rolling updates by watching the pod state transitions, and verify the image each pod is running once it restarts:
kubectl get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'
The rolling updates ensure that each pod is using the broker binaries for the new version of Kafka.
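For example, you can watch the broker pods roll with a command like the following (the label value assumes a cluster named my-cluster):
kubectl get pods -w -l strimzi.io/name=my-cluster-kafka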
6. Depending on your chosen strategy for upgrading clients, upgrade all client applications to use the new version of the client binaries.
   If required, set the version property for Kafka Connect and MirrorMaker to the new version of Kafka:
   - For Kafka Connect, update KafkaConnect.spec.version.
   - For MirrorMaker, update KafkaMirrorMaker.spec.version.
   - For MirrorMaker 2.0, update KafkaMirrorMaker2.spec.version.
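As a sketch, a KafkaConnect resource moved to the new Kafka version might look like this (the name, replica count, and bootstrap address are examples):
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  version: 2.8.0                                   # match the Kafka version the brokers now run
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9093
  # ...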
7. If configured, update the Kafka resource to use the new inter.broker.protocol.version. Otherwise, go to step 9.
For example, if upgrading to Kafka 2.8.0:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
# ...
kafka:
version: 2.8.0
config:
log.message.format.version: "2.7"
inter.broker.protocol.version: "2.8"
# ...
8. Wait for the Cluster Operator to update the cluster.
9. If configured, update the Kafka resource to use the new log.message.format.version. Otherwise, go to step 10.
For example, if upgrading to Kafka 2.8.0:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
# ...
kafka:
version: 2.8.0
config:
log.message.format.version: "2.8"
inter.broker.protocol.version: "2.8"
# ...
10. Wait for the Cluster Operator to update the cluster.
- The Kafka cluster and clients are now using the new Kafka version.
- The brokers are configured to send messages using the inter-broker protocol version and message format version of the new version of Kafka.
Following the Kafka upgrade, if required, you can update listeners to the GenericKafkaListener schema, update existing custom resources, and upgrade consumers and Kafka Streams applications to use the incremental cooperative rebalance protocol.
Strimzi provides a GenericKafkaListener schema for the configuration of Kafka listeners in a Kafka resource.
GenericKafkaListener replaces the KafkaListeners schema, which has been removed from Strimzi.
With the GenericKafkaListener schema, you can configure as many listeners as required, as long as their names and ports are unique.
The listeners configuration is defined as an array, but the deprecated format is also supported.
For clients inside the Kubernetes cluster, you can create plain (without encryption) or tls internal listeners.
For clients outside the Kubernetes cluster, you create external listeners and specify a connection mechanism, which can be nodeport, loadbalancer, ingress, or route.
The KafkaListeners schema used sub-properties for plain, tls, and external listeners, with fixed ports for each.
At any stage in the upgrade process, you must convert listeners configured using the KafkaListeners schema into the format of the GenericKafkaListener schema.
For example, if you are currently using the following configuration in your Kafka resource:
Old listener configuration
listeners:
plain:
# ...
tls:
# ...
external:
type: loadbalancer
# ...
Convert the listeners to the new format:
New listener configuration
listeners:
#...
- name: plain
port: 9092
type: internal
tls: false (1)
- name: tls
port: 9093
type: internal
tls: true
- name: external
port: 9094
type: EXTERNAL-LISTENER-TYPE (2)
tls: true
(1) The TLS property is now required for all listeners.
(2) Options: ingress, loadbalancer, nodeport, route.
Make sure to use the exact names and port numbers shown.
For any additional configuration or overrides properties used with the old format, you need to update them to the new format.
Changes introduced to the listener configuration:
- overrides is merged with the configuration section
- dnsAnnotations has been renamed annotations
- preferredAddressType has been renamed preferredNodePortAddressType
- address has been renamed alternativeNames
- loadBalancerSourceRanges and externalTrafficPolicy move to the listener configuration from the now deprecated template
For example, this configuration:
Old additional listener configuration
listeners:
external:
type: loadbalancer
authentication:
type: tls
overrides:
bootstrap:
dnsAnnotations:
#...
New additional listener configuration
listeners:
#...
- name: external
port: 9094
    type: loadbalancer
tls: true
authentication:
type: tls
configuration:
bootstrap:
annotations:
#...
Important
|
The name and port numbers shown in the new listener configuration must be used for backwards compatibility.
Using any other values will cause renaming of the Kafka listeners and Kubernetes services.
|
The right approach to upgrading your client applications (including Kafka Connect connectors) depends on your particular circumstances.
Consuming applications need to receive messages in a message format that they understand. You can ensure that this is the case in one of two ways: by upgrading all the consumers for a topic before upgrading any of the producers, or by having the brokers down-convert messages to an older format.
Using broker down-conversion puts extra load on the brokers, so it is not ideal to rely on down-conversion for all topics for a prolonged period of time.
For brokers to perform optimally they should not be down converting messages at all.
Broker down-conversion is configured in two ways:
- The topic-level message.format.version configures it for a single topic.
- The broker-level log.message.format.version is the default for topics that do not have the topic-level message.format.version configured.
Messages published to a topic in a new-version format will be visible to consumers, because brokers perform down-conversion when they receive messages from producers, not when they are sent to consumers.
There are a number of strategies you can use to upgrade your clients:
- Consumers first
  1. Upgrade all the consuming applications.
  2. Change the broker-level log.message.format.version to the new version.
  3. Upgrade all the producing applications.
This strategy is straightforward, and avoids any broker down-conversion.
However, it assumes that all consumers in your organization can be upgraded in a coordinated way, and it does not work for applications that are both consumers and producers.
There is also a risk that, if there is a problem with the upgraded clients, new-format messages might get added to the message log so that you cannot revert to the previous consumer version.
- Per-topic consumers first
  For each topic:
  1. Upgrade all the consuming applications.
  2. Change the topic-level message.format.version to the new version.
  3. Upgrade all the producing applications.
This strategy avoids any broker down-conversion, and means you can proceed on a topic-by-topic basis. It does not work for applications that are both consumers and producers of the same topic. Again, it has the risk that, if there is a problem with the upgraded clients, new-format messages might get added to the message log.
- Per-topic consumers first, with down conversion
  For each topic:
  1. Change the topic-level message.format.version to the old version (or rely on the topic defaulting to the broker-level log.message.format.version).
  2. Upgrade all the consuming and producing applications.
  3. Verify that the upgraded applications function correctly.
  4. Change the topic-level message.format.version to the new version.
This strategy requires broker down-conversion, but the load on the brokers is minimized because it is only required for a single topic (or small group of topics) at a time. It also works for applications that are both consumers and producers of the same topic. This approach ensures that the upgraded producers and consumers are working correctly before you commit to using the new message format version.
The main drawback of this approach is that it can be complicated to manage in a cluster with many topics and applications.
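For example, the topic-level message.format.version changes in this strategy could be applied to a Strimzi-managed topic with a patch like the following (the topic name and version are illustrative):
kubectl patch kafkatopic my-topic --type merge -p '{"spec":{"config":{"message.format.version":"2.8"}}}'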
Other strategies for upgrading client applications are also possible.
Note
|
It is also possible to apply multiple strategies.
For example, for the first few applications and topics the "per-topic consumers first, with down conversion" strategy can be used.
When this has proved successful, another, more efficient strategy can be used instead.
|
After you have upgraded Strimzi to 0.23.0, you must ensure that your custom resources are using API version v1beta2.
You can do this any time after upgrading to 0.23.0, but the upgrades must be completed before the next Strimzi minor version update.
Important
|
Upgrade of the custom resources to v1beta2 must be performed after upgrading the Cluster Operator, so the Cluster Operator can understand the resources.
|
Note
|
Upgrade of the custom resources to v1beta2 prepares Strimzi for a move to Kubernetes CRD v1 ,
which will be required for Kubernetes 1.22.
|
CLI upgrades to custom resources
Strimzi provides an API conversion tool with its release artifacts.
You can download its ZIP or TAR.GZ file from GitHub.
To use the tool, extract it and use the scripts in the bin directory.
From its CLI, you can then use the tool to convert the format of your custom resources to v1beta2 in one of two ways: by converting the YAML files that describe them, or by converting them directly in the Kubernetes cluster.
After the conversion of your custom resources, you must set v1beta2 as the storage API version in your CRDs by running the tool's crd-upgrade command.
Manual upgrades to custom resources
Instead of using the API conversion tool to update custom resources to v1beta2, you can manually update each custom resource to use v1beta2:
- Update the Kafka custom resource, including the configurations for the other components.
- Update the other custom resources that apply to your deployment.
The manual procedures show the changes that are made to each custom resource.
After these changes, you must use the API conversion tool to upgrade your CRDs.
Custom resources are edited and controlled using APIs added to Kubernetes by CRDs.
Put another way, CRDs extend the Kubernetes API to allow the creation of custom resources.
CRDs are themselves resources within Kubernetes.
They are installed in a Kubernetes cluster to define the versions of API for the custom resource.
Each version of the custom resource API can define its own schema for that version.
Kubernetes clients, including the Strimzi Operators, access the custom resources served by the Kubernetes API server using a URL path (API path), which includes the API version.
The introduction of v1beta2 updates the schemas of the custom resources.
Older API versions are deprecated.
The v1alpha1 API version is deprecated for the following Strimzi custom resources:
- Kafka
- KafkaConnect
- KafkaConnectS2I
- KafkaConnector
- KafkaMirrorMaker
- KafkaMirrorMaker2
- KafkaTopic
- KafkaUser
- KafkaBridge
- KafkaRebalance
The v1beta1 API version is deprecated for the following Strimzi custom resources:
- Kafka
- KafkaConnect
- KafkaConnectS2I
- KafkaMirrorMaker
- KafkaTopic
- KafkaUser
Important
|
The v1alpha1 and v1beta1 versions will be removed in the next minor release.
|
This procedure describes how to use the API conversion tool to convert YAML files describing the configuration for Strimzi custom resources into a format applicable to v1beta2.
To do so, you use the convert-file (cf) command.
The convert-file command can convert YAML files containing multiple documents.
For a multi-document YAML file, all the Strimzi custom resources it contains are converted.
Any non-Strimzi Kubernetes resources are replicated unmodified in the converted output file.
After you have converted the YAML file, you must apply the configuration to update the custom resource in the cluster.
Alternatively, if the GitOps synchronization mechanism is being used for updates on your cluster, you can use it to apply the changes.
The conversion is only complete when the custom resource is updated in the Kubernetes cluster.
Prerequisites
- A Cluster Operator supporting the v1beta2 API version is up and running.
- The API conversion tool, which is provided with the release artifacts.
- The tool requires Java 11.
Use the CLI help for more information on the API conversion tool, and the flags available for the convert-file command:
bin/api-conversion.sh help
bin/api-conversion.sh help convert-file
Use bin/api-conversion.cmd for this procedure if you are using Windows.
Table 8. Flags for YAML file conversion
| Flag | Description |
| ---- | ----------- |
| -f, --file=NAME-OF-YAML-FILE | Specifies the YAML file for the Strimzi custom resource being converted |
| -o, --output=NAME-OF-CONVERTED-YAML-FILE | Creates an output YAML file for the converted custom resource |
| --in-place | Updates the original source file with the converted YAML |
Procedure
-
Run the API conversion tool with the convert-file command and appropriate flags.
Example 1, converts a YAML file and displays the output, though the file does not change:
bin/api-conversion.sh convert-file --file input.yaml
Example 2, converts a YAML file, and writes the changes into the original source file:
bin/api-conversion.sh convert-file --file input.yaml --in-place
Example 3, converts a YAML file, and writes the changes into a new output file:
bin/api-conversion.sh convert-file --file input.yaml --output output.yaml
-
Update the custom resources using the converted configuration file.
kubectl apply -f CONVERTED-CONFIG-FILE
-
Verify that the custom resources have been converted.
kubectl get KIND CUSTOM-RESOURCE-NAME -o yaml
This procedure describes how to use the API conversion tool to convert Strimzi custom resources directly in the Kubernetes cluster into a format applicable to v1beta2.
To do so, you use the convert-resource (cr) command.
The command uses Kubernetes APIs to make the conversions.
You can specify one or more types of Strimzi custom resources, based on the kind property, or you can convert all types.
You can also target a specific namespace or all namespaces for conversion.
When targeting a namespace, you can convert all custom resources in that namespace, or convert a single custom resource by specifying its name and kind.
Prerequisites
- A Cluster Operator supporting the v1beta2 API version is up and running.
- The API conversion tool, which is provided with the release artifacts.
- The tool requires Java 11 (OpenJDK).
- The steps require a user admin account with RBAC permission to:
  - Get the Strimzi custom resources being converted using the --name option
  - List the Strimzi custom resources being converted without using the --name option
  - Replace the Strimzi custom resources being converted
Use the CLI help for more information on the API conversion tool, and the flags available for the convert-resource command:
bin/api-conversion.sh help
bin/api-conversion.sh help convert-resource
Use bin/api-conversion.cmd for this procedure if you are using Windows.
Table 9. Flags for converting custom resources
| Flag | Description |
| ---- | ----------- |
| -k, --kind | Specifies the kinds of custom resources to be converted, or converts all resources if not specified |
| -a, --all-namespaces | Converts custom resources in all namespaces |
| -n, --namespace | Specifies a Kubernetes namespace or OpenShift project, or uses the current namespace if not specified |
| --name | If --namespace and a single custom resource --kind are used, specifies the name of the custom resource being converted |
Procedure
-
Run the API conversion tool with the convert-resource command and appropriate flags.
Example 1, converts all Strimzi resources in current namespace:
bin/api-conversion.sh convert-resource
Example 2, converts all Strimzi resources in all namespaces:
bin/api-conversion.sh convert-resource --all-namespaces
Example 3, converts all Strimzi resources in the my-kafka namespace:
bin/api-conversion.sh convert-resource --namespace my-kafka
Example 4, converts only Kafka resources in all namespaces:
bin/api-conversion.sh convert-resource --all-namespaces --kind Kafka
Example 5, converts Kafka and Kafka Connect resources in all namespaces:
bin/api-conversion.sh convert-resource --all-namespaces --kind Kafka --kind KafkaConnect
Example 6, converts a Kafka custom resource named my-cluster in the my-kafka namespace:
bin/api-conversion.sh convert-resource --kind Kafka --namespace my-kafka --name my-cluster
-
Verify that the custom resources have been converted.
kubectl get KIND CUSTOM-RESOURCE-NAME -o yaml
This procedure describes how to use the API conversion tool to convert the CRDs that define the schemas used to instantiate and manage Strimzi-specific resources into a format applicable to v1beta2.
To do so, you use the crd-upgrade command.
Perform this procedure after converting all Strimzi custom resources in the whole Kubernetes cluster to v1beta2.
If you upgrade your CRDs first, and then convert your custom resources, you will need to run this command again.
The command updates spec.versions in the CRDs to declare v1beta2 as the storage API version.
The command also updates custom resources so they are stored under v1beta2.
New custom resource instances are created from the specification of the storage API version, so only one API version is ever marked as the storage version.
When you have upgraded the CRDs to use v1beta2 as the storage version, you should only use v1beta2 properties in your custom resources.
Prerequisites
- A Cluster Operator supporting the v1beta2 API version is up and running.
- The API conversion tool, which is provided with the release artifacts.
- The tool requires Java 11 (OpenJDK).
- Custom resources have been converted to v1beta2.
- The steps require a user admin account with RBAC permission to:
  - List the Strimzi custom resources in all namespaces
  - Replace the Strimzi custom resources being converted
  - Update CRDs
  - Replace the status of the CRDs
Use the CLI help for more information on the API conversion tool:
bin/api-conversion.sh help
Use bin/api-conversion.cmd for this procedure if you are using Windows.
Procedure
-
If you have not done so, convert your custom resources to use v1beta2.
You can use the API conversion tool to do this in one of two ways: by converting the YAML files that describe them, or by converting them directly in the Kubernetes cluster.
-
Run the API conversion tool with the crd-upgrade command.
bin/api-conversion.sh crd-upgrade
-
Verify that the CRDs have been upgraded so that v1beta2 is the storage version.
For example, for the Kafka topic CRD:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kafkatopics.kafka.strimzi.io
#...
spec:
group: kafka.strimzi.io
#...
versions:
- name: v1beta2
served: true
storage: true
#...
status:
#...
storedVersions:
- v1beta2
Procedure
Perform the following steps for each Kafka
custom resource in your deployment.
-
Update the Kafka
custom resource in an editor.
kubectl edit kafka KAFKA-CLUSTER
-
If you have not already done so, update .spec.kafka.listeners to the new generic listener format, as described in Updating listeners to the generic listener configuration.
Warning
|
The old listener format is not supported in API version v1beta2 .
|
-
If present, move affinity
from .spec.kafka.affinity
to .spec.kafka.template.pod.affinity
.
-
If present, move tolerations
from .spec.kafka.tolerations
to .spec.kafka.template.pod.tolerations
.
-
If present, remove .spec.kafka.template.tlsSidecarContainer
.
-
If present, remove .spec.kafka.tlsSidecarContainer
.
-
If either of the following policy configurations exist:
-
If either of the following loadbalancer listener configurations exist:
-
If type: external
logging is configured in .spec.kafka.logging
:
Replace the name
of the ConfigMap containing the logging configuration:
logging:
type: external
name: my-config-map
With the valueFrom.configMapKeyRef
field, and specify both the ConfigMap name
and the key
under which the logging is stored:
logging:
type: external
valueFrom:
configMapKeyRef:
name: my-config-map
key: log4j.properties
-
If the .spec.kafka.metrics
field is used to enable metrics:
-
Create a new ConfigMap that stores the YAML configuration for the JMX Prometheus exporter under a key.
The YAML must match what is currently in the .spec.kafka.metrics
field.
kind: ConfigMap
apiVersion: v1
metadata:
name: kafka-metrics
labels:
app: strimzi
data:
kafka-metrics-config.yaml: |
<YAML>
-
Add a .spec.kafka.metricsConfig
property that points to the ConfigMap and key:
metricsConfig:
type: jmxPrometheusExporter
valueFrom:
configMapKeyRef:
name: kafka-metrics
key: kafka-metrics-config.yaml
-
Delete the old .spec.kafka.metrics
field.
-
Save the file, exit the editor and wait for the updated custom resource to be reconciled.
What to do next
For each Kafka
custom resource, upgrade the configurations for ZooKeeper, Topic Operator, Entity Operator, and Cruise Control (if deployed) to support version v1beta2
.
This is described in the following procedures.
Procedure
Perform the following steps for each Kafka
custom resource in your deployment.
-
Update the Kafka
custom resource in an editor.
kubectl edit kafka KAFKA-CLUSTER
-
If present, move affinity
from .spec.zookeeper.affinity
to .spec.zookeeper.template.pod.affinity
.
-
If present, move tolerations
from .spec.zookeeper.tolerations
to .spec.zookeeper.template.pod.tolerations
.
-
If present, remove .spec.zookeeper.template.tlsSidecarContainer
.
-
If present, remove .spec.zookeeper.tlsSidecarContainer
.
-
If type: external logging is configured in .spec.zookeeper.logging:
Replace the name
of the ConfigMap containing the logging configuration:
logging:
type: external
name: my-config-map
With the valueFrom.configMapKeyRef
field, and specify both the ConfigMap name
and the key
under which the logging is stored:
logging:
type: external
valueFrom:
configMapKeyRef:
name: my-config-map
key: log4j.properties
-
If the .spec.zookeeper.metrics
field is used to enable metrics:
-
Create a new ConfigMap that stores the YAML configuration for the JMX Prometheus exporter under a key.
The YAML must match what is currently in the .spec.zookeeper.metrics
field.
kind: ConfigMap
apiVersion: v1
metadata:
name: kafka-metrics
labels:
app: strimzi
data:
zookeeper-metrics-config.yaml: |
<YAML>
-
Add a .spec.zookeeper.metricsConfig
property that points to the ConfigMap and key:
metricsConfig:
type: jmxPrometheusExporter
valueFrom:
configMapKeyRef:
name: kafka-metrics
key: zookeeper-metrics-config.yaml
-
Delete the old .spec.zookeeper.metrics
field.
-
Save the file, exit the editor and wait for the updated custom resource to be reconciled.
Procedure
Perform the following steps for each Kafka
custom resource in your deployment.
-
Update the Kafka
custom resource in an editor.
kubectl edit kafka KAFKA-CLUSTER
-
If Kafka.spec.topicOperator
is used:
-
Move affinity
from .spec.topicOperator.affinity
to .spec.entityOperator.template.pod.affinity
.
-
Move tolerations
from .spec.topicOperator.tolerations
to .spec.entityOperator.template.pod.tolerations
.
-
Move .spec.topicOperator.tlsSidecar
to .spec.entityOperator.tlsSidecar
.
-
After moving affinity
, tolerations
, and tlsSidecar
, move the remaining configuration in .spec.topicOperator
to .spec.entityOperator.topicOperator
.
-
If type: external
logging is configured in .spec.topicOperator.logging
:
Replace the name
of the ConfigMap containing the logging configuration:
logging:
type: external
name: my-config-map
With the valueFrom.configMapKeyRef
field, and specify both the ConfigMap name
and the key
under which the logging is stored:
logging:
type: external
valueFrom:
configMapKeyRef:
name: my-config-map
key: log4j2.properties
-
Save the file, exit the editor and wait for the updated custom resource to be reconciled.
Procedure
Perform the following steps for each Kafka
custom resource in your deployment.
-
Update the Kafka
custom resource in an editor.
kubectl edit kafka KAFKA-CLUSTER
-
Move affinity
from .spec.entityOperator.affinity
to .spec.entityOperator.template.pod.affinity
.
-
Move tolerations
from .spec.entityOperator.tolerations
to .spec.entityOperator.template.pod.tolerations
.
-
If type: external
logging is configured in .spec.entityOperator.userOperator.logging
or .spec.entityOperator.topicOperator.logging
:
Replace the name
of the ConfigMap containing the logging configuration:
logging:
type: external
name: my-config-map
With the valueFrom.configMapKeyRef
field, and specify both the ConfigMap name
and the key
under which the logging is stored:
logging:
type: external
valueFrom:
configMapKeyRef:
name: my-config-map
key: log4j2.properties
-
Save the file, exit the editor and wait for the updated custom resource to be reconciled.
Procedure
Perform the following steps for each Kafka.spec.cruiseControl
configuration in your Kafka cluster.
-
Update the Kafka
custom resource in an editor.
kubectl edit kafka KAFKA-CLUSTER
-
If type: external
logging is configured in .spec.cruiseControl.logging
:
Replace the name
of the ConfigMap containing the logging configuration:
logging:
type: external
name: my-config-map
With the valueFrom.configMapKeyRef
field, and specify both the ConfigMap name
and the key
under which the logging is stored:
logging:
type: external
valueFrom:
configMapKeyRef:
name: my-config-map
key: log4j2.properties
-
If the .spec.cruiseControl.metrics
field is used to enable metrics:
-
Create a new ConfigMap that stores the YAML configuration for the JMX Prometheus exporter under a key.
The YAML must match what is currently in the .spec.cruiseControl.metrics
field.
kind: ConfigMap
apiVersion: v1
metadata:
name: kafka-metrics
labels:
app: strimzi
data:
cruise-control-metrics-config.yaml: |
<YAML>
-
Add a .spec.cruiseControl.metricsConfig
property that points to the ConfigMap and key:
metricsConfig:
type: jmxPrometheusExporter
valueFrom:
configMapKeyRef:
name: kafka-metrics
key: cruise-control-metrics-config.yaml
-
Delete the old .spec.cruiseControl.metrics
field.
-
Save the file, exit the editor and wait for the updated custom resource to be reconciled.
Procedure
Perform the following steps for each Kafka
custom resource in your deployment.
-
Update the Kafka
custom resource in an editor.
kubectl edit kafka KAFKA-CLUSTER
-
Update the apiVersion
of the Kafka
custom resource to v1beta2
:
apiVersion: kafka.strimzi.io/v1beta1
apiVersion: kafka.strimzi.io/v1beta2
-
Save the file, exit the editor and wait for the updated custom resource to be reconciled.
Procedure
Perform the following steps for each KafkaConnect
custom resource in your deployment.
-
Update the KafkaConnect
custom resource in an editor.
kubectl edit kafkaconnect KAFKA-CONNECT-CLUSTER
-
If present, move:
KafkaConnect.spec.affinity
KafkaConnect.spec.tolerations
KafkaConnect.spec.template.pod.affinity
KafkaConnect.spec.template.pod.tolerations
spec:
# ...
affinity:
# ...
tolerations:
# ...
spec:
# ...
template:
pod:
affinity:
# ...
tolerations:
# ...
-
If type: external
logging is configured in .spec.logging
:
Replace the name
of the ConfigMap containing the logging configuration:
logging:
type: external
name: my-config-map
With the valueFrom.configMapKeyRef
field, and specify both the ConfigMap name
and the key
under which the logging is stored:
logging:
type: external
valueFrom:
configMapKeyRef:
name: my-config-map
key: log4j.properties
-
If the .spec.metrics
field is used to enable metrics:
-
Create a new ConfigMap that stores the YAML configuration for the JMX Prometheus exporter under a key.
The YAML must match what is currently in the .spec.metrics
field.
kind: ConfigMap
apiVersion: v1
metadata:
name: kafka-connect-metrics
labels:
app: strimzi
data:
connect-metrics-config.yaml: |
<YAML>
-
Add a .spec.metricsConfig
property that points to the ConfigMap and key:
metricsConfig:
type: jmxPrometheusExporter
valueFrom:
configMapKeyRef:
name: kafka-connect-metrics
key: connect-metrics-config.yaml
-
Delete the old .spec.metrics
field.
-
Update the apiVersion
of the KafkaConnect
custom resource to v1beta2
:
apiVersion: kafka.strimzi.io/v1beta1
apiVersion: kafka.strimzi.io/v1beta2
-
Save the file, exit the editor and wait for the updated custom resource to be reconciled.
Procedure
Perform the following steps for each KafkaConnectS2I
custom resource in your deployment.
-
Update the KafkaConnectS2I
custom resource in an editor.
kubectl edit kafkaconnects2i S2I-CLUSTER
-
If present, move:
KafkaConnectS2I.spec.affinity
KafkaConnectS2I.spec.tolerations
KafkaConnectS2I.spec.template.pod.affinity
KafkaConnectS2I.spec.template.pod.tolerations
spec:
# ...
affinity:
# ...
tolerations:
# ...
spec:
# ...
template:
pod:
affinity:
# ...
tolerations:
# ...
-
If type: external
logging is configured in .spec.logging
:
Replace the name
of the ConfigMap containing the logging configuration:
logging:
type: external
name: my-config-map
With the valueFrom.configMapKeyRef
field, and specify both the ConfigMap name
and the key
under which the logging is stored:
logging:
type: external
valueFrom:
configMapKeyRef:
name: my-config-map
key: log4j.properties
-
If the .spec.metrics
field is used to enable metrics:
-
Create a new ConfigMap that stores the YAML configuration for the JMX Prometheus exporter under a key.
The YAML must match what is currently in the .spec.metrics
field.
kind: ConfigMap
apiVersion: v1
metadata:
name: kafka-connect-s2i-metrics
labels:
app: strimzi
data:
connect-s2i-metrics-config.yaml: |
<YAML>
-
Add a .spec.metricsConfig
property that points to the ConfigMap and key:
metricsConfig:
type: jmxPrometheusExporter
valueFrom:
configMapKeyRef:
name: kafka-connect-s2i-metrics
key: connect-s2i-metrics-config.yaml
-
Delete the old .spec.metrics
field
-
Update the apiVersion
of the KafkaConnectS2I
custom resource to v1beta2
:
apiVersion: kafka.strimzi.io/v1beta1
apiVersion: kafka.strimzi.io/v1beta2
-
Save the file, exit the editor and wait for the updated custom resource to be reconciled.
Procedure
Perform the following steps for each KafkaMirrorMaker
custom resource in your deployment.
-
Update the KafkaMirrorMaker
custom resource in an editor.
kubectl edit kafkamirrormaker MIRROR-MAKER
-
If present, move:
KafkaMirrorMaker.spec.affinity
KafkaMirrorMaker.spec.tolerations
KafkaMirrorMaker.spec.template.pod.affinity
KafkaMirrorMaker.spec.template.pod.tolerations
spec:
# ...
affinity:
# ...
tolerations:
# ...
spec:
# ...
template:
pod:
affinity:
# ...
tolerations:
# ...
-
If type: external
logging is configured in .spec.logging
:
Replace the name
of the ConfigMap containing the logging configuration:
logging:
type: external
name: my-config-map
With the valueFrom.configMapKeyRef
field, and specify both the ConfigMap name
and the key
under which the logging is stored:
logging:
type: external
valueFrom:
configMapKeyRef:
name: my-config-map
key: log4j.properties
-
If the .spec.metrics
field is used to enable metrics:
-
Create a new ConfigMap that stores the YAML configuration for the JMX Prometheus exporter under a key.
The YAML must match what is currently in the .spec.metrics
field.
kind: ConfigMap
apiVersion: v1
metadata:
name: kafka-mm-metrics
labels:
app: strimzi
data:
mm-metrics-config.yaml: |
<YAML>
-
Add a .spec.metricsConfig
property that points to the ConfigMap and key:
metricsConfig:
type: jmxPrometheusExporter
valueFrom:
configMapKeyRef:
name: kafka-mm-metrics
key: mm-metrics-config.yaml
-
Delete the old .spec.metrics
field.
-
Update the apiVersion
of the KafkaMirrorMaker
custom resource to v1beta2
:
apiVersion: kafka.strimzi.io/v1beta1
apiVersion: kafka.strimzi.io/v1beta2
-
Save the file, exit the editor and wait for the updated custom resource to be reconciled.
Procedure
Perform the following steps for each KafkaMirrorMaker2
custom resource in your deployment.
-
Update the KafkaMirrorMaker2
custom resource in an editor.
kubectl edit kafkamirrormaker2 MIRROR-MAKER-2
-
If present, move affinity
from .spec.affinity
to .spec.template.pod.affinity
.
-
If present, move tolerations
from .spec.tolerations
to .spec.template.pod.tolerations
.
-
If type: external
logging is configured in .spec.logging
:
Replace the name
of the ConfigMap containing the logging configuration:
logging:
type: external
name: my-config-map
With the valueFrom.configMapKeyRef
field, and specify both the ConfigMap name
and the key
under which the logging is stored:
logging:
type: external
valueFrom:
configMapKeyRef:
name: my-config-map
key: log4j.properties
-
If the .spec.metrics
field is used to enable metrics:
-
Create a new ConfigMap that stores the YAML configuration for the JMX Prometheus exporter under a key.
The YAML must match what is currently in the .spec.metrics
field.
kind: ConfigMap
apiVersion: v1
metadata:
name: kafka-mm2-metrics
labels:
app: strimzi
data:
mm2-metrics-config.yaml: |
<YAML>
-
Add a .spec.metricsConfig
property that points to the ConfigMap and key:
metricsConfig:
type: jmxPrometheusExporter
valueFrom:
configMapKeyRef:
name: kafka-mm2-metrics
key: mm2-metrics-config.yaml
-
Delete the old .spec.metrics
field.
-
Update the apiVersion
of the KafkaMirrorMaker2
custom resource to v1beta2
:
apiVersion: kafka.strimzi.io/v1alpha1
apiVersion: kafka.strimzi.io/v1beta2
-
Save the file, exit the editor and wait for the updated custom resource to be reconciled.
Procedure
Perform the following steps for each KafkaBridge
resource in your deployment.
-
Update the KafkaBridge
custom resource in an editor.
kubectl edit kafkabridge KAFKA-BRIDGE
-
If type: external
logging is configured in KafkaBridge.spec.logging
:
Replace the name
of the ConfigMap containing the logging configuration:
logging:
type: external
name: my-config-map
With the valueFrom.configMapKeyRef
field, and specify both the ConfigMap name
and the key
under which the logging is stored:
logging:
type: external
valueFrom:
configMapKeyRef:
name: my-config-map
key: log4j2.properties
-
Update the apiVersion
of the KafkaBridge
custom resource to v1beta2
:
apiVersion: kafka.strimzi.io/v1alpha1
apiVersion: kafka.strimzi.io/v1beta2
-
Save the file, exit the editor and wait for the updated custom resource to be reconciled.
Procedure
Perform the following steps for each KafkaUser
custom resource in your deployment.
-
Update the KafkaUser
custom resource in an editor.
kubectl edit kafkauser KAFKA-USER
-
Update the apiVersion
of the KafkaUser
custom resource to v1beta2
:
apiVersion: kafka.strimzi.io/v1beta1
apiVersion: kafka.strimzi.io/v1beta2
-
Save the file, exit the editor and wait for the updated custom resource to be reconciled.
Procedure
Perform the following steps for each KafkaTopic
custom resource in your deployment.
-
Update the KafkaTopic
custom resource in an editor.
kubectl edit kafkatopic KAFKA-TOPIC
-
Update the apiVersion
of the KafkaTopic
custom resource to v1beta2
:
apiVersion: kafka.strimzi.io/v1beta1
apiVersion: kafka.strimzi.io/v1beta2
-
Save the file, exit the editor and wait for the updated custom resource to be reconciled.
Procedure
Perform the following steps for each KafkaConnector
custom resource in your deployment.
-
Update the KafkaConnector
custom resource in an editor.
kubectl edit kafkaconnector KAFKA-CONNECTOR
-
Update the apiVersion
of the KafkaConnector
custom resource to v1beta2
:
apiVersion: kafka.strimzi.io/v1alpha1
apiVersion: kafka.strimzi.io/v1beta2
-
Save the file, exit the editor and wait for the updated custom resource to be reconciled.
Procedure
Perform the following steps for each KafkaRebalance
custom resource in your deployment.
-
Update the KafkaRebalance
custom resource in an editor.
kubectl edit kafkarebalance KAFKA-REBALANCE
-
Update the apiVersion
of the KafkaRebalance
custom resource to v1beta2
:
apiVersion: kafka.strimzi.io/v1alpha1
apiVersion: kafka.strimzi.io/v1beta2
-
Save the file, exit the editor and wait for the updated custom resource to be reconciled.
You can upgrade Kafka consumers and Kafka Streams applications to use the incremental cooperative rebalance protocol for partition rebalances instead of the default eager rebalance protocol. The new protocol was added in Kafka 2.4.0.
Consumers keep their partition assignments in a cooperative rebalance and only revoke them at the end of the process, if needed to achieve a balanced cluster. This reduces the unavailability of the consumer group or Kafka Streams application.
Note
|
Upgrading to the incremental cooperative rebalance protocol is optional. The eager rebalance protocol is still supported.
|
Procedure
To upgrade a Kafka consumer to use the incremental cooperative rebalance protocol:
1. Replace the Kafka clients .jar file with the new version.
2. In the consumer configuration, append cooperative-sticky to the partition.assignment.strategy. For example, if the range strategy is set, change the configuration to range, cooperative-sticky.
3. Restart each consumer in the group in turn, waiting for the consumer to rejoin the group after each restart.
4. Reconfigure each consumer in the group by removing the earlier partition.assignment.strategy from the consumer configuration, leaving only the cooperative-sticky strategy.
5. Restart each consumer in the group in turn, waiting for the consumer to rejoin the group after each restart.
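For example, during step 2 a consumer's configuration might temporarily list both assignors; in properties form, the fully-qualified class names correspond to the range and cooperative-sticky strategies mentioned above:
partition.assignment.strategy=org.apache.kafka.clients.consumer.RangeAssignor,org.apache.kafka.clients.consumer.CooperativeStickyAssignor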
To upgrade a Kafka Streams application to use the incremental cooperative rebalance protocol:
1. Replace the Kafka Streams .jar file with the new version.
2. In the Kafka Streams configuration, set the upgrade.from configuration parameter to the Kafka version you are upgrading from (for example, 2.3).
3. Restart each of the stream processors (nodes) in turn.
4. Remove the upgrade.from configuration parameter from the Kafka Streams configuration.
5. Restart each of the stream processors (nodes) in turn.
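For example, for the first rolling restart a Kafka Streams application might carry the following setting, removed again in step 4 (the version shown assumes you are upgrading from Kafka 2.3):
upgrade.from=2.3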