FIPS (Federal Information Processing Standards) are standards for computer security and interoperability. One of these standards, FIPS 140, specifies, among other things, requirements for cryptographic modules. When running Strimzi on a FIPS-enabled Kubernetes cluster, the OpenJDK Java runtime used in our container images automatically detects it and switches into a special FIPS mode. In this mode, only approved and validated security libraries and algorithms can be used.
Older Strimzi versions did not work well with the FIPS mode enabled. Among the features that OpenJDK disabled in the FIPS mode was support for TLS certificates in PKCS12 format. And since Strimzi relies on these, it did not work properly.
That is why we added a configuration option in Strimzi 0.28 to disable FIPS mode in the OpenJDK Java runtime.
You can set the environment variable `FIPS_MODE` in the Strimzi Cluster Operator deployment to `disabled`, and the operator will disable the OpenJDK FIPS mode. This allows you to run Strimzi and Apache Kafka on your FIPS-enabled cluster. But of course, it will also use unapproved and unvalidated cryptographic modules, so your Kafka cluster will not be FIPS compliant.
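For illustration, disabling the FIPS mode might look something like the following sketch of the Cluster Operator Deployment (the Deployment and container names assume the default Strimzi installation files; adjust them to your installation):

```yaml
# Fragment of the Strimzi Cluster Operator Deployment with the
# FIPS_MODE environment variable set to "disabled".
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-cluster-operator
spec:
  template:
    spec:
      containers:
        - name: strimzi-cluster-operator
          env:
            # Tells the operator to disable the OpenJDK FIPS mode
            - name: FIPS_MODE
              value: "disabled"
```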
## Improvements to FIPS support in Strimzi 0.33
In Strimzi 0.33, we bring further improvements to how we support FIPS. One of the main changes in 0.33 is that Strimzi moved to Java 17 in its container images. The latest versions of OpenJDK 17 add improved support for PKCS12 certificate stores, which laid the foundation for our improved FIPS support. Thanks to that, Strimzi can now run and use TLS without disabling the FIPS mode.
But to get everything working, we needed to do a bit more.
### Updates to PKCS12 algorithms
While OpenJDK 17 added PKCS12 support in the FIPS mode, it still requires the PKCS12 stores to use specific algorithms for key and certificate encryption and for the MAC digest. So we had to update how we generate the PKCS12 certificate stores in Strimzi. This was already done in Strimzi 0.30, as we were aware of this requirement.
### Using the default SecureRandom implementation
In previous Strimzi versions, we also always automatically configured the `SecureRandom` implementation in the Kafka brokers to `SHA1PRNG`. In older Java versions, `SHA1PRNG` delivered significant performance improvements over the default `SecureRandom`. In Java 17, the performance differences are mostly gone, and `SHA1PRNG` is not FIPS compatible. So in Strimzi 0.33, we decided to stop configuring `SHA1PRNG` automatically. We now use the `SecureRandom` implementation that is the default in the Java environment. When running with FIPS enabled, this provides a `SecureRandom` implementation that is FIPS compliant.
If for some reason you want to use a specific `SecureRandom` implementation, you can configure it in the `Kafka` custom resource under `.spec.kafka.config` using the `ssl.secure.random.implementation` configuration option.
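For example, overriding the implementation might look like this sketch (the cluster name `my-cluster` and the `SHA1PRNG` value are just illustrations; simply omit the option to keep the FIPS-compliant Java default):

```yaml
# Fragment of a Kafka custom resource pinning a specific
# SecureRandom implementation for the brokers.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    config:
      # Overrides the default SecureRandom implementation.
      # Note: SHA1PRNG is not FIPS compatible.
      ssl.secure.random.implementation: SHA1PRNG
      # ... other broker configuration ...
```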
### Changing the default password length for SCRAM-SHA credentials
Another issue we had to deal with was the credentials for SCRAM-SHA-512 authentication. The SCRAM-SHA-512 authentication which is supported by Strimzi is FIPS compatible. But to work properly, it requires the passwords to be at least 32 characters long. If the password is shorter, you will get an exception similar to this:
```
Caused by: java.security.InvalidKeyException: init() failed
    at sun.security.pkcs11.P11Mac.engineInit(P11Mac.java:228) ~[jdk.crypto.cryptoki:?]
    at javax.crypto.Mac.chooseProvider(Mac.java:365) ~[?:?]
    at javax.crypto.Mac.init(Mac.java:434) ~[?:?]
    ...
Caused by: sun.security.pkcs11.wrapper.PKCS11Exception: CKR_KEY_SIZE_RANGE
    at sun.security.pkcs11.wrapper.PKCS11.C_SignInit(Native Method) ~[jdk.crypto.cryptoki:?]
    at sun.security.pkcs11.P11Mac.initialize(P11Mac.java:186) ~[jdk.crypto.cryptoki:?]
    at sun.security.pkcs11.P11Mac.engineInit(P11Mac.java:226) ~[jdk.crypto.cryptoki:?]
    at javax.crypto.Mac.chooseProvider(Mac.java:365) ~[?:?]
    at javax.crypto.Mac.init(Mac.java:434) ~[?:?]
    ...
```
The length of the SCRAM-SHA passwords generated by the Strimzi User Operator is configurable, but by default they are 12 characters long. That is why in Strimzi 0.33 we changed the default length to 32 characters, to make the User Operator work on FIPS-enabled clusters out of the box.
If you want to use a different password length, you can change it with the `STRIMZI_SCRAM_SHA_PASSWORD_LENGTH` environment variable. It can be set in the `Kafka` custom resource under `.spec.entityOperator.template.userOperatorContainer.env`, or directly in the `Deployment` of the standalone User Operator.
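Setting it in the `Kafka` custom resource might look like this sketch (the cluster name `my-cluster` is hypothetical, and the value `40` is just an example of a custom length):

```yaml
# Fragment of a Kafka custom resource configuring the password
# length used by the User Operator for SCRAM-SHA credentials.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  entityOperator:
    userOperator: {}
    template:
      userOperatorContainer:
        env:
          # Lengths below 32 will not work with FIPS enabled
          - name: STRIMZI_SCRAM_SHA_PASSWORD_LENGTH
            value: "40"
```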
## Upgrading
Thanks to the improvements described above, when you deploy a new Apache Kafka cluster with Strimzi 0.33 or newer on a FIPS-enabled Kubernetes cluster, everything should work out of the box.
But what if you are already running Strimzi and Apache Kafka on a FIPS-enabled Kubernetes cluster with the FIPS mode disabled through Strimzi's `FIPS_MODE` option?
In such a case, you can also run in FIPS mode. But to avoid any possible issues, you need to go through the following steps.
First, upgrade to Strimzi 0.33 or newer, but keep the `FIPS_MODE` environment variable set to `disabled` to keep the FIPS mode disabled.
Once you are running the new Strimzi version with the FIPS mode still disabled, update the certificates and the SCRAM-SHA passwords:
- If you initially deployed your cluster with a Strimzi version older than 0.30, it might still use PKCS12 stores with old encryption and digest algorithms that are not supported with FIPS enabled. To get the certificates recreated with the new algorithms, renew the Cluster and Clients CAs. If you use CAs generated by the Cluster Operator, you can use an annotation to trigger the renewal. If you use your own CAs, you can follow the docs to renew your own CA certificates.
- If you use SCRAM-SHA authentication, check the password length of your users. If the passwords are less than 32 characters long, you will have to generate new ones. You can do that by deleting the user secret and having the User Operator generate a new one with a password of sufficient length. Alternatively, if you provided your own password in the `.spec.authentication.password` section of the `KafkaUser` custom resource, update the password in the Kubernetes Secret referenced by that configuration. Don't forget to update your clients to use the new passwords.
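For the CA renewal step, the annotation on the CA certificate Secret looks something like this sketch (the Secret name follows from your cluster name; `my-cluster` is hypothetical here, and the Clients CA Secret can be annotated the same way):

```yaml
# Annotating the Cluster CA certificate Secret asks the Cluster
# Operator to renew the CA at the next reconciliation; the same
# strimzi.io/force-renew annotation works on the Clients CA Secret
# (my-cluster-clients-ca-cert).
apiVersion: v1
kind: Secret
metadata:
  name: my-cluster-cluster-ca-cert
  annotations:
    strimzi.io/force-renew: "true"
```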
When all the certificates are using the correct algorithms and the SCRAM-SHA passwords are sufficiently long, you can enable the FIPS mode. You do so by removing the `FIPS_MODE` environment variable from the Strimzi Cluster Operator deployment. When the operator restarts, it rolls all the operands to enable the FIPS mode. When complete, all your Kafka clusters will be running with the OpenJDK FIPS mode enabled.
Of course, if you still prefer to run with the FIPS mode disabled, the `FIPS_MODE` configuration option added in Strimzi 0.28 is still available.
## Known limitations
Strimzi supports exposing JMX in ZooKeeper, Kafka brokers, Kafka Connect, and MirrorMaker 2. JMX authentication using a username and password is currently not supported in the FIPS mode. As a workaround, you can use JMX without authentication, or use the Prometheus metrics for monitoring instead of JMX.
Another limitation is the JMX Trans support. The JMX Trans tool appears to be unmaintained and does not support Java 17. The Strimzi JMX Trans container image is the only one still based on Java 11, so it does not benefit from the improved FIPS support in OpenJDK 17. If you use JMX Trans, you might therefore encounter issues as well.
Note: JMX Trans support in Strimzi is deprecated and is planned to be removed in Strimzi 0.35. For more information, please read the related Strimzi proposal.
## Conclusion
While this blog post covers improvements to FIPS support in Strimzi, most of the work was done by the OpenJDK project. Without their support, we would not have been able to achieve such a great improvement. The changes done directly in Strimzi were very small in comparison.
Also, keep in mind that the actual configuration and environment of each user might differ. So if you run FIPS-enabled Kubernetes clusters, don’t forget to test the improved FIPS support first in your test environments before enabling it in production.