Use the Strimzi Kafka Bridge to make HTTP requests to a Kafka cluster.
You can use the Kafka Bridge to integrate HTTP client applications with your Kafka cluster.
Install the Strimzi Kafka Bridge to run in the same environment as your Kafka cluster.
You can download and add the Kafka Bridge installation artifacts to your host machine. To try out the Kafka Bridge in your local environment, see the Kafka Bridge quickstart.
If you deployed Strimzi on Kubernetes, you can use the Strimzi Cluster Operator to deploy the Kafka Bridge to the Kubernetes cluster. You’ll need a running Kafka cluster that was deployed by the Cluster Operator in a Kubernetes namespace. You can configure your deployment to access the Kafka Bridge outside the Kubernetes cluster.
The Strimzi documentation describes how to deploy the Kafka Bridge with Strimzi.
The Kafka Bridge provides a RESTful interface that allows HTTP-based clients to interact with a Kafka cluster. It offers the advantages of a web API connection to Strimzi, without the need for client applications to interpret the Kafka protocol.
The API has two main resources, consumers and topics, that are exposed and made accessible through endpoints to interact with consumers and producers in your Kafka cluster. The resources relate only to the Kafka Bridge, not the consumers and producers connected directly to Kafka.
The Kafka Bridge supports HTTP requests to a Kafka cluster, with methods to:
Send messages to a topic.
Retrieve messages from topics.
Retrieve a list of partitions for a topic.
Create and delete consumers.
Subscribe consumers to topics, so that they start receiving messages from those topics.
Retrieve a list of topics that a consumer is subscribed to.
Unsubscribe consumers from topics.
Assign partitions to consumers.
Commit a list of consumer offsets.
Seek on a partition, so that a consumer starts receiving messages from the first or last offset position, or a given offset position.
The methods provide JSON responses and HTTP response code error handling. Messages can be sent in JSON or binary formats.
Clients can produce and consume messages without the requirement to use the native Kafka protocol.
Kafka Bridge APIs use the OpenAPI Specification (OAS). OAS provides a standard framework for describing and implementing HTTP APIs.
The Kafka Bridge OpenAPI specification is in JSON format.
You can find the OpenAPI JSON files in the src/main/resources/ folder of the Kafka Bridge source download files. The download files are available from the GitHub release page.
You can also use the GET /openapi method to retrieve the OpenAPI v2 specification in JSON format.
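For example, assuming the Kafka Bridge is running locally on port 8080, as in the quickstart, you can retrieve the specification with a request such as:

curl -X GET http://localhost:8080/openapi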
You can configure the following between the Kafka Bridge and your Kafka cluster:
TLS or SASL-based authentication
A TLS-encrypted connection
You configure the Kafka Bridge for authentication through its properties file.
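As a minimal sketch, the Kafka-related properties for a TLS-encrypted connection might look like the following in the properties file. The truststore path and password are illustrative placeholders:

kafka.security.protocol=SSL
kafka.ssl.truststore.location=/opt/strimzi/truststore.p12
kafka.ssl.truststore.type=PKCS12
kafka.ssl.truststore.password=<truststore_password>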
You can also use ACLs in Kafka brokers to restrict the topics that can be consumed and produced using the Kafka Bridge.
Authentication and encryption between HTTP clients and the Kafka Bridge are not supported directly by the Kafka Bridge. Requests sent from clients to the Kafka Bridge are sent without authentication or encryption. Requests must use HTTP rather than HTTPS.
You can combine the Kafka Bridge with the following tools to secure it:
Network policies and firewalls that define which pods can access the Kafka Bridge
Reverse proxies (for example, OAuth 2.0)
API gateways
Specify data formats and HTTP headers to ensure valid requests are submitted to the Kafka Bridge.
API request and response bodies are always encoded as JSON.
When performing consumer operations, POST requests must provide the following Content-Type header if there is a non-empty body:
Content-Type: application/vnd.kafka.v2+json
When performing producer operations, POST requests must provide Content-Type headers specifying the embedded data format of the messages produced. This can be either json or binary.
Embedded data format | Content-Type header
---|---
JSON | Content-Type: application/vnd.kafka.json.v2+json
Binary | Content-Type: application/vnd.kafka.binary.v2+json
The embedded data format is set per consumer, as described in the next section.
The Content-Type must not be set if the POST request has an empty body.
An empty body can be used to create a consumer with the default values.
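For example, this request creates a consumer with default values in an illustrative consumer group, omitting both the body and the Content-Type header:

curl -X POST http://localhost:8080/consumers/my-group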
The embedded data format is the format of the Kafka messages that are transmitted, over HTTP, from a producer to a consumer using the Kafka Bridge. Two embedded data formats are supported: JSON and binary.
When creating a consumer using the /consumers/groupid endpoint, the POST request body must specify an embedded data format of either JSON or binary. This is specified in the format field, for example:
{
"name": "my-consumer",
"format": "binary", # (1)
# ...
}
A binary embedded data format.
The embedded data format specified when creating a consumer must match the data format of the Kafka messages it will consume.
If you choose to specify a binary embedded data format, subsequent producer requests must provide the binary data in the request body as Base64-encoded strings. For example, when sending messages using the /topics/topicname endpoint, records.value must be encoded in Base64:
{
  "records": [
    {
      "key": "my-key",
      "value": "ZWR3YXJkdGhldGhyZWVsZWdnZWRjYXQ="
    }
  ]
}
Producer requests must also provide a Content-Type header that corresponds to the embedded data format, for example, Content-Type: application/vnd.kafka.binary.v2+json.
When sending messages using the /topics endpoint, you enter the message payload in the request body, in the records parameter. The records parameter can contain any of these optional fields:
Message headers
Message key
Message value
Destination partition
Example POST request to the /topics endpoint:

curl -X POST \
  http://localhost:8080/topics/my-topic \
  -H 'content-type: application/vnd.kafka.json.v2+json' \
  -d '{
    "records": [
      {
        "key": "my-key",
        "value": "sales-lead-0001",
        "partition": 2,
        "headers": [
          {
            "key": "key1",
            "value": "QXBhY2hlIEthZmthIGlzIHRoZSBib21iIQ==" (1)
          }
        ]
      }
    ]
  }'
The header value in binary format and encoded as Base64.
After creating a consumer, all subsequent GET requests must provide an Accept header in the following format:

Accept: application/vnd.kafka.EMBEDDED-DATA-FORMAT.v2+json

The EMBEDDED-DATA-FORMAT is either json or binary.
For example, when retrieving records for a subscribed consumer using an embedded data format of JSON, include this Accept header:
Accept: application/vnd.kafka.json.v2+json
Cross-Origin Resource Sharing (CORS) allows you to specify allowed methods and originating URLs for accessing the Kafka cluster in your Kafka Bridge HTTP configuration.
# ...
http.cors.enabled=true
http.cors.allowedOrigins=https://strimzi.io
http.cors.allowedMethods=GET,POST,PUT,DELETE,OPTIONS,PATCH
CORS allows for simple and preflighted requests between origin sources on different domains.
Simple requests are suitable for standard requests using the GET, HEAD, and POST methods.
A preflighted request sends an HTTP OPTIONS request as an initial check that the actual request is safe to send. On confirmation, the actual request is sent. Preflight requests are suitable for methods that require greater safeguards, such as PUT and DELETE, and for requests that use non-standard headers.
All requests require an Origin value in their header, which is the source of the HTTP request. For example, this simple request header specifies the origin as https://strimzi.io.
Origin: https://strimzi.io
The header information is added to the request.
curl -v -X GET HTTP-ADDRESS/bridge-consumer/records \
  -H 'Origin: https://strimzi.io' \
  -H 'content-type: application/vnd.kafka.v2+json'
In the response from the Kafka Bridge, an Access-Control-Allow-Origin header is returned.
HTTP/1.1 200 OK
Access-Control-Allow-Origin: * (1)
Returning an asterisk (*) shows the resource can be accessed by any domain.
An initial preflight request is sent to the Kafka Bridge using an OPTIONS method. The HTTP OPTIONS request sends header information to check that the Kafka Bridge will allow the actual request. Here the preflight request checks that a POST request is valid from https://strimzi.io.
OPTIONS /my-group/instances/my-user/subscription HTTP/1.1
Origin: https://strimzi.io
Access-Control-Request-Method: POST (1)
Access-Control-Request-Headers: Content-Type (2)
Kafka Bridge is alerted that the actual request is a POST request.
The actual request will be sent with a Content-Type header.
OPTIONS is added to the header information of the preflight request.
curl -v -X OPTIONS HTTP-ADDRESS/my-group/instances/my-user/subscription \
  -H 'Origin: https://strimzi.io' \
  -H 'Access-Control-Request-Method: POST' \
  -H 'content-type: application/vnd.kafka.v2+json'
The Kafka Bridge responds to the initial request to confirm that the request will be accepted. The response header returns allowed origins, methods, and headers.
HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://strimzi.io
Access-Control-Allow-Methods: GET,POST,PUT,DELETE,OPTIONS,PATCH
Access-Control-Allow-Headers: content-type
If the origin or method is rejected, an error message is returned.
The actual request does not require the Access-Control-Request-Method header, as it was confirmed in the preflight request, but it does require the Origin header.
curl -v -X POST HTTP-ADDRESS/topics/bridge-topic \
-H 'Origin: https://strimzi.io' \
-H 'content-type: application/vnd.kafka.v2+json'
The response shows the originating URL is allowed.
HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://strimzi.io
You can set a different log level for each operation that is defined by the Kafka Bridge OpenAPI specification.
Each operation has a corresponding API endpoint through which the bridge receives requests from HTTP clients. You can change the log level on each endpoint to produce more or less fine-grained logging information about the incoming and outgoing HTTP requests.
Loggers are defined in the log4j.properties file, which has the following default configuration for the healthy and ready endpoints:
log4j.logger.http.openapi.operation.healthy=WARN, out
log4j.additivity.http.openapi.operation.healthy=false
log4j.logger.http.openapi.operation.ready=WARN, out
log4j.additivity.http.openapi.operation.ready=false
The log level of all other operations is set to INFO by default.
Loggers are formatted as follows:
log4j.logger.http.openapi.operation.<operation_id>
Where <operation_id> is the identifier of one of the following operations:
createConsumer
deleteConsumer
subscribe
unsubscribe
poll
assign
commit
send
sendToPartition
seekToBeginning
seekToEnd
seek
healthy
ready
openapi
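For example, a sketch of log4j.properties entries that raise logging for the send operation to DEBUG, following the same pattern as the default configuration shown above:

log4j.logger.http.openapi.operation.send=DEBUG, out
log4j.additivity.http.openapi.operation.send=false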
Use this quickstart to try out the Strimzi Kafka Bridge in your local development environment.
You will learn how to do the following:
Produce messages to topics and partitions in your Kafka cluster
Create a Kafka Bridge consumer
Perform basic consumer operations, such as subscribing the consumer to topics and retrieving the messages that you produced
In this quickstart, HTTP requests are formatted as curl commands that you can copy and paste to your terminal.
Ensure you have the prerequisites and then follow the tasks in the order provided in this chapter.
In this quickstart, you will produce and consume messages in JSON format.
A Kafka cluster is running on the host machine.
A zipped distribution of the Strimzi Kafka Bridge is available for download.
Download the latest version of the Strimzi Kafka Bridge archive from the GitHub release page.
This procedure describes how to configure the Kafka and HTTP connection properties used by the Strimzi Kafka Bridge.
You configure the Kafka Bridge, as any other Kafka client, using appropriate prefixes for Kafka-related properties:

kafka. for general configuration that applies to producers and consumers, such as server connection and security.
kafka.consumer. for consumer-specific configuration passed only to the consumer.
kafka.producer. for producer-specific configuration passed only to the producer.
As well as enabling HTTP access to a Kafka cluster, HTTP properties provide the capability to enable and define access control for the Kafka Bridge through Cross-Origin Resource Sharing (CORS). CORS is an HTTP mechanism that allows browser access to selected resources from more than one origin. To configure CORS, you define a list of allowed resource origins and HTTP methods to access them. Additional HTTP headers in requests describe the CORS origins that are permitted access to the Kafka cluster.
Edit the application.properties file provided with the Strimzi Kafka Bridge installation archive.
Use the properties file to specify Kafka and HTTP-related properties, and to enable distributed tracing.
Configure standard Kafka-related properties, including properties specific to the Kafka consumers and producers. Use:

kafka.bootstrap.servers to define the host/port connections to the Kafka cluster
kafka.producer.acks to provide acknowledgments to the HTTP client
kafka.consumer.auto.offset.reset to determine how to manage reset of the offset in Kafka

For more information on configuration of Kafka properties, see the Apache Kafka website.
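As a minimal sketch, the Kafka-related section of the properties file might look like this, assuming a local broker and illustrative values:

kafka.bootstrap.servers=localhost:9092
kafka.producer.acks=1
kafka.consumer.auto.offset.reset=earliest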
Configure HTTP-related properties to enable HTTP access to the Kafka cluster.
For example:
bridge.id=my-bridge
http.enabled=true
http.host=0.0.0.0
http.port=8080 (1)
http.cors.enabled=true (2)
http.cors.allowedOrigins=https://strimzi.io (3)
http.cors.allowedMethods=GET,POST,PUT,DELETE,OPTIONS,PATCH (4)
The default HTTP configuration for the Kafka Bridge to listen on port 8080.
Set to true
to enable CORS.
Comma-separated list of allowed CORS origins. You can use a URL or a Java regular expression.
Comma-separated list of allowed HTTP methods for CORS.
Follow this procedure to install the Strimzi Kafka Bridge.
If you have not already done so, unzip the Kafka Bridge installation archive to any directory.
Run the Kafka Bridge script using the configuration properties as a parameter:
For example:
./bin/kafka_bridge_run.sh --config-file=<path>/configfile.properties
Check the log to confirm that the Kafka Bridge started successfully:
HTTP-Kafka Bridge started and listening on port 8080
HTTP-Kafka Bridge bootstrap servers localhost:9092
Use the Kafka Bridge to produce messages to a Kafka topic in JSON format by using the topics endpoint.
You can produce messages to topics in JSON format by using the topics endpoint. You can specify destination partitions for messages in the request body. The partitions endpoint provides an alternative method for specifying a single destination partition for all messages as a path parameter.
In this procedure, messages are produced to a topic called bridge-quickstart-topic.
The Kafka cluster has a topic with three partitions.
You can use the kafka-topics.sh utility to create topics.
bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic bridge-quickstart-topic --partitions 3 --replication-factor 1
bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic bridge-quickstart-topic
Note: If you deployed Strimzi on Kubernetes, you can create a topic using the KafkaTopic custom resource.
Using the Kafka Bridge, produce three messages to the topic you created:
curl -X POST \
http://localhost:8080/topics/bridge-quickstart-topic \
-H 'content-type: application/vnd.kafka.json.v2+json' \
-d '{
"records": [
{
"key": "my-key",
"value": "sales-lead-0001"
},
{
"value": "sales-lead-0002",
"partition": 2
},
{
"value": "sales-lead-0003"
}
]
}'
sales-lead-0001 is sent to a partition based on the hash of the key.
sales-lead-0002 is sent directly to partition 2.
sales-lead-0003 is sent to a partition in the bridge-quickstart-topic topic using a round-robin method.
If the request is successful, the Kafka Bridge returns an offsets array, along with a 200 code and a content-type header of application/vnd.kafka.v2+json. For each message, the offsets array describes:
The partition that the message was sent to
The current message offset of the partition
#...
{
"offsets":[
{
"partition":0,
"offset":0
},
{
"partition":2,
"offset":0
},
{
"partition":0,
"offset":1
}
]
}
Make other curl requests to find information on topics and partitions: list all topics, retrieve the details of a specific topic, list the partitions of a topic, retrieve the details of a single partition, and retrieve a summary of a partition's offsets.
curl -X GET \
http://localhost:8080/topics
[
"__strimzi_store_topic",
"__strimzi-topic-operator-kstreams-topic-store-changelog",
"bridge-quickstart-topic",
"my-topic"
]
curl -X GET \
http://localhost:8080/topics/bridge-quickstart-topic
{
"name": "bridge-quickstart-topic",
"configs": {
"compression.type": "producer",
"leader.replication.throttled.replicas": "",
"min.insync.replicas": "1",
"message.downconversion.enable": "true",
"segment.jitter.ms": "0",
"cleanup.policy": "delete",
"flush.ms": "9223372036854775807",
"follower.replication.throttled.replicas": "",
"segment.bytes": "1073741824",
"retention.ms": "604800000",
"flush.messages": "9223372036854775807",
"message.format.version": "2.8-IV1",
"max.compaction.lag.ms": "9223372036854775807",
"file.delete.delay.ms": "60000",
"max.message.bytes": "1048588",
"min.compaction.lag.ms": "0",
"message.timestamp.type": "CreateTime",
"preallocate": "false",
"index.interval.bytes": "4096",
"min.cleanable.dirty.ratio": "0.5",
"unclean.leader.election.enable": "false",
"retention.bytes": "-1",
"delete.retention.ms": "86400000",
"segment.ms": "604800000",
"message.timestamp.difference.max.ms": "9223372036854775807",
"segment.index.bytes": "10485760"
},
"partitions": [
{
"partition": 0,
"leader": 0,
"replicas": [
{
"broker": 0,
"leader": true,
"in_sync": true
},
{
"broker": 1,
"leader": false,
"in_sync": true
},
{
"broker": 2,
"leader": false,
"in_sync": true
}
]
},
{
"partition": 1,
"leader": 2,
"replicas": [
{
"broker": 2,
"leader": true,
"in_sync": true
},
{
"broker": 0,
"leader": false,
"in_sync": true
},
{
"broker": 1,
"leader": false,
"in_sync": true
}
]
},
{
"partition": 2,
"leader": 1,
"replicas": [
{
"broker": 1,
"leader": true,
"in_sync": true
},
{
"broker": 2,
"leader": false,
"in_sync": true
},
{
"broker": 0,
"leader": false,
"in_sync": true
}
]
}
]
}
curl -X GET \
http://localhost:8080/topics/bridge-quickstart-topic/partitions
[
{
"partition": 0,
"leader": 0,
"replicas": [
{
"broker": 0,
"leader": true,
"in_sync": true
},
{
"broker": 1,
"leader": false,
"in_sync": true
},
{
"broker": 2,
"leader": false,
"in_sync": true
}
]
},
{
"partition": 1,
"leader": 2,
"replicas": [
{
"broker": 2,
"leader": true,
"in_sync": true
},
{
"broker": 0,
"leader": false,
"in_sync": true
},
{
"broker": 1,
"leader": false,
"in_sync": true
}
]
},
{
"partition": 2,
"leader": 1,
"replicas": [
{
"broker": 1,
"leader": true,
"in_sync": true
},
{
"broker": 2,
"leader": false,
"in_sync": true
},
{
"broker": 0,
"leader": false,
"in_sync": true
}
]
}
]
curl -X GET \
http://localhost:8080/topics/bridge-quickstart-topic/partitions/0
{
"partition": 0,
"leader": 0,
"replicas": [
{
"broker": 0,
"leader": true,
"in_sync": true
},
{
"broker": 1,
"leader": false,
"in_sync": true
},
{
"broker": 2,
"leader": false,
"in_sync": true
}
]
}
curl -X GET \
http://localhost:8080/topics/bridge-quickstart-topic/partitions/0/offsets
{
"beginning_offset": 0,
"end_offset": 1
}
After producing messages to topics and partitions, create a Kafka Bridge consumer.
Before you can perform any consumer operations in the Kafka cluster, you must first create a consumer by using the consumers endpoint. The consumer is referred to as a Kafka Bridge consumer.
Create a Kafka Bridge consumer in a new consumer group named bridge-quickstart-consumer-group:
curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group \
-H 'content-type: application/vnd.kafka.v2+json' \
-d '{
"name": "bridge-quickstart-consumer",
"auto.offset.reset": "earliest",
"format": "json",
"enable.auto.commit": false,
"fetch.min.bytes": 512,
"consumer.request.timeout.ms": 30000
}'
The consumer is named bridge-quickstart-consumer and the embedded data format is set as json. Some basic configuration settings are defined. The consumer will not commit offsets to the log automatically because the enable.auto.commit setting is false. You will commit the offsets manually later in this quickstart.
If the request is successful, the Kafka Bridge returns the consumer ID (instance_id) and base URL (base_uri) in the response body, along with a 200 code.
#...
{
"instance_id": "bridge-quickstart-consumer",
"base_uri":"http://<bridge_id>-bridge-service:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer"
}
Copy the base URL (base_uri) to use in the other consumer operations in this quickstart.
Now that you have created a Kafka Bridge consumer, you can subscribe it to topics.
After you have created a Kafka Bridge consumer, subscribe it to one or more topics by using the subscription endpoint. When subscribed, the consumer starts receiving all messages that are produced to the topic.
Subscribe the consumer to the bridge-quickstart-topic topic that you created earlier, in Producing messages to topics and partitions:
curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/subscription \
-H 'content-type: application/vnd.kafka.v2+json' \
-d '{
"topics": [
"bridge-quickstart-topic"
]
}'
The topics array can contain a single topic (as shown here) or multiple topics. If you want to subscribe the consumer to multiple topics that match a regular expression, you can use the topic_pattern string instead of the topics array.
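For example, this sketch subscribes the consumer to all topics that match an illustrative regular expression:

curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/subscription \
  -H 'content-type: application/vnd.kafka.v2+json' \
  -d '{
    "topic_pattern": "bridge-quickstart-.*"
  }'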
If the request is successful, the Kafka Bridge returns a 204 (No Content) code only.
After subscribing a Kafka Bridge consumer to topics, you can retrieve messages from the consumer.
Retrieve the latest messages from the Kafka Bridge consumer by requesting data from the records endpoint. In production, HTTP clients can call this endpoint repeatedly (in a loop).
Produce additional messages to the Kafka Bridge consumer, as described in Producing messages to topics and partitions.
Submit a GET request to the records endpoint:
curl -X GET http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/records \
-H 'accept: application/vnd.kafka.json.v2+json'
After creating and subscribing to a Kafka Bridge consumer, a first GET request will return an empty response because the poll operation starts a rebalancing process to assign partitions.
Submit the GET request again to retrieve messages from the Kafka Bridge consumer.
The Kafka Bridge returns an array of messages (describing the topic name, key, value, partition, and offset) in the response body, along with a 200 code. Messages are retrieved from the latest offset by default.
HTTP/1.1 200 OK
content-type: application/vnd.kafka.json.v2+json
#...
[
{
"topic":"bridge-quickstart-topic",
"key":"my-key",
"value":"sales-lead-0001",
"partition":0,
"offset":0
},
{
"topic":"bridge-quickstart-topic",
"key":null,
"value":"sales-lead-0003",
"partition":0,
"offset":1
},
#...
Note: If an empty response is returned, produce more records to the consumer as described in Producing messages to topics and partitions, and then try retrieving messages again.
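You can also limit how much data is returned and how long the Kafka Bridge waits for records by adding the optional max_bytes and timeout query parameters to the records endpoint. A sketch with illustrative values:

curl -X GET 'http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/records?timeout=3000&max_bytes=100000' \
  -H 'accept: application/vnd.kafka.json.v2+json'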
After retrieving messages from a Kafka Bridge consumer, try committing offsets to the log.
Use the offsets endpoint to manually commit offsets to the log for all messages received by the Kafka Bridge consumer. This is required because the Kafka Bridge consumer that you created earlier, in Creating a Kafka Bridge consumer, was configured with the enable.auto.commit setting as false.
Commit offsets to the log for the bridge-quickstart-consumer:
curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/offsets
Because no request body is submitted, offsets are committed for all the records that have been received by the consumer. Alternatively, the request body can contain an array (OffsetCommitSeekList) that specifies the topics and partitions that you want to commit offsets for.
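For example, a sketch of a commit request that specifies an offset for a single topic partition, using illustrative values:

curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/offsets \
  -H 'content-type: application/vnd.kafka.v2+json' \
  -d '{
    "offsets": [
      {
        "topic": "bridge-quickstart-topic",
        "partition": 0,
        "offset": 2
      }
    ]
  }'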
If the request is successful, the Kafka Bridge returns a 204 code only.
After committing offsets to the log, try out the endpoints for seeking to offsets.
Use the positions endpoints to configure the Kafka Bridge consumer to retrieve messages for a partition from a specific offset, and then from the latest offset. This is referred to in Apache Kafka as a seek operation.
Seek to a specific offset for partition 0 of the bridge-quickstart-topic topic:
curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/positions \
-H 'content-type: application/vnd.kafka.v2+json' \
-d '{
"offsets": [
{
"topic": "bridge-quickstart-topic",
"partition": 0,
"offset": 2
}
]
}'
If the request is successful, the Kafka Bridge returns a 204 code only.
Submit a GET request to the records endpoint:
curl -X GET http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/records \
-H 'accept: application/vnd.kafka.json.v2+json'
The Kafka Bridge returns messages starting from the offset that you specified in the seek request.
Restore the default message retrieval behavior by seeking to the last offset for the same partition. This time, use the positions/end endpoint.
curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/positions/end \
-H 'content-type: application/vnd.kafka.v2+json' \
-d '{
"partitions": [
{
"topic": "bridge-quickstart-topic",
"partition": 0
}
]
}'
If the request is successful, the Kafka Bridge returns another 204 code.
Note: You can also use the positions/beginning endpoint to seek to the first offset for one or more partitions.
In this quickstart, you have used the Strimzi Kafka Bridge to perform several common operations on a Kafka cluster. You can now delete the Kafka Bridge consumer that you created earlier.
Delete the Kafka Bridge consumer that you used throughout this quickstart.
Delete the Kafka Bridge consumer by sending a DELETE request to the instances endpoint.
curl -X DELETE http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer
If the request is successful, the Kafka Bridge returns a 204 code.
The Strimzi Kafka Bridge provides a REST API for integrating HTTP-based client applications with a Kafka cluster. You can use the API to create and manage consumers and send and receive records over HTTP rather than the native Kafka protocol.
Version : 0.1.0
Consumers : Consumer operations to create consumers in your Kafka cluster and perform common actions, such as subscribing to topics, retrieving processed records, and committing offsets.
Producer : Producer operations to send records to a specified topic or topic partition.
Seek : Seek operations that enable a consumer to begin receiving messages from a given offset position.
Topics : Topic operations to send messages to a specified topic or topic partition, optionally including message keys in requests. You can also retrieve topics and topic metadata.
Consumes: application/json
Produces: application/json
Definitions

AssignedTopicPartitions

Type : < string, < integer (int32) > array > map
BridgeInfo

Information about Kafka Bridge instance.

Name | Schema
---|---
bridge_version | string
Consumer

Name | Description | Schema
---|---|---
auto.offset.reset | Resets the offset position for the consumer. If set to latest (default), messages are read from the latest offset. If set to earliest, messages are read from the first offset. | string
consumer.request.timeout.ms | Sets the maximum amount of time, in milliseconds, for the consumer to wait for messages for a request. If the timeout period is reached without a response, an error is returned. Default is 30000 (30 seconds). | integer
enable.auto.commit | If set to true (default), message offsets are committed automatically for the consumer. | boolean
fetch.min.bytes | Sets the minimum amount of data, in bytes, for the consumer to receive. The broker waits until the data to send exceeds this amount. Default is 1 byte. | integer
format | The allowable message format for the consumer, which can be binary (default) or json. | string
isolation.level | If set to read_uncommitted (default), all transaction records are retrieved, independent of whether the transaction was committed or aborted. If set to read_committed, only the records from committed transactions are retrieved. | string
name | The unique name for the consumer instance. The name is unique within the scope of the consumer group. The name is used in URLs. If a name is not specified, a randomly generated name is assigned. | string
ConsumerRecord

Name | Schema
---|---
headers | KafkaHeaderList
key | string
offset | integer (int64)
partition | integer (int32)
topic | string
value | string
ConsumerRecordList

Type : < ConsumerRecord > array
CreatedConsumer

Name | Description | Schema
---|---|---
base_uri | Base URI used to construct URIs for subsequent requests against this consumer instance. | string
instance_id | Unique ID for the consumer instance in the group. | string
Error

Name | Schema
---|---
error_code | integer (int32)
message | string
KafkaHeader

Name | Description | Schema
---|---|---
key | | string
value | The header value in binary format, base64-encoded | string (byte)
KafkaHeaderList

Type : < KafkaHeader > array
OffsetCommitSeek

Name | Schema
---|---
offset | integer (int64)
partition | integer (int32)
topic | string
OffsetCommitSeekList

Name | Schema
---|---
offsets | < OffsetCommitSeek > array
OffsetRecordSent

Name | Schema
---|---
offset | integer (int64)
partition | integer (int32)
OffsetRecordSentList

Name | Schema
---|---
offsets | < OffsetRecordSent > array
OffsetsSummary

Name | Schema
---|---
beginning_offset | integer (int64)
end_offset | integer (int64)
Partition

Name | Schema
---|---
partition | integer (int32)
topic | string
PartitionMetadata

Name | Schema
---|---
leader | integer (int32)
partition | integer (int32)
replicas | < Replica > array
Partitions

Name | Schema
---|---
partitions | < Partition > array
ProducerRecord

Name | Schema
---|---
headers | KafkaHeaderList
partition | integer (int32)
ProducerRecordList

Name | Schema
---|---
records | < ProducerRecord > array
ProducerRecordToPartition

Type : object
ProducerRecordToPartitionList

Name | Schema
---|---
records | < ProducerRecordToPartition > array
Replica

Name | Schema
---|---
broker | integer (int32)
in_sync | boolean
leader | boolean
SubscribedTopicList

Name | Schema
---|---
partitions | < AssignedTopicPartitions > array
topics | Topics
TopicMetadata

Name | Description | Schema
---|---|---
configs | Per-topic configuration overrides | < string, string > map
name | Name of the topic | string
partitions | | < PartitionMetadata > array
Topics

Name | Description | Schema
---|---|---
topic_pattern | A regex topic pattern for matching multiple topics | string
topics | | < string > array
Paths

GET /

Retrieves information about the Kafka Bridge instance, in JSON format.

HTTP Code | Description | Schema
---|---|---
200 | Information about Kafka Bridge instance. | BridgeInfo
Produces: application/json
{
"bridge_version" : "0.16.0"
}
POST /consumers/{groupid}

Creates a consumer instance in the given consumer group. You can optionally specify a consumer name and supported configuration options. It returns a base URI which must be used to construct URLs for subsequent requests against this consumer instance.
Type | Name | Description | Schema
---|---|---|---
Path | groupid | ID of the consumer group in which to create the consumer. | string
Body | body | Name and configuration of the consumer. The name is unique within the scope of the consumer group. If a name is not specified, a randomly generated name is assigned. All parameters are optional. The supported configuration options are shown in the following example. | Consumer
HTTP Code | Description | Schema
---|---|---
200 | Consumer created successfully. | CreatedConsumer
409 | A consumer instance with the specified name already exists in the Kafka Bridge. | Error
422 | One or more consumer configuration options have invalid values. | Error
Consumes: application/vnd.kafka.v2+json
Produces: application/vnd.kafka.v2+json
Tags: Consumers
{
"name" : "consumer1",
"format" : "binary",
"auto.offset.reset" : "earliest",
"enable.auto.commit" : false,
"fetch.min.bytes" : 512,
"consumer.request.timeout.ms" : 30000,
"isolation.level" : "read_committed"
}
{
"instance_id" : "consumer1",
"base_uri" : "http://localhost:8080/consumers/my-group/instances/consumer1"
}
{
"error_code" : 409,
"message" : "A consumer instance with the specified name already exists in the Kafka Bridge."
}
{
"error_code" : 422,
"message" : "One or more consumer configuration options have invalid values."
}
DELETE /consumers/{groupid}/instances/{name}

Deletes a specified consumer instance. The request for this operation MUST use the base URL (including the host and port) returned in the response from the POST request to /consumers/{groupid} that was used to create this consumer.
Type | Name | Description | Schema
---|---|---|---
Path | groupid | ID of the consumer group to which the consumer belongs. | string
Path | name | Name of the consumer to delete. | string
HTTP Code | Description | Schema
---|---|---
204 | Consumer removed successfully. | No Content
404 | The specified consumer instance was not found. | Error
Consumes: application/vnd.kafka.v2+json
Produces: application/vnd.kafka.v2+json
Tags: Consumers
{
"error_code" : 404,
"message" : "The specified consumer instance was not found."
}
POST /consumers/{groupid}/instances/{name}/assignments

Assigns one or more topic partitions to a consumer.
Type | Name | Description | Schema
---|---|---|---
Path | groupid | ID of the consumer group to which the consumer belongs. | string
Path | name | Name of the consumer to assign topic partitions to. | string
Body | body | List of topic partitions to assign to the consumer. | Partitions
HTTP Code | Description | Schema
---|---|---
204 | Partitions assigned successfully. | No Content
404 | The specified consumer instance was not found. | Error
409 | Subscriptions to topics, partitions, and patterns are mutually exclusive. | Error
Consumes: application/vnd.kafka.v2+json
Produces: application/vnd.kafka.v2+json
Tags: Consumers
{
"partitions" : [ {
"topic" : "topic",
"partition" : 0
}, {
"topic" : "topic",
"partition" : 1
} ]
}
{
"error_code" : 404,
"message" : "The specified consumer instance was not found."
}
{
"error_code" : 409,
"message" : "Subscriptions to topics, partitions, and patterns are mutually exclusive."
}
POST /consumers/{groupid}/instances/{name}/offsets

Commits a list of consumer offsets. To commit offsets for all records fetched by the consumer, leave the request body empty.
Type | Name | Description | Schema
---|---|---|---
Path | groupid | ID of the consumer group to which the consumer belongs. | string
Path | name | Name of the consumer. | string
Body | body | List of consumer offsets to commit to the consumer offsets commit log. You can specify one or more topic partitions to commit offsets for. | OffsetCommitSeekList
HTTP Code | Description | Schema
---|---|---
204 | Commit made successfully. | No Content
404 | The specified consumer instance was not found. | Error
Consumes: application/vnd.kafka.v2+json
Produces: application/vnd.kafka.v2+json
Tags: Consumers
{
"offsets" : [ {
"topic" : "topic",
"partition" : 0,
"offset" : 15
}, {
"topic" : "topic",
"partition" : 1,
"offset" : 42
} ]
}
{
"error_code" : 404,
"message" : "The specified consumer instance was not found."
}
POST /consumers/{groupid}/instances/{name}/positions

Configures a subscribed consumer to fetch offsets from a particular offset the next time it fetches a set of records from a given topic partition. This overrides the default fetch behavior for consumers. You can specify one or more topic partitions.
Type | Name | Description | Schema
---|---|---|---
Path | groupid | ID of the consumer group to which the consumer belongs. | string
Path | name | Name of the subscribed consumer. | string
Body | body | List of partition offsets from which the subscribed consumer will next fetch records. | OffsetCommitSeekList
HTTP Code | Description | Schema
---|---|---
204 | Seek performed successfully. | No Content
404 | The specified consumer instance was not found, or the specified consumer instance did not have one of the specified partitions assigned. | Error
Consumes: application/vnd.kafka.v2+json
Produces: application/vnd.kafka.v2+json
Tags: Consumers, Seek
{
"offsets" : [ {
"topic" : "topic",
"partition" : 0,
"offset" : 15
}, {
"topic" : "topic",
"partition" : 1,
"offset" : 42
} ]
}
{
"error_code" : 404,
"message" : "The specified consumer instance was not found."
}
POST /consumers/{groupid}/instances/{name}/positions/beginning

Configures a subscribed consumer to seek (and subsequently read from) the first offset in one or more given topic partitions.
Type | Name | Description | Schema
---|---|---|---
Path | groupid | ID of the consumer group to which the subscribed consumer belongs. | string
Path | name | Name of the subscribed consumer. | string
Body | body | List of topic partitions to which the consumer is subscribed. The consumer will seek the first offset in the specified partitions. | Partitions
HTTP Code | Description | Schema
---|---|---
204 | Seek to the beginning performed successfully. | No Content
404 | The specified consumer instance was not found, or the specified consumer instance did not have one of the specified partitions assigned. | Error
Consumes: application/vnd.kafka.v2+json
Produces: application/vnd.kafka.v2+json
Tags: Consumers, Seek
{
"partitions" : [ {
"topic" : "topic",
"partition" : 0
}, {
"topic" : "topic",
"partition" : 1
} ]
}
{
"error_code" : 404,
"message" : "The specified consumer instance was not found."
}
POST /consumers/{groupid}/instances/{name}/positions/end

Configures a subscribed consumer to seek (and subsequently read from) the offset at the end of one or more of the given topic partitions.
Type | Name | Description | Schema
---|---|---|---
Path | groupid | ID of the consumer group to which the subscribed consumer belongs. | string
Path | name | Name of the subscribed consumer. | string
Body | body | List of topic partitions to which the consumer is subscribed. The consumer will seek the last offset in the specified partitions. | Partitions
HTTP Code | Description | Schema
---|---|---
204 | Seek to the end performed successfully. | No Content
404 | The specified consumer instance was not found, or the specified consumer instance did not have one of the specified partitions assigned. | Error
Consumes: application/vnd.kafka.v2+json
Produces: application/vnd.kafka.v2+json
Tags: Consumers, Seek
{
"partitions" : [ {
"topic" : "topic",
"partition" : 0
}, {
"topic" : "topic",
"partition" : 1
} ]
}
{
"error_code" : 404,
"message" : "The specified consumer instance was not found."
}
GET /consumers/{groupid}/instances/{name}/records

Retrieves records for a subscribed consumer, including message values, topics, and partitions. The request for this operation MUST use the base URL (including the host and port) returned in the response from the POST request to /consumers/{groupid} that was used to create this consumer.
Type | Name | Description | Schema
---|---|---|---
Path | groupid | ID of the consumer group to which the subscribed consumer belongs. | string
Path | name | Name of the subscribed consumer to retrieve records from. | string
Query | max_bytes | The maximum size, in bytes, of unencoded keys and values that can be included in the response. Otherwise, an error response with code 422 is returned. | integer
Query | timeout | The maximum amount of time, in milliseconds, that the HTTP Bridge spends retrieving records before timing out the request. | integer
HTTP Code | Description | Schema
---|---|---
200 | Poll request executed successfully. | ConsumerRecordList
404 | The specified consumer instance was not found. | Error
406 | The format used in the consumer creation request does not match the embedded format in the Accept header of this request. | Error
422 | Response exceeds the maximum number of bytes the consumer can receive | Error
Produces: application/vnd.kafka.json.v2+json, application/vnd.kafka.binary.v2+json, application/vnd.kafka.v2+json
Tags: Consumers
[ {
"topic" : "topic",
"key" : "key1",
"value" : {
"foo" : "bar"
},
"partition" : 0,
"offset" : 2
}, {
"topic" : "topic",
"key" : "key2",
"value" : [ "foo2", "bar2" ],
"partition" : 1,
"offset" : 3
} ]
[
  {
    "topic": "test",
    "key": "a2V5",
    "value": "Y29uZmx1ZW50",
    "partition": 1,
    "offset": 100
  },
  {
    "topic": "test",
    "key": "a2V5",
    "value": "a2Fma2E=",
    "partition": 2,
    "offset": 101
  }
]
{
"error_code" : 404,
"message" : "The specified consumer instance was not found."
}
{
"error_code" : 406,
"message" : "The `format` used in the consumer creation request does not match the embedded format in the Accept header of this request."
}
{
"error_code" : 422,
"message" : "Response exceeds the maximum number of bytes the consumer can receive"
}
POST /consumers/{groupid}/instances/{name}/subscription

Subscribes a consumer to one or more topics. You can describe the topics to which the consumer will subscribe in a list (of Topics type) or as a topic_pattern field. Each call replaces the subscriptions for the subscriber.
Type | Name | Description | Schema
---|---|---|---
Path | groupid | ID of the consumer group to which the subscribed consumer belongs. | string
Path | name | Name of the consumer to subscribe to topics. | string
Body | body | List of topics to which the consumer will subscribe. | Topics
HTTP Code | Description | Schema
---|---|---
204 | Consumer subscribed successfully. | No Content
404 | The specified consumer instance was not found. | Error
409 | Subscriptions to topics, partitions, and patterns are mutually exclusive. | Error
422 | A list (of Topics type) or a topic_pattern must be specified. | Error
Consumes: application/vnd.kafka.v2+json
Produces: application/vnd.kafka.v2+json
Tags: Consumers
{
"topics" : [ "topic1", "topic2" ]
}
{
"error_code" : 404,
"message" : "The specified consumer instance was not found."
}
{
"error_code" : 409,
"message" : "Subscriptions to topics, partitions, and patterns are mutually exclusive."
}
{
"error_code" : 422,
"message" : "A list (of Topics type) or a topic_pattern must be specified."
}
GET /consumers/{groupid}/instances/{name}/subscription

Retrieves a list of the topics to which the consumer is subscribed.
Type | Name | Description | Schema
---|---|---|---
Path | groupid | ID of the consumer group to which the subscribed consumer belongs. | string
Path | name | Name of the subscribed consumer. | string
HTTP Code | Description | Schema
---|---|---
200 | List of subscribed topics and partitions. | SubscribedTopicList
404 | The specified consumer instance was not found. | Error
Produces: application/vnd.kafka.v2+json
Tags: Consumers
{
"topics" : [ "my-topic1", "my-topic2" ],
"partitions" : [ {
"my-topic1" : [ 1, 2, 3 ]
}, {
"my-topic2" : [ 1 ]
} ]
}
{
"error_code" : 404,
"message" : "The specified consumer instance was not found."
}
DELETE /consumers/{groupid}/instances/{name}/subscription

Unsubscribes a consumer from all topics.
Type | Name | Description | Schema
---|---|---|---
Path | groupid | ID of the consumer group to which the subscribed consumer belongs. | string
Path | name | Name of the consumer to unsubscribe from topics. | string
HTTP Code | Description | Schema
---|---|---
204 | Consumer unsubscribed successfully. | No Content
404 | The specified consumer instance was not found. | Error
Tags: Consumers
{
"error_code" : 404,
"message" : "The specified consumer instance was not found."
}
GET /healthy

Check if the bridge is running. This does not necessarily imply that it is ready to accept requests.
HTTP Code | Description | Schema
---|---|---
200 | The bridge is healthy | No Content
GET /openapi

Retrieves the OpenAPI v2 specification in JSON format.
HTTP Code | Description | Schema
---|---|---
200 | OpenAPI v2 specification in JSON format retrieved successfully. | string
Produces: application/json
GET /ready

Check if the bridge is ready and can accept requests.
HTTP Code | Description | Schema
---|---|---
200 | The bridge is ready | No Content
GET /topics

Retrieves a list of all topics.
HTTP Code | Description | Schema
---|---|---
200 | List of topics. | < string > array
Produces: application/vnd.kafka.v2+json
Tags: Topics
[ "topic1", "topic2" ]
POST /topics/{topicname}

Sends one or more records to a given topic, optionally specifying a partition, key, or both.
Type | Name | Description | Schema
---|---|---|---
Path | topicname | Name of the topic to send records to or retrieve metadata from. | string
Query | async | Whether to return immediately upon sending records, instead of waiting for metadata. No offsets will be returned if specified. Defaults to false. | boolean
Body | body | List of records to send to a given topic, including a value (required), and a key or partition (optional). | ProducerRecordList
HTTP Code | Description | Schema
---|---|---
200 | Records sent successfully. | OffsetRecordSentList
404 | The specified topic was not found. | Error
422 | The record list is not valid. | Error
Consumes: application/vnd.kafka.json.v2+json, application/vnd.kafka.binary.v2+json
Produces: application/vnd.kafka.v2+json
Tags: Producer, Topics
{
"records" : [ {
"key" : "key1",
"value" : "value1"
}, {
"value" : "value2",
"partition" : 1
}, {
"value" : "value3"
} ]
}
{
"offsets" : [ {
"partition" : 2,
"offset" : 0
}, {
"partition" : 1,
"offset" : 1
}, {
"partition" : 2,
"offset" : 2
} ]
}
{
"error_code" : 404,
"message" : "The specified topic was not found."
}
{
"error_code" : 422,
"message" : "The record list contains invalid records."
}
GET /topics/{topicname}

Retrieves the metadata about a given topic.
Type | Name | Description | Schema
---|---|---|---
Path | topicname | Name of the topic to send records to or retrieve metadata from. | string
HTTP Code | Description | Schema
---|---|---
200 | Topic metadata | TopicMetadata
Produces: application/vnd.kafka.v2+json
Tags: Topics
{
"name" : "topic",
"offset" : 2,
"configs" : {
"cleanup.policy" : "compact"
},
"partitions" : [ {
"partition" : 1,
"leader" : 1,
"replicas" : [ {
"broker" : 1,
"leader" : true,
"in_sync" : true
}, {
"broker" : 2,
"leader" : false,
"in_sync" : true
} ]
}, {
"partition" : 2,
"leader" : 2,
"replicas" : [ {
"broker" : 1,
"leader" : false,
"in_sync" : true
}, {
"broker" : 2,
"leader" : true,
"in_sync" : true
} ]
} ]
}
GET /topics/{topicname}/partitions

Retrieves a list of partitions for the topic.
Type | Name | Description | Schema
---|---|---|---
Path | topicname | Name of the topic to send records to or retrieve metadata from. | string
HTTP Code | Description | Schema
---|---|---
200 | List of partitions | < PartitionMetadata > array
404 | The specified topic was not found. | Error
Produces: application/vnd.kafka.v2+json
Tags: Topics
[ {
"partition" : 1,
"leader" : 1,
"replicas" : [ {
"broker" : 1,
"leader" : true,
"in_sync" : true
}, {
"broker" : 2,
"leader" : false,
"in_sync" : true
} ]
}, {
"partition" : 2,
"leader" : 2,
"replicas" : [ {
"broker" : 1,
"leader" : false,
"in_sync" : true
}, {
"broker" : 2,
"leader" : true,
"in_sync" : true
} ]
} ]
{
"error_code" : 404,
"message" : "The specified topic was not found."
}
POST /topics/{topicname}/partitions/{partitionid}

Sends one or more records to a given topic partition, optionally specifying a key.
Type | Name | Description | Schema
---|---|---|---
Path | partitionid | ID of the partition to send records to or retrieve metadata from. | integer
Path | topicname | Name of the topic to send records to or retrieve metadata from. | string
Body | body | List of records to send to a given topic partition, including a value (required) and a key (optional). | ProducerRecordToPartitionList
HTTP Code | Description | Schema
---|---|---
200 | Records sent successfully. | OffsetRecordSentList
404 | The specified topic partition was not found. | Error
422 | The record is not valid. | Error
Consumes: application/vnd.kafka.json.v2+json, application/vnd.kafka.binary.v2+json
Produces: application/vnd.kafka.v2+json
Tags: Producer, Topics
{
"records" : [ {
"key" : "key1",
"value" : "value1"
}, {
"value" : "value2"
} ]
}
{
"offsets" : [ {
"partition" : 2,
"offset" : 0
}, {
"partition" : 1,
"offset" : 1
}, {
"partition" : 2,
"offset" : 2
} ]
}
{
"error_code" : 404,
"message" : "The specified topic partition was not found."
}
{
"error_code" : 422,
"message" : "The record is not valid."
}
GET /topics/{topicname}/partitions/{partitionid}

Retrieves partition metadata for the topic partition.
Type | Name | Description | Schema
---|---|---|---
Path | partitionid | ID of the partition to send records to or retrieve metadata from. | integer
Path | topicname | Name of the topic to send records to or retrieve metadata from. | string
HTTP Code | Description | Schema
---|---|---
200 | Partition metadata | PartitionMetadata
404 | The specified topic partition was not found. | Error
Produces: application/vnd.kafka.v2+json
Tags: Topics
{
"partition" : 1,
"leader" : 1,
"replicas" : [ {
"broker" : 1,
"leader" : true,
"in_sync" : true
}, {
"broker" : 2,
"leader" : false,
"in_sync" : true
} ]
}
{
"error_code" : 404,
"message" : "The specified topic partition was not found."
}
GET /topics/{topicname}/partitions/{partitionid}/offsets

Retrieves a summary of the offsets for the topic partition.
Type | Name | Description | Schema
---|---|---|---
Path | partitionid | ID of the partition. | integer
Path | topicname | Name of the topic containing the partition. | string
HTTP Code | Description | Schema
---|---|---
200 | A summary of the offsets for the topic partition. | OffsetsSummary
404 | The specified topic partition was not found. | Error
Produces: application/vnd.kafka.v2+json
Tags: Topics
{
"beginning_offset" : 10,
"end_offset" : 50
}
{
"error_code" : 404,
"message" : "The specified topic partition was not found."
}
Revised on 2022-08-17 07:42:00 UTC