Comparing changes

base repository: deviceinsight/kafkactl
base: v5.2.0
head repository: deviceinsight/kafkactl
compare: main
Showing with 734 additions and 314 deletions.
  1. +1 −1 .github/aur/kafkactl/PKGBUILD.template
  2. +1 −1 .github/contributing.md
  3. +2 −2 .github/workflows/lint_test.yml
  4. +1 −1 .github/workflows/publish_ghpages.yml
  5. +1 −11 .github/workflows/release.yml
  6. +0 −3 .gitignore
  7. +0 −24 .goreleaser.yml
  8. +23 −0 CHANGELOG.md
  9. +0 −1 Makefile
  10. +61 −20 README.adoc
  11. +1 −12 cmd/config/useContext.go
  12. +1 −0 cmd/create/create-topic.go
  13. +100 −0 cmd/create/create-topic_test.go
  14. +24 −0 cmd/produce/produce_test.go
  15. +8 −0 cmd/root.go
  16. +53 −0 cmd/root_test.go
  17. +29 −30 go.mod
  18. +75 −63 go.sum
  19. +98 −0 internal/CachingSchemaRegistry.go
  20. +14 −37 internal/common-operation.go
  21. +2 −1 internal/consume/AvroMessageDeserializer.go
  22. +0 −39 internal/consume/CachingSchemaRegistry.go
  23. +1 −1 internal/consume/ProtobufMessageDeserializer.go
  24. +2 −2 internal/consume/consume-operation.go
  25. +12 −20 internal/consumergroupoffsets/consumer-group-offset-operation.go
  26. +34 −14 internal/global/config.go
  27. +23 −0 internal/k8s/executer_test.go
  28. +5 −1 internal/k8s/executor.go
  29. +16 −4 internal/k8s/pod_overrides.go
  30. +6 −0 internal/output/output.go
  31. +4 −3 internal/producer/AvroMessageSerializer.go
  32. +6 −14 internal/producer/MessageSerializer.go
  33. +65 −0 internal/producer/MessageSerializer_test.go
  34. +4 −0 internal/producer/ProtobufMessageSerializer.go
  35. +2 −2 internal/producer/producer-operation.go
  36. +1 −2 internal/testutil/helpers.go
  37. +3 −0 internal/testutil/testdata/msg-base64.json
  38. +55 −5 internal/topic/topic-operation.go
2 changes: 1 addition & 1 deletion .github/aur/kafkactl/PKGBUILD.template
@@ -8,7 +8,7 @@ url="https://github.com/deviceinsight/kafkactl/"
arch=("i686" "x86_64" "aarch64")
license=("APACHE")
depends=("glibc")
makedepends=('go>=1.22')
makedepends=('go>=1.23')
optdepends=('kubectl: for kafka running in Kubernetes cluster',
'bash-completion: auto-completion for kafkactl in Bash',
'zsh-completions: auto-completion for kafkactl in ZSH')
2 changes: 1 addition & 1 deletion .github/contributing.md
@@ -13,7 +13,7 @@ These tests typically run via Github Actions, but it is also easy to run them lo
on your development machine.

> :bulb: the integration tests do not clean up afterwards. To avoid interferences
> between different tests, we have helpers that create topics/groups/... with random suffixes. See e.g. [CreateTopic()](https://github.com/deviceinsight/kafkactl/blob/main/testutil/helpers.go#L17)
> between different tests, we have helpers that create topics/groups/... with random suffixes. See e.g. [CreateTopic()](https://github.com/deviceinsight/kafkactl/blob/main/internal/testutil/helpers.go#L20)
### Run all tests locally

4 changes: 2 additions & 2 deletions .github/workflows/lint_test.yml
@@ -24,7 +24,7 @@ jobs:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: '1.22'
go-version: '1.23'
- name: Run Unit tests
run: make test
- name: Upload logs
@@ -57,7 +57,7 @@ jobs:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: '1.22'
go-version: '1.23'
- name: Run integration tests
uses: nick-fields/retry@v2
with:
2 changes: 1 addition & 1 deletion .github/workflows/publish_ghpages.yml
@@ -15,7 +15,7 @@ jobs:
-
uses: actions/setup-go@v5
with:
go-version: '1.22'
go-version: '1.23'
-
name: Build and generate docs
run: make docs
12 changes: 1 addition & 11 deletions .github/workflows/release.yml
@@ -19,15 +19,7 @@ jobs:
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: '1.22'
-
name: setup-snapcraft
# FIXME: the mkdirs are a hack for https://github.com/goreleaser/goreleaser/issues/1715
run: |
sudo apt-get update
sudo apt-get -yq --no-install-suggests --no-install-recommends install snapcraft
mkdir -p $HOME/.cache/snapcraft/download
mkdir -p $HOME/.cache/snapcraft/stage-packages
go-version: '1.23'
-
name: Docker login
run: echo ${{ secrets.DOCKER_PASSWORD }} | docker login -u ${{ secrets.DOCKER_USERNAME }} --password-stdin
@@ -41,5 +33,3 @@ jobs:
# create personal access token: https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line
GITHUB_TOKEN: ${{ secrets.GH_PAT }}
AUR_SSH_PRIVATE_KEY: ${{ secrets.AUR_SSH_PRIVATE_KEY }}
# snapcraft export-login --snaps kafkactl --channels edge,beta,candidate,stable -
SNAPCRAFT_STORE_CREDENTIALS: ${{ secrets.SNAPCRAFT_TOKEN }}
3 changes: 0 additions & 3 deletions .gitignore
@@ -18,9 +18,6 @@ kafkactl.exe
.kafkactl.yml
kafkactl.yml

### Snap
snap.login

### logs
*.log
/kafkactl-completion.bash
24 changes: 0 additions & 24 deletions .goreleaser.yml
@@ -40,30 +40,6 @@ release:
disable: false
draft: false

snapcrafts:
- id: default
publish: true
summary: A command-line interface for interaction with Apache Kafka
description: |
A Commandline interface for Apache Kafka which provides useful features adapted from kubectl for Kubernetes.
Multiple kafka brokers can be configured in a config file and the active broker is also persisted within the config.
In addition kafkactl supports auto-completion for its commands as well as topic names.
grade: stable
confinement: strict
license: Apache-2.0
apps:
kafkactl:
plugs: ["home", "network", "dot-kube", "config-kafkactl"]
completer: kafkactl-completion.bash
plugs:
dot-kube:
interface: personal-files
read:
- $HOME/.kube
config-kafkactl:
interface: personal-files
write:
- $HOME/.config/kafkactl/config.yml
brews:
-
repository:
23 changes: 23 additions & 0 deletions CHANGELOG.md
@@ -6,6 +6,29 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

## [Unreleased]

## 5.5.0 - 2025-02-11
### Added
- [#234](https://github.com/deviceinsight/kafkactl/pull/234) caching to Avro client when producing messages
- [#236](https://github.com/deviceinsight/kafkactl/issues/236) set working directory to the path of the first loaded config file.
- [#233](https://github.com/deviceinsight/kafkactl/issues/233) replication factor is now printed for most topic output formats.
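The first entry above (#234) adds a caching layer in front of the schema registry client when producing (see `internal/CachingSchemaRegistry.go` in the file list). A minimal sketch of the idea — memoizing schema lookups by ID so repeated produces hit the remote registry only once. The interface and type names here are illustrative, not kafkactl's actual API:

```go
package main

import "fmt"

// schemaClient abstracts the remote registry call (illustrative
// interface, not kafkactl's actual API).
type schemaClient interface {
	GetSchemaByID(id int) (string, error)
}

// cachingRegistry memoizes lookups so each schema ID is fetched at most once.
type cachingRegistry struct {
	client schemaClient
	cache  map[int]string
}

func newCachingRegistry(c schemaClient) *cachingRegistry {
	return &cachingRegistry{client: c, cache: make(map[int]string)}
}

func (r *cachingRegistry) GetSchemaByID(id int) (string, error) {
	if s, ok := r.cache[id]; ok {
		return s, nil // cache hit: no remote call
	}
	s, err := r.client.GetSchemaByID(id)
	if err != nil {
		return "", err
	}
	r.cache[id] = s
	return s, nil
}

// countingClient counts remote calls to demonstrate the cache.
type countingClient struct{ calls int }

func (c *countingClient) GetSchemaByID(id int) (string, error) {
	c.calls++
	return fmt.Sprintf(`{"type":"record","id":%d}`, id), nil
}

func main() {
	remote := &countingClient{}
	reg := newCachingRegistry(remote)
	for i := 0; i < 3; i++ {
		reg.GetSchemaByID(42) // only the first iteration hits the remote registry
	}
	fmt.Println("remote calls:", remote.calls) // prints: remote calls: 1
}
```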

### Removed
- [#231](https://github.com/deviceinsight/kafkactl/issues/231) Remove support for installing kafkactl via snap.

### Fixed
- [#227](https://github.com/deviceinsight/kafkactl/issues/227) Incorrect handling of Base64-encoded values when producing from JSON
- [#228](https://github.com/deviceinsight/kafkactl/issues/228) Fix parsing version of gcloud kubectl.
- [#217](https://github.com/deviceinsight/kafkactl/issues/217) Allow reset of consumer-group in dead state
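Entry #227 above concerns producing messages that were previously consumed with JSON output, where binary values appear base64-encoded; re-producing such a message requires a decode step instead of treating the string as literal bytes. A sketch of that step, with field names that are illustrative rather than kafkactl's exact JSON schema:

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// jsonMessage mirrors the rough shape of a message dumped as JSON
// (field names are illustrative, not kafkactl's exact schema).
type jsonMessage struct {
	Key   string `json:"key"`
	Value string `json:"value"`
}

// decodeValue interprets the JSON "value" field as base64 when flagged,
// returning the raw bytes that should actually be produced to Kafka.
func decodeValue(raw string, isBase64 bool) ([]byte, error) {
	if !isBase64 {
		return []byte(raw), nil
	}
	return base64.StdEncoding.DecodeString(raw)
}

func main() {
	input := `{"key":"my-key","value":"aGVsbG8="}` // "hello", base64-encoded
	var msg jsonMessage
	if err := json.Unmarshal([]byte(input), &msg); err != nil {
		panic(err)
	}
	value, err := decodeValue(msg.Value, true)
	if err != nil {
		panic(err)
	}
	fmt.Printf("produced bytes: %s\n", value) // prints: produced bytes: hello
}
```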

## 5.4.0 - 2024-11-28
### Added
- [#215](https://github.com/deviceinsight/kafkactl/pull/215) Add `--context` option to set command's context

## 5.3.0 - 2024-08-14
### Added
- [#203](https://github.com/deviceinsight/kafkactl/pull/203) Add pod override fields affinity and tolerations
- [#210](https://github.com/deviceinsight/kafkactl/pull/210) Create topic from file

## 5.2.0 - 2024-08-08

## 5.1.0 - 2024-08-07
1 change: 0 additions & 1 deletion Makefile
@@ -63,7 +63,6 @@ clean:
# manually executing goreleaser:
# export GITHUB_TOKEN=xyz
# export AUR_SSH_PRIVATE_KEY=$(cat /path/to/id_aur)
# snapcraft login
# docker login
# goreleaser --clean (--skip-validate)
#
81 changes: 61 additions & 20 deletions README.adoc
@@ -28,7 +28,7 @@ You can install the pre-compiled binary or compile from source.

[,bash]
----
# install tap repostory once
# install tap repository once
brew tap deviceinsight/packages
# install kafkactl
brew install deviceinsight/packages/kafkactl
@@ -50,13 +50,6 @@ Download the .deb or .rpm from the https://github.com/deviceinsight/kafkactl/rel

There's a kafkactl https://aur.archlinux.org/packages/kafkactl/[AUR package] available for Arch. Install it with your AUR helper of choice (e.g. https://github.com/Jguer/yay[yay]):

*snap*:

[,bash]
----
snap install kafkactl
----

[,bash]
----
yay -S kafkactl
@@ -70,7 +63,7 @@ Download the pre-compiled binaries from the https://github.com/deviceinsight/kaf

[,bash]
----
go get -u github.com/deviceinsight/kafkactl/v5
go install github.com/deviceinsight/kafkactl/v5@latest
----

*NOTE:* make sure that `kafkactl` is on PATH otherwise auto-completion won't work.
@@ -144,6 +137,25 @@ contexts:
# optional: nodeSelector to add to the pod
nodeSelector:
key: value
# optional: affinity to add to the pod
affinity:
# note: other types of affinity also supported
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: "<key>"
operator: "<operator>"
values: [ "<value>" ]
# optional: tolerations to add to the pod
tolerations:
- key: "<key>"
operator: "<operator>"
value: "<value>"
effect: "<effect>"
# optional: clientID config (defaults to kafkactl-{username})
clientID: my-client-id
@@ -215,8 +227,6 @@ The config file location is resolved by
** `$HOME/.config/kafkactl/config.yml`
** `$HOME/.kafkactl/config.yml`
** `$APPDATA/kafkactl/config.yml`
** `$SNAP_REAL_HOME/.kafkactl/config.yml`
** `$SNAP_DATA/kafkactl/config.yml`
** `/etc/kafkactl/config.yml`

[#_project_config_files]
@@ -248,8 +258,6 @@ according to <<_config_file_read_order, the config file read order>>.

==== bash

*NOTE:* if you installed via snap, bash completion should work automatically.

----
source <(kafkactl completion bash)
----
@@ -367,10 +375,6 @@ a bash available. The second option uses a docker image build from scratch and s
Which option is more suitable, will depend on your use-case.
____

____
:warning: currently _kafkactl_ must *NOT* be installed via _snap_ in order for the kubernetes feature to work. The snap runs in a sandbox and is therefore unable to access the `kubectl` binary.
____

== Configuration via environment variables

Every key in the `config.yml` can be overwritten via environment variables. The corresponding environment variable
@@ -400,6 +404,7 @@ See the plugin documentation for additional documentation and usage examples.

Available plugins:

* https://github.com/deviceinsight/kafkactl-plugins/blob/main/aws/README.adoc[aws plugin]
* https://github.com/deviceinsight/kafkactl-plugins/blob/main/azure/README.adoc[azure plugin]

== Examples
@@ -469,7 +474,7 @@ kafkactl consume my-topic --from-timestamp 2014-04-26
----

The `from-timestamp` parameter supports different timestamp formats. It can either be a number representing the epoch milliseconds
or a string with a timestamp in one of the https://github.com/deviceinsight/kafkactl/blob/main/util/util.go#L10[supported date formats].
or a string with a timestamp in one of the https://github.com/deviceinsight/kafkactl/blob/main/internal/util/util.go#L10[supported date formats].

*NOTE:* `--from-timestamp` is not designed to schedule the beginning of consumer's consumption. The offset corresponding to the timestamp is computed at the beginning of the process. So if you set it to a date in the future, the consumer will start from the latest offset.

@@ -482,7 +487,7 @@ kafkactl consume my-topic --from-timestamp 2017-07-19T03:30:00 --to-timestamp 20

The `to-timestamp` parameter supports the same formats as `from-timestamp`.

*NOTE:* `--to-timestamp` is not designed to schedule the end of consumer's consumption. The offset corresponding to the timestamp is computed at the begininng of the process. So if you set it to a date in the future, the consumer will stop at the current latest offset.
*NOTE:* `--to-timestamp` is not designed to schedule the end of consumer's consumption. The offset corresponding to the timestamp is computed at the beginning of the process. So if you set it to a date in the future, the consumer will stop at the current latest offset.

The following example prints keys in hex and values in base64:

@@ -623,7 +628,7 @@ Producing protobuf message converted from JSON:
kafkactl produce my-topic --key='{"keyField":123}' --key-proto-type MyKeyMessage --value='{"valueField":"value"}' --value-proto-type MyValueMessage --proto-file kafkamsg.proto
----

A more complex protobuf message converted from a multi-line JSON string can be produced using a file input with custom separators.
A more complex protobuf message converted from a multi-line JSON string can be produced using a file input with custom separators.

For example, if you have the following protobuf definition (`complex.proto`):

@@ -796,6 +801,42 @@ or with protoset
kafkactl consume <topic> --key-proto-type TopicKey --value-proto-type TopicValue --protoset-file kafkamsg.protoset
----

=== Create topics

The `create topic` command allows you to create one or multiple topics.

Basic usage:
[,bash]
----
kafkactl create topic my-topic
----

The partition count can be specified with:
[,bash]
----
kafkactl create topic my-topic --partitions 32
----

The replication factor can be specified with:
[,bash]
----
kafkactl create topic my-topic --replication-factor 3
----

Configs can also be provided:
[,bash]
----
kafkactl create topic my-topic --config retention.ms=3600000 --config=cleanup.policy=compact
----

The topic configuration can also be taken from an existing topic using the following:
[,bash]
----
kafkactl describe topic my-topic -o json > my-topic-config.json
kafkactl create topic my-topic-clone --file my-topic-config.json
----


=== Altering topics

Using the `alter topic` command allows you to change the partition count, replication factor and topic-level
13 changes: 1 addition & 12 deletions cmd/config/useContext.go
@@ -1,10 +1,7 @@
package config

import (
"sort"

"github.com/deviceinsight/kafkactl/v5/internal/global"

"github.com/deviceinsight/kafkactl/v5/internal/output"
"github.com/pkg/errors"

@@ -42,15 +39,7 @@ func newUseContextCmd() *cobra.Command {
return nil, cobra.ShellCompDirectiveNoFileComp
}

contextMap := viper.GetStringMap("contexts")
contexts := make([]string, 0, len(contextMap))
for k := range contextMap {
contexts = append(contexts, k)
}

sort.Strings(contexts)

return contexts, cobra.ShellCompDirectiveNoFileComp
return global.ListAvailableContexts(), cobra.ShellCompDirectiveNoFileComp
},
}
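The hunk above replaces inline completion code with `global.ListAvailableContexts()`. Judging from the removed lines, that helper collects the keys of the configured contexts map and sorts them; a standalone sketch of that presumed logic, with viper stubbed by a plain map:

```go
package main

import (
	"fmt"
	"sort"
)

// listAvailableContexts returns the configured context names in sorted
// order, mirroring the inline code removed in the diff (viper's
// "contexts" map is stubbed here by a plain map).
func listAvailableContexts(contextMap map[string]any) []string {
	contexts := make([]string, 0, len(contextMap))
	for k := range contextMap {
		contexts = append(contexts, k)
	}
	sort.Strings(contexts)
	return contexts
}

func main() {
	cfg := map[string]any{"prod": nil, "default": nil, "staging": nil}
	fmt.Println(listAvailableContexts(cfg)) // prints: [default prod staging]
}
```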

1 change: 1 addition & 0 deletions cmd/create/create-topic.go
@@ -27,6 +27,7 @@ func newCreateTopicCmd() *cobra.Command {
cmdCreateTopic.Flags().Int32VarP(&flags.Partitions, "partitions", "p", 1, "number of partitions")
cmdCreateTopic.Flags().Int16VarP(&flags.ReplicationFactor, "replication-factor", "r", -1, "replication factor")
cmdCreateTopic.Flags().BoolVarP(&flags.ValidateOnly, "validate-only", "v", false, "validate only")
cmdCreateTopic.Flags().StringVarP(&flags.File, "file", "f", "", "file with topic description")
cmdCreateTopic.Flags().StringArrayVarP(&flags.Configs, "config", "c", flags.Configs, "configs in format `key=value`")

return cmdCreateTopic