Kafka proxy is based on the idea of the Cloud SQL Proxy. It allows a service to connect to Kafka brokers without having to deal with SASL/PLAIN authentication and SSL certificates.
It works by opening TCP sockets on the local machine and proxying connections to the associated Kafka brokers when the sockets are used. The host and port in the Metadata and FindCoordinator responses received from the brokers are replaced by the local counterparts. For discovered brokers (not configured as bootstrap servers), local listeners are started on random ports. The dynamic local listener feature can be disabled, and an additional list of external server mappings provided instead.
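The address substitution can be sketched as follows (an illustrative Python model only — the actual implementation is in Go, and the function and mapping names here are invented):

```python
# Sketch: kafka-proxy replaces advertised broker addresses in Metadata and
# FindCoordinator responses with the corresponding local listener addresses.
def rewrite_brokers(brokers, mapping):
    """Map each advertised (host, port) to its local listener address."""
    rewritten = []
    for broker in brokers:
        local = mapping.get(broker)
        if local is None:
            # Discovered broker without a configured mapping: a dynamic
            # listener would be started on a random port (0 stands in here).
            local = ("127.0.0.1", 0)
        rewritten.append(local)
    return rewritten

mapping = {  # mirrors --bootstrap-server-mapping
    ("kafka-0.example.com", 9092): ("127.0.0.1", 32400),
    ("kafka-1.example.com", 9092): ("127.0.0.1", 32401),
}
print(rewrite_brokers([("kafka-0.example.com", 9092),
                       ("kafka-1.example.com", 9092)], mapping))
# → [('127.0.0.1', 32400), ('127.0.0.1', 32401)]
```

Clients therefore only ever see the local addresses, regardless of what the brokers advertise.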
The proxy can terminate TLS traffic and authenticate users with SASL/PLAIN. The credential verification method is configurable and uses the Golang plugin system over RPC.
The proxies can also authenticate each other using a pluggable method that is transparent to other Kafka servers and clients. Currently, Google ID tokens for service accounts are implemented: the proxy client requests and sends a service account JWT, and the proxy server receives it and validates it against Google JWKS.
Kafka API calls can be restricted to prevent some operations, e.g. topic deletion or produce requests.
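Conceptually, the restriction is a simple filter on the API key carried in every Kafka request header, as sketched below (illustrative only; key 20 is DeleteTopics and key 0 is Produce per the Kafka protocol guide):

```python
DELETE_TOPICS = 20   # Kafka API key for DeleteTopics
PRODUCE = 0          # Kafka API key for Produce

def is_allowed(api_key, forbidden_api_keys):
    """Reject a request whose API key appears in the forbidden set."""
    return api_key not in forbidden_api_keys

forbidden = {DELETE_TOPICS}  # e.g. proxy started with --forbidden-api-keys 20
print(is_allowed(DELETE_TOPICS, forbidden))  # → False
print(is_allowed(PRODUCE, forbidden))        # → True
```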
See:

Kafka proxy with Amazon MSK
Kafka protocol guide
The table below gives an overview of the supported Kafka versions (the listed version and all earlier Kafka versions). Not every Kafka release adds new messages/versions relevant to kafka-proxy, so newer Kafka versions may work as well.
kafka-proxy version | Kafka versions |
---|---|
 | from 0.11.0 |
0.2.9 | up to 2.8.0 |
0.3.1 | up to 3.4.0 |
0.3.11 | up to 3.7.0 |
0.3.12 | up to 3.9.0 |
Download the latest release
Linux
curl -Ls https://github.com/grepplabs/kafka-proxy/releases/download/v0.3.12/kafka-proxy-v0.3.12-linux-amd64.tar.gz | tar xz
macOS
curl -Ls https://github.com/grepplabs/kafka-proxy/releases/download/v0.3.12/kafka-proxy-v0.3.12-darwin-amd64.tar.gz | tar xz
Move the binary into your PATH.
sudo mv ./kafka-proxy /usr/local/bin/kafka-proxy
Or build from source:

make clean build
Docker images are available on Docker Hub.
You can start a kafka-proxy container to try it out with:
docker run --rm -p 30001-30003:30001-30003 grepplabs/kafka-proxy:0.3.12 server --bootstrap-server-mapping "localhost:19092,0.0.0.0:30001" --bootstrap-server-mapping "localhost:29092,0.0.0.0:30002" --bootstrap-server-mapping "localhost:39092,0.0.0.0:30003" --dial-address-mapping "localhost:19092,172.17.0.1:19092" --dial-address-mapping "localhost:29092,172.17.0.1:29092" --dial-address-mapping "localhost:39092,172.17.0.1:39092" --debug-enable
Kafka-proxy is now listening on localhost:30001, localhost:30002 and localhost:30003, connecting to the Kafka brokers running in Docker (network bridge gateway 172.17.0.1) which advertise PLAINTEXT listeners on localhost:19092, localhost:29092 and localhost:39092.
Docker images with the precompiled plugins in /opt/kafka-proxy/bin/ are tagged with <release>-all.
You can start a kafka-proxy container with the auth-ldap plugin to try it out with:
docker run --rm -p 30001-30003:30001-30003 grepplabs/kafka-proxy:0.3.12-all server --bootstrap-server-mapping "localhost:19092,0.0.0.0:30001" --bootstrap-server-mapping "localhost:29092,0.0.0.0:30002" --bootstrap-server-mapping "localhost:39092,0.0.0.0:30003" --dial-address-mapping "localhost:19092,172.17.0.1:19092" --dial-address-mapping "localhost:29092,172.17.0.1:29092" --dial-address-mapping "localhost:39092,172.17.0.1:39092" --debug-enable --auth-local-enable --auth-local-command=/opt/kafka-proxy/bin/auth-ldap --auth-local-param=--url=ldap://172.17.0.1:389 --auth-local-param=--start-tls=false --auth-local-param=--bind-dn=cn=admin,dc=example,dc=org --auth-local-param=--bind-passwd=admin --auth-local-param=--user-search-base=ou=people,dc=example,dc=org --auth-local-param=--user-filter="(&(objectClass=person)(uid=%u)(memberOf=cn=kafka-users,ou=realm-roles,dc=example,dc=org))"
Run the kafka-proxy server

Usage:
  kafka-proxy server [flags]

Flags:
      --auth-gateway-client-command string         Path to authentication plugin binary
      --auth-gateway-client-enable                 Enable gateway client authentication
      --auth-gateway-client-log-level string       Log level of the auth plugin (default "trace")
      --auth-gateway-client-magic uint             Magic bytes sent in the handshake
      --auth-gateway-client-method string          Authentication method
      --auth-gateway-client-param stringArray      Authentication plugin parameter
      --auth-gateway-client-timeout duration       Authentication timeout (default 10s)
      --auth-gateway-server-command string         Path to authentication plugin binary
      --auth-gateway-server-enable                 Enable proxy server authentication
      --auth-gateway-server-log-level string       Log level of the auth plugin (default "trace")
      --auth-gateway-server-magic uint             Magic bytes sent in the handshake
      --auth-gateway-server-method string          Authentication method
      --auth-gateway-server-param stringArray      Authentication plugin parameter
      --auth-gateway-server-timeout duration       Authentication timeout (default 10s)
      --auth-local-command string                  Path to authentication plugin binary
      --auth-local-enable                          Enable local SASL/PLAIN authentication performed by listener - SASL handshake will not be passed to kafka brokers
      --auth-local-log-level string                Log level of the auth plugin (default "trace")
      --auth-local-mechanism string                SASL mechanism used for local authentication: PLAIN or OAUTHBEARER (default "PLAIN")
      --auth-local-param stringArray               Authentication plugin parameter
      --auth-local-timeout duration                Authentication timeout (default 10s)
      --bootstrap-server-mapping stringArray       Mapping of Kafka bootstrap server address to local address (host:port,host:port(,advhost:advport))
      --debug-enable                               Enable Debug endpoint
      --debug-listen-address string                Debug listen address (default "0.0.0.0:6060")
      --default-listener-ip string                 Default listener IP (default "0.0.0.0")
      --dial-address-mapping stringArray           Mapping of target broker address to new one (host:port,host:port). The mapping is performed during connection establishment
      --dynamic-advertised-listener string         Advertised address for dynamic listeners. If empty, default-listener-ip is used
      --dynamic-listeners-disable                  Disable dynamic listeners
      --dynamic-sequential-min-port int            If set to non-zero, makes the dynamic listener use a sequential port starting with this value rather than a random port every time
      --external-server-mapping stringArray        Mapping of Kafka server address to external address (host:port,host:port). A listener for the external address is not started
      --forbidden-api-keys ints                    Forbidden Kafka request types. The restriction should prevent some Kafka operations e.g. 20 - DeleteTopics
      --forward-proxy string                       URL of the forward proxy. Supported schemas are socks5 and http
      --gssapi-auth-type string                    GSSAPI auth type: KEYTAB or USER (default "KEYTAB")
      --gssapi-disable-pa-fx-fast                  Used to configure the client to not use PA_FX_FAST
      --gssapi-keytab string                       krb5.keytab file location
      --gssapi-krb5 string                         krb5.conf file path (default "/etc/krb5.conf")
      --gssapi-password string                     Password for auth type USER
      --gssapi-realm string                        Realm
      --gssapi-servicename string                  ServiceName (default "kafka")
      --gssapi-spn-host-mapping stringToString     Mapping of Kafka servers address to SPN hosts (default [])
      --gssapi-username string                     Username (default "kafka")
  -h, --help                                       help for server
      --http-disable                               Disable HTTP endpoints
      --http-health-path string                    Path on which to serve the health endpoint (default "/health")
      --http-listen-address string                 Address that kafka-proxy is listening on (default "0.0.0.0:9080")
      --http-metrics-path string                   Path on which to expose metrics (default "/metrics")
      --kafka-client-id string                     An optional identifier to track the source of requests (default "kafka-proxy")
      --kafka-connection-read-buffer-size int      Size of the operating system's receive buffer associated with the connection. If zero, system default is used
      --kafka-connection-write-buffer-size int     Size of the operating system's transmit buffer associated with the connection. If zero, system default is used
      --kafka-dial-timeout duration                How long to wait for the initial connection (default 15s)
      --kafka-keep-alive duration                  Keep alive period for an active network connection. If zero, keep-alives are disabled (default 1m0s)
      --kafka-max-open-requests int                Maximal number of open requests per tcp connection before sending on it blocks (default 256)
      --kafka-read-timeout duration                How long to wait for a response (default 30s)
      --kafka-write-timeout duration               How long to wait for a transmit (default 30s)
      --log-format string                          Log format text or json (default "text")
      --log-level string                           Log level debug, info, warning, error, fatal or panic (default "info")
      --log-level-fieldname string                 Log level fieldname for json format (default "@level")
      --log-msg-fieldname string                   Message fieldname for json format (default "@message")
      --log-time-fieldname string                  Time fieldname for json format (default "@timestamp")
      --producer-acks-0-disabled                   Assume fire-and-forget is never sent by the producer. Enabling this parameter will increase performance
      --proxy-listener-ca-chain-cert-file string   PEM encoded CA's certificate file. If provided, client certificate is required and verified
      --proxy-listener-cert-file string            PEM encoded file with server certificate
      --proxy-listener-cipher-suites strings       List of supported cipher suites
      --proxy-listener-curve-preferences strings   List of curve preferences
      --proxy-listener-keep-alive duration         Keep alive period for an active network connection. If zero, keep-alives are disabled (default 1m0s)
      --proxy-listener-key-file string             PEM encoded file with private key for the server certificate
      --proxy-listener-key-password string         Password to decrypt rsa private key
      --proxy-listener-read-buffer-size int        Size of the operating system's receive buffer associated with the connection. If zero, system default is used
      --proxy-listener-tls-enable                  Whether or not to use TLS listener
      --proxy-listener-tls-required-client-subject strings   Required client certificate subject; example: s:/CN=[value]/C=[state]/C=[DE,PL] or r:/CN=[^val.{2}$]/C=[state]/C=[DE,PL]; check manual for more details
      --proxy-listener-write-buffer-size int       Size of the operating system's transmit buffer associated with the connection. If zero, system default is used
      --proxy-request-buffer-size int              Request buffer size per tcp connection (default 4096)
      --proxy-response-buffer-size int             Response buffer size per tcp connection (default 4096)
      --sasl-aws-profile string                    AWS profile
      --sasl-aws-region string                     Region for AWS IAM Auth
      --sasl-enable                                Connect using SASL
      --sasl-jaas-config-file string               Location of JAAS config file with SASL username and password
      --sasl-method string                         SASL method to use (PLAIN, SCRAM-SHA-256, SCRAM-SHA-512, GSSAPI, AWS_MSK_IAM) (default "PLAIN")
      --sasl-password string                       SASL user password
      --sasl-plugin-command string                 Path to authentication plugin binary
      --sasl-plugin-enable                         Use plugin for SASL authentication
      --sasl-plugin-log-level string               Log level of the auth plugin (default "trace")
      --sasl-plugin-mechanism string               SASL mechanism used for proxy authentication: PLAIN or OAUTHBEARER (default "OAUTHBEARER")
      --sasl-plugin-param stringArray              Authentication plugin parameter
      --sasl-plugin-timeout duration               Authentication timeout (default 10s)
      --sasl-username string                       SASL user name
      --tls-ca-chain-cert-file string              PEM encoded CA's certificate file
      --tls-client-cert-file string                PEM encoded file with client certificate
      --tls-client-key-file string                 PEM encoded file with private key for the client certificate
      --tls-client-key-password string             Password to decrypt rsa private key
      --tls-enable                                 Whether or not to use TLS when connecting to the broker
      --tls-insecure-skip-verify                   It controls whether a client verifies the server's certificate chain and host name
      --tls-same-client-cert-enable                Use only when mutual TLS is enabled on proxy and broker. It controls whether a proxy validates if proxy client certificate exactly matches brokers client cert (tls-client-cert-file)
kafka-proxy server --bootstrap-server-mapping "192.168.99.100:32400,0.0.0.0:32399"

kafka-proxy server --bootstrap-server-mapping "192.168.99.100:32400,127.0.0.1:32400" --bootstrap-server-mapping "192.168.99.100:32401,127.0.0.1:32401" --bootstrap-server-mapping "192.168.99.100:32402,127.0.0.1:32402" --dynamic-listeners-disable

kafka-proxy server --bootstrap-server-mapping "kafka-0.example.com:9092,0.0.0.0:32401,kafka-0.grepplabs.com:9092" --bootstrap-server-mapping "kafka-1.example.com:9092,0.0.0.0:32402,kafka-1.grepplabs.com:9092" --bootstrap-server-mapping "kafka-2.example.com:9092,0.0.0.0:32403,kafka-2.grepplabs.com:9092" --dynamic-listeners-disable

kafka-proxy server --bootstrap-server-mapping "192.168.99.100:32400,127.0.0.1:32400" --external-server-mapping "192.168.99.100:32401,127.0.0.1:32402" --external-server-mapping "192.168.99.100:32402,127.0.0.1:32403" --forbidden-api-keys 20

export BOOTSTRAP_SERVER_MAPPING="192.168.99.100:32401,0.0.0.0:32402 192.168.99.100:32402,0.0.0.0:32403" && kafka-proxy server
kafka-proxy server --bootstrap-server-mapping "localhost:19092,0.0.0.0:30001,localhost:30001" --bootstrap-server-mapping "localhost:29092,0.0.0.0:30002,localhost:30002" --bootstrap-server-mapping "localhost:39092,0.0.0.0:30003,localhost:30003" --proxy-listener-cert-file "tls/ca-cert.pem" --proxy-listener-key-file "tls/ca-key.pem" --proxy-listener-tls-enable --proxy-listener-cipher-suites TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_AES_128_GCM_SHA256
SASL authentication is initiated by the proxy. SASL authentication is disabled on the clients and enabled on the Kafka brokers.
kafka-proxy server --bootstrap-server-mapping "kafka-0.grepplabs.com:9093,0.0.0.0:32399" --tls-enable --tls-insecure-skip-verify --sasl-enable --sasl-username myuser --sasl-password mysecret

kafka-proxy server --bootstrap-server-mapping "kafka-0.example.com:9092,0.0.0.0:30001" --bootstrap-server-mapping "kafka-1.example.com:9092,0.0.0.0:30002" --bootstrap-server-mapping "kafka-1.example.com:9093,0.0.0.0:30003" --sasl-enable --sasl-username "alice" --sasl-password "alice-secret" --sasl-method "SCRAM-SHA-512" --log-level debug

make clean build plugin.unsecured-jwt-provider && build/kafka-proxy server --sasl-enable --sasl-plugin-enable --sasl-plugin-mechanism "OAUTHBEARER" --sasl-plugin-command build/unsecured-jwt-provider --sasl-plugin-param "--claim-sub=alice" --bootstrap-server-mapping "192.168.99.100:32400,127.0.0.1:32400"
GSSAPI / Kerberos authentication
kafka-proxy server --bootstrap-server-mapping "kafka-0.grepplabs.com:9092,127.0.0.1:32500" --bootstrap-server-mapping "kafka-1.grepplabs.com:9092,127.0.0.1:32501" --bootstrap-server-mapping "kafka-2.grepplabs.com:9092,127.0.0.1:32502" --sasl-enable --sasl-method "GSSAPI" --gssapi-servicename kafka --gssapi-username kafkaclient1 --gssapi-realm EXAMPLE.COM --gssapi-krb5 /etc/krb5.conf --gssapi-keytab /etc/security/keytabs/kafka.keytab
AWS MSK IAM
kafka-proxy server --bootstrap-server-mapping "b-1-public.kafkaproxycluster.uls9ao.c4.kafka.eu-central-1.amazonaws.com:9198,0.0.0.0:30001" --bootstrap-server-mapping "b-2-public.kafkaproxycluster.uls9ao.c4.kafka.eu-central-1.amazonaws.com:9198,0.0.0.0:30002" --bootstrap-server-mapping "b-3-public.kafkaproxycluster.uls9ao.c4.kafka.eu-central-1.amazonaws.com:9198,0.0.0.0:30003" --tls-enable --tls-insecure-skip-verify --sasl-enable --sasl-method "AWS_MSK_IAM" --sasl-aws-region "eu-central-1" --log-level debug
SASL authentication is performed by the proxy. SASL authentication is enabled on the clients and disabled on the Kafka brokers.
make clean build plugin.auth-user && build/kafka-proxy server --proxy-listener-key-file "server-key.pem" --proxy-listener-cert-file "server-cert.pem" --proxy-listener-ca-chain-cert-file "ca.pem" --proxy-listener-tls-enable --auth-local-enable --auth-local-command build/auth-user --auth-local-param "--username=my-test-user" --auth-local-param "--password=my-test-password"

make clean build plugin.auth-ldap && build/kafka-proxy server --auth-local-enable --auth-local-command build/auth-ldap --auth-local-param "--url=ldaps://ldap.example.com:636" --auth-local-param "--user-dn=cn=users,dc=exemple,dc=com" --auth-local-param "--user-attr=uid" --bootstrap-server-mapping "192.168.99.100:32400,127.0.0.1:32400"

make clean build plugin.unsecured-jwt-info && build/kafka-proxy server --auth-local-enable --auth-local-command build/unsecured-jwt-info --auth-local-mechanism "OAUTHBEARER" --auth-local-param "--claim-sub=alice" --auth-local-param "--claim-sub=bob" --bootstrap-server-mapping "192.168.99.100:32400,127.0.0.1:32400"
Verify that the client certificate used by the proxy client is exactly the same as the client certificate used for the authentication initiated by the proxy:
kafka-proxy server --bootstrap-server-mapping "kafka-0.grepplabs.com:9093,0.0.0.0:32399" --tls-enable --tls-client-cert-file client.crt --tls-client-key-file client.pem --tls-client-key-password changeit --proxy-listener-tls-enable --proxy-listener-key-file server.pem --proxy-listener-cert-file server.crt --proxy-listener-key-password changeit --proxy-listener-ca-chain-cert-file ca.crt --tls-same-client-cert-enable
Authentication between the kafka-proxy client and the kafka-proxy server using Google-ID (service account JWT):
kafka-proxy server --bootstrap-server-mapping "kafka-0.grepplabs.com:9092,127.0.0.1:32500" --bootstrap-server-mapping "kafka-1.grepplabs.com:9092,127.0.0.1:32501" --bootstrap-server-mapping "kafka-2.grepplabs.com:9092,127.0.0.1:32502" --dynamic-listeners-disable --http-disable --proxy-listener-tls-enable --proxy-listener-cert-file=/var/run/secret/server.cert.pem --proxy-listener-key-file=/var/run/secret/server.key.pem --auth-gateway-server-enable --auth-gateway-server-method google-id --auth-gateway-server-magic 3285573610483682037 --auth-gateway-server-command google-id-info --auth-gateway-server-param "--timeout=10" --auth-gateway-server-param "--audience=tcp://kafka-gateway.grepplabs.com" --auth-gateway-server-param "--email-regex=^[email protected]$"

kafka-proxy server --bootstrap-server-mapping "127.0.0.1:32500,127.0.0.1:32400" --bootstrap-server-mapping "127.0.0.1:32501,127.0.0.1:32401" --bootstrap-server-mapping "127.0.0.1:32502,127.0.0.1:32402" --dynamic-listeners-disable --http-disable --tls-enable --tls-ca-chain-cert-file /var/run/secret/client/ca-chain.cert.pem --auth-gateway-client-enable --auth-gateway-client-method google-id --auth-gateway-client-magic 3285573610483682037 --auth-gateway-client-command google-id-provider --auth-gateway-client-param "--credentials-file=/var/run/secret/client/service-account.json" --auth-gateway-client-param "--target-audience=tcp://kafka-gateway.grepplabs.com" --auth-gateway-client-param "--timeout=10"
Connect through a test SOCKS5 proxy server:
kafka-proxy tools socks5-proxy --addr localhost:1080

kafka-proxy server --bootstrap-server-mapping "kafka-0.grepplabs.com:9092,127.0.0.1:32500" --bootstrap-server-mapping "kafka-1.grepplabs.com:9092,127.0.0.1:32501" --bootstrap-server-mapping "kafka-2.grepplabs.com:9092,127.0.0.1:32502" --forward-proxy socks5://localhost:1080
kafka-proxy tools socks5-proxy --addr localhost:1080 --username my-proxy-user --password my-proxy-password

kafka-proxy server --bootstrap-server-mapping "kafka-0.grepplabs.com:9092,127.0.0.1:32500" --bootstrap-server-mapping "kafka-1.grepplabs.com:9092,127.0.0.1:32501" --bootstrap-server-mapping "kafka-2.grepplabs.com:9092,127.0.0.1:32502" --forward-proxy socks5://my-proxy-user:my-proxy-password@localhost:1080
Connect through a test HTTP proxy server using the CONNECT method:
kafka-proxy tools http-proxy --addr localhost:3128

kafka-proxy server --bootstrap-server-mapping "kafka-0.grepplabs.com:9092,127.0.0.1:32500" --bootstrap-server-mapping "kafka-1.grepplabs.com:9092,127.0.0.1:32501" --bootstrap-server-mapping "kafka-2.grepplabs.com:9092,127.0.0.1:32502" --forward-proxy http://localhost:3128
kafka-proxy tools http-proxy --addr localhost:3128 --username my-proxy-user --password my-proxy-password

kafka-proxy server --bootstrap-server-mapping "kafka-0.grepplabs.com:9092,127.0.0.1:32500" --bootstrap-server-mapping "kafka-1.grepplabs.com:9092,127.0.0.1:32501" --bootstrap-server-mapping "kafka-2.grepplabs.com:9092,127.0.0.1:32502" --forward-proxy http://my-proxy-user:my-proxy-password@localhost:3128
Sometimes it may be necessary not only to verify that the client certificate is valid, but also to pin the client certificate DN for a concrete use case. This can be achieved with the following parameters:
--proxy-listener-tls-client-cert-validate-subject bool                         Whether to validate the client certificate subject (default false)
--proxy-listener-tls-required-client-subject-common-name string                Required client certificate subject common name
--proxy-listener-tls-required-client-subject-country stringArray               Required client certificate subject country
--proxy-listener-tls-required-client-subject-province stringArray              Required client certificate subject province
--proxy-listener-tls-required-client-subject-locality stringArray              Required client certificate subject locality
--proxy-listener-tls-required-client-subject-organization stringArray          Required client certificate subject organization
--proxy-listener-tls-required-client-subject-organizational-unit stringArray   Required client certificate subject organizational unit
By setting --proxy-listener-tls-client-cert-validate-subject true, kafka-proxy will check the client certificate DN fields against the values set with the --proxy-listener-tls-required-client-* parameters. Matching is always exact, and all non-empty values are combined with a logical AND. For example, to accept only valid certificates with country=DE and organization=grepplabs, configure kafka-proxy as follows:
kafka-proxy server --proxy-listener-tls-client-cert-validate-subject true --proxy-listener-tls-required-client-subject-country DE --proxy-listener-tls-required-client-subject-organization grepplabs
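The matching semantics can be illustrated with a small sketch (simplified: the list-valued variants that accept several countries or organizations are reduced to single values here):

```python
def subject_ok(subject, required):
    """Exact match on every non-empty required DN field, combined with AND."""
    return all(subject.get(field) == wanted
               for field, wanted in required.items() if wanted)

required = {"C": "DE", "O": "grepplabs"}  # country and organization
print(subject_ok({"C": "DE", "O": "grepplabs", "CN": "client-1"}, required))  # → True
print(subject_ok({"C": "PL", "O": "grepplabs"}, required))                    # → False
```

Fields that are left unset (empty) are simply ignored, so only the configured parts of the DN are pinned.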
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        prometheus.io/scrape: 'true'
    spec:
      containers:
        - name: kafka-proxy
          image: grepplabs/kafka-proxy:latest
          args:
            - 'server'
            - '--log-format=json'
            - '--bootstrap-server-mapping=kafka-0:9093,127.0.0.1:32400'
            - '--bootstrap-server-mapping=kafka-1:9093,127.0.0.1:32401'
            - '--bootstrap-server-mapping=kafka-2:9093,127.0.0.1:32402'
            - '--tls-enable'
            - '--tls-ca-chain-cert-file=/var/run/secret/kafka-ca-chain-certificate/ca-chain.cert.pem'
            - '--tls-client-cert-file=/var/run/secret/kafka-client-certificate/client.cert.pem'
            - '--tls-client-key-file=/var/run/secret/kafka-client-key/client.key.pem'
            - '--tls-client-key-password=$(TLS_CLIENT_KEY_PASSWORD)'
            - '--sasl-enable'
            - '--sasl-jaas-config-file=/var/run/secret/kafka-client-jaas/jaas.config'
          env:
            - name: TLS_CLIENT_KEY_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: tls-client-key-password
                  key: password
          volumeMounts:
            - name: sasl-jaas-config-file
              mountPath: /var/run/secret/kafka-client-jaas
            - name: tls-ca-chain-certificate
              mountPath: /var/run/secret/kafka-ca-chain-certificate
            - name: tls-client-cert-file
              mountPath: /var/run/secret/kafka-client-certificate
            - name: tls-client-key-file
              mountPath: /var/run/secret/kafka-client-key
          ports:
            - name: metrics
              containerPort: 9080
          livenessProbe:
            httpGet:
              path: /health
              port: 9080
            initialDelaySeconds: 5
            periodSeconds: 3
          readinessProbe:
            httpGet:
              path: /health
              port: 9080
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 2
            failureThreshold: 5
        - name: myapp
          image: myapp:latest
          ports:
            - name: metrics
              containerPort: 8080
          env:
            - name: BOOTSTRAP_SERVERS
              value: "127.0.0.1:32400,127.0.0.1:32401,127.0.0.1:32402"
      volumes:
        - name: sasl-jaas-config-file
          secret:
            secretName: sasl-jaas-config-file
        - name: tls-ca-chain-certificate
          secret:
            secretName: tls-ca-chain-certificate
        - name: tls-client-cert-file
          secret:
            secretName: tls-client-cert-file
        - name: tls-client-key-file
          secret:
            secretName: tls-client-key-file
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka-proxy
spec:
  selector:
    matchLabels:
      app: kafka-proxy
  replicas: 1
  serviceName: kafka-proxy
  template:
    metadata:
      labels:
        app: kafka-proxy
    spec:
      containers:
        - name: kafka-proxy
          image: grepplabs/kafka-proxy:latest
          args:
            - 'server'
            - '--log-format=json'
            - '--bootstrap-server-mapping=kafka-0:9093,127.0.0.1:32400'
            - '--bootstrap-server-mapping=kafka-1:9093,127.0.0.1:32401'
            - '--bootstrap-server-mapping=kafka-2:9093,127.0.0.1:32402'
            - '--tls-enable'
            - '--tls-ca-chain-cert-file=/var/run/secret/kafka-ca-chain-certificate/ca-chain.cert.pem'
            - '--tls-client-cert-file=/var/run/secret/kafka-client-certificate/client.cert.pem'
            - '--tls-client-key-file=/var/run/secret/kafka-client-key/client.key.pem'
            - '--tls-client-key-password=$(TLS_CLIENT_KEY_PASSWORD)'
            - '--sasl-enable'
            - '--sasl-jaas-config-file=/var/run/secret/kafka-client-jaas/jaas.config'
            - '--proxy-request-buffer-size=32768'
            - '--proxy-response-buffer-size=32768'
            - '--proxy-listener-read-buffer-size=32768'
            - '--proxy-listener-write-buffer-size=131072'
            - '--kafka-connection-read-buffer-size=131072'
            - '--kafka-connection-write-buffer-size=32768'
          env:
            - name: TLS_CLIENT_KEY_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: tls-client-key-password
                  key: password
          volumeMounts:
            - name: sasl-jaas-config-file
              mountPath: /var/run/secret/kafka-client-jaas
            - name: tls-ca-chain-certificate
              mountPath: /var/run/secret/kafka-ca-chain-certificate
            - name: tls-client-cert-file
              mountPath: /var/run/secret/kafka-client-certificate
            - name: tls-client-key-file
              mountPath: /var/run/secret/kafka-client-key
          ports:
            - name: metrics
              containerPort: 9080
            - name: kafka-0
              containerPort: 32400
            - name: kafka-1
              containerPort: 32401
            - name: kafka-2
              containerPort: 32402
          readinessProbe:
            httpGet:
              path: /health
              port: 9080
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 2
            failureThreshold: 5
          resources:
            requests:
              memory: 128Mi
              cpu: 1000m
      restartPolicy: Always
      volumes:
        - name: sasl-jaas-config-file
          secret:
            secretName: sasl-jaas-config-file
        - name: tls-ca-chain-certificate
          secret:
            secretName: tls-ca-chain-certificate
        - name: tls-client-cert-file
          secret:
            secretName: tls-client-cert-file
        - name: tls-client-key-file
          secret:
            secretName: tls-client-key-file
kubectl port-forward kafka-proxy-0 32400:32400 32401:32401 32402:32402
Use localhost:32400, localhost:32401 and localhost:32402 as the bootstrap servers.
kafka.properties
broker.id=0
advertised.listeners=PLAINTEXT://kafka-0.kafka-headless.kafka:9092
...
kubectl port-forward -n kafka kafka-0 9092:9092
kafka-proxy server --bootstrap-server-mapping "127.0.0.1:9092,0.0.0.0:19092" --dial-address-mapping "kafka-0.kafka-headless.kafka:9092,0.0.0.0:9092"
Use localhost:19092 as the bootstrap server.
Strimzi 0.13.0 CRD
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: test-cluster
  namespace: kafka
spec:
  kafka:
    version: 2.3.0
    replicas: 3
    listeners:
      plain: {}
      tls: {}
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      num.partitions: 60
      default.replication.factor: 3
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 20Gi
          deleteClaim: true
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 5Gi
      deleteClaim: true
  entityOperator:
    topicOperator: {}
    userOperator: {}
kubectl port-forward -n kafka test-cluster-kafka-0 9092:9092
kubectl port-forward -n kafka test-cluster-kafka-1 9093:9092
kubectl port-forward -n kafka test-cluster-kafka-2 9094:9092

kafka-proxy server --log-level debug --bootstrap-server-mapping "127.0.0.1:9092,0.0.0.0:19092" --bootstrap-server-mapping "127.0.0.1:9093,0.0.0.0:19093" --bootstrap-server-mapping "127.0.0.1:9094,0.0.0.0:19094" --dial-address-mapping "test-cluster-kafka-0.test-cluster-kafka-brokers.kafka.svc.cluster.local:9092,0.0.0.0:9092" --dial-address-mapping "test-cluster-kafka-1.test-cluster-kafka-brokers.kafka.svc.cluster.local:9092,0.0.0.0:9093" --dial-address-mapping "test-cluster-kafka-2.test-cluster-kafka-brokers.kafka.svc.cluster.local:9092,0.0.0.0:9094"
Use localhost:19092 as the bootstrap server.
Cloud SQL Proxy
Sarama