SASL/SSL Configuration

Advanced security configuration for Kafka containers with SASL authentication and SSL/TLS encryption. This enables production-like security testing scenarios with authenticated and encrypted connections.

Key Information for Agents

Required for SASL/SSL:

  • PKCS12 format certificates (.pfx files) for keystore and optional truststore
  • Certificate passphrases (must match certificate files)
  • Confluent Platform >= 7.5.0 when using KRaft mode with SASL/SSL
  • SASL mechanism: SCRAM-SHA-256 or SCRAM-SHA-512

Default Behaviors:

  • Dual listeners: Both plaintext (port 9093) and secure (custom port) listeners are available
  • Plaintext listener: Always enabled on port 9093
  • Secure listener: Enabled on custom port specified in withSaslSslListener()
  • User creation timing: ZooKeeper mode creates user after container starts; KRaft mode creates user during initialization
  • Certificate format: Must be PKCS12 (.pfx); PEM certificates must be converted

Error Conditions:

  • KRaft + SASL with Confluent Platform < 7.5.0: Throws error during start()
  • Invalid certificate passphrases: Throws error during container startup
  • Malformed PKCS12 files: Throws error during container startup
  • Missing keystore: Required parameter; missing causes configuration error
  • Missing truststore: Not an error; the truststore is optional and only needed for client certificate authentication or mTLS

Edge Cases:

  • Port conflicts: Choose secure listener port that doesn't conflict with default ports (9092, 9093, 9094)
  • Certificate content: Accepts Buffer, string (base64), or Readable stream
  • Multiple SASL users: Only one user configured via withSaslSslListener(); additional users require manual configuration via exec()
  • Client certificate validation: Requires truststore configuration
  • Network communication: Secure listener accessible via network aliases when using Docker networks
  • Certificate expiration: Expired certificates cause connection failures; ensure certificates are valid
  • Passphrase security: Passphrases are required but not validated until container startup

Capabilities

SASL/SSL Listener Configuration

Configure a secure listener with SASL authentication and SSL/TLS encryption for Kafka connections.

/**
 * Configure SASL/SSL authentication listener for secure Kafka connections
 * @param options - SASL/SSL configuration options
 * @returns Container instance for method chaining
 * @throws Error if KRaft mode with version < 7.5.0
 * @throws Error if certificate passphrases are invalid or files are malformed
 */
withSaslSslListener(options: SaslSslListenerOptions): this;

interface SaslSslListenerOptions {
  /** SASL authentication configuration */
  sasl: SaslOptions;
  /** Port number for the secure listener */
  port: number;
  /** Server keystore configuration (PKCS12 format) */
  keystore: PKCS12CertificateStore;
  /** Optional server truststore configuration (PKCS12 format) */
  truststore?: PKCS12CertificateStore;
}

interface SaslOptions {
  /** SASL authentication mechanism */
  mechanism: "SCRAM-SHA-256" | "SCRAM-SHA-512";
  /** User credentials for authentication */
  user: User;
}

interface User {
  /** Username for SASL authentication */
  name: string;
  /** Password for SASL authentication */
  password: string;
}

interface PKCS12CertificateStore {
  /** Certificate content as Buffer, string, or Readable stream */
  content: Buffer | string | Readable;
  /** Passphrase to decrypt the certificate */
  passphrase: string;
}

Version Requirements

  • With ZooKeeper Mode: Supported on all Kafka versions that support SASL/SSL
  • With KRaft Mode: Requires Confluent Platform >= 7.5.0

When using withSaslSslListener() with withKraft(), the container will throw an error if the Confluent Platform version is below 7.5.0. The error occurs during start() method execution.

Version Detection:

  • Container automatically detects Confluent Platform version from image tag
  • Version check happens before container startup
  • Error message indicates minimum required version
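
For example, a minimal sketch of catching the version error in a test. The 7.4.0 tag below is illustrative (any tag below 7.5.0 triggers the check), and the exact error message is implementation-defined:

import fs from "fs";
import { KafkaContainer } from "@testcontainers/kafka";

try {
  await new KafkaContainer("confluentinc/cp-kafka:7.4.0") // illustrative tag below 7.5.0
    .withKraft()
    .withSaslSslListener({
      port: 9096,
      sasl: {
        mechanism: "SCRAM-SHA-512",
        user: { name: "app-user", password: "userPassword" },
      },
      keystore: {
        content: fs.readFileSync("kafka.server.keystore.pfx"),
        passphrase: "serverKeystorePassword",
      },
    })
    .start();
} catch (err) {
  // start() rejects because KRaft + SASL/SSL requires Confluent Platform >= 7.5.0
  console.error(err);
}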

Security Features

The SASL/SSL listener provides:

  • Encryption: TLS v1.2 for data in transit
  • Authentication: SCRAM-SHA-256 or SCRAM-SHA-512 mechanisms
  • Certificate Management: PKCS12 keystore and optional truststore
  • Dual Listeners: Maintains plaintext listener (port 9093) alongside secure listener
  • User Management: Automatic user creation with specified credentials
  • Network Security: Encrypted communication between clients and broker

Usage Examples

Basic SASL/SSL with ZooKeeper Mode

import fs from "fs";
import { KafkaContainer } from "@testcontainers/kafka";

await using container = await new KafkaContainer("confluentinc/cp-kafka:7.9.1")
  .withSaslSslListener({
    port: 9096,
    sasl: {
      mechanism: "SCRAM-SHA-512",
      user: {
        name: "app-user",
        password: "userPassword",
      },
    },
    keystore: {
      content: fs.readFileSync("kafka.server.keystore.pfx"),
      passphrase: "serverKeystorePassword",
    },
    truststore: {
      content: fs.readFileSync("kafka.server.truststore.pfx"),
      passphrase: "serverTruststorePassword",
    },
  })
  .start();

// Get connection details for secure listener
const host = container.getHost();
const securePort = container.getMappedPort(9096);
const secureBrokers = [`${host}:${securePort}`];

// Plaintext listener is still available on port 9093
const plaintextPort = container.getMappedPort(9093);
const plaintextBrokers = [`${host}:${plaintextPort}`];

Important Notes:

  • User is created after container starts (ZooKeeper mode)
  • Both listeners are accessible; choose based on security requirements
  • Keystore is required; truststore is optional

SASL/SSL with KRaft Mode

import fs from "fs";
import { KafkaContainer } from "@testcontainers/kafka";

// KRaft mode with SASL requires Confluent Platform >= 7.5.0
await using container = await new KafkaContainer("confluentinc/cp-kafka:7.9.1")
  .withKraft()
  .withSaslSslListener({
    port: 9096,
    sasl: {
      mechanism: "SCRAM-SHA-512",
      user: {
        name: "app-user",
        password: "userPassword",
      },
    },
    keystore: {
      content: fs.readFileSync("kafka.server.keystore.pfx"),
      passphrase: "serverKeystorePassword",
    },
    truststore: {
      content: fs.readFileSync("kafka.server.truststore.pfx"),
      passphrase: "serverTruststorePassword",
    },
  })
  .start();

Important Notes:

  • Version check: Container validates version >= 7.5.0 before starting
  • User creation: User is created during container initialization (before Kafka starts)
  • Error handling: Throws error immediately if version requirement not met

SASL/SSL with Docker Network

When using Docker networks, other containers can connect to Kafka using the secure listener via network aliases.

import fs from "fs";
import { Network } from "testcontainers";
import { KafkaContainer } from "@testcontainers/kafka";

await using network = await new Network().start();

await using kafka = await new KafkaContainer("confluentinc/cp-kafka:7.9.1")
  .withNetwork(network)
  .withNetworkAliases("kafka")
  .withSaslSslListener({
    port: 9096,
    sasl: {
      mechanism: "SCRAM-SHA-512",
      user: {
        name: "app-user",
        password: "userPassword",
      },
    },
    keystore: {
      content: fs.readFileSync("kafka.server.keystore.pfx"),
      passphrase: "serverKeystorePassword",
    },
    truststore: {
      content: fs.readFileSync("kafka.server.truststore.pfx"),
      passphrase: "serverTruststorePassword",
    },
  })
  .start();

// Other containers on the network can connect via:
// - kafka:9096 (secure listener within network)
// - kafka:9093 (plaintext listener within network)
// - kafka:9092 (internal broker port, not accessible from host)

Network Communication:

  • Containers on same network use network aliases (e.g., "kafka:9096")
  • Host connections use getHost() and getMappedPort() (e.g., "localhost:32768")
  • Internal broker port (9092) is not mapped to host; only accessible within container
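
As a small sketch of how the broker address differs by context, continuing from the kafka container variable in the example above:

// From the test process on the host, use the mapped port:
const hostBrokers = [`${kafka.getHost()}:${kafka.getMappedPort(9096)}`];

// From another container attached to the same Docker network, use the
// network alias and the listener port directly (no port mapping needed):
const networkBrokers = ["kafka:9096"];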

Using SCRAM-SHA-256

import fs from "fs";
import { KafkaContainer } from "@testcontainers/kafka";

await using container = await new KafkaContainer("confluentinc/cp-kafka:7.9.1")
  .withSaslSslListener({
    port: 9096,
    sasl: {
      mechanism: "SCRAM-SHA-256",  // Using SHA-256 instead of SHA-512
      user: {
        name: "app-user",
        password: "userPassword",
      },
    },
    keystore: {
      content: fs.readFileSync("kafka.server.keystore.pfx"),
      passphrase: "serverKeystorePassword",
    },
  })
  .start();

Mechanism Selection:

  • SCRAM-SHA-256: Lower computational cost
  • SCRAM-SHA-512: Stronger hash at slightly higher computational cost
  • Choose based on security requirements and performance needs

Using Certificate Content as Buffer

import fs from "fs";
import { KafkaContainer } from "@testcontainers/kafka";

const keystoreBuffer = fs.readFileSync("kafka.server.keystore.pfx");
const truststoreBuffer = fs.readFileSync("kafka.server.truststore.pfx");

await using container = await new KafkaContainer("confluentinc/cp-kafka:7.9.1")
  .withSaslSslListener({
    port: 9096,
    sasl: {
      mechanism: "SCRAM-SHA-512",
      user: {
        name: "app-user",
        password: "userPassword",
      },
    },
    keystore: {
      content: keystoreBuffer,  // Buffer directly
      passphrase: "serverKeystorePassword",
    },
    truststore: {
      content: truststoreBuffer,  // Buffer directly
      passphrase: "serverTruststorePassword",
    },
  })
  .start();

Using Certificate Content as Base64 String

import fs from "fs";
import { KafkaContainer } from "@testcontainers/kafka";

const keystoreBase64 = fs.readFileSync("kafka.server.keystore.pfx").toString("base64");
const truststoreBase64 = fs.readFileSync("kafka.server.truststore.pfx").toString("base64");

await using container = await new KafkaContainer("confluentinc/cp-kafka:7.9.1")
  .withSaslSslListener({
    port: 9096,
    sasl: {
      mechanism: "SCRAM-SHA-512",
      user: {
        name: "app-user",
        password: "userPassword",
      },
    },
    keystore: {
      content: keystoreBase64,  // Base64 string
      passphrase: "serverKeystorePassword",
    },
    truststore: {
      content: truststoreBase64,  // Base64 string
      passphrase: "serverTruststorePassword",
    },
  })
  .start();
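
Using Certificate Content as Readable Stream

The content field also accepts a Readable stream, per the PKCS12CertificateStore interface above. A minimal sketch using fs.createReadStream:

import fs from "fs";
import { KafkaContainer } from "@testcontainers/kafka";

await using container = await new KafkaContainer("confluentinc/cp-kafka:7.9.1")
  .withSaslSslListener({
    port: 9096,
    sasl: {
      mechanism: "SCRAM-SHA-512",
      user: {
        name: "app-user",
        password: "userPassword",
      },
    },
    keystore: {
      content: fs.createReadStream("kafka.server.keystore.pfx"),  // Readable stream
      passphrase: "serverKeystorePassword",
    },
  })
  .start();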

Client Configuration

When connecting to a Kafka container with SASL/SSL enabled, your Kafka client must be configured appropriately.

Using @confluentinc/kafka-javascript

import { Kafka } from "@confluentinc/kafka-javascript";
import { KafkaContainer } from "@testcontainers/kafka";

await using container = await new KafkaContainer("confluentinc/cp-kafka:7.9.1")
  .withSaslSslListener({
    port: 9096,
    sasl: {
      mechanism: "SCRAM-SHA-512",
      user: {
        name: "app-user",
        password: "userPassword",
      },
    },
    keystore: {
      content: fs.readFileSync("kafka.server.keystore.pfx"),
      passphrase: "serverKeystorePassword",
    },
  })
  .start();

const kafka = new Kafka({
  kafkaJS: {
    brokers: [`${container.getHost()}:${container.getMappedPort(9096)}`],
    ssl: true,
  },
  "sasl.mechanism": "SCRAM-SHA-512",
  "sasl.username": "app-user",
  "sasl.password": "userPassword",
  "security.protocol": "sasl_ssl",
  "ssl.ca.location": "/path/to/kafka.client.truststore.pem",
});

const producer = kafka.producer();
await producer.connect();

Client Requirements:

  • SSL enabled: ssl: true or "security.protocol": "sasl_ssl"
  • SASL mechanism: Must match server configuration (SCRAM-SHA-256 or SCRAM-SHA-512)
  • Credentials: Username and password must match server configuration
  • CA certificate: Client needs CA certificate to verify server identity

Using kafkajs

import { Kafka } from "kafkajs";
import fs from "fs";
import { KafkaContainer } from "@testcontainers/kafka";

await using container = await new KafkaContainer("confluentinc/cp-kafka:7.9.1")
  .withSaslSslListener({
    port: 9096,
    sasl: {
      mechanism: "SCRAM-SHA-512",
      user: {
        name: "app-user",
        password: "userPassword",
      },
    },
    keystore: {
      content: fs.readFileSync("kafka.server.keystore.pfx"),
      passphrase: "serverKeystorePassword",
    },
  })
  .start();

const kafka = new Kafka({
  brokers: [`${container.getHost()}:${container.getMappedPort(9096)}`],
  ssl: {
    ca: [fs.readFileSync("/path/to/ca-cert.pem", "utf-8")],
  },
  sasl: {
    mechanism: "scram-sha-512",
    username: "app-user",
    password: "userPassword",
  },
});

const producer = kafka.producer();
await producer.connect();

kafkajs Configuration:

  • SSL: Configured via ssl object with CA certificates
  • SASL: Configured via sasl object with mechanism, username, password
  • Mechanism names: Use lowercase (e.g., "scram-sha-512" not "SCRAM-SHA-512")
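
For completeness, a short consumer sketch reusing the kafka instance configured above (the topic and group names are illustrative):

const consumer = kafka.consumer({ groupId: "test-group" });
await consumer.connect();
await consumer.subscribe({ topic: "test-topic", fromBeginning: true });

// eachMessage is invoked per record; run() itself resolves immediately
await consumer.run({
  eachMessage: async ({ topic, partition, message }) => {
    console.log(`${topic}[${partition}]: ${message.value?.toString()}`);
  },
});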

Certificate Management

The module requires certificates in PKCS12 (.pfx) format. If you have PEM certificates, you can convert them using OpenSSL:

# Convert PEM to PKCS12 for keystore
openssl pkcs12 -export \
  -in server.crt \
  -inkey server.key \
  -out kafka.server.keystore.pfx \
  -name kafka-server \
  -password pass:serverKeystorePassword

# Convert PEM to PKCS12 for truststore
openssl pkcs12 -export \
  -in ca-cert.pem \
  -nokeys \
  -out kafka.server.truststore.pfx \
  -password pass:serverTruststorePassword

Certificate Requirements:

  • Format: PKCS12 (.pfx) only
  • Keystore: Must contain server certificate and private key
  • Truststore: Contains CA certificates for client certificate validation
  • Passphrase: Required for both keystore and truststore
  • Validity: Certificates must be valid (not expired)

Generating Self-Signed Certificates:

# Generate CA certificate
openssl req -new -x509 -keyout ca-key.pem -out ca-cert.pem -days 365

# Generate server certificate
openssl req -new -keyout server-key.pem -out server.csr
openssl x509 -req -in server.csr -CA ca-cert.pem -CAkey ca-key.pem -CAcreateserial -out server.crt -days 365

# Convert to PKCS12
openssl pkcs12 -export \
  -in server.crt \
  -inkey server-key.pem \
  -out kafka.server.keystore.pfx \
  -name kafka-server \
  -password pass:serverKeystorePassword

Important Notes

  1. Dual Listeners: The secure listener coexists with the plaintext listener. Both are available:

    • Plaintext: port 9093 (always enabled)
    • Secure: custom port (e.g., 9096)
  2. Port Configuration: Choose a secure listener port that doesn't conflict with default Kafka ports:

    • 9092: Internal broker port
    • 9093: Default plaintext client port
    • 9094: KRaft controller port
    • Recommended secure ports: 9096, 9097, etc.
  3. Truststore: The truststore parameter is optional. Include it when:

    • Using client certificate authentication
    • Requiring mutual TLS (mTLS)
    • Working with custom certificate chains
  4. Certificate Content: The content field accepts any of:

    • Buffer: From fs.readFileSync() without encoding
    • string: Base64-encoded certificate data
    • Readable: Stream of certificate data
  5. Authentication Timing:

    • ZooKeeper mode: User created after container starts via kafka-configs command
    • KRaft mode: User created during container initialization before Kafka starts
  6. Error Handling: The container will throw errors for:

    • KRaft + SASL with Confluent Platform < 7.5.0
    • Invalid certificate passphrases
    • Malformed PKCS12 files
    • Missing required keystore
  7. Client Certificate Validation: When using truststore:

    • Server validates client certificates against truststore
    • Clients must present valid certificates signed by trusted CA
    • Without truststore, only server authentication is performed
  8. Multiple Users: The withSaslSslListener() method configures a single user. To add additional users:

    • Use container.exec() to run kafka-configs commands (see the sketch after this list)
    • Create users after container starts (ZooKeeper mode)
    • Create users during initialization (KRaft mode, more complex)
  9. Certificate Expiration: Expired certificates cause connection failures:

    • Check certificate validity before use
    • Regenerate certificates before expiration
    • Monitor certificate expiration dates in production-like tests
  10. Network Security: When using Docker networks:

    • Secure listener accessible via network aliases
    • Encryption applies to all network communication
    • Plaintext listener also available (consider disabling for production-like tests)
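
As referenced in note 8, a sketch of adding a second SCRAM user after startup via container.exec(). The kafka-configs flags and the internal bootstrap address are assumptions based on the standard Kafka tooling (SCRAM credential updates over --bootstrap-server require Kafka 2.7+):

// Assumes "container" is a started KafkaContainer with a SASL/SSL listener.
const { exitCode, output } = await container.exec([
  "kafka-configs",
  "--bootstrap-server", "localhost:9092", // internal broker port inside the container
  "--alter",
  "--entity-type", "users",
  "--entity-name", "second-user",
  "--add-config", "SCRAM-SHA-512=[password=secondUserPassword]",
]);

if (exitCode !== 0) {
  throw new Error(`Failed to create SASL user: ${output}`);
}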