AWS Services

This document describes the AWS services supported by LocalStack, how to enable them in the container, and the service-specific behaviors, edge cases, and LocalStack-specific considerations to keep in mind.

Capabilities

Service Enum

Predefined AWS services that can be enabled in LocalStack.

/**
 * Enum of predefined AWS services supported by LocalStack
 * Each service has a LocalStack service name and a legacy port number
 * Implements the EnabledService interface
 */
public enum Service implements EnabledService {
    /** Amazon API Gateway - REST and WebSocket APIs */
    API_GATEWAY,

    /** Amazon EC2 - Virtual servers and compute resources */
    EC2,

    /** Amazon Kinesis - Real-time streaming data */
    KINESIS,

    /** Amazon DynamoDB - NoSQL database */
    DYNAMODB,

    /** Amazon DynamoDB Streams - Change data capture for DynamoDB */
    DYNAMODB_STREAMS,

    /** Amazon S3 - Object storage */
    S3,

    /** Amazon Kinesis Data Firehose - Data delivery streams */
    FIREHOSE,

    /** AWS Lambda - Serverless compute */
    LAMBDA,

    /** Amazon SNS - Pub/sub messaging */
    SNS,

    /** Amazon SQS - Message queuing */
    SQS,

    /** Amazon Redshift - Data warehouse */
    REDSHIFT,

    /** Amazon SES - Email sending service */
    SES,

    /** Amazon Route53 - DNS and domain management */
    ROUTE53,

    /** AWS CloudFormation - Infrastructure as code */
    CLOUDFORMATION,

    /** Amazon CloudWatch - Monitoring and observability */
    CLOUDWATCH,

    /** AWS Systems Manager Parameter Store */
    SSM,

    /** AWS Secrets Manager - Secrets storage */
    SECRETSMANAGER,

    /** AWS Step Functions - Workflow orchestration */
    STEPFUNCTIONS,

    /** Amazon CloudWatch Logs - Log aggregation */
    CLOUDWATCHLOGS,

    /** AWS Security Token Service - Temporary credentials */
    STS,

    /** AWS Identity and Access Management - Access control */
    IAM,

    /** AWS Key Management Service - Encryption key management */
    KMS;

    /**
     * Returns the LocalStack service name for this service
     * @return Service name as recognized by LocalStack
     */
    @Override
    public String getName();

    /**
     * Returns the legacy port number for this service
     * Only meaningful in legacy mode (LocalStack < 0.11); modern LocalStack serves every service on edge port 4566
     * @return The legacy service-specific port number (e.g., 4572 for S3)
     * @deprecated Since LocalStack 0.11, all services use port 4566
     */
    @Override
    @Deprecated
    public int getPort();
}

EnabledService Interface

Flexible interface for enabling services, including custom service names.

/**
 * Interface representing an enabled AWS service in LocalStack
 * Allows both predefined services (via Service enum) and custom service names
 */
public interface EnabledService {
    /**
     * Returns the LocalStack service name
     * @return Service name as recognized by LocalStack (e.g., "s3", "sqs", "events")
     */
    String getName();

    /**
     * Returns the port for the service
     * Default implementation returns 4566 (the modern LocalStack edge port)
     * @return Port number
     */
    default int getPort() {
        return 4566;
    }

    /**
     * Creates an EnabledService with a custom service name
     * Useful for services not in the Service enum
     *
     * @param name The LocalStack service name (e.g., "events", "stepfunctions")
     * @return EnabledService instance with the specified name
     */
    static EnabledService named(String name);
}

Service List

Complete list of predefined services with their LocalStack names and legacy ports:

| Service Enum | LocalStack Name | Legacy Port | Description |
| --- | --- | --- | --- |
| API_GATEWAY | apigateway | 4567 | Amazon API Gateway |
| EC2 | ec2 | 4597 | Amazon EC2 |
| KINESIS | kinesis | 4568 | Amazon Kinesis |
| DYNAMODB | dynamodb | 4569 | Amazon DynamoDB |
| DYNAMODB_STREAMS | dynamodbstreams | 4570 | DynamoDB Streams |
| S3 | s3 | 4572 | Amazon S3 |
| FIREHOSE | firehose | 4573 | Kinesis Data Firehose |
| LAMBDA | lambda | 4574 | AWS Lambda |
| SNS | sns | 4575 | Amazon SNS |
| SQS | sqs | 4576 | Amazon SQS |
| REDSHIFT | redshift | 4577 | Amazon Redshift |
| SES | ses | 4579 | Amazon SES |
| ROUTE53 | route53 | 4580 | Amazon Route53 |
| CLOUDFORMATION | cloudformation | 4581 | AWS CloudFormation |
| CLOUDWATCH | cloudwatch | 4582 | Amazon CloudWatch |
| SSM | ssm | 4583 | AWS Systems Manager |
| SECRETSMANAGER | secretsmanager | 4584 | AWS Secrets Manager |
| STEPFUNCTIONS | stepfunctions | 4585 | AWS Step Functions |
| CLOUDWATCHLOGS | logs | 4586 | CloudWatch Logs |
| STS | sts | 4592 | AWS STS |
| IAM | iam | 4593 | AWS IAM |
| KMS | kms | 4599 | AWS KMS |

Note: In LocalStack versions >= 0.11, all services use port 4566 regardless of the legacy port number.
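
In practice this means getEndpoint() and the per-service getEndpointOverride() calls all resolve to the same mapped edge port on a modern image. A minimal sketch, assuming the standard Testcontainers accessors used throughout this document:

import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.containers.localstack.LocalStackContainer.Service;
import org.testcontainers.utility.DockerImageName;
import java.net.URI;

LocalStackContainer localstack = new LocalStackContainer(
    DockerImageName.parse("localstack/localstack:2.0")
)
    .withServices(Service.S3, Service.SQS);

localstack.start();

// All services are served from the single edge port (4566), so the
// per-service overrides point at the same mapped host port as the edge endpoint
URI edge = localstack.getEndpoint();
URI s3Endpoint = localstack.getEndpointOverride(Service.S3);
URI sqsEndpoint = localstack.getEndpointOverride(Service.SQS);
// Expect s3Endpoint and sqsEndpoint to equal edge on LocalStack >= 0.11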

Usage Examples

Using Predefined Services

import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.containers.localstack.LocalStackContainer.Service;
import org.testcontainers.utility.DockerImageName;

// Single service
LocalStackContainer localstack = new LocalStackContainer(
    DockerImageName.parse("localstack/localstack:2.0")
)
    .withServices(Service.S3);

// Multiple services
LocalStackContainer localstack = new LocalStackContainer(
    DockerImageName.parse("localstack/localstack:2.0")
)
    .withServices(Service.S3, Service.SQS, Service.DYNAMODB);

// All major services
LocalStackContainer localstack = new LocalStackContainer(
    DockerImageName.parse("localstack/localstack:2.0")
)
    .withServices(
        Service.S3,
        Service.DYNAMODB,
        Service.SQS,
        Service.SNS,
        Service.LAMBDA,
        Service.KINESIS,
        Service.CLOUDFORMATION,
        Service.CLOUDWATCH,
        Service.CLOUDWATCHLOGS,
        Service.IAM,
        Service.STS,
        Service.KMS
    );

Using Custom Service Names

Some LocalStack services may not be in the predefined Service enum. Use EnabledService.named() to enable them:

import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.containers.localstack.LocalStackContainer.EnabledService;
import org.testcontainers.containers.localstack.LocalStackContainer.Service;
import org.testcontainers.utility.DockerImageName;

// Enable custom service not in the enum
LocalStackContainer localstack = new LocalStackContainer(
    DockerImageName.parse("localstack/localstack:2.0")
)
    .withServices(
        Service.S3,
        EnabledService.named("events"),        // Amazon EventBridge
        EnabledService.named("athena"),        // Amazon Athena
        EnabledService.named("glue")           // AWS Glue
    );

// Only custom services
LocalStackContainer localstack = new LocalStackContainer(
    DockerImageName.parse("localstack/localstack:2.0")
)
    .withServices(
        EnabledService.named("events"),
        EnabledService.named("scheduler")
    );

Service Name Mapping

Get the LocalStack service name from a Service enum:

String s3Name = Service.S3.getName();
System.out.println(s3Name);  // "s3"

String logsName = Service.CLOUDWATCHLOGS.getName();
System.out.println(logsName);  // "logs"

String dynamoName = Service.DYNAMODB.getName();
System.out.println(dynamoName);  // "dynamodb"

Legacy Port Numbers

In legacy mode (LocalStack < 0.11), services run on different ports:

LocalStackContainer legacyLocalstack = new LocalStackContainer(
    DockerImageName.parse("localstack/localstack:0.10.7")
)
    .withServices(Service.S3, Service.SQS);

legacyLocalstack.start();

// Get service-specific ports
int s3Port = Service.S3.getPort();
System.out.println(s3Port);  // 4572

int sqsPort = Service.SQS.getPort();
System.out.println(sqsPort);  // 4576

// Endpoints will use different ports
URI s3Endpoint = legacyLocalstack.getEndpointOverride(Service.S3);
URI sqsEndpoint = legacyLocalstack.getEndpointOverride(Service.SQS);
System.out.println(s3Endpoint);   // http://192.168.1.100:49201 (mapped from 4572)
System.out.println(sqsEndpoint);  // http://192.168.1.100:49205 (mapped from 4576)

Service-Specific Examples

S3 Example

import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.containers.localstack.LocalStackContainer.Service;
import org.testcontainers.utility.DockerImageName;

LocalStackContainer localstack = new LocalStackContainer(
    DockerImageName.parse("localstack/localstack:2.0")
)
    .withServices(Service.S3);

localstack.start();

try {
    S3Client s3 = S3Client.builder()
        .endpointOverride(localstack.getEndpoint())
        .credentialsProvider(
            StaticCredentialsProvider.create(
                AwsBasicCredentials.create(
                    localstack.getAccessKey(),
                    localstack.getSecretKey()
                )
            )
        )
        .region(Region.of(localstack.getRegion()))
        .forcePathStyle(true)  // Required for LocalStack
        .build();

    // Create bucket
    s3.createBucket(b -> b.bucket("my-bucket"));

    // Put object
    s3.putObject(
        p -> p.bucket("my-bucket").key("my-file.txt"),
        RequestBody.fromString("Hello LocalStack!")
    );
} finally {
    localstack.stop();
}

S3-Specific Considerations:

  • Must use path-style access (set forcePathStyle(true) in SDK v2 or withPathStyleAccessEnabled(true) in SDK v1; see the SDK v1 sketch after this list)
  • getEndpoint() returns IP address to ensure path-style access works
  • Bucket names must follow S3 naming rules (lowercase letters, numbers, and hyphens; no underscores)
  • Some S3 features may not be fully supported (e.g., versioning, lifecycle policies)
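
For projects still on AWS SDK for Java v1, a hedged sketch of the equivalent path-style setup, reusing the localstack container from the example above (the v1 classes come from the com.amazonaws libraries):

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

AmazonS3 s3v1 = AmazonS3ClientBuilder.standard()
    .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
        localstack.getEndpoint().toString(),
        localstack.getRegion()
    ))
    .withCredentials(new AWSStaticCredentialsProvider(
        new BasicAWSCredentials(localstack.getAccessKey(), localstack.getSecretKey())
    ))
    .withPathStyleAccessEnabled(true)  // path-style is required for LocalStack buckets
    .build();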

SQS Example

import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.containers.localstack.LocalStackContainer.Service;
import org.testcontainers.utility.DockerImageName;

LocalStackContainer localstack = new LocalStackContainer(
    DockerImageName.parse("localstack/localstack:2.0")
)
    .withServices(Service.SQS);

localstack.start();

try {
    SqsClient sqs = SqsClient.builder()
        .endpointOverride(localstack.getEndpoint())
        .credentialsProvider(
            StaticCredentialsProvider.create(
                AwsBasicCredentials.create(
                    localstack.getAccessKey(),
                    localstack.getSecretKey()
                )
            )
        )
        .region(Region.of(localstack.getRegion()))
        .build();

    // Create queue
    String queueUrl = sqs.createQueue(q -> q.queueName("my-queue"))
        .queueUrl();

    // Send message
    sqs.sendMessage(m -> m
        .queueUrl(queueUrl)
        .messageBody("Hello from SQS!")
    );
} finally {
    localstack.stop();
}

SQS-Specific Considerations:

  • Queue URLs contain the endpoint hostname (or the container's network alias when running inside a Docker network)
  • Dead letter queues are supported
  • FIFO queues are supported (queue name must end with .fifo; see the FIFO sketch after this list)
  • Message attributes and system attributes are supported
  • Visibility timeout and message retention are configurable
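
A short sketch of creating a FIFO queue, reusing the sqs client from the example above; the attribute names come from the SDK v2 QueueAttributeName enum:

import software.amazon.awssdk.services.sqs.model.QueueAttributeName;
import java.util.Map;

// FIFO queue names must end with .fifo and the FifoQueue attribute must be set
String fifoQueueUrl = sqs.createQueue(q -> q
    .queueName("my-queue.fifo")
    .attributes(Map.of(
        QueueAttributeName.FIFO_QUEUE, "true",
        QueueAttributeName.CONTENT_BASED_DEDUPLICATION, "true"
    ))
).queueUrl();

// FIFO messages need a message group ID (a deduplication ID is only needed
// when content-based deduplication is disabled)
sqs.sendMessage(m -> m
    .queueUrl(fifoQueueUrl)
    .messageGroupId("group-1")
    .messageBody("Hello FIFO!")
);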

DynamoDB Example

import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.*;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.containers.localstack.LocalStackContainer.Service;
import org.testcontainers.utility.DockerImageName;

LocalStackContainer localstack = new LocalStackContainer(
    DockerImageName.parse("localstack/localstack:2.0")
)
    .withServices(Service.DYNAMODB);

localstack.start();

try {
    DynamoDbClient dynamodb = DynamoDbClient.builder()
        .endpointOverride(localstack.getEndpoint())
        .credentialsProvider(
            StaticCredentialsProvider.create(
                AwsBasicCredentials.create(
                    localstack.getAccessKey(),
                    localstack.getSecretKey()
                )
            )
        )
        .region(Region.of(localstack.getRegion()))
        .build();

    // Create table
    dynamodb.createTable(CreateTableRequest.builder()
        .tableName("Users")
        .keySchema(KeySchemaElement.builder()
            .attributeName("id")
            .keyType(KeyType.HASH)
            .build())
        .attributeDefinitions(AttributeDefinition.builder()
            .attributeName("id")
            .attributeType(ScalarAttributeType.S)
            .build())
        .billingMode(BillingMode.PAY_PER_REQUEST)
        .build());

    // Wait for table to be active
    dynamodb.waiter().waitUntilTableExists(
        b -> b.tableName("Users")
    );
} finally {
    localstack.stop();
}

DynamoDB-Specific Considerations:

  • Global secondary indexes (GSI) are supported
  • Local secondary indexes (LSI) are supported
  • Streams require Service.DYNAMODB_STREAMS to be enabled (see the streams sketch after this list)
  • TTL (Time To Live) is supported
  • Transactions are supported
  • Some advanced features may have limitations compared to real AWS
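
A hedged sketch of enabling a stream at table creation, assuming the container was started with both Service.DYNAMODB and Service.DYNAMODB_STREAMS and reusing the dynamodb client from the example above (the "Orders" table name is illustrative):

import software.amazon.awssdk.services.dynamodb.model.*;

// Requires: .withServices(Service.DYNAMODB, Service.DYNAMODB_STREAMS)
dynamodb.createTable(CreateTableRequest.builder()
    .tableName("Orders")
    .keySchema(KeySchemaElement.builder()
        .attributeName("id")
        .keyType(KeyType.HASH)
        .build())
    .attributeDefinitions(AttributeDefinition.builder()
        .attributeName("id")
        .attributeType(ScalarAttributeType.S)
        .build())
    .billingMode(BillingMode.PAY_PER_REQUEST)
    .streamSpecification(StreamSpecification.builder()
        .streamEnabled(true)
        .streamViewType(StreamViewType.NEW_AND_OLD_IMAGES)
        .build())
    .build());

// The stream ARN can then be read back from the table description
String streamArn = dynamodb.describeTable(b -> b.tableName("Orders"))
    .table()
    .latestStreamArn();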

Lambda Example

import software.amazon.awssdk.services.lambda.LambdaClient;
import software.amazon.awssdk.services.lambda.model.Runtime;
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.containers.localstack.LocalStackContainer.Service;
import org.testcontainers.utility.DockerImageName;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

LocalStackContainer localstack = new LocalStackContainer(
    DockerImageName.parse("localstack/localstack:2.0")
)
    .withServices(Service.LAMBDA);

localstack.start();

try {
    LambdaClient lambda = LambdaClient.builder()
        .endpointOverride(localstack.getEndpoint())
        .credentialsProvider(
            StaticCredentialsProvider.create(
                AwsBasicCredentials.create(
                    localstack.getAccessKey(),
                    localstack.getSecretKey()
                )
            )
        )
        .region(Region.of(localstack.getRegion()))
        .build();

    // Create Lambda function (requires zip file)
    byte[] functionZipBytes = Files.readAllBytes(Paths.get("function.zip"));
    
    lambda.createFunction(f -> f
        .functionName("my-function")
        .runtime(Runtime.PYTHON3_11)
        .handler("index.handler")
        .role("arn:aws:iam::000000000000:role/lambda-role")
        .code(c -> c.zipFile(SdkBytes.fromByteArray(functionZipBytes)))
    );

    // Wait for function to be active
    lambda.waiter().waitUntilFunctionActive(
        b -> b.functionName("my-function")
    );
} catch (IOException e) {
    System.err.println("Failed to read function zip: " + e.getMessage());
    throw new RuntimeException(e);
} finally {
    localstack.stop();
}

Lambda-Specific Considerations:

  • Lambda functions run in Docker containers spawned by LocalStack
  • Container labels are automatically configured for cleanup
  • Supported runtimes: Python, Node.js, Java, Go, .NET, Ruby
  • Environment variables are supported (see the environment-variables sketch after this list)
  • VPC configuration is supported
  • Layers are supported
  • Some advanced features may have limitations
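
A brief sketch of setting environment variables at function creation, reusing the lambda client and functionZipBytes from the example above (the function name and variable values are placeholders):

import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.lambda.model.Runtime;
import java.util.Map;

lambda.createFunction(f -> f
    .functionName("my-env-function")               // placeholder name
    .runtime(Runtime.PYTHON3_11)
    .handler("index.handler")
    .role("arn:aws:iam::000000000000:role/lambda-role")
    .code(c -> c.zipFile(SdkBytes.fromByteArray(functionZipBytes)))
    .environment(e -> e.variables(Map.of(
        "STAGE", "test",                           // placeholder values
        "TABLE_NAME", "Users"
    )))
);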

Version Compatibility

Services List Requirement

  • LocalStack < 0.13: At least one service must be specified using withServices()
  • LocalStack >= 0.13: Services list is optional; services start lazily when first accessed

// LocalStack 0.12 - services required
LocalStackContainer localstack12 = new LocalStackContainer(
    DockerImageName.parse("localstack/localstack:0.12")
)
    .withServices(Service.S3);  // Required!

// LocalStack 2.0 - services optional
LocalStackContainer localstack20 = new LocalStackContainer(
    DockerImageName.parse("localstack/localstack:2.0")
);
// No withServices() call needed - S3 will start when first accessed

localstack20.start();

S3Client s3 = S3Client.builder()
    .endpointOverride(localstack20.getEndpoint())
    // ... other configuration
    .build();

// S3 service starts automatically on first API call
s3.createBucket(b -> b.bucket("test"));

Lazy Service Loading

In LocalStack >= 0.13, services start lazily on first API call:

LocalStackContainer localstack = new LocalStackContainer(
    DockerImageName.parse("localstack/localstack:2.0")
);
// No services specified

localstack.start();

// First S3 call may be slower as service starts
S3Client s3 = S3Client.builder()
    .endpointOverride(localstack.getEndpoint())
    // ... configuration
    .build();

// Service starts here (may take a few seconds)
s3.createBucket(b -> b.bucket("test"));

// Subsequent calls are fast
s3.putObject(/* ... */);

LocalStack Pro Services

Some services are only available in LocalStack Pro (the localstack/localstack-pro image):

LocalStackContainer localstackPro = new LocalStackContainer(
    DockerImageName.parse("localstack/localstack-pro:2.0")
)
    .withEnv("LOCALSTACK_API_KEY", System.getenv("LOCALSTACK_API_KEY"))
    .withServices(
        Service.S3,
        EnabledService.named("xray"),          // X-Ray (Pro only)
        EnabledService.named("appsync"),       // AppSync (Pro only)
        EnabledService.named("qldb")           // QLDB (Pro only)
    );

Pro-Only Services:

  • X-Ray
  • AppSync
  • QLDB (Quantum Ledger Database)
  • Some advanced features of standard services

Note: Attempting to use Pro-only services without a valid API key will result in errors.

Service-Specific Edge Cases

S3 Edge Cases

  • Bucket names with dots may cause issues with virtual-hosted style access (use path-style)
  • Large file uploads may time out (adjust client timeout settings; see the timeout sketch after this list)
  • Multipart uploads are supported but may have limitations
  • Versioning is supported but may not match AWS behavior exactly
  • Lifecycle policies are supported but may have limitations
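
A hedged sketch of raising client-side timeouts for large uploads, using the SDK v2 Apache HTTP client and override configuration (the durations are illustrative, and the apache-client module must be on the classpath):

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.http.apache.ApacheHttpClient;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import java.time.Duration;

S3Client patientS3 = S3Client.builder()
    .endpointOverride(localstack.getEndpoint())
    .region(Region.of(localstack.getRegion()))
    .credentialsProvider(StaticCredentialsProvider.create(
        AwsBasicCredentials.create(localstack.getAccessKey(), localstack.getSecretKey())
    ))
    .forcePathStyle(true)
    .httpClientBuilder(ApacheHttpClient.builder()
        .socketTimeout(Duration.ofMinutes(5))       // allow slow transfers
        .connectionTimeout(Duration.ofSeconds(10)))
    .overrideConfiguration(o -> o
        .apiCallTimeout(Duration.ofMinutes(10)))    // overall per-call budget
    .build();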

SQS Edge Cases

  • Queue names must be unique within the account
  • FIFO queue names must end with .fifo
  • Message size limit is 256 KB (same as AWS)
  • Visibility timeout must be between 0 and 12 hours
  • Long polling is supported (as sketched below)
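
Long polling is just the standard receive-message wait time; a minimal sketch reusing the sqs client and queueUrl from the example above:

import software.amazon.awssdk.services.sqs.model.Message;
import java.util.List;

// Wait up to 10 seconds for messages instead of returning immediately
List<Message> messages = sqs.receiveMessage(r -> r
    .queueUrl(queueUrl)
    .waitTimeSeconds(10)
    .maxNumberOfMessages(5)
).messages();

// Delete each message after processing so it is not redelivered
for (Message message : messages) {
    sqs.deleteMessage(d -> d
        .queueUrl(queueUrl)
        .receiptHandle(message.receiptHandle())
    );
}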

DynamoDB Edge Cases

  • Table creation may take a few seconds (wait for ACTIVE status)
  • Streams require both Service.DYNAMODB and Service.DYNAMODB_STREAMS
  • Global tables are not fully supported
  • On-demand billing mode is recommended for testing
  • Provisioned capacity mode is supported but may have limitations

Lambda Edge Cases

  • Function code must be provided as a zip file (see the in-memory zip sketch after this list)
  • Function size limits apply (same as AWS)
  • Cold starts may be slower than AWS
  • Container reuse may affect function isolation
  • Environment variables are supported but may have size limits
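
For tests that should not depend on a pre-built artifact, one option is to assemble a small deployment package in memory with java.util.zip; a sketch (the inline Python handler is a placeholder):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

// Builds a minimal zip containing a single index.py handler
static byte[] inMemoryFunctionZip() throws IOException {
    try (ByteArrayOutputStream baos = new ByteArrayOutputStream();
         ZipOutputStream zip = new ZipOutputStream(baos)) {
        zip.putNextEntry(new ZipEntry("index.py"));
        zip.write("def handler(event, context):\n    return {\"ok\": True}\n"
            .getBytes(StandardCharsets.UTF_8));
        zip.closeEntry();
        zip.finish();
        return baos.toByteArray();
    }
}

// SdkBytes.fromByteArray(inMemoryFunctionZip()) can then replace the
// Files.readAllBytes(Paths.get("function.zip")) call in the Lambda example above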

Error Handling for Services

import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.S3Exception;
import software.amazon.awssdk.services.s3.model.BucketAlreadyExistsException;
import software.amazon.awssdk.services.s3.model.BucketAlreadyOwnedByYouException;

public void handleServiceErrors(S3Client s3, String bucketName) {
    try {
        s3.createBucket(b -> b.bucket(bucketName));
    } catch (BucketAlreadyExistsException | BucketAlreadyOwnedByYouException e) {
        // Bucket already exists, continue
        System.out.println("Bucket already exists: " + bucketName);
    } catch (S3Exception e) {
        System.err.println("S3 error: " + e.awsErrorDetails().errorMessage());
        System.err.println("Error code: " + e.awsErrorDetails().errorCode());
        throw e;
    }
}