sqs

tessl install github:itsmostafa/aws-agent-skills --skill sqs

github.com/itsmostafa/aws-agent-skills

AWS SQS message queue service for decoupled architectures. Use when creating queues, configuring dead-letter queues, managing visibility timeouts, implementing FIFO ordering, or integrating with Lambda.

| Metric | Score |
| --- | --- |
| Review Score | 81% |
| Validation Score | 12/16 |
| Implementation Score | 65% |
| Activation Score | 100% |

AWS SQS

Amazon Simple Queue Service (SQS) is a fully managed message queuing service for decoupling and scaling microservices, distributed systems, and serverless applications.

Table of Contents

  • Core Concepts
  • Common Patterns
  • CLI Reference
  • Best Practices
  • Troubleshooting
  • References

Core Concepts

Queue Types

| Type | Description | Use Case |
| --- | --- | --- |
| Standard | At-least-once delivery, best-effort ordering | High throughput |
| FIFO | Exactly-once processing, strict ordering | Order-sensitive processing |

Key Settings

| Setting | Description | Default |
| --- | --- | --- |
| Visibility Timeout | Time a message is hidden after receive | 30 seconds |
| Message Retention | How long messages are kept | 4 days (max 14) |
| Delay Seconds | Delay before a message becomes available | 0 |
| Max Message Size | Maximum message size | 256 KB |

Dead-Letter Queue (DLQ)

A separate queue that receives messages after they have been received maxReceiveCount times from the source queue without being successfully processed and deleted.

Common Patterns

Create a Standard Queue

AWS CLI:

aws sqs create-queue \
  --queue-name my-queue \
  --attributes '{
    "VisibilityTimeout": "60",
    "MessageRetentionPeriod": "604800",
    "ReceiveMessageWaitTimeSeconds": "20"
  }'

boto3:

import boto3

sqs = boto3.client('sqs')

response = sqs.create_queue(
    QueueName='my-queue',
    Attributes={
        'VisibilityTimeout': '60',
        'MessageRetentionPeriod': '604800',
        'ReceiveMessageWaitTimeSeconds': '20'  # Long polling
    }
)
queue_url = response['QueueUrl']

Create FIFO Queue

aws sqs create-queue \
  --queue-name my-queue.fifo \
  --attributes '{
    "FifoQueue": "true",
    "ContentBasedDeduplication": "true"
  }'

Configure Dead-Letter Queue

# Create DLQ
aws sqs create-queue --queue-name my-queue-dlq

# Get DLQ ARN
DLQ_ARN=$(aws sqs get-queue-attributes \
  --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue-dlq \
  --attribute-names QueueArn \
  --query 'Attributes.QueueArn' --output text)

# Set redrive policy on main queue
aws sqs set-queue-attributes \
  --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue \
  --attributes "{
    \"RedrivePolicy\": \"{\\\"deadLetterTargetArn\\\":\\\"${DLQ_ARN}\\\",\\\"maxReceiveCount\\\":\\\"3\\\"}\"
  }"

Send Messages

import boto3
import json

sqs = boto3.client('sqs')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue'

# Send single message
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({'order_id': '12345', 'action': 'process'}),
    MessageAttributes={
        'MessageType': {
            'DataType': 'String',
            'StringValue': 'Order'
        }
    }
)

# Send to FIFO queue
sqs.send_message(
    QueueUrl='https://sqs.us-east-1.amazonaws.com/123456789012/my-queue.fifo',
    MessageBody=json.dumps({'order_id': '12345'}),
    MessageGroupId='order-12345',
    MessageDeduplicationId='unique-id-12345'
)

# Batch send (up to 10 messages)
sqs.send_message_batch(
    QueueUrl=queue_url,
    Entries=[
        {'Id': '1', 'MessageBody': json.dumps({'id': 1})},
        {'Id': '2', 'MessageBody': json.dumps({'id': 2})},
        {'Id': '3', 'MessageBody': json.dumps({'id': 3})}
    ]
)

Receive and Process Messages

import boto3
import json

sqs = boto3.client('sqs')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue'

while True:
    # Long polling (wait up to 20 seconds)
    response = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,
        MessageAttributeNames=['All'],
        AttributeNames=['All']
    )

    messages = response.get('Messages', [])

    for message in messages:
        try:
            body = json.loads(message['Body'])
            print(f"Processing: {body}")

            # Process message...

            # Delete on success
            sqs.delete_message(
                QueueUrl=queue_url,
                ReceiptHandle=message['ReceiptHandle']
            )
        except Exception as e:
            print(f"Error processing message: {e}")
            # Message will become visible again after visibility timeout

Lambda Integration

# Create event source mapping
aws lambda create-event-source-mapping \
  --function-name my-function \
  --event-source-arn arn:aws:sqs:us-east-1:123456789012:my-queue \
  --batch-size 10 \
  --maximum-batching-window-in-seconds 5 \
  --function-response-types ReportBatchItemFailures

Lambda handler:

import json

def handler(event, context):
    batch_item_failures = []

    for record in event['Records']:
        message_id = record['messageId']
        try:
            body = json.loads(record['body'])
            process_message(body)
        except Exception:
            # Report only this message as failed so it alone is retried
            # (requires ReportBatchItemFailures on the event source mapping)
            batch_item_failures.append({'itemIdentifier': message_id})

    return {'batchItemFailures': batch_item_failures}

CLI Reference

Queue Management

| Command | Description |
| --- | --- |
| aws sqs create-queue | Create queue |
| aws sqs delete-queue | Delete queue |
| aws sqs list-queues | List queues |
| aws sqs get-queue-url | Get queue URL by name |
| aws sqs get-queue-attributes | Get queue settings |
| aws sqs set-queue-attributes | Update queue settings |
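
The examples elsewhere in this document hard-code queue URLs; get-queue-url can resolve them from the queue name instead. A minimal boto3 sketch (the queue name my-queue is a placeholder):

import boto3

sqs = boto3.client('sqs')

# Resolve the queue URL from its name instead of hard-coding it
response = sqs.get_queue_url(QueueName='my-queue')
queue_url = response['QueueUrl']
print(queue_url)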

Messaging

| Command | Description |
| --- | --- |
| aws sqs send-message | Send single message |
| aws sqs send-message-batch | Send up to 10 messages |
| aws sqs receive-message | Receive messages |
| aws sqs delete-message | Delete message |
| aws sqs delete-message-batch | Delete up to 10 messages |
| aws sqs purge-queue | Delete all messages |
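
As a complement to the receive loop shown earlier, delete-message-batch removes up to 10 processed messages in a single call. A minimal boto3 sketch (the queue URL is a placeholder):

import boto3

sqs = boto3.client('sqs')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue'

# Receive up to 10 messages, then delete them in one batch call
response = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20
)
messages = response.get('Messages', [])

entries = [
    {'Id': str(i), 'ReceiptHandle': m['ReceiptHandle']}
    for i, m in enumerate(messages)
]

if entries:
    result = sqs.delete_message_batch(QueueUrl=queue_url, Entries=entries)
    # Batch deletes can partially fail; surface any failures
    for failure in result.get('Failed', []):
        print(f"Failed to delete {failure['Id']}: {failure.get('Message')}")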

Visibility

| Command | Description |
| --- | --- |
| aws sqs change-message-visibility | Change timeout |
| aws sqs change-message-visibility-batch | Batch change |
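
Neither visibility command appears in the earlier examples. A minimal boto3 sketch of extending a message's visibility timeout when processing runs long (the queue URL and timeout value are placeholders):

import boto3

sqs = boto3.client('sqs')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue'

response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)

for message in response.get('Messages', []):
    # Processing is taking longer than expected: hide the message
    # for another 120 seconds so no other consumer receives it
    sqs.change_message_visibility(
        QueueUrl=queue_url,
        ReceiptHandle=message['ReceiptHandle'],
        VisibilityTimeout=120
    )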

Best Practices

Message Processing

  • Use long polling (WaitTimeSeconds=20) to reduce API calls
  • Delete messages promptly after successful processing
  • Configure appropriate visibility timeout (> processing time)
  • Implement idempotent consumers for at-least-once delivery

Dead-Letter Queues

  • Always configure DLQ for production queues
  • Set appropriate maxReceiveCount (usually 3-5)
  • Monitor DLQ depth with CloudWatch alarms (see the alarm sketch after this list)
  • Process DLQ messages manually or with automation
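
A minimal boto3 sketch of a DLQ-depth alarm, assuming an existing SNS topic for notifications; the alarm name, queue name, threshold, and topic ARN are placeholders:

import boto3

cloudwatch = boto3.client('cloudwatch')

# Alarm when the DLQ holds any visible messages over a 5-minute period.
# The alarm name, queue name, and SNS topic ARN below are placeholders.
cloudwatch.put_metric_alarm(
    AlarmName='my-queue-dlq-not-empty',
    Namespace='AWS/SQS',
    MetricName='ApproximateNumberOfMessagesVisible',
    Dimensions=[{'Name': 'QueueName', 'Value': 'my-queue-dlq'}],
    Statistic='Maximum',
    Period=300,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:dlq-alerts']
)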

FIFO Queues

  • Use message group IDs to partition ordering
  • Enable content-based deduplication or provide dedup IDs
  • Throughput: 300 messages/sec without batching, 3,000/sec with 10-message batches (see the batched send sketch after this list); high-throughput mode raises these limits further
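
A minimal boto3 sketch of a batched FIFO send; the queue URL, group IDs, and deduplication IDs are placeholders, and each entry carries its own MessageGroupId:

import boto3
import json

sqs = boto3.client('sqs')
fifo_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue.fifo'

# Up to 10 messages per batch; each entry names its own group and dedup ID
sqs.send_message_batch(
    QueueUrl=fifo_url,
    Entries=[
        {
            'Id': str(i),
            'MessageBody': json.dumps({'order_id': i}),
            'MessageGroupId': f'order-{i}',
            'MessageDeduplicationId': f'order-{i}-v1'
        }
        for i in range(10)
    ]
)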

Security

  • Use queue policies to control access
  • Enable encryption with SSE-SQS or SSE-KMS (see the sketch after this list)
  • Use VPC endpoints for private access
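
A minimal boto3 sketch of enabling encryption at rest on an existing queue; the queue URL and KMS key alias are placeholders:

import boto3

sqs = boto3.client('sqs')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue'

# SSE-SQS: encryption with an SQS-managed key
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={'SqsManagedSseEnabled': 'true'}
)

# SSE-KMS: encryption with a customer-managed KMS key (key alias is a placeholder)
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={'KmsMasterKeyId': 'alias/my-sqs-key'}
)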

Troubleshooting

Messages Not Being Received

Causes:

  • Short polling returning empty
  • All messages in flight (visibility timeout)
  • Messages delayed (DelaySeconds)

Debug:

# Check queue attributes
aws sqs get-queue-attributes \
  --queue-url $QUEUE_URL \
  --attribute-names All

# Check approximate message counts
aws sqs get-queue-attributes \
  --queue-url $QUEUE_URL \
  --attribute-names \
    ApproximateNumberOfMessages,\
    ApproximateNumberOfMessagesNotVisible,\
    ApproximateNumberOfMessagesDelayed

Messages Going to DLQ

Causes:

  • Processing errors
  • Visibility timeout too short
  • Consumer not deleting messages

Redrive from DLQ:

# Enable redrive allow policy on source queue
aws sqs set-queue-attributes \
  --queue-url $MAIN_QUEUE_URL \
  --attributes '{"RedriveAllowPolicy": "{\"redrivePermission\":\"allowAll\"}"}'

# Start redrive
aws sqs start-message-move-task \
  --source-arn arn:aws:sqs:us-east-1:123456789012:my-queue-dlq \
  --destination-arn arn:aws:sqs:us-east-1:123456789012:my-queue

Duplicate Processing

Solutions:

  • Use FIFO queues for exactly-once
  • Implement idempotency in consumer
  • Track processed message IDs in a database (see the sketch after this list)
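
A minimal boto3 sketch of an idempotent consumer that records processed message IDs in DynamoDB with a conditional write; the table name, key schema, and process_message helper are placeholders:

import boto3
import json
from botocore.exceptions import ClientError

sqs = boto3.client('sqs')
dynamodb = boto3.client('dynamodb')

queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue'
table_name = 'processed-messages'  # placeholder table keyed on message_id

response = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
)

for message in response.get('Messages', []):
    try:
        # Conditional put fails if this message ID was already recorded
        dynamodb.put_item(
            TableName=table_name,
            Item={'message_id': {'S': message['MessageId']}},
            ConditionExpression='attribute_not_exists(message_id)'
        )
    except ClientError as e:
        if e.response['Error']['Code'] == 'ConditionalCheckFailedException':
            # Duplicate delivery: skip processing, just delete the extra copy
            sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message['ReceiptHandle'])
            continue
        raise

    process_message(json.loads(message['Body']))  # placeholder for real work
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message['ReceiptHandle'])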

Lambda Not Processing

# Check event source mapping
aws lambda list-event-source-mappings \
  --function-name my-function

# Check for errors
aws lambda get-event-source-mapping \
  --uuid <mapping-uuid>

References