tessl/maven-com-hazelcast--hazelcast

In-memory distributed computing platform for real-time stream processing and data storage with SQL capabilities

Workspace: tessl
Visibility: Public
Describes: pkg:maven/com.hazelcast/hazelcast@5.5.x

To install, run:

npx @tessl/cli install tessl/maven-com-hazelcast--hazelcast@5.5.0


Hazelcast Java Library

Hazelcast is a comprehensive real-time data platform that provides distributed computing capabilities including in-memory data grids, stream processing, and SQL queries. It enables stateful and fault-tolerant data processing with low-latency access to distributed data structures.

Package Information

<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast</artifactId>
    <version>5.5.0</version>
</dependency>

Java Compatibility: Java 8+
License: Apache-2.0
Documentation: https://docs.hazelcast.com/

Core Imports

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import com.hazelcast.collection.IQueue;
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.config.Config;
import com.hazelcast.sql.SqlService;
import com.hazelcast.jet.JetService;

Basic Usage

Creating a Hazelcast Instance

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.config.Config;

// Create with default configuration
HazelcastInstance hz = Hazelcast.newHazelcastInstance();

// Create with custom configuration
Config config = new Config();
config.setInstanceName("my-instance");
HazelcastInstance namedInstance = Hazelcast.newHazelcastInstance(config);

// Client connection
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;

ClientConfig clientConfig = new ClientConfig();
clientConfig.getNetworkConfig().addAddress("127.0.0.1:5701");
HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);

// Bootstrapped instance for Jet jobs (works locally and in distributed mode)
HazelcastInstance jetHz = Hazelcast.bootstrappedInstance();

Working with Distributed Maps

import com.hazelcast.map.IMap;
import java.util.concurrent.TimeUnit;

// Get distributed map
IMap<String, String> map = hz.getMap("my-map");

// Basic operations
map.put("key1", "value1");
map.put("key2", "value2", 30, TimeUnit.SECONDS); // with TTL
String value = map.get("key1");
boolean exists = map.containsKey("key1");

// Atomic operations
String oldValue = map.putIfAbsent("key3", "value3");
boolean replaced = map.replace("key1", "value1", "new-value1");

// Querying (assumes map values expose a queryable "name" attribute)
import com.hazelcast.query.Predicates;
import java.util.Collection;
Collection<String> values = map.values(Predicates.like("name", "John%"));
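Predicate queries like the one above can be accelerated with indexes. A minimal sketch, assuming a hypothetical serializable `Person` value type with public `name` and `age` fields (the map and key names are illustrative):

```java
import com.hazelcast.config.IndexType;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import com.hazelcast.query.Predicates;
import java.io.Serializable;

public class IndexDemo {
    // Hypothetical value type; any serializable object with queryable fields works.
    public static class Person implements Serializable {
        public final String name;
        public final int age;
        public Person(String name, int age) { this.name = name; this.age = age; }
    }

    public static int countOver30() {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        try {
            IMap<String, Person> people = hz.getMap("people");

            // Create indexes before loading data; queries on these fields
            // then consult the index instead of scanning every entry.
            people.addIndex(IndexType.SORTED, "age");  // range queries
            people.addIndex(IndexType.HASH, "name");   // equality queries

            people.put("p1", new Person("John", 34));
            people.put("p2", new Person("Jane", 28));

            return people.values(Predicates.greaterThan("age", 30)).size();
        } finally {
            hz.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(countOver30()); // 1
    }
}
```

`SORTED` indexes serve range comparisons; `HASH` indexes serve equality checks only, but are cheaper to maintain.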

Working with Collections

import com.hazelcast.collection.IQueue;
import com.hazelcast.collection.IList;
import com.hazelcast.collection.ISet;

// Distributed queue
IQueue<String> queue = hz.getQueue("my-queue");
queue.offer("item1");
String item = queue.poll();

// Distributed list
IList<String> list = hz.getList("my-list");
list.add("element1");
list.add("element2");

// Distributed set
ISet<String> set = hz.getSet("my-set");
set.add("unique-element");

Architecture

Hazelcast provides a distributed architecture with several key components:

Cluster Management

  • Members: Server nodes that store data and execute operations
  • Clients: Lightweight connections that access cluster data
  • Partitioning: Automatic data distribution across cluster members
  • Discovery: Multiple mechanisms for cluster formation (TCP/IP, multicast, cloud)

Data Distribution

  • Partitioned Storage: Data automatically distributed across cluster
  • Replication: Configurable backup copies for fault tolerance
  • Near Cache: Local caching for frequently accessed data
  • Persistence: Optional disk storage with MapStore integration
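The MapStore integration mentioned above lets a map read through from, and write through to, an external system. A minimal sketch using a hypothetical `InMemoryUserStore` backed by a plain in-process map in place of a real database:

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.MapStoreConfig;
import com.hazelcast.map.MapStore;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical write-through store; a real implementation would talk to a
// database or other external system instead of an in-process map.
public class InMemoryUserStore implements MapStore<String, String> {
    private final Map<String, String> backing = new ConcurrentHashMap<>();

    @Override public void store(String key, String value) { backing.put(key, value); }
    @Override public void storeAll(Map<String, String> entries) { backing.putAll(entries); }
    @Override public void delete(String key) { backing.remove(key); }
    @Override public void deleteAll(Collection<String> keys) { keys.forEach(backing::remove); }
    @Override public String load(String key) { return backing.get(key); }
    @Override public Map<String, String> loadAll(Collection<String> keys) {
        // Per the MapLoader contract, keys that cannot be loaded are omitted.
        Map<String, String> loaded = new HashMap<>();
        for (String k : keys) {
            String v = backing.get(k);
            if (v != null) loaded.put(k, v);
        }
        return loaded;
    }
    @Override public Iterable<String> loadAllKeys() { return backing.keySet(); }

    // Wiring the store into a map configuration:
    public static Config configure() {
        Config config = new Config();
        config.getMapConfig("users").setMapStoreConfig(
            new MapStoreConfig().setEnabled(true).setImplementation(new InMemoryUserStore()));
        return config;
    }
}
```

With the store enabled, `map.put` writes through to `store(...)` and a `map.get` miss falls back to `load(...)`.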

Processing Models

  • Synchronous Operations: Traditional request/response patterns
  • Asynchronous Operations: Non-blocking operations with callbacks
  • Event-Driven: Listeners for data and cluster changes
  • Stream Processing: Real-time data processing with Jet engine
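The asynchronous and event-driven models above can be sketched together: `putAsync` returns a `CompletionStage` instead of blocking, and an `EntryAddedListener` reacts to new entries. Map and key names here are illustrative:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import com.hazelcast.map.listener.EntryAddedListener;
import java.util.concurrent.CompletionStage;

public class AsyncDemo {
    public static String putAndAwait() {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        try {
            IMap<String, String> orders = hz.getMap("orders");

            // Event-driven: run a callback whenever an entry is added.
            orders.addEntryListener(
                (EntryAddedListener<String, String>) event ->
                    System.out.println("Added: " + event.getKey()),
                true); // include values in events

            // Asynchronous: putAsync returns immediately with a CompletionStage.
            CompletionStage<String> stage = orders.putAsync("o-1", "pending");
            stage.toCompletableFuture().join(); // block here only for the demo

            return orders.get("o-1");
        } finally {
            hz.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(putAndAwait()); // pending
    }
}
```

In real code the `CompletionStage` would be composed with `thenApply`/`thenAccept` rather than joined.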

Capabilities

Core API and Instance Management

Basic instance creation, lifecycle management, and core distributed object access.

import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.LifecycleService;
import com.hazelcast.cluster.Cluster;
import com.hazelcast.cluster.Member;
import java.util.Set;

// Instance lifecycle
LifecycleService lifecycle = hz.getLifecycleService();
boolean isRunning = lifecycle.isRunning();

// Cluster access
Cluster cluster = hz.getCluster();
Set<Member> members = cluster.getMembers();

// Graceful shutdown
hz.shutdown();


Distributed Data Structures

Comprehensive collection of distributed data structures including maps, queues, lists, sets, and specialized structures.

import com.hazelcast.map.IMap;
import com.hazelcast.multimap.MultiMap;
import com.hazelcast.replicatedmap.ReplicatedMap;
import com.hazelcast.ringbuffer.Ringbuffer;

// Various data structures
IMap<String, Object> distributedMap = hz.getMap("cache");
MultiMap<String, String> multiMap = hz.getMultiMap("categories");
ReplicatedMap<String, String> replicatedMap = hz.getReplicatedMap("config");
Ringbuffer<String> ringBuffer = hz.getRingbuffer("events");


Stream Processing (Jet)

High-performance stream and batch processing engine built into Hazelcast.

import com.hazelcast.jet.JetService;
import com.hazelcast.jet.Job;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.Sources;

JetService jet = hz.getJet(); // requires the Jet engine: config.getJetConfig().setEnabled(true)

Pipeline pipeline = Pipeline.create();
pipeline.readFrom(Sources.map("source-map"))
        .filter(entry -> entry.getValue().toString().length() > 5)
        .writeTo(Sinks.map("result-map"));

Job job = jet.newJob(pipeline);


SQL Queries

Distributed SQL engine for querying data across the cluster with standard SQL syntax.

import com.hazelcast.sql.SqlService;
import com.hazelcast.sql.SqlResult;
import com.hazelcast.sql.SqlRow;

SqlService sql = hz.getSql();

// Execute the query; close the result when done (try-with-resources)
try (SqlResult result = sql.execute("SELECT name, age FROM person WHERE age > ?", 25)) {
    for (SqlRow row : result) {
        String name = row.getObject("name");
        Integer age = row.getObject("age");
        System.out.println(name + ": " + age);
    }
}
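One practical note for Hazelcast 5.x: SQL runs on the Jet engine, which is disabled by default, and an IMap needs an explicit mapping before it can be queried. A minimal sketch, using a hypothetical `cities` map with integer keys and string values:

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import com.hazelcast.sql.SqlResult;
import com.hazelcast.sql.SqlRow;

public class SqlMappingDemo {
    public static String lookupCity() {
        // SQL in Hazelcast 5.x runs on the Jet engine; enable it first.
        Config config = new Config();
        config.getJetConfig().setEnabled(true);
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        try {
            IMap<Integer, String> cities = hz.getMap("cities");
            cities.put(1, "London");

            // Declare how keys and values are serialized before querying the map.
            hz.getSql().execute(
                "CREATE MAPPING cities TYPE IMap "
              + "OPTIONS ('keyFormat' = 'int', 'valueFormat' = 'varchar')");

            String city = null;
            try (SqlResult result = hz.getSql().execute(
                    "SELECT this FROM cities WHERE __key = 1")) {
                for (SqlRow row : result) {
                    city = row.getObject("this");
                }
            }
            return city;
        } finally {
            hz.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(lookupCity()); // London
    }
}
```

For primitive key/value formats the mapped columns are named `__key` and `this`; POJO formats expose the object's fields as columns instead.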


Cluster Management

Cluster membership, discovery, state management, and distributed coordination.

import com.hazelcast.cluster.Cluster;
import com.hazelcast.cluster.MembershipEvent;
import com.hazelcast.cluster.MembershipListener;
import com.hazelcast.partition.Partition;
import com.hazelcast.partition.PartitionService;

// Cluster operations
Cluster cluster = hz.getCluster();
cluster.addMembershipListener(new MembershipListener() {
    @Override
    public void memberAdded(MembershipEvent membershipEvent) {
        System.out.println("Member added: " + membershipEvent.getMember());
    }

    @Override
    public void memberRemoved(MembershipEvent membershipEvent) {
        System.out.println("Member removed: " + membershipEvent.getMember());
    }
});

// Partition information
PartitionService partitionService = hz.getPartitionService();
Partition partition = partitionService.getPartition("my-key");


Configuration

Comprehensive configuration system supporting programmatic, XML, and YAML configuration.

import com.hazelcast.config.Config;
import com.hazelcast.config.MapConfig;
import com.hazelcast.config.NetworkConfig;
import com.hazelcast.config.JoinConfig;

Config config = new Config();

// Map configuration
MapConfig mapConfig = new MapConfig("my-map");
mapConfig.setBackupCount(2);
mapConfig.setTimeToLiveSeconds(300);
config.addMapConfig(mapConfig);

// Network configuration
NetworkConfig network = config.getNetworkConfig();
network.setPort(5701);
JoinConfig join = network.getJoin();
join.getMulticastConfig().setEnabled(false);
join.getTcpIpConfig().setEnabled(true).addMember("192.168.1.100");


Key Features

  • High Performance: In-memory storage with microsecond latencies
  • Horizontal Scaling: Linear scale-out across commodity hardware
  • Fault Tolerance: Automatic failover and data recovery
  • ACID Compliance: Transactions and consistency guarantees
  • Standard Integration: JCache (JSR-107), Spring, CDI support
  • Cloud Native: Kubernetes operator and cloud discovery
  • Security: Authentication, authorization, TLS encryption
  • Monitoring: JMX metrics and management center integration
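The transaction support behind the ACID bullet above can be sketched with `TransactionContext`; the map and key names here are illustrative:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.transaction.TransactionContext;
import com.hazelcast.transaction.TransactionalMap;

public class TxnDemo {
    public static int aliceBalance() {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        try {
            TransactionContext ctx = hz.newTransactionContext();
            ctx.beginTransaction();
            try {
                // Transactional proxies join the surrounding transaction;
                // both writes become visible atomically on commit.
                TransactionalMap<String, Integer> accounts = ctx.getMap("accounts");
                accounts.put("alice", 100);
                accounts.put("bob", 50);
                ctx.commitTransaction();
            } catch (RuntimeException e) {
                ctx.rollbackTransaction();
                throw e;
            }
            return hz.<String, Integer>getMap("accounts").get("alice");
        } finally {
            hz.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(aliceBalance()); // 100
    }
}
```

Transactional structures are obtained from the context, not from the instance; reading the plain `IMap` afterwards confirms the committed state.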