tessl/maven-org-apache-flink--flink-sql-parquet-2-12

Apache Flink SQL Parquet format support package that provides SQL client integration for reading and writing Parquet files in Flink applications.

Describes: pkg:maven/org.apache.flink/flink-sql-parquet_2.12@1.14.x

To install, run

npx @tessl/cli install tessl/maven-org-apache-flink--flink-sql-parquet-2-12@1.14.0


Flink SQL Parquet

Apache Flink SQL Parquet provides comprehensive support for reading and writing Parquet files in Flink SQL applications. This package bundles Parquet format libraries with proper shading configuration to avoid dependency conflicts in SQL client environments, supporting both batch and streaming data processing workflows.

Package Information

  • Package Name: flink-sql-parquet_2.12
  • Package Type: maven
  • Language: Java
  • Installation: Maven dependency org.apache.flink:flink-sql-parquet_2.12:1.14.6

Core Imports

import org.apache.flink.formats.parquet.ParquetFileFormatFactory;
import org.apache.flink.formats.parquet.ParquetWriterFactory;
import org.apache.flink.formats.parquet.ParquetColumnarRowInputFormat;
import org.apache.flink.formats.parquet.avro.ParquetAvroWriters;
import org.apache.flink.formats.parquet.row.ParquetRowDataBuilder;
import org.apache.flink.formats.parquet.protobuf.ParquetProtoWriters;

Basic Usage

import org.apache.flink.formats.parquet.row.ParquetRowDataBuilder;
import org.apache.flink.formats.parquet.ParquetWriterFactory;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.data.RowData;
import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.core.fs.Path;
import org.apache.hadoop.conf.Configuration;
import org.apache.flink.table.types.logical.BigIntType;
import org.apache.flink.table.types.logical.VarCharType;

// Create a Parquet writer factory for RowData
RowType rowType = RowType.of(new BigIntType(), new VarCharType(VarCharType.MAX_LENGTH));
Configuration hadoopConfig = new Configuration();
boolean utcTimezone = false;

ParquetWriterFactory<RowData> writerFactory = 
    ParquetRowDataBuilder.createWriterFactory(rowType, hadoopConfig, utcTimezone);

// Use with Flink's file sink
Path outputPath = new Path("/output/path");
FileSink<RowData> sink = FileSink
    .forBulkFormat(outputPath, writerFactory)
    .build();

Architecture

Flink SQL Parquet is organized around several key components:

  • Format Factory: SQL table factory integration for automatic format detection
  • Writer APIs: Multiple writer implementations for different data types (RowData, Avro, Protobuf)
  • Input Formats: Vectorized and columnar input formats for high-performance reading
  • Schema Conversion: Utilities for converting between Flink and Parquet schemas
  • Vectorized Processing: Column-oriented data processing for improved performance

Capabilities

Format Factory Integration

SQL table factory that automatically integrates Parquet format with Flink's table ecosystem. Provides transparent support for CREATE TABLE statements with Parquet format.

public class ParquetFileFormatFactory implements BulkReaderFormatFactory, BulkWriterFormatFactory {
    public BulkDecodingFormat<RowData> createDecodingFormat(
        DynamicTableFactory.Context context, 
        ReadableConfig formatOptions
    );
    public EncodingFormat<BulkWriter.Factory<RowData>> createEncodingFormat(
        DynamicTableFactory.Context context, 
        ReadableConfig formatOptions
    );
    public String factoryIdentifier();
}
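
The factory is normally exercised transparently: declaring 'format' = 'parquet' in a table definition resolves to ParquetFileFormatFactory through its factory identifier. A minimal sketch of driving this through the Table API (table name and path are illustrative):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

TableEnvironment tEnv = TableEnvironment.create(
    EnvironmentSettings.newInstance().inStreamingMode().build());

tEnv.executeSql(
    "CREATE TABLE parquet_events (" +
    "  id BIGINT," +
    "  name STRING" +
    ") WITH (" +
    "  'connector' = 'filesystem'," +
    "  'path' = '/data/events'," +
    "  'format' = 'parquet'" +
    ")");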

See docs/format-factory.md for details.

RowData Writer APIs

High-performance writers for Flink's internal RowData format with comprehensive schema mapping and configuration support.

public class ParquetRowDataBuilder extends ParquetWriter.Builder<RowData, ParquetRowDataBuilder> {
    public static ParquetWriterFactory<RowData> createWriterFactory(
        RowType rowType, 
        Configuration conf, 
        boolean utcTimestamp
    );
}

public class ParquetWriterFactory<T> implements BulkWriter.Factory<T> {
    public ParquetWriterFactory(ParquetBuilder<T> writerBuilder);
    public BulkWriter<T> create(FSDataOutputStream stream) throws IOException;
}
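
The factory can also be used directly as a BulkWriter.Factory, outside of a FileSink. A minimal sketch, assuming the writerFactory from the Basic Usage example and an illustrative local output path:

import org.apache.flink.api.common.serialization.BulkWriter;
import org.apache.flink.core.fs.FSDataOutputStream;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.Path;
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.StringData;

// Open a stream, create a BulkWriter from the factory, and write two rows
Path file = new Path("file:///tmp/example.parquet");
try (FSDataOutputStream stream =
         file.getFileSystem().create(file, FileSystem.WriteMode.OVERWRITE)) {
    BulkWriter<RowData> writer = writerFactory.create(stream);
    writer.addElement(GenericRowData.of(1L, StringData.fromString("alice")));
    writer.addElement(GenericRowData.of(2L, StringData.fromString("bob")));
    writer.finish(); // writes the Parquet footer; the stream is closed by try-with-resources
}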

See docs/rowdata-writers.md for details.

Avro Integration

Complete Avro integration supporting specific records, generic records, and reflection-based serialization with full schema compatibility.

public class ParquetAvroWriters {
    public static <T extends SpecificRecordBase> ParquetWriterFactory<T> forSpecificRecord(Class<T> type);
    public static ParquetWriterFactory<GenericRecord> forGenericRecord(Schema schema);
    public static <T> ParquetWriterFactory<T> forReflectRecord(Class<T> type);
}
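
A minimal sketch of writing Avro GenericRecords to Parquet through a FileSink; the schema string and output path are illustrative:

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.parquet.ParquetWriterFactory;
import org.apache.flink.formats.parquet.avro.ParquetAvroWriters;

// Avro schema describing the records to write
Schema schema = new Schema.Parser().parse(
    "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
        + "{\"name\":\"id\",\"type\":\"long\"},"
        + "{\"name\":\"name\",\"type\":\"string\"}]}");

ParquetWriterFactory<GenericRecord> avroFactory =
    ParquetAvroWriters.forGenericRecord(schema);

FileSink<GenericRecord> sink = FileSink
    .forBulkFormat(new Path("/output/avro-parquet"), avroFactory)
    .build();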

See docs/avro-integration.md for details.

Protobuf Integration

Seamless Protocol Buffers integration for writing Protobuf messages to Parquet format with automatic schema generation.

public class ParquetProtoWriters {
    public static <T extends Message> ParquetWriterFactory<T> forType(Class<T> type);
}
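
A minimal sketch of wiring the Protobuf writer into a FileSink; MyProtoMessage stands in for a protoc-generated Message subclass and the path is illustrative:

import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.parquet.protobuf.ParquetProtoWriters;

// MyProtoMessage is a placeholder for a generated com.google.protobuf.Message subclass
FileSink<MyProtoMessage> sink = FileSink
    .forBulkFormat(new Path("/output/proto-parquet"),
        ParquetProtoWriters.forType(MyProtoMessage.class))
    .build();

// Attach the sink to a DataStream<MyProtoMessage>, e.g. messageStream.sinkTo(sink)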

See docs/protobuf-integration.md for details.

Vectorized Input Formats

High-performance columnar input formats optimized for analytical workloads with vectorized processing and partition support.

public class ParquetColumnarRowInputFormat<SplitT extends FileSourceSplit> 
    extends ParquetVectorizedInputFormat<RowData, SplitT> {
    
    public static <SplitT extends FileSourceSplit> ParquetColumnarRowInputFormat<SplitT> createPartitionedFormat(
        Configuration hadoopConfig,
        RowType producedRowType,
        List<String> partitionKeys,
        PartitionFieldExtractor<SplitT> extractor,
        int batchSize,
        boolean isUtcTimestamp,
        boolean isCaseSensitive
    );
}
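
A minimal sketch of reading with the columnar format through a FileSource, assuming a non-partitioned layout (empty partition keys, so the partition field extractor is assumed to go unused and is passed as null); batch size, schema, and path are illustrative:

import java.util.Collections;
import org.apache.flink.connector.file.src.FileSource;
import org.apache.flink.connector.file.src.FileSourceSplit;
import org.apache.flink.core.fs.Path;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.types.logical.BigIntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;
import org.apache.hadoop.conf.Configuration;

// Schema of the columns to read
RowType producedRowType = RowType.of(
    new LogicalType[]{new BigIntType(), new VarCharType(VarCharType.MAX_LENGTH)},
    new String[]{"id", "name"});

ParquetColumnarRowInputFormat<FileSourceSplit> inputFormat =
    ParquetColumnarRowInputFormat.createPartitionedFormat(
        new Configuration(),        // Hadoop configuration
        producedRowType,
        Collections.emptyList(),    // no partition keys
        null,                       // partition field extractor (assumed unused without partition keys)
        2048,                       // rows per vectorized batch
        true,                       // interpret timestamps as UTC
        true);                      // case-sensitive column matching

FileSource<RowData> source = FileSource
    .forBulkFileFormat(inputFormat, new Path("/path/to/parquet/files"))
    .build();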

See docs/vectorized-input.md for details.

Schema Utilities

Utilities for converting between Flink logical types and Parquet schema definitions with full type mapping support.

public class ParquetSchemaConverter {
    public static MessageType convertToParquetMessageType(String name, RowType rowType);
}

public class SerializableConfiguration {
    public SerializableConfiguration(Configuration conf);
    public Configuration conf();
}
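
A minimal sketch of converting a Flink RowType into a Parquet MessageType; the schema name and fields are illustrative, and the org.apache.parquet import assumes the unshaded Parquet classes are on the classpath:

import org.apache.flink.table.types.logical.BigIntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;
import org.apache.parquet.schema.MessageType;

RowType rowType = RowType.of(
    new LogicalType[]{new BigIntType(), new VarCharType(VarCharType.MAX_LENGTH)},
    new String[]{"id", "name"});

// Produce the equivalent Parquet message type for this row schema
MessageType parquetSchema =
    ParquetSchemaConverter.convertToParquetMessageType("flink_schema", rowType);
System.out.println(parquetSchema);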

See docs/schema-utilities.md for details.

Configuration Options

public static final ConfigOption<Boolean> UTC_TIMEZONE = 
    key("utc-timezone")
        .booleanType()
        .defaultValue(false)
        .withDescription("Use UTC timezone or local timezone to the conversion between epoch" +
            " time and LocalDateTime. Hive 0.x/1.x/2.x use local timezone. But Hive 3.x" +
            " use UTC timezone");

Common Patterns

Creating Table with Parquet Format

CREATE TABLE my_parquet_table (
    id BIGINT,
    name STRING,
    timestamp_col TIMESTAMP(3)
) WITH (
    'connector' = 'filesystem',
    'path' = '/path/to/parquet/files',
    'format' = 'parquet',
    'parquet.utc-timezone' = 'true'
);

Programmatic Writer Creation

// For RowData
ParquetWriterFactory<RowData> factory = ParquetRowDataBuilder.createWriterFactory(
    rowType, hadoopConfig, true
);

// For Avro specific records
ParquetWriterFactory<MyAvroRecord> avroFactory = 
    ParquetAvroWriters.forSpecificRecord(MyAvroRecord.class);

// For Protobuf messages  
ParquetWriterFactory<MyProtoMessage> protoFactory = 
    ParquetProtoWriters.forType(MyProtoMessage.class);

Types

@FunctionalInterface
public interface ParquetBuilder<T> extends Serializable {
    ParquetWriter<T> createWriter(OutputFile out) throws IOException;
}
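
ParquetBuilder is the extension point for plugging any ParquetWriter into ParquetWriterFactory. A minimal sketch using parquet-avro's AvroParquetWriter as the concrete writer; the schema variable is assumed to be an Avro Schema as in the Avro example above:

import org.apache.avro.generic.GenericRecord;
import org.apache.parquet.avro.AvroParquetWriter;

// A custom ParquetBuilder: given the output file, build a concrete ParquetWriter
ParquetBuilder<GenericRecord> builder = out -> AvroParquetWriter
    .<GenericRecord>builder(out)
    .withSchema(schema)
    .build();

// Wrap it in a ParquetWriterFactory so it can be used with FileSink.forBulkFormat(...)
ParquetWriterFactory<GenericRecord> customFactory = new ParquetWriterFactory<>(builder);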

public class ParquetBulkWriter<T> implements BulkWriter<T> {
    public ParquetBulkWriter(ParquetWriter<T> parquetWriter);
    public void addElement(T datum) throws IOException;
    public void flush();
    public void finish() throws IOException;
}

public abstract class ParquetVectorizedInputFormat<T, SplitT extends FileSourceSplit> 
    implements BulkFormat<T, SplitT> {
    // Abstract base class for vectorized input formats
}