Apache Flink SQL, Table API, and UDF development for both OSS Flink and Confluent Cloud
Evaluation — 97% (Does it follow best practices?)
↑ 1.21x agent success when using this tile
Validation for skill structure
An e-commerce platform needs to build a real-time order enrichment pipeline using Flink SQL. The platform has several data sources:
Orders stream — Kafka topic orders with fields: order_id (STRING), customer_id (STRING), product_id (STRING), quantity (INT), unit_price (DECIMAL(10,2)), currency (STRING), order_time (TIMESTAMP(3)). Orders can arrive up to 5 seconds late.
Shipments stream — Kafka topic shipments with fields: shipment_id (STRING), order_id (STRING), warehouse_id (STRING), ship_time (TIMESTAMP(3)). A shipment for an order always happens within 48 hours of the order.
Currency rates table — Kafka topic currency_rates with fields: currency (STRING), rate_to_usd (DECIMAL(10,6)), update_time (TIMESTAMP(3)). This is a versioned table (primary key on currency) that receives rate updates.
Customer profiles — An external MySQL database table that can be used as a lookup dimension. Fields: customer_id (STRING), name (STRING), tier (STRING), country (STRING).
Customer CDC stream — Kafka topic customer_changes carrying Debezium CDC events from the customer database, with fields: customer_id (INT), name (STRING), email (STRING), updated_at (TIMESTAMP(3)).
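Taken together, these sources might be declared roughly as follows. This is a sketch under assumptions, not the skill's actual output: the connector choices, bootstrap server address, serialization formats, and JDBC URL are all placeholders — only the field names, types, and topic names above come from the spec.

```sql
-- Orders: event-time source with a 5-second out-of-orderness bound.
CREATE TABLE orders (
  order_id    STRING,
  customer_id STRING,
  product_id  STRING,
  quantity    INT,
  unit_price  DECIMAL(10, 2),
  currency    STRING,
  order_time  TIMESTAMP(3),
  proc_time   AS PROCTIME(),  -- processing-time attribute for lookup joins
  WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',
  'properties.bootstrap.servers' = 'localhost:9092',  -- assumed
  'format' = 'json'                                    -- assumed
);

-- Shipments: event-time source, later interval-joined against orders.
CREATE TABLE shipments (
  shipment_id  STRING,
  order_id     STRING,
  warehouse_id STRING,
  ship_time    TIMESTAMP(3),
  WATERMARK FOR ship_time AS ship_time - INTERVAL '5' SECOND  -- bound assumed
) WITH (
  'connector' = 'kafka',
  'topic' = 'shipments',
  'properties.bootstrap.servers' = 'localhost:9092',  -- assumed
  'format' = 'json'                                    -- assumed
);

-- Currency rates: versioned table keyed on currency, so an event-time
-- temporal join can pick the rate that was in effect at order time.
CREATE TABLE currency_rates (
  currency    STRING,
  rate_to_usd DECIMAL(10, 6),
  update_time TIMESTAMP(3),
  WATERMARK FOR update_time AS update_time,
  PRIMARY KEY (currency) NOT ENFORCED
) WITH (
  'connector' = 'upsert-kafka',
  'topic' = 'currency_rates',
  'properties.bootstrap.servers' = 'localhost:9092',  -- assumed
  'key.format' = 'json',                               -- assumed
  'value.format' = 'json'                              -- assumed
);

-- Customer profiles: JDBC lookup dimension backed by MySQL.
CREATE TABLE customer_profiles (
  customer_id STRING,
  name        STRING,
  tier        STRING,
  country     STRING
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:mysql://mysql-host:3306/crm',  -- assumed
  'table-name' = 'customer_profiles'            -- assumed
);

-- Customer CDC: Debezium change events become a Flink changelog stream.
CREATE TABLE customer_changes (
  customer_id INT,
  name        STRING,
  email       STRING,
  updated_at  TIMESTAMP(3)
) WITH (
  'connector' = 'kafka',
  'topic' = 'customer_changes',
  'properties.bootstrap.servers' = 'localhost:9092',  -- assumed
  'format' = 'debezium-json'
);
```

The upsert-kafka connector and the primary key give currency_rates its versioned-table semantics; a plain append-only Kafka table would not support `FOR SYSTEM_TIME AS OF` joins.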
The team needs a single file containing all DDL and DML statements:
enrichment.sql — All CREATE TABLE, CREATE VIEW, INSERT INTO, and SELECT statements

Install with the Tessl CLI:
npx tessl i gamussa/flink-sql@1.0.0
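The core enrichment query in enrichment.sql would combine the three join flavors the sources call for: an event-time temporal join on the versioned rates table, a processing-time lookup join into MySQL, and an interval join on shipments. The sketch below assumes a sink table `enriched_orders`, a `proc_time AS PROCTIME()` attribute on orders, and illustrative output columns — none of these names come from the spec.

```sql
-- Convert each order to USD at the rate in effect when it was placed,
-- attach the customer profile, and pair the order with its shipment.
INSERT INTO enriched_orders  -- sink table assumed to exist
SELECT
  o.order_id,
  o.customer_id,
  p.name,
  p.tier,
  o.quantity * o.unit_price * r.rate_to_usd AS total_usd,
  s.shipment_id,
  s.ship_time
FROM orders AS o
  -- event-time temporal join: rate as of the order's event time
  JOIN currency_rates FOR SYSTEM_TIME AS OF o.order_time AS r
    ON o.currency = r.currency
  -- processing-time lookup join against the MySQL dimension table
  JOIN customer_profiles FOR SYSTEM_TIME AS OF o.proc_time AS p
    ON o.customer_id = p.customer_id
  -- interval join: a shipment arrives within 48 hours of its order
  LEFT JOIN shipments AS s
    ON o.order_id = s.order_id
   AND s.ship_time BETWEEN o.order_time
                       AND o.order_time + INTERVAL '48' HOUR;
```

The 48-hour bound lets Flink clear join state once an order's window has passed, instead of buffering orders indefinitely as a regular join would.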