Apache Flink SQL, Table API, and UDF development for both OSS Flink and Confluent Cloud
Score: 95
Evaluation: 97% (does it follow best practices? validation for skill structure)
1.21× agent success when using this tile
An IoT manufacturing company has sensors deployed across three factory floors. Each sensor emits temperature, humidity, and vibration readings every second. The data engineering team needs to build a streaming analytics pipeline over the following three input streams:
The sensor data arrives in a Kafka topic called sensor_readings with fields: sensor_id (STRING), floor_id (INT), temperature (DOUBLE), humidity (DOUBLE), vibration (DOUBLE), reading_time (TIMESTAMP(3)). Readings can arrive up to 10 seconds out of order.
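A minimal sketch of the corresponding source table DDL, with a bounded-out-of-orderness watermark matching the stated 10-second skew. The broker address, startup mode, and JSON format are assumptions, not part of the original prompt:

```sql
CREATE TABLE sensor_readings (
  sensor_id   STRING,
  floor_id    INT,
  temperature DOUBLE,
  humidity    DOUBLE,
  vibration   DOUBLE,
  reading_time TIMESTAMP(3),
  -- tolerate readings arriving up to 10 seconds out of order
  WATERMARK FOR reading_time AS reading_time - INTERVAL '10' SECOND
) WITH (
  'connector' = 'kafka',
  'topic' = 'sensor_readings',
  'properties.bootstrap.servers' = 'localhost:9092',  -- assumed broker address
  'scan.startup.mode' = 'earliest-offset',            -- assumed startup mode
  'format' = 'json'                                   -- assumed serialization format
);
```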
The web app activity arrives in a Kafka topic called app_activity with fields: user_id (STRING), action_type (STRING), page (STRING), activity_time (TIMESTAMP(3)). Activity events can arrive up to 5 seconds late.
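The activity stream can be declared the same way, with the watermark delay reduced to the stated 5-second lateness bound (connector options again assumed):

```sql
CREATE TABLE app_activity (
  user_id     STRING,
  action_type STRING,
  page        STRING,
  activity_time TIMESTAMP(3),
  -- tolerate activity events arriving up to 5 seconds late
  WATERMARK FOR activity_time AS activity_time - INTERVAL '5' SECOND
) WITH (
  'connector' = 'kafka',
  'topic' = 'app_activity',
  'properties.bootstrap.servers' = 'localhost:9092',  -- assumed broker address
  'format' = 'json'                                   -- assumed serialization format
);
```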
The energy data arrives in a Kafka topic called energy_usage with fields: meter_id (STRING), kwh (DOUBLE), measurement_time (TIMESTAMP(3)).
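The prompt does not state an out-of-orderness bound for the energy stream, so the sketch below uses a strictly ascending watermark (zero delay) as an assumption; adjust the interval if the meters can deliver late data:

```sql
CREATE TABLE energy_usage (
  meter_id STRING,
  kwh      DOUBLE,
  measurement_time TIMESTAMP(3),
  -- assumption: meter readings arrive in order
  WATERMARK FOR measurement_time AS measurement_time
) WITH (
  'connector' = 'kafka',
  'topic' = 'energy_usage',
  'properties.bootstrap.servers' = 'localhost:9092',  -- assumed broker address
  'format' = 'json'                                   -- assumed serialization format
);
```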
Write all the Flink SQL DDL and queries needed for this pipeline. Put everything in a single file called pipeline.sql.
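The exact analytics requirements are not listed here, but a typical query in such a pipeline is a per-floor windowed aggregation. As a purely illustrative sketch (the one-minute window and the averaged metric are assumptions), a tumbling-window average temperature per floor using Flink's window TVF syntax would look like:

```sql
-- hypothetical query: average temperature per floor over 1-minute tumbling windows
SELECT
  window_start,
  window_end,
  floor_id,
  AVG(temperature) AS avg_temperature
FROM TABLE(
  TUMBLE(TABLE sensor_readings, DESCRIPTOR(reading_time), INTERVAL '1' MINUTE))
GROUP BY window_start, window_end, floor_id;
```

Because `reading_time` carries a watermark, the window closes once the watermark passes its end, correctly accounting for the 10-second out-of-orderness.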
pipeline.sql — all CREATE TABLE statements with proper configurations and all SELECT/INSERT queries.

Install with the Tessl CLI:
npx tessl i gamussa/flink-sql