Apache Flink SQL, Table API, and UDF development for both OSS Flink and Confluent Cloud
Does it follow best practices? — 95
Evaluation — 97% (validated for skill structure)
↑ 1.21× agent success when using this tile
A payment processing company needs to build a real-time fraud scoring system that goes beyond what standard Flink SQL can express. They need two custom stateful process table functions (PTFs):
- Transaction Velocity Tracker — A stateful function that tracks per-user transaction patterns. For each incoming transaction, it should emit a row with: user_id, transaction_id, fraud_score, risk_factors (comma-separated), transaction_amount, avg_amount.
- Session Revenue Calculator — A stateful function that groups checkout events into sessions and emits a session summary when a session expires, with columns: user_id, session_start (TIMESTAMP), session_end (TIMESTAMP), event_count (INT), total_revenue (DOUBLE).

Write the SQL queries that invoke both functions against the input tables, with proper partitioning and time ordering.
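The full velocity tracker must be written against Flink's PTF API, but the per-user state it would carry can be sketched in plain Java. Everything below (class name, field layout, and the deviation-based scoring rule) is illustrative, not taken from the skill itself:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified, non-Flink model of the state a velocity tracker PTF would keep
// per user: a running count and sum of amounts, from which avg_amount and a
// naive fraud_score are derived. Names and the scoring rule are assumptions.
class VelocityState {
    private final Map<String, long[]> countByUser = new HashMap<>();  // user_id -> [count]
    private final Map<String, double[]> sumByUser = new HashMap<>();  // user_id -> [sum]

    /** Records a transaction and returns the user's new average amount. */
    double record(String userId, double amount) {
        countByUser.computeIfAbsent(userId, k -> new long[1])[0]++;
        double[] sum = sumByUser.computeIfAbsent(userId, k -> new double[1]);
        sum[0] += amount;
        return sum[0] / countByUser.get(userId)[0];
    }

    /** Naive score in [0, 1]: relative deviation from the user's average. */
    double fraudScore(double amount, double avgAmount) {
        if (avgAmount == 0) return 0.0;
        return Math.min(1.0, Math.abs(amount - avgAmount) / avgAmount);
    }
}
```

In the real PTF this state would live in Flink-managed keyed state so it is checkpointed and scoped to the `PARTITION BY user_id` key rather than held in an in-process map.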
The transaction data comes from a table transactions with fields: user_id (STRING), transaction_id (STRING), amount (DOUBLE), merchant_id (STRING), event_time (TIMESTAMP(3)) with a 1-second watermark.
The checkout data comes from a table checkouts with fields: user_id (STRING), item_id (STRING), price (DOUBLE), checkout_time (TIMESTAMP(3)) with a 2-second watermark.
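The session calculator's core decision — when does a session expire? — is a gap test on consecutive event times. A minimal, non-Flink sketch of that grouping logic (the 5-second gap and all names are illustrative assumptions; the real PTF would use event time, watermarks, and timers):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of session expiry: one user's checkout timestamps (millis,
// already sorted by time) are split into sessions whenever the gap between
// consecutive events exceeds sessionGapMs.
class SessionSplitter {
    /** Returns the number of events in each session, in order. */
    static List<Integer> sessionSizes(long[] times, long sessionGapMs) {
        List<Integer> sizes = new ArrayList<>();
        int count = 0;
        for (int i = 0; i < times.length; i++) {
            if (count > 0 && times[i] - times[i - 1] > sessionGapMs) {
                sizes.add(count);  // gap exceeded: previous session expired
                count = 0;
            }
            count++;
        }
        if (count > 0) sizes.add(count);  // flush the final open session
        return sizes;
    }
}
```

Tracking session_start, session_end, and total_revenue is the same pattern with extra per-session accumulators; in Flink the "flush the final session" step would instead be driven by an event-time timer firing once the watermark passes the gap.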
Deliverables:
- TransactionVelocityTracker.java — The fraud detection PTF implementation
- SessionRevenueCalculator.java — The session revenue PTF implementation
- queries.sql — SQL statements to invoke both PTFs with correct syntax

Install with the Tessl CLI:
npx tessl i gamussa/flink-sql