tessl/npm-jest-worker

Module for executing heavy tasks in parallel under forked processes, providing a Promise-based interface, minimal overhead, and bound workers.


Task Queues

Configurable task scheduling systems that control how method calls are distributed and prioritized across workers. Jest Worker supports pluggable queue implementations to optimize task execution based on different strategies.

Capabilities

Task Queue Interface

Base interface for implementing custom task scheduling strategies.

/**
 * Interface for task queue implementations that manage task distribution
 */
interface TaskQueue {
  /**
   * Enqueue a task for execution by a specific worker or shared queue
   * @param task - The task to be queued
   * @param workerId - Optional worker ID for worker-specific tasks
   */
  enqueue(task: QueueChildMessage, workerId?: number): void;
  
  /**
   * Dequeue the next task for a specific worker
   * @param workerId - ID of the worker requesting a task
   * @returns Next task for the worker or null if none available
   */
  dequeue(workerId: number): QueueChildMessage | null;
}

interface QueueChildMessage {
  request: ChildMessageCall;
  onStart: OnStart;
  onEnd: OnEnd;
  onCustomMessage: OnCustomMessage;
}
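The contract is small enough to illustrate with a self-contained sketch. The `LoggingQueue` below is a hypothetical implementation with a simplified task shape (the real `QueueChildMessage` also carries the callbacks shown above); it is useful for observing when the farm enqueues and dequeues, and is not jest-worker's own `FifoQueue`:

```typescript
// Simplified task shape; the real QueueChildMessage also carries the
// onStart/onEnd/onCustomMessage callbacks shown above.
type Task = { request: unknown };

interface SimpleTaskQueue {
  enqueue(task: Task, workerId?: number): void;
  dequeue(workerId: number): Task | null;
}

class LoggingQueue implements SimpleTaskQueue {
  readonly log: string[] = [];
  private readonly shared: Task[] = [];
  private readonly perWorker = new Map<number, Task[]>();

  enqueue(task: Task, workerId?: number): void {
    this.log.push(`enqueue worker=${workerId ?? "shared"}`);
    if (workerId !== undefined) {
      const queue = this.perWorker.get(workerId) ?? [];
      queue.push(task);
      this.perWorker.set(workerId, queue);
    } else {
      this.shared.push(task);
    }
  }

  dequeue(workerId: number): Task | null {
    // This sketch simply prefers worker-specific tasks; jest-worker's
    // FifoQueue additionally preserves FIFO order across both queues.
    const next =
      this.perWorker.get(workerId)?.shift() ?? this.shared.shift() ?? null;
    this.log.push(`dequeue worker=${workerId} hit=${next !== null}`);
    return next;
  }
}
```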

FIFO Queue

First-in, first-out task queue that maintains strict ordering across worker-specific and shared queues. This is the default queue implementation.

/**
 * First-in, first-out task queue with cross-queue ordering guarantees
 * Maintains FIFO ordering between worker-specific and shared queues
 */
class FifoQueue implements TaskQueue {
  /** Add task to worker-specific or shared queue */
  enqueue(task: QueueChildMessage, workerId?: number): void;
  
  /** Get next task for worker, respecting FIFO ordering */
  dequeue(workerId: number): QueueChildMessage | null;
}

Usage Examples:

import { Worker, FifoQueue } from "jest-worker";

// Default FIFO behavior (no need to specify)
const worker = new Worker("./worker.js");

// Explicitly using FIFO queue
const fifoWorker = new Worker("./worker.js", {
  taskQueue: new FifoQueue()
});

// Dispatch without awaiting each call; awaiting one task before
// starting the next would serialize them regardless of the queue.
// With a FIFO queue, tasks are dequeued in call order:
await Promise.all([
  worker.task1(), // dequeued first
  worker.task2(), // dequeued second
  worker.task3(), // dequeued third
]);

Priority Queue

Priority-based task queue that processes tasks according to computed priority values. Lower priority numbers are processed first.

/**
 * Priority queue that processes tasks by computed priority (lower first)
 * FIFO ordering is not guaranteed for tasks with the same priority
 * Worker-specific tasks with same priority as shared tasks are processed first
 */
class PriorityQueue implements TaskQueue {
  /**
   * Create priority queue with custom priority computation
   * @param computePriority - Function to compute task priority
   */
  constructor(computePriority: ComputeTaskPriorityCallback);
  
  /** Add task to appropriate priority queue */
  enqueue(task: QueueChildMessage, workerId?: number): void;
  
  /** Get highest priority task for worker */
  dequeue(workerId: number): QueueChildMessage | null;
}

/**
 * Function to compute priority for tasks
 * @param method - Name of the method being called
 * @param args - Arguments passed to the method
 * @returns Priority number (lower values = higher priority)
 */
type ComputeTaskPriorityCallback = (
  method: string,
  ...args: Array<unknown>
) => number;

Usage Examples:

import { Worker, PriorityQueue } from "jest-worker";
import fs from "fs";

// Priority by file size (smaller files first)
const fileSizeQueue = new PriorityQueue((method, filepath) => {
  if (method === "processFile") {
    const stats = fs.statSync(filepath);
    return stats.size; // Smaller files = lower priority number = higher priority
  }
  return 0; // Default priority for other methods
});

const worker = new Worker("./file-processor.js", {
  taskQueue: fileSizeQueue
});

// Priority by method importance
const methodPriorityQueue = new PriorityQueue((method) => {
  const priorities = {
    criticalTask: 1,    // Highest priority
    normalTask: 5,      // Medium priority
    backgroundTask: 10  // Lowest priority
  };
  return priorities[method] || 5;
});

const priorityWorker = new Worker("./multi-task-worker.js", {
  taskQueue: methodPriorityQueue
});

// Enqueue without awaiting each call so a backlog can form; awaiting
// one task before starting the next would force call order. Once all
// workers are busy, queued tasks are dequeued by priority:
await Promise.all([
  priorityWorker.backgroundTask(), // priority 10 - dequeued last
  priorityWorker.criticalTask(),   // priority 1 - dequeued first
  priorityWorker.normalTask(),     // priority 5 - dequeued second
]);
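Note that priority only visibly reorders execution when tasks back up behind busy workers. The lower-value-first dequeue semantics can be shown with a self-contained mini-queue (a hypothetical sorted-array sketch; jest-worker's `PriorityQueue` uses a heap):

```typescript
type Prioritized<T> = { priority: number; value: T };

class MiniPriorityQueue<T> {
  private items: Prioritized<T>[] = [];

  enqueue(value: T, priority: number): void {
    this.items.push({ priority, value });
    // Keep items sorted ascending; jest-worker uses a heap instead,
    // which is why FIFO order is not guaranteed within a priority level
    this.items.sort((a, b) => a.priority - b.priority);
  }

  dequeue(): T | undefined {
    return this.items.shift()?.value;
  }
}

const demo = new MiniPriorityQueue<string>();
demo.enqueue("backgroundTask", 10);
demo.enqueue("criticalTask", 1);
demo.enqueue("normalTask", 5);
// Dequeues as: criticalTask, normalTask, backgroundTask
```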

Custom Queue Implementation

You can implement custom queue strategies by implementing the TaskQueue interface:

import { Worker } from "jest-worker";

class CustomQueue implements TaskQueue {
  private sharedTasks: QueueChildMessage[] = [];
  private workerTasks: Map<number, QueueChildMessage[]> = new Map();
  
  enqueue(task: QueueChildMessage, workerId?: number): void {
    if (workerId !== undefined) {
      if (!this.workerTasks.has(workerId)) {
        this.workerTasks.set(workerId, []);
      }
      this.workerTasks.get(workerId)!.push(task);
    } else {
      this.sharedTasks.push(task);
    }
  }
  
  dequeue(workerId: number): QueueChildMessage | null {
    // Custom logic: prefer worker-specific tasks on weekends
    const isWeekend = new Date().getDay() === 0 || new Date().getDay() === 6;
    
    const workerQueue = this.workerTasks.get(workerId) || [];
    
    if (isWeekend && workerQueue.length > 0) {
      return workerQueue.shift() || null;
    }
    
    // Otherwise, prefer shared tasks
    return this.sharedTasks.shift() || workerQueue.shift() || null;
  }
}

const customWorker = new Worker("./worker.js", {
  taskQueue: new CustomQueue()
});

Queue Behavior Patterns

Worker-Specific vs Shared Tasks

Tasks can be enqueued for specific workers or shared among all workers:

// This concept is handled internally by the Farm class
// When computeWorkerKey returns a key, tasks are bound to specific workers
const boundWorker = new Worker("./caching-worker.js", {
  computeWorkerKey: (method, key) => {
    // Tasks with same key always go to same worker
    return method === "processWithCache" ? key : null;
  }
});

// These will all go to the same worker (bound by key "user123")
await boundWorker.processWithCache("user123", "data1");
await boundWorker.processWithCache("user123", "data2");
await boundWorker.processWithCache("user123", "data3");
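Binding pays off when the worker module keeps per-key state. Below is a sketch of what such a caching worker module might look like (the cache shape and method body are assumptions for illustration, not jest-worker API):

```typescript
// Hypothetical caching-worker module: a module-level cache only helps
// if calls for the same key reach the same child process, which is
// exactly what computeWorkerKey guarantees.
const cache = new Map<string, string[]>();

export function processWithCache(key: string, data: string): number {
  let entries = cache.get(key);
  if (entries === undefined) {
    entries = []; // expensive per-key setup would happen here
    cache.set(key, entries);
  }
  entries.push(data);
  // Returns how many items this process has cached for the key
  return entries.length;
}
```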

Task Priority Strategies

Common priority computation patterns:

import fs from "fs";

// File size priority (smaller first)
const fileSizePriority = (method, filepath) => {
  return fs.statSync(filepath).size;
};

// String length priority (shorter first)  
const stringLengthPriority = (method, text) => {
  return typeof text === "string" ? text.length : 0;
};

// Method-based priority
const methodPriority = (method) => {
  const priorities = { urgent: 1, normal: 5, batch: 10 };
  return priorities[method] || 5;
};

// Time-based priority (older timestamps first)
const timePriority = (method, timestamp) => {
  return typeof timestamp === "number" ? timestamp : Date.now();
};

// Combined priority strategy
const combinedPriority = (method, ...args) => {
  let priority = methodPriority(method);
  
  if (method === "processFile" && args[0]) {
    priority += Math.log(fs.statSync(args[0]).size + 1); // +1 avoids log(0) for empty files
  }
  
  return priority;
};

Worker Scheduling Policies

Control how tasks are assigned to idle workers:

const worker = new Worker("./worker.js", {
  workerSchedulingPolicy: "round-robin" // Default: distribute evenly
});

const orderedWorker = new Worker("./worker.js", {
  workerSchedulingPolicy: "in-order" // Use worker 1 first, then 2, etc.
});
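As a rough mental model, the two policies reduce to how the next idle worker is chosen. The selection functions below are a hypothetical simplification, not jest-worker's actual implementation:

```typescript
// Hypothetical selection functions; jest-worker's internal farm logic
// is more involved, but the policies reduce to these two choices.
function pickRoundRobin(lastUsed: number, idle: boolean[]): number | null {
  const n = idle.length;
  // Scan starting just after the last-used worker, wrapping around
  for (let i = 1; i <= n; i++) {
    const candidate = (lastUsed + i) % n;
    if (idle[candidate]) return candidate;
  }
  return null;
}

function pickInOrder(idle: boolean[]): number | null {
  // Always prefer the lowest-numbered idle worker
  const index = idle.indexOf(true);
  return index === -1 ? null : index;
}
```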

Queue Performance Considerations

FIFO Queue

  • Best for: Maintaining strict task ordering
  • Performance: O(1) enqueue/dequeue operations
  • Memory: Minimal overhead with linked list structure

Priority Queue

  • Best for: Tasks with varying importance levels
  • Performance: O(log n) enqueue/dequeue operations (heap-based)
  • Memory: Higher overhead due to heap maintenance

Custom Queues

  • Best for: Specialized scheduling requirements
  • Performance: Depends on implementation
  • Memory: Varies based on data structures used

Choose the appropriate queue based on your specific use case and performance requirements.
