# Run Tasks

The Run Tasks node processes an array of items in parallel by creating a component instance for each item and managing their execution. It supports configurable concurrency limits, error-handling strategies, and progress tracking.
## Inputs

### Data
| Data | Description |
|---|---|
| Items | Array of items to process |
### General
| Data | Description |
|---|---|
| Stop On Failure | Whether to halt all tasks if one fails (default: false) |
| Max Running Tasks | Maximum concurrent tasks (default: 10) |
### Actions
| Signal | Description |
|---|---|
| Start | Begins processing the items array |
| Stop | Stops all running tasks |
## Outputs

### Status
| Data | Description |
|---|---|
| State | Current execution state (idle, running, completed, error) |
| Progress | Completion percentage (0-100) |
| Completed Count | Number of successfully completed tasks |
| Failed Count | Number of failed tasks |
### Events
| Signal | Description |
|---|---|
| Started | Triggered when task execution begins |
| Task Completed | Triggered for each successfully completed task |
| Task Failed | Triggered for each failed task |
| All Completed | Triggered when all tasks finish (success or failure) |
| All Succeeded | Triggered when all tasks complete successfully |
## Usage
The Run Tasks node is designed for parallel processing scenarios where you need to perform the same operation on multiple data items:
### Concurrency Control
- Max Running Tasks: Limits simultaneous execution to prevent resource exhaustion
- Queue Management: Automatically queues additional items when concurrency limit is reached
- Dynamic Scaling: Adjusts active tasks based on available resources
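The node itself is configured visually, but its queueing behavior can be sketched with a semaphore: every item is scheduled up front, and the semaphore admits at most `max_running_tasks` of them at a time. This is an illustrative Python equivalent, not the node's actual implementation; the names `run_tasks` and `process` are hypothetical.

```python
import asyncio

async def run_tasks(items, process, max_running_tasks=10):
    """Bounded-concurrency sketch: at most `max_running_tasks`
    items are processed at once; the rest wait in the queue."""
    semaphore = asyncio.Semaphore(max_running_tasks)

    async def run_one(item):
        async with semaphore:          # wait for a free slot
            return await process(item)

    # Schedule every item; the semaphore enforces the concurrency cap.
    return await asyncio.gather(*(run_one(i) for i in items))

async def demo():
    async def double(x):
        await asyncio.sleep(0.01)      # stand-in for real work
        return x * 2
    return await run_tasks([1, 2, 3], double, max_running_tasks=2)

print(asyncio.run(demo()))  # [2, 4, 6]
```

Results come back in item order even though tasks finish out of order, because `gather` preserves the order in which the awaitables were passed.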
### Error Handling Strategies

#### Continue on Failure (Stop On Failure: false)
- Failed tasks are logged but don't stop other tasks
- Useful for batch operations where partial success is acceptable
- All items are processed regardless of individual failures
#### Stop on Failure (Stop On Failure: true)
- First failure immediately stops all running and queued tasks
- Useful for critical operations where all items must succeed
- Provides fail-fast behavior for error detection
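The two strategies can be contrasted in a short asyncio sketch (illustrative only; `run_with_strategy` and `process` are hypothetical names, not part of the node's API). With `stop_on_failure=True` the first error cancels the remaining tasks; otherwise failures are collected alongside successful results and everything keeps running.

```python
import asyncio

async def run_with_strategy(items, process, stop_on_failure=False):
    tasks = [asyncio.create_task(process(i)) for i in items]
    if stop_on_failure:
        try:
            return await asyncio.gather(*tasks)   # raises on first failure
        except Exception:
            for t in tasks:
                t.cancel()                        # fail fast: stop the rest
            raise
    # Continue-on-failure: exceptions come back as ordinary result values.
    return await asyncio.gather(*tasks, return_exceptions=True)

async def demo():
    async def check(x):
        if x < 0:
            raise ValueError(x)
        return x
    results = await run_with_strategy([1, -2, 3], check)
    return [r for r in results if not isinstance(r, Exception)]

print(asyncio.run(demo()))  # [1, 3]
```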
### Example Use Cases
- Batch Data Processing: Process arrays of records through validation/transformation
- File Operations: Upload, download, or process multiple files concurrently
- API Requests: Make parallel HTTP requests with rate limiting
- Image Processing: Resize, convert, or analyze multiple images
- Database Operations: Bulk insert/update operations with controlled concurrency
### Component Integration
The Run Tasks node works with component instances to define the processing logic:
- Create Component: Design a component that processes a single item
- Configure Inputs: Map item data to component inputs
- Handle Outputs: Collect results from component outputs
- Error Management: Implement error handling within components
### Progress Monitoring
- Real-time Progress: Track completion percentage as tasks execute
- Granular Counts: Monitor successful and failed task counts
- State Transitions: Observe state changes from idle to running to completed
- Event-driven Updates: React to individual task completions
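The Progress, Completed Count, and Failed Count outputs update as individual tasks finish. A rough sketch of that bookkeeping, assuming a hypothetical `on_progress` callback in place of the node's output ports:

```python
import asyncio

async def run_with_progress(items, process, on_progress):
    """After each task finishes, report percentage complete
    plus running success/failure counts."""
    completed = failed = 0
    total = len(items)
    for coro in asyncio.as_completed([process(i) for i in items]):
        try:
            await coro
            completed += 1
        except Exception:
            failed += 1
        on_progress(round(100 * (completed + failed) / total),
                    completed, failed)

updates = []

async def demo():
    async def work(x):
        if x == "bad":
            raise RuntimeError(x)
        return x
    await run_with_progress([1, "bad", 3], work,
                            lambda pct, ok, err: updates.append((pct, ok, err)))

asyncio.run(demo())
print(updates[-1])  # final update: (100, 2, 1)
```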
### Performance Considerations
- Concurrency Tuning: Adjust Max Running Tasks based on system resources
- Memory Management: Monitor memory usage with large item arrays
- Network Throttling: Use concurrency limits for network-bound operations
- CPU Intensive Tasks: Lower concurrency for CPU-heavy processing
### Best Practices
- Error Handling: Always implement proper error handling in processing components
- Resource Management: Set appropriate concurrency limits for your environment
- Progress Feedback: Use progress events to provide user feedback during long operations
- Cleanup: Ensure components properly clean up resources on completion or failure
- Testing: Test with various failure scenarios to validate error handling
### Advanced Patterns

- Retry Logic: Implement retry mechanisms within processing components
- Result Aggregation: Collect and combine results from all successful tasks
- Conditional Processing: Use item properties to determine processing requirements
- Pipeline Integration: Chain multiple Run Tasks nodes for complex workflows
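Retry logic lives inside the processing component rather than in the Run Tasks node itself. A minimal sketch of such a wrapper, with hypothetical names (`with_retry`, `process`) and a transient failure simulated by `flaky`:

```python
import asyncio

async def with_retry(process, item, attempts=3, delay=0.0):
    """Retry a failing item up to `attempts` times,
    re-raising the last error if every attempt fails."""
    for attempt in range(1, attempts + 1):
        try:
            return await process(item)
        except Exception:
            if attempt == attempts:
                raise
            await asyncio.sleep(delay)   # back off before retrying

calls = {"n": 0}

async def flaky(x):
    calls["n"] += 1
    if calls["n"] < 3:                   # fail the first two attempts
        raise RuntimeError("transient")
    return x

print(asyncio.run(with_retry(flaky, "ok")))  # "ok" after two failed attempts
```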