
Async Task Engine

View on GitHub →

Async Task Engine is a production-oriented background job processing system I built to handle asynchronous workloads reliably and at scale. The goal was to design a clean, extensible architecture where tasks can be submitted via an API, queued persistently, processed concurrently, retried safely on failure, and monitored in real time.

The system is built using FastAPI for the REST layer, Redis as the message broker and state store, and Python’s asyncio for concurrent task execution. Clients submit jobs through a REST endpoint, providing structured payloads that describe the task to be executed. Once received, the API assigns a unique job ID and pushes the task into a Redis-backed queue. Redis is used deliberately for both queuing and state persistence, allowing the system to track job states such as pending, processing, completed, and failed without requiring an additional database.
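To make the flow concrete, here is a minimal sketch of the submission path, assuming redis-py's asyncio client and illustrative key names (`jobs:queue`, `job:{id}`); the endpoint shape and key layout in the actual codebase may differ:

```python
import json
import uuid

from fastapi import FastAPI
from pydantic import BaseModel
import redis.asyncio as redis

app = FastAPI()
r = redis.Redis(decode_responses=True)

class JobRequest(BaseModel):
    task_type: str       # which registered handler should run
    payload: dict = {}   # handler-specific arguments

@app.post("/jobs")
async def submit_job(req: JobRequest):
    job_id = str(uuid.uuid4())
    # Persist job state in a hash so it can be inspected independently of the queue.
    await r.hset(f"job:{job_id}", mapping={
        "status": "pending",
        "task_type": req.task_type,
        "payload": json.dumps(req.payload),
    })
    # Enqueue the job ID; workers pop from the other end of the list.
    await r.lpush("jobs:queue", job_id)
    return {"job_id": job_id, "status": "pending"}

@app.get("/jobs/{job_id}")
async def get_job(job_id: str):
    return await r.hgetall(f"job:{job_id}")
```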

Workers operate asynchronously using asyncio’s event loop. Each worker continuously polls Redis for new tasks and processes them concurrently, up to a configurable limit. When a job is pulled from the queue, its status is immediately updated to prevent duplicate execution. The worker then executes the associated task handler and captures any result or exception. Successful executions update the job state and store the result, while failures trigger structured retry logic.
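A condensed sketch of that worker loop, under the same assumed key names; the handler registry and concurrency mechanics are simplified here for illustration:

```python
import asyncio
import json

import redis.asyncio as redis

r = redis.Redis(decode_responses=True)
MAX_CONCURRENCY = 10                      # see the scaling notes below
sem = asyncio.Semaphore(MAX_CONCURRENCY)  # caps how many jobs run at once

async def send_email(payload: dict):
    ...  # stand-in for a real task handler

HANDLERS = {"send_email": send_email}

async def run_job(job_id: str) -> None:
    async with sem:
        job = await r.hgetall(f"job:{job_id}")
        # Claim the job: mark it as processing before running the handler.
        await r.hset(f"job:{job_id}", "status", "processing")
        try:
            handler = HANDLERS[job["task_type"]]
            result = await handler(json.loads(job.get("payload", "{}")))
            await r.hset(f"job:{job_id}", mapping={
                "status": "completed",
                "result": json.dumps(result),
            })
        except Exception as exc:
            # In the full engine this hands off to the retry scheduler described below.
            await r.hset(f"job:{job_id}", mapping={"status": "failed", "error": str(exc)})

async def worker() -> None:
    while True:
        # BRPOP blocks until a job ID is available, then the job runs as its own task.
        item = await r.brpop("jobs:queue", timeout=5)
        if item:
            _, job_id = item
            asyncio.create_task(run_job(job_id))

if __name__ == "__main__":
    asyncio.run(worker())
```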

Retry handling is implemented with exponential backoff. Instead of immediately requeueing failed tasks, the engine calculates a future retry timestamp and stores the job in a Redis sorted set keyed by execution time. A lightweight scheduler component monitors this sorted set and promotes jobs back into the main queue when their retry time arrives. This prevents tight failure loops and ensures the system remains stable under transient errors.
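Sketched out, the backoff and promotion logic could look roughly like this, assuming a `jobs:retry` sorted set scored by the Unix timestamp at which each job becomes eligible again:

```python
import asyncio
import time

import redis.asyncio as redis

r = redis.Redis(decode_responses=True)
BASE_DELAY = 2.0  # seconds; doubles with every attempt

async def schedule_retry(job_id: str, attempt: int) -> None:
    delay = BASE_DELAY * (2 ** attempt)  # exponential backoff
    retry_at = time.time() + delay
    await r.hset(f"job:{job_id}", mapping={"status": "failed", "attempt": attempt})
    await r.zadd("jobs:retry", {job_id: retry_at})

async def retry_scheduler(poll_interval: float = 1.0) -> None:
    while True:
        now = time.time()
        # Find jobs whose retry time has arrived and move them back to the main queue.
        due = await r.zrangebyscore("jobs:retry", 0, now)
        for job_id in due:
            await r.zrem("jobs:retry", job_id)
            await r.hset(f"job:{job_id}", "status", "pending")
            await r.lpush("jobs:queue", job_id)
        await asyncio.sleep(poll_interval)
```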

Concurrency is configurable, allowing the engine to scale vertically by increasing worker concurrency or horizontally by running multiple worker processes or containers. Since Redis acts as the central broker, multiple workers can safely pull from the same queue without coordination conflicts. This makes the system suitable for distributed deployments.
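For example, scaling within one process can be as simple as starting several worker coroutines, with the count read from configuration; because each blocking pop is atomic on the Redis side, the same pattern holds across processes and containers. The `WORKER_CONCURRENCY` variable below is an illustrative assumption, not the engine's actual setting name:

```python
import asyncio
import os

import redis.asyncio as redis

r = redis.Redis(decode_responses=True)

async def worker(worker_id: int) -> None:
    while True:
        # BRPOP removes the job ID atomically, so no two workers (here or in
        # another container) can claim the same job.
        item = await r.brpop("jobs:queue", timeout=5)
        if item:
            _, job_id = item
            print(f"worker {worker_id} claimed job {job_id}")  # hand off to run_job(...)

async def main() -> None:
    concurrency = int(os.environ.get("WORKER_CONCURRENCY", "10"))
    await asyncio.gather(*(worker(i) for i in range(concurrency)))

if __name__ == "__main__":
    asyncio.run(main())
```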

The architecture cleanly separates concerns: API handling, queue management, worker execution, retry scheduling, and logging are modular components. This separation makes it easy to extend the engine with new task types, modify retry strategies, or replace the queue backend if needed. Structured JSON logging is used throughout to improve observability and debugging in production environments.
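As one illustration of the logging side, a JSON formatter built on the standard library is enough to emit machine-parseable log lines; the field names here are assumptions rather than the project's exact schema:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        entry = {
            "ts": self.formatTime(record, "%Y-%m-%dT%H:%M:%S%z"),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # Allow call sites to attach structured context, e.g. a job ID.
        if hasattr(record, "job_id"):
            entry["job_id"] = record.job_id
        return json.dumps(entry)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("async_task_engine")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("job completed", extra={"job_id": "1234"})
# -> {"ts": "...", "level": "INFO", "logger": "async_task_engine", "message": "job completed", "job_id": "1234"}
```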

Overall, Async Task Engine demonstrates how asynchronous programming, persistent queuing, and fault-tolerant retry strategies can be combined into a reliable background processing system. It reflects practical system design decisions focused on scalability, modularity, and operational robustness rather than minimal prototype functionality.