PowerJob is a new-generation distributed scheduling and computing framework. It supports CRON, API, fixed-frequency, fixed-delay and other scheduling strategies, and provides workflows to orchestrate tasks and resolve their dependencies. It is easy to use, powerful and well documented, letting you easily handle the scheduling and distributed computation of complex tasks.
Easy to use: provides a web-based front end that lets developers visually manage scheduled tasks (create, read, update, delete), monitor task status, and view run logs.
Complete timing strategies: supports four scheduling strategies: CRON expressions, fixed frequency, fixed delay, and API triggering.
Rich execution modes: supports four execution modes: standalone, broadcast, Map, and MapReduce. The Map/MapReduce processors let developers obtain distributed cluster computing capability with just a few lines of code.
DAG workflow support: supports online configuration of task dependencies and visual orchestration of tasks, as well as data transfer between upstream and downstream tasks.
Extensive executor support: supports Spring Beans, built-in/external Java classes, Shell, Python and other processors, covering a wide range of use cases.
Convenient operations: supports online logging; logs produced by executors are displayed in real time on the front-end console, reducing debugging costs and greatly improving development efficiency.
Streamlined dependencies: the only required dependency is a relational database (MySQL/Oracle/MS SQL Server...); MongoDB is an optional dependency used to store very large online logs.
High availability & high performance: the scheduling server is carefully designed for lock-free scheduling, replacing the database-lock strategy used by other scheduling frameworks. Deploying multiple scheduling servers provides high availability and higher performance at the same time (supports unlimited horizontal scaling).
Failover and recovery: after a task execution fails, it is retried according to the configured retry policy. As long as the executor cluster has enough compute nodes, the task can complete successfully.
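The Map/MapReduce execution mode mentioned above boils down to a simple pattern: a root task splits a large job into sub-tasks, the cluster processes them in parallel, and a reduce step aggregates the partial results. The sketch below illustrates that pattern only; the class and method names are hypothetical stand-ins, not the PowerJob SDK (in which you would implement its processor interfaces instead), and a local thread pool stands in for the worker cluster.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch of the Map/MapReduce idea: split a big range into
// sub-tasks ("map"), run them in parallel, then aggregate ("reduce").
public class MapReduceSketch {

    // Sum the integers in [start, end) by distributing chunks to workers.
    static long computeTotal(long start, long end) throws Exception {
        final long chunk = 100;
        List<long[]> subTasks = new ArrayList<>();          // "map" phase: create sub-tasks
        for (long s = start; s < end; s += chunk) {
            subTasks.add(new long[]{s, Math.min(s + chunk, end)});
        }

        ExecutorService pool = Executors.newFixedThreadPool(4); // stand-in for worker nodes
        List<Future<Long>> partials = new ArrayList<>();
        for (long[] t : subTasks) {
            partials.add(pool.submit(() -> {
                long sum = 0;
                for (long i = t[0]; i < t[1]; i++) sum += i;    // per-sub-task work
                return sum;
            }));
        }

        long total = 0;                                     // "reduce" phase: aggregate
        for (Future<Long> f : partials) total += f.get();
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(computeTotal(0, 1000));          // sum of 0..999 = 499500
    }
}
```

In PowerJob itself, the framework handles the splitting, dispatching and retrying across real machines; the code above only conveys why a few lines of map/reduce logic are enough to parallelize a job.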
Business scenarios with scheduled execution requirements: e.g., fully synchronizing data in the early morning every day, or generating business reports.
Business scenarios that require all machines to execute together: e.g., cleaning up cluster logs with the broadcast execution mode.
Business scenarios that require distributed processing: e.g., a large volume of data needs to be updated and single-machine execution would take far too long; Map/MapReduce processors can distribute the task and mobilize the whole cluster to accelerate the computation.
Business scenarios that require delayed execution of certain tasks: e.g., processing expired orders.
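The delayed-execution scenario (such as order expiration) is simply "run a one-shot handler after a delay". The sketch below shows those semantics with the JDK's `ScheduledExecutorService`; the class, method and order ID are hypothetical, and in PowerJob the delay and handler would instead be configured as a job executed on the worker cluster.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Minimal sketch of a delayed task: schedule an expiration handler to run
// once after a delay, then wait for its result.
public class DelayedTaskSketch {

    // Run the expiration handler for orderId after delayMillis milliseconds.
    static String expireOrderAfter(String orderId, long delayMillis) throws Exception {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        ScheduledFuture<String> future = scheduler.schedule(
                () -> "order " + orderId + " expired",   // the expiration handler
                delayMillis, TimeUnit.MILLISECONDS);
        String result = future.get();                    // block until the handler has run
        scheduler.shutdown();
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(expireOrderAfter("A-1001", 50)); // prints "order A-1001 expired"
    }
}
```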
v4.0.1
Features
Support PostgreSQL
Enhanced the front-end console: worker details such as tags and last-online time are now shown, making connectivity problems easier to troubleshoot
BugFix
Fixed a leader-election issue in server clusters
Fixed an NPE that occurred when no worker was connected to the server
Fixed incorrect display of the worker list in the front-end console