Polling schedulers work, but they are not always efficient. A more scalable and responsive approach is event-driven orchestration, where jobs trigger other jobs automatically after completion. This model reduces delays, simplifies workflows, and improves system responsiveness.

1. Why Event-Driven Orchestration

With scheduler-based polling:
  • Jobs wait for the next scan cycle before they can start
  • The database is queried on every cycle, even when nothing has changed
  • Chained jobs accumulate one polling interval of latency per step
Event-driven orchestration removes these delays by reacting the moment a job changes state.

2. Defining Job Events

Create domain events to represent job lifecycle changes.
public class JobCompletedEvent {
    private final Long jobId;

    public JobCompletedEvent(Long jobId) {
        this.jobId = jobId;
    }

    public Long getJobId() {
        return jobId;
    }
}
You can also define JobFailedEvent if needed.
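A JobFailedEvent can mirror the completed event. The extra reason field below is an assumption; carry whatever your failure handling needs:

```java
public class JobFailedEvent {
    private final Long jobId;
    private final String reason;

    public JobFailedEvent(Long jobId, String reason) {
        this.jobId = jobId;
        this.reason = reason;
    }

    public Long getJobId() {
        return jobId;
    }

    public String getReason() {
        return reason;
    }
}
```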

3. Publishing Events After Job Execution

Modify your job processor to publish events when a job finishes.
@Autowired
private ApplicationEventPublisher eventPublisher;

private void executeAndFinish(JobQueue job) {
    try {
        executeJob(job);
        job.setStatus("COMPLETED");
        jobRepo.save(job);

        eventPublisher.publishEvent(new JobCompletedEvent(job.getId()));
    } catch (Exception e) {
        job.setStatus("FAILED");
        jobRepo.save(job);
        // Publish a JobFailedEvent here if alerting or retry logic
        // should react to failures as well, rather than swallowing them.
    }
}
Now, every completed job emits an event instantly.
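There is no magic underneath: Spring's default @EventListener dispatch is synchronous and in-thread, conceptually just a list of callbacks. A minimal sketch of that mechanism (SimpleEventBus is hypothetical, not a Spring class):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class SimpleEventBus {
    private final List<Consumer<Object>> listeners = new ArrayList<>();

    public void subscribe(Consumer<Object> listener) {
        listeners.add(listener);
    }

    // Synchronous dispatch: every listener runs before publish() returns,
    // which is also Spring's default behavior for @EventListener methods.
    public void publish(Object event) {
        listeners.forEach(listener -> listener.accept(event));
    }
}
```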

4. Listening for Job Completion Events

Create an event listener to react to completed jobs.
@Component
public class JobEventListener {

    @Autowired
    private JobQueueRepository jobRepo;

    @Autowired
    private JobProcessorService processor;

    @EventListener
    public void onJobCompleted(JobCompletedEvent event) {

        List<JobQueue> dependentJobs =
                jobRepo.findByDependsOnJobIdAndStatus(
                        event.getJobId(), "PENDING"
                );

        dependentJobs.forEach(processor::processSingleJob);
    }
}
This removes the need to wait for the scheduler to detect ready jobs.
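The listener assumes a Spring Data derived query, findByDependsOnJobIdAndStatus, exists on JobQueueRepository (it does if JobQueue has a dependsOnJobId field). Its contract, expressed as a plain in-memory filter:

```java
import java.util.List;
import java.util.Objects;

public class DependentLookup {
    // Minimal stand-in for the JobQueue entity (field names assumed).
    public record Job(long id, Long dependsOnJobId, String status) {}

    // Same contract as findByDependsOnJobIdAndStatus: jobs that depend
    // on the finished job and are still in the given status.
    public static List<Job> dependents(List<Job> all, long finishedId, String status) {
        return all.stream()
                .filter(j -> Objects.equals(j.dependsOnJobId(), finishedId))
                .filter(j -> status.equals(j.status()))
                .toList();
    }
}
```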

5. Processing a Single Job

Add a method to execute one job directly.
@Async("firebirdExecutor")
public void processSingleJob(JobQueue job) {

    job.setStatus("RUNNING");
    jobRepo.save(job);

    try {
        executeJob(job);
        job.setStatus("COMPLETED");
    } catch (Exception e) {
        job.setStatus("FAILED");
    }

    job.setCompletedAt(LocalDateTime.now());
    jobRepo.save(job);
}
This allows jobs to cascade naturally through events.
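One subtlety: because processSingleJob runs asynchronously, the same PENDING row can be dispatched twice, for example when the startup recovery pass described below overlaps with an in-flight event. The usual guard is an atomic PENDING-to-RUNNING transition that only one caller can win (in SQL, an UPDATE ... WHERE status = 'PENDING' that must affect exactly one row). The idea, sketched in memory with hypothetical names:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class JobClaims {
    private final ConcurrentMap<Long, String> statusById = new ConcurrentHashMap<>();

    public void add(Long jobId) {
        statusById.put(jobId, "PENDING");
    }

    // Atomic PENDING -> RUNNING transition: returns true for exactly one
    // caller, so a job dispatched twice is only executed once.
    public boolean claim(Long jobId) {
        return statusById.replace(jobId, "PENDING", "RUNNING");
    }
}
```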

6. Example Workflow

Import Data
   ↓
Validate Records
   ↓
Aggregate Results
   ↓
Generate Report
Each step starts immediately after the previous job completes—no polling, no delay.
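Seeding that chain means pointing each row's dependsOnJobId at the previous step, so each completion event releases exactly the next job. An in-memory picture (entity shape assumed):

```java
import java.util.List;

public class ChainExample {
    public record Job(long id, String name, Long dependsOnJobId) {}

    // The four-step pipeline from the diagram: only the first job has no
    // dependency; every other step waits on the id of the one before it.
    public static List<Job> chain() {
        return List.of(
                new Job(1, "Import Data", null),
                new Job(2, "Validate Records", 1L),
                new Job(3, "Aggregate Results", 2L),
                new Job(4, "Generate Report", 3L));
    }
}
```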

7. Combining Events with Priority

Event-driven systems still respect priority.
Before executing dependent jobs, you can sort them:
dependentJobs.stream()
        .sorted(Comparator.comparingInt(JobQueue::getPriority).reversed())
        .forEach(processor::processSingleJob);
This ensures higher-priority jobs are handed to the executor first.
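Note that the reversed comparator assumes larger numbers mean higher priority; drop .reversed() if your scheme is the opposite. A quick standalone check of the ordering:

```java
import java.util.Comparator;
import java.util.List;

public class PrioritySortDemo {
    public record Job(String name, int priority) {}

    // Same sort as the dependent-job snippet: highest priority first.
    public static List<String> dispatchOrder(List<Job> jobs) {
        return jobs.stream()
                .sorted(Comparator.comparingInt(Job::priority).reversed())
                .map(Job::name)
                .toList();
    }
}
```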

8. Reliability and Recovery

If the application restarts, pending jobs remain stored in Firebird.
A startup runner can safely resume processing:
@EventListener(ApplicationReadyEvent.class)
public void resumePendingJobs() {
    processor.processPendingJobs();
}
Because job state lives in Firebird rather than in memory, PENDING jobs survive restarts. Jobs that were RUNNING when the process stopped still need a separate recovery step.
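Rows left RUNNING by a crash never complete and never fire an event, so a recovery sweep should demote them back to PENDING before resuming. The rule, sketched in memory (in the real system this would be a repository update executed at startup, before processPendingJobs):

```java
import java.util.List;

public class CrashRecovery {
    // Anything left RUNNING by a crash is demoted to PENDING so the
    // normal startup resume pass picks it up again.
    public static List<String> resetStale(List<String> statuses) {
        return statuses.stream()
                .map(s -> "RUNNING".equals(s) ? "PENDING" : s)
                .toList();
    }
}
```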