Bulk Job Apex Class: How to Build It, Run It, Test It, and Chain Batch Jobs Properly
A handy guide and example that you can simply hand to your agents for reference.
You know that moment when Salesforce works perfectly for one record and then quietly falls apart when you throw 20,000 records at it?
Yeah. That is usually when you need a bulk job.
In Apex, the most common way to handle large-scale processing is with Batch Apex. It lets you process records in chunks instead of trying to do everything in one giant transaction that explodes halfway through.
And honestly, this topic feels confusing until you build one yourself.
Then it clicks.
Batch Apex is really just controlled repetition. Small groups of records. One chunk at a time.
Safe. Predictable. Durable.
This post uses a real example built around three classes:
- PrepareColdLeadsBatch
- ReassignColdLeadsBatch
- ColdLeadScheduler
The workflow is simple:
- Find leads that match a target status.
- Flag them for reassignment.
- Chain a second batch to change ownership.
- Schedule the whole thing to run automatically.
That is the goal.
What Is Batch Apex?
Think about washing dishes after dinner.
You do not carry every plate, pan, mug, and spoon at once. You would drop something.
You do one load. Then another. Then another.
Batch Apex works the same way.
Instead of processing 50,000 records in one transaction, Salesforce splits them into smaller chunks called scopes.
Each scope runs independently.
That matters because governor limits reset for each execution chunk. So instead of one massive transaction choking on limits, you get smaller controlled transactions.
A batch class usually implements three methods:
start(), execute(), and finish()
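As a minimal sketch of that shape (the class name and query here are illustrative placeholders, not part of the post's example):

```apex
// Minimal Batch Apex skeleton. Names and the query are illustrative only.
public with sharing class ExampleBatch implements Database.Batchable<SObject> {

    // start(): build the full set of records to process.
    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator([SELECT Id FROM Account]);
    }

    // execute(): runs once per scope (chunk), each with fresh governor limits.
    public void execute(Database.BatchableContext bc, List<SObject> scope) {
        // process one chunk here
    }

    // finish(): runs once after all chunks; good for notifications or chaining.
    public void finish(Database.BatchableContext bc) {
    }
}
```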
The First Batch: Prepare the Leads
Here is the first class:
public with sharing class PrepareColdLeadsBatch implements Database.Batchable<SObject> {

    @TestVisible
    static String targetStatusOverride;

    public Database.QueryLocator start(Database.BatchableContext bc) {
        Datetime thirtyDaysAgo = System.now().addDays(-30);
        return Database.getQueryLocator([
            SELECT Id,
                Status,
                LastModifiedDate,
                Ready_For_Reassignment__c
            FROM Lead
            WHERE Status = :getTargetStatus()
                AND LastModifiedDate >= :thirtyDaysAgo
        ]);
    }

    public void execute(Database.BatchableContext bc, List<Lead> scope) {
        for (Lead leadRecord : scope) {
            leadRecord.Ready_For_Reassignment__c = true;
        }
        update scope;
    }

    public void finish(Database.BatchableContext bc) {
        Database.executeBatch(new ReassignColdLeadsBatch(), 200);
    }

    @TestVisible
    static String getTargetStatus() {
        return String.isBlank(targetStatusOverride) ? 'Cold' : targetStatusOverride;
    }
}
What start() is doing
This is the queue builder.
It selects leads where:
- Status matches the target status
- LastModifiedDate is within the last 30 days
That date filter matters. It keeps the job focused on recent records instead of sweeping through stale data forever.
What execute() is doing
This is the actual batch work.
Each scope comes in as a List<Lead>, and the batch marks every lead as ready for reassignment:
for (Lead leadRecord : scope) {
    leadRecord.Ready_For_Reassignment__c = true;
}
update scope;
And this part is important:
Do not do DML inside the loop.
Bad:
for (Lead leadRecord : scope) {
    update leadRecord;
}
Better:
update scope;
One DML statement. Cleaner and much safer for limits.
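If some records in a chunk might fail validation, a further hedged variant (an optional hardening step, not part of the post's classes) is Database.update with allOrNone set to false, so one bad lead does not roll back the whole chunk:

```apex
// Sketch: partial-success update so one failing record
// does not roll back the rest of the chunk.
List<Database.SaveResult> results = Database.update(scope, false);
for (Integer i = 0; i < results.size(); i++) {
    if (!results[i].isSuccess()) {
        // Log the failure; scope[i] lines up with results[i].
        System.debug('Update failed for ' + scope[i].Id + ': '
            + results[i].getErrors()[0].getMessage());
    }
}
```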
What finish() is doing
This is where the first batch hands off control:
Database.executeBatch(new ReassignColdLeadsBatch(), 200);
That is batch chaining.
The first job finishes all of its chunks, then passes the baton to the second one.
The Second Batch: Reassign the Leads
Here is the second class:
public with sharing class ReassignColdLeadsBatch implements Database.Batchable<SObject> {

    @TestVisible
    static String targetStatusOverride;

    @TestVisible
    static Id ownerIdOverride;

    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator([
            SELECT Id,
                OwnerId,
                Status,
                Ready_For_Reassignment__c
            FROM Lead
            WHERE Ready_For_Reassignment__c = true
                AND Status = :getTargetStatus()
        ]);
    }

    public void execute(Database.BatchableContext bc, List<Lead> scope) {
        Id newOwnerId = getNewLeadOwnerId();
        for (Lead leadRecord : scope) {
            leadRecord.OwnerId = newOwnerId;
            leadRecord.Ready_For_Reassignment__c = false;
        }
        update scope;
    }

    public void finish(Database.BatchableContext bc) {
        System.debug('Cold lead reassignment completed.');
    }

    @TestVisible
    static String getTargetStatus() {
        return String.isBlank(targetStatusOverride) ? 'Cold' : targetStatusOverride;
    }

    @TestVisible
    static Id getNewLeadOwnerId() {
        if (ownerIdOverride != null) {
            return ownerIdOverride;
        }
        List<User> reassignmentUsers = [
            SELECT Id
            FROM User
            WHERE IsActive = true
                AND Id != :UserInfo.getUserId()
            ORDER BY CreatedDate ASC
            LIMIT 1
        ];
        return reassignmentUsers.isEmpty() ? UserInfo.getUserId() : reassignmentUsers[0].Id;
    }
}
This batch only looks at leads that were flagged by the first batch:
WHERE Ready_For_Reassignment__c = true
AND Status = :getTargetStatus()
Then it does two things:
- changes OwnerId
- clears Ready_For_Reassignment__c
That last part is easy to miss, but it matters a lot. Clearing the flag prevents the same records from getting picked up again on the next run.
Why Split This into Two Batch Classes?
Because responsibilities grow.
The first batch answers:
Which leads are eligible?
The second batch answers:
Who should own them now?
Those are different decisions.
And reassignment logic always gets more complicated later:
- round robin
- territory routing
- product ownership
- SLA rules
- score-based assignment
Splitting the work early keeps the code from turning into one giant batch class full of tangled conditions.
A Small Detail That Makes Testing Easier
These classes use @TestVisible overrides:
@TestVisible
static String targetStatusOverride;
@TestVisible
static Id ownerIdOverride;
That is a practical design choice.
In tests, you do not want to depend too heavily on production assumptions like:
- the org definitely has a Cold lead status
- there is always a suitable active user to reassign to
So the batch can default to real behavior in production while still letting tests inject controlled values.
That is a much better pattern than writing brittle tests that pass only in one org shape.
How to Run the Batch
From Anonymous Apex:
Database.executeBatch(new PrepareColdLeadsBatch(), 200);
The second parameter is the batch size.
People copy 200 a lot, and that is usually fine for simple field updates. But batch size is not sacred.
If the logic is heavier, try 50 or 100.
If you are doing callouts, complex transformations, or pulling related data, go smaller.
There is no prize for using the largest batch size possible.
How to Schedule It
The scheduler is intentionally small:
public with sharing class ColdLeadScheduler implements Schedulable {
    public void execute(SchedulableContext sc) {
        Database.executeBatch(new PrepareColdLeadsBatch(), 200);
    }
}
That is usually what you want in a scheduler.
No extra decision-making. No hidden logic. Just kick off the batch.
You can schedule it by running this in Anonymous Apex:
String cronExp = '0 0 2 * * ?';
System.schedule(
    'Nightly Cold Lead Processing',
    cronExp,
    new ColdLeadScheduler()
);
That runs the job every day at 2 AM.
And honestly, there is a reason so many batch jobs run at night.
Users complain less when the heavy lifting happens while they are asleep.
How to See Scheduled and Past Jobs
After you schedule the job, you can verify it from Salesforce Setup.
To see scheduled jobs:
- Go to Setup.
- Search for Scheduled Jobs in Quick Find.
- Open Scheduled Jobs.
- Look for your job name, like Nightly Cold Lead Processing.
That view shows the scheduled entry, next run time, and basic status.
To see jobs that already ran or are currently running:
- Go to Setup.
- Search for Apex Jobs in Quick Find.
- Open Apex Jobs.
- Find PrepareColdLeadsBatch, ReassignColdLeadsBatch, or your scheduler-triggered jobs in the list.
This is usually the first place to check when you want to confirm:
- whether the batch actually started
- whether it completed or failed
- how many records were processed
- whether the chained batch ran after the first one
If you are debugging a scheduling issue, check Scheduled Jobs first. If you are debugging processing results, check Apex Jobs.
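You can also check both from Anonymous Apex instead of clicking through Setup. A small sketch (the job name matches the one used with System.schedule above):

```apex
// Sketch: inspect scheduled and batch jobs programmatically.

// Scheduled entry (CronTrigger holds the cron, next fire time, and state).
List<CronTrigger> scheduled = [
    SELECT Id, CronExpression, NextFireTime, State
    FROM CronTrigger
    WHERE CronJobDetail.Name = 'Nightly Cold Lead Processing'
];

// Batch runs (AsyncApexJob holds status and processed-item counts).
List<AsyncApexJob> runs = [
    SELECT Id, Status, JobItemsProcessed, TotalJobItems, NumberOfErrors, ExtendedStatus
    FROM AsyncApexJob
    WHERE ApexClass.Name IN ('PrepareColdLeadsBatch', 'ReassignColdLeadsBatch')
    ORDER BY CreatedDate DESC
    LIMIT 10
];

System.debug(scheduled);
System.debug(runs);
```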
How to Test Batch Apex Properly
This is where a lot of examples get too shallow.
The goal is not just code coverage.
The goal is confidence.
Test the preparation batch
The preparation test does something smart:
- creates an eligible lead
- creates a non-matching lead
- runs the first batch
- allows the chained second batch to run too
- verifies the eligible lead was reassigned and the non-matching one was left alone
That is stronger than only checking whether a flag briefly became true.
Because in the real flow, the first batch chains the second one. So after Test.stopTest(), the full process has already completed.
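Following the post's description, a hedged sketch of that preparation test might look like this. The class name, lead field values, and statuses are illustrative (the real suite builds data through BatchLeadTestDataFactory), and it assumes the chained batch completes inside Test.stopTest() as described above:

```apex
@IsTest
private class PrepareColdLeadsBatchTest {
    @IsTest
    static void flagsAndReassignsEligibleLeadsOnly() {
        // Inject a known status so the test does not depend on org picklists.
        // 'Cold' and 'Working' are illustrative values here.
        PrepareColdLeadsBatch.targetStatusOverride = 'Cold';
        ReassignColdLeadsBatch.targetStatusOverride = 'Cold';

        Lead eligible = new Lead(LastName = 'Eligible', Company = 'Acme', Status = 'Cold');
        Lead ignored = new Lead(LastName = 'Ignored', Company = 'Acme', Status = 'Working');
        insert new List<Lead>{ eligible, ignored };

        Test.startTest();
        Database.executeBatch(new PrepareColdLeadsBatch(), 200);
        Test.stopTest(); // queued async work, including the chained batch, runs here

        eligible = [SELECT Ready_For_Reassignment__c FROM Lead WHERE Id = :eligible.Id];
        ignored = [SELECT Ready_For_Reassignment__c FROM Lead WHERE Id = :ignored.Id];

        // The second batch clears the flag, so a cleared flag proves the full chain ran.
        System.assertEquals(false, eligible.Ready_For_Reassignment__c,
            'Flag should be cleared after reassignment');
        System.assertEquals(false, ignored.Ready_For_Reassignment__c,
            'Non-matching lead should be untouched');
    }
}
```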
Test the reassignment batch directly
The reassignment test isolates the second batch:
- one lead starts flagged
- one lead starts unflagged
- the test injects a known reassignment owner
- only the flagged record should move
That proves the filter logic works, not just the update logic.
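A hedged sketch of that isolated test (field values are illustrative; the real suite uses BatchLeadTestDataFactory and resets overrides between tests):

```apex
@IsTest
private class ReassignColdLeadsBatchTest {
    @IsTest
    static void reassignsOnlyFlaggedLeads() {
        // Inject controlled values instead of relying on org data.
        ReassignColdLeadsBatch.targetStatusOverride = 'Cold';
        ReassignColdLeadsBatch.ownerIdOverride = UserInfo.getUserId();

        Lead flagged = new Lead(LastName = 'Flagged', Company = 'Acme',
            Status = 'Cold', Ready_For_Reassignment__c = true);
        Lead unflagged = new Lead(LastName = 'Unflagged', Company = 'Acme',
            Status = 'Cold', Ready_For_Reassignment__c = false);
        insert new List<Lead>{ flagged, unflagged };

        Test.startTest();
        Database.executeBatch(new ReassignColdLeadsBatch(), 200);
        Test.stopTest();

        flagged = [SELECT OwnerId, Ready_For_Reassignment__c FROM Lead WHERE Id = :flagged.Id];
        System.assertEquals(ReassignColdLeadsBatch.ownerIdOverride, flagged.OwnerId,
            'Flagged lead should be reassigned to the injected owner');
        System.assertEquals(false, flagged.Ready_For_Reassignment__c,
            'Flag should be cleared after reassignment');
    }
}
```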
Test the scheduler too
There is also a scheduler test that uses:
System.schedule(...)
and then verifies a PrepareColdLeadsBatch async job was queued.
That matters because schedulers are easy to forget in test coverage, even though they are the entry point for the entire automation.
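A sketch of that scheduler test (the job name here is illustrative):

```apex
@IsTest
private class ColdLeadSchedulerTest {
    @IsTest
    static void schedulesAndQueuesTheBatch() {
        Test.startTest();
        System.schedule('Test Cold Lead Processing', '0 0 2 * * ?', new ColdLeadScheduler());
        Test.stopTest(); // fires the schedulable synchronously, which queues the batch

        // The batch should now exist as an async job record.
        List<AsyncApexJob> jobs = [
            SELECT Id, Status
            FROM AsyncApexJob
            WHERE JobType = 'BatchApex'
                AND ApexClass.Name = 'PrepareColdLeadsBatch'
        ];
        System.assertEquals(1, jobs.size(),
            'Scheduler should enqueue the preparation batch');
    }
}
```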
Why Test.startTest() and Test.stopTest() Matter
Batch Apex is asynchronous.
That means your test needs this pattern:
Test.startTest();
Database.executeBatch(new PrepareColdLeadsBatch(), 200);
Test.stopTest();
Test.stopTest() tells Salesforce to run queued async work before the test ends.
Without it, your assertions may execute before the batch does.
That is one of the most common reasons async tests feel confusing at first.
One More Nice Touch: Test Data Factory
The test suite also uses a shared data factory:
BatchLeadTestDataFactory
That factory handles things like:
- choosing an available non-converted lead status
- finding an alternate status for negative tests
- creating leads
- creating users
- resetting overrides after each test
This makes the tests easier to read and much easier to maintain.
And it avoids a common anti-pattern where every test rebuilds the same setup from scratch in slightly different ways.
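As a rough sketch of what such a factory can look like (the real BatchLeadTestDataFactory in the post's suite may differ in names and details):

```apex
@IsTest
public class BatchLeadTestDataFactory {
    // Sketch only: illustrative helpers, not the post's actual factory.

    // Pick any active, non-converted lead status so tests
    // do not assume a 'Cold' status exists in the org.
    public static String getAvailableLeadStatus() {
        LeadStatus status = [SELECT ApiName FROM LeadStatus WHERE IsConverted = false LIMIT 1];
        return status.ApiName;
    }

    public static Lead createLead(String lastName, String status) {
        return new Lead(LastName = lastName, Company = 'Test Co', Status = status);
    }

    // Reset @TestVisible overrides so tests stay independent.
    public static void resetOverrides() {
        PrepareColdLeadsBatch.targetStatusOverride = null;
        ReassignColdLeadsBatch.targetStatusOverride = null;
        ReassignColdLeadsBatch.ownerIdOverride = null;
    }
}
```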
When Batch Chaining Is the Right Move
Chaining works well when the second job depends on the output of the first.
That is true here:
- batch one decides eligibility
- batch two performs reassignment
The order matters.
That makes finish() the right place to trigger the second job.
When Not to Chain
Do not chain jobs just because you can.
Too many chained async jobs create debugging headaches fast.
You start asking:
- Which batch failed?
- Which job actually triggered this update?
- Why did it run twice?
- Why are there four async jobs stacked together?
If the jobs do not truly depend on each other, separate scheduled jobs are often easier to reason about.
Final Thoughts
Good Batch Apex code usually feels boring.
That is a compliment.
It processes records safely. It respects limits. It stays testable. It does not melt when data volume grows.
That is what good enterprise Apex is supposed to do.
Not clever tricks. Not giant all-in-one classes.
Just durable systems that keep working quietly in the background.