
Cron jobs with Lambda

20 min read
Ivan Barlog
AWS Solutions Architect @ BeeSolve

In this article I will show you how to implement simple cron job processing using AWS Lambda and EventBridge Scheduler. We'll also look into the most common problems you can hit and compare three different approaches to solving them.

Setting up a cron job with Lambda is very easy. You just set up EventBridge Scheduler[1] to invoke your Lambda function recurrently. EventBridge Scheduler supports both rate-based and cron-based schedules.
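For illustration, here is what the two styles look like as raw EventBridge Scheduler expression strings (the variable names are mine; the CDK code later in this article builds these via ScheduleExpression):

```typescript
// EventBridge Scheduler schedule expressions that both fire every minute.

// Rate-based: a fixed interval.
const rateExpression = "rate(1 minute)";

// Cron-based: six fields (minutes hours day-of-month month day-of-week year);
// exactly one of day-of-month / day-of-week must be "?".
const cronExpression = "cron(* * * * ? *)";
```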

Imagine that you want to run some aggregation which fetches data from the database, recalculates something and puts the data back. This is a pretty common scenario, and there are some things you need to take into consideration.

Such a task might behave differently than on your local machine, where things are usually much quicker and databases are usually much smaller. Let's specify some rules:

  1. run the task every minute
  2. always run just a single instance of the task
  3. when another task is already running, do nothing

Now we are going to review three different approaches:

  • naive way - "YOLO!"
  • custom locking - "Have I thought of all the edge cases?"
  • smart way - the right way ✅ - "Let AWS handle the complex locking."

In all of the examples I am going to use sample code. Since we want to expose the edge cases, I am going to schedule the task to run every minute and artificially make it run for at least 80 seconds, which is more than 1 minute.

tip

Full code can be found at @beesolve/lambda-cron-job-example.

Naive approach

The first approach you might think of is to simply set up the schedule for every minute and hope everything will somehow work out. Unfortunately the world is full of surprises, and it is almost never as easy as you might think.

Lambda functions are famous for their great ability to scale. If one request is being handled by one Lambda function instance and another request arrives, an entirely new environment is created for that request, resulting in two requests being processed concurrently.

This breaks rules 2 and 3, so it is probably not a good solution. In the next section we will try to implement a custom locking mechanism to avoid this problem.

Show me the code! 👨‍💻

In CDK we've defined a NodejsFunction with a 2-minute timeout.

// CDK setup
const naiveHandler = new NodejsFunction(stack, "CronHandlerNaive", {
  entry: "src/naive.ts",
  timeout: Duration.minutes(2),
  runtime: Runtime.NODEJS_24_X,
  architecture: Architecture.ARM_64,
  logGroup: new LogGroup(stack, "CronHandlerNaiveLogGroup", {
    retention: RetentionDays.ONE_WEEK,
    removalPolicy: RemovalPolicy.DESTROY,
  }),
});

new Schedule(stack, "OneMinuteCronNaive", {
  schedule: ScheduleExpression.rate(Duration.minutes(1)),
  target: new LambdaInvoke(naiveHandler),
});

The handler itself looks like this:

import type { Context } from "aws-lambda";
import { delay } from "./helpers";

export const handler = async (event: any, context: Context) => {
  console.log(`${context.awsRequestId} Job started: ${Date.now()}`);
  const start = performance.now();
  await delay(80); // wait slightly more than a minute
  console.log(
    `${context.awsRequestId} Job ended. Took: ${performance.now() - start}ms`,
  );
};
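The delay helper is imported from "./helpers" but not shown here; a minimal sketch of what it might look like (the real implementation in the example repo may differ):

```typescript
// Hypothetical sketch of the delay helper: resolves after the given
// number of seconds, which is why delay(80) blocks for ~80 seconds.
export function delay(seconds: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, seconds * 1000));
}
```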
Show me the logs! 📜

The log below shows two invocations of the same Lambda function running at the same time.

INIT_START Runtime Version: nodejs:24.v29 Runtime Version ARN: arn:aws:lambda:eu-central-1::runtime:58a37e8413ed69058c4ac3b1df642118591f17d40def93d6101f867c72cd03c2
START RequestId: f573b457-8591-4bdb-b208-199d7107e252 Version: $LATEST
2026-04-16T14:56:52.701Z f573b457-8591-4bdb-b208-199d7107e252 INFO f573b457-8591-4bdb-b208-199d7107e252 Job started: 1776351412682
INIT_START Runtime Version: nodejs:24.v29 Runtime Version ARN: arn:aws:lambda:eu-central-1::runtime:58a37e8413ed69058c4ac3b1df642118591f17d40def93d6101f867c72cd03c2
START RequestId: 4f7fdb63-3b31-4f86-9653-bd76a3d498df Version: $LATEST
2026-04-16T14:57:52.948Z 4f7fdb63-3b31-4f86-9653-bd76a3d498df INFO 4f7fdb63-3b31-4f86-9653-bd76a3d498df Job started: 1776351472929
2026-04-16T14:58:12.805Z f573b457-8591-4bdb-b208-199d7107e252 INFO f573b457-8591-4bdb-b208-199d7107e252 Job ended. Took: 80080.41460999999ms
END RequestId: f573b457-8591-4bdb-b208-199d7107e252
REPORT RequestId: f573b457-8591-4bdb-b208-199d7107e252 Duration: 80130.32 ms Billed Duration: 80238 ms Memory Size: 128 MB Max Memory Used: 74 MB Init Duration: 107.41 ms
START RequestId: 59f502a4-cf88-46a0-8757-458e1e7134a2 Version: $LATEST
2026-04-16T14:58:52.538Z 59f502a4-cf88-46a0-8757-458e1e7134a2 INFO 59f502a4-cf88-46a0-8757-458e1e7134a2 Job started: 1776351532538
2026-04-16T14:59:13.053Z 4f7fdb63-3b31-4f86-9653-bd76a3d498df INFO 4f7fdb63-3b31-4f86-9653-bd76a3d498df Job ended. Took: 80080.412324ms
END RequestId: 4f7fdb63-3b31-4f86-9653-bd76a3d498df
REPORT RequestId: 4f7fdb63-3b31-4f86-9653-bd76a3d498df Duration: 80145.57 ms Billed Duration: 80258 ms Memory Size: 128 MB Max Memory Used: 74 MB Init Duration: 112.17 ms
2026-04-16T14:59:52.567Z 3e7fc5f8-6c7a-410d-b107-b464f6d86608 INFO 3e7fc5f8-6c7a-410d-b107-b464f6d86608 Job started: 1776351592567
START RequestId: 3e7fc5f8-6c7a-410d-b107-b464f6d86608 Version: $LATEST
2026-04-16T15:00:12.559Z 59f502a4-cf88-46a0-8757-458e1e7134a2 INFO 59f502a4-cf88-46a0-8757-458e1e7134a2 Job ended. Took: 80021.442238ms
END RequestId: 59f502a4-cf88-46a0-8757-458e1e7134a2
REPORT RequestId: 59f502a4-cf88-46a0-8757-458e1e7134a2 Duration: 80024.00 ms Billed Duration: 80024 ms Memory Size: 128 MB Max Memory Used: 75 MB
START RequestId: 40bfcd09-bae5-4307-b9ff-85ca67746a87 Version: $LATEST
2026-04-16T15:00:52.619Z 40bfcd09-bae5-4307-b9ff-85ca67746a87 INFO 40bfcd09-bae5-4307-b9ff-85ca67746a87 Job started: 1776351652619
2026-04-16T15:01:12.648Z 3e7fc5f8-6c7a-410d-b107-b464f6d86608 INFO 3e7fc5f8-6c7a-410d-b107-b464f6d86608 Job ended. Took: 80080.253007ms
END RequestId: 3e7fc5f8-6c7a-410d-b107-b464f6d86608
REPORT RequestId: 3e7fc5f8-6c7a-410d-b107-b464f6d86608 Duration: 80082.90 ms Billed Duration: 80083 ms Memory Size: 128 MB Max Memory Used: 74 MB
START RequestId: aeba51f4-d961-42c3-b043-4b4728ad94c2 Version: $LATEST
2026-04-16T15:01:52.480Z aeba51f4-d961-42c3-b043-4b4728ad94c2 INFO aeba51f4-d961-42c3-b043-4b4728ad94c2 Job started: 1776351712480
2026-04-16T15:02:12.700Z 40bfcd09-bae5-4307-b9ff-85ca67746a87 INFO 40bfcd09-bae5-4307-b9ff-85ca67746a87 Job ended. Took: 80080.26028800002ms
END RequestId: 40bfcd09-bae5-4307-b9ff-85ca67746a87
REPORT RequestId: 40bfcd09-bae5-4307-b9ff-85ca67746a87 Duration: 80083.07 ms Billed Duration: 80084 ms Memory Size: 128 MB Max Memory Used: 75 MB
START RequestId: 38372b63-45ae-4129-b8b5-ba5b3ed62e0f Version: $LATEST
2026-04-16T15:02:52.544Z 38372b63-45ae-4129-b8b5-ba5b3ed62e0f INFO 38372b63-45ae-4129-b8b5-ba5b3ed62e0f Job started: 1776351772544
2026-04-16T15:03:12.561Z aeba51f4-d961-42c3-b043-4b4728ad94c2 INFO aeba51f4-d961-42c3-b043-4b4728ad94c2 Job ended. Took: 80080.26152100001ms
END RequestId: aeba51f4-d961-42c3-b043-4b4728ad94c2
REPORT RequestId: aeba51f4-d961-42c3-b043-4b4728ad94c2 Duration: 80082.93 ms Billed Duration: 80083 ms Memory Size: 128 MB Max Memory Used: 75 MB
2026-04-16T15:04:12.556Z 38372b63-45ae-4129-b8b5-ba5b3ed62e0f INFO 38372b63-45ae-4129-b8b5-ba5b3ed62e0f Job ended. Took: 80011.86170200002ms
END RequestId: 38372b63-45ae-4129-b8b5-ba5b3ed62e0f
REPORT RequestId: 38372b63-45ae-4129-b8b5-ba5b3ed62e0f Duration: 80014.19 ms Billed Duration: 80015 ms Memory Size: 128 MB Max Memory Used: 75 MB

As you can see, the "naive" approach is not very reliable. It could work if you can guarantee that the runtime won't exceed 1 minute. Or you can play around with the task rate, e.g. if you know that the maximum runtime of the task is 3 minutes, you can schedule it to run every 4 minutes. This is not very elegant though, and it requires additional tuning.
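The rate-tuning idea above can be sketched as a tiny rule of thumb (safeRateMinutes is a hypothetical helper, not part of the example repo):

```typescript
// Pick a schedule rate that guarantees runs cannot overlap, given a
// known worst-case runtime in minutes: run strictly less often than
// the worst case, with at least a full minute of slack.
function safeRateMinutes(maxRuntimeMinutes: number): number {
  return maxRuntimeMinutes + 1;
}

// e.g. a worst-case runtime of 3 minutes -> schedule every 4 minutes
```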

Custom locking approach

All the problems explained in the naive approach could be fixed by introducing a custom locking mechanism.

Each time the task starts running, it tries to acquire some kind of lock. If the lock can be acquired, the task runs. Once the task is done, the lock is released so another task can acquire it. If another task starts while the first one is still running, it won't be able to acquire the lock and therefore won't run. All of our conditions are met.

The problem here is that it requires additional, fairly complex engineering. I've provided an example of a simple locking mechanism using a DynamoDB table. Whenever the Lambda is invoked, these steps are followed:

  1. try to acquire the lock - conditionally put a record into the DynamoDB table
  2. if the condition fails, the processing is blocked
  3. if the record is put into the table successfully, we can start the processing
  4. once processing finishes (or any error is thrown within it), we release the lock by deleting the record from the table

I've also added an owner field to the DynamoDB record, set to the AWS request ID of the current invocation. Based on this, each instance can conditionally remove only the locks it owns.

Another thing we should tackle is the case where a task locks the processing by putting the record into the table but for some reason never releases it. I've added a 5-minute time-to-live to the records, so if this scenario happens, the outage will last at most 5 minutes.
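One detail worth calling out: DynamoDB's TTL feature expects the ttl attribute to be a Unix epoch timestamp in seconds, while Date.now() returns milliseconds, so the 5-minute expiry is computed like this:

```typescript
const maxTimeToLiveInSeconds = 300; // 5 minutes

// Convert milliseconds to seconds before adding the TTL window;
// a raw millisecond value would put the expiry thousands of years
// in the future and the lock record would never be cleaned up.
const ttl = Math.floor(Date.now() / 1000) + maxTimeToLiveInSeconds;
```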

This mechanism solves our problem, but it is too complex. Every minute a Lambda is invoked even if another instance is already running, and each run then performs a put and a delete operation against the DynamoDB table. Suddenly you need to make sure your table is available and that you haven't made any errors in your code. Also, DynamoDB, being a distributed system, can guarantee strong consistency only for reads, not for writes, which means your locking mechanism relies on something you don't have full control over.
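To put numbers on that overhead, a quick back-of-envelope count under the assumptions of this article (one schedule firing every minute, one conditional put plus one conditional delete per invocation):

```typescript
// Every scheduled minute triggers a Lambda invocation, lock or no lock.
const invocationsPerDay = 24 * 60; // 1440 invocations

// Each invocation performs a conditional put and a conditional delete.
const dynamoRequestsPerDay = invocationsPerDay * 2; // 2880 requests
```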

Show me the code! 👨‍💻

In order to set up custom locking we need to define a DynamoDB table and grant read/write access to our handler:

// CDK setup
const table = new TableV2(stack, "Table", {
  partitionKey: {
    name: "l",
    type: AttributeType.STRING,
  },
  billing: Billing.onDemand(),
  encryption: TableEncryptionV2.awsManagedKey(),
  timeToLiveAttribute: "ttl",
});

const withLockHandler = new NodejsFunction(stack, "CronHandlerWithLock", {
  entry: "src/withLock.ts",
  timeout: Duration.minutes(2),
  runtime: Runtime.NODEJS_24_X,
  architecture: Architecture.ARM_64,
  logGroup: new LogGroup(stack, "CronHandlerWithLockLogGroup", {
    retention: RetentionDays.ONE_WEEK,
    removalPolicy: RemovalPolicy.DESTROY,
  }),
  environment: {
    TABLE_NAME: table.tableName,
  },
});
table.grantReadWriteData(withLockHandler);

new Schedule(stack, "OneMinuteCronWithLock", {
  schedule: ScheduleExpression.rate(Duration.minutes(1)),
  target: new LambdaInvoke(withLockHandler),
});

In the handler we've implemented a custom locking mechanism backed by the DynamoDB table.

import {
  ConditionalCheckFailedException,
  DynamoDBClient,
} from "@aws-sdk/client-dynamodb";
import {
  DeleteCommand,
  DynamoDBDocumentClient,
  PutCommand,
} from "@aws-sdk/lib-dynamodb";
import type { Context } from "aws-lambda";
import { delay } from "./helpers";

const maxTimeToLiveInSeconds = 300;

const dynamodb = DynamoDBDocumentClient.from(new DynamoDBClient());
const tableName = process.env.TABLE_NAME;
if (tableName == null) throw Error(`Missing TABLE_NAME env variable.`);
const key = { l: "l" };

export const handler = async (event: any, context: Context) => {
  try {
    await acquireLock(context.awsRequestId);

    console.log(`${context.awsRequestId} Job started: ${Date.now()}`);
    const start = performance.now();
    await delay(80); // wait slightly more than a minute
    console.log(
      `${context.awsRequestId} Job ended. Took: ${performance.now() - start}ms`,
    );
  } catch (error) {
    if (error instanceof ConditionalCheckFailedException) {
      console.error("Cannot acquire lock - another job is running.");
    }
    if (error instanceof Error) {
      console.error(error.message);
    }
    throw error;
  } finally {
    await releaseLock(context.awsRequestId);
  }
};

/**
 * Tries to write the lock into the DynamoDB table;
 * if the lock already exists, ConditionalCheckFailedException is thrown
 */
async function acquireLock(owner: string) {
  await dynamodb.send(
    new PutCommand({
      TableName: tableName,
      Item: {
        ...key,
        owner,
        // DynamoDB TTL expects epoch time in seconds, not milliseconds
        ttl: Math.floor(Date.now() / 1000) + maxTimeToLiveInSeconds,
      },
      ConditionExpression: "attribute_not_exists(#l)",
      ExpressionAttributeNames: { "#l": "l" },
    }),
  );
}

/**
 * Deletes the lock record from the DynamoDB table
 * so the lock can be acquired by other jobs.
 *
 * The delete is conditional so a job only deletes its own locks.
 */
async function releaseLock(owner: string) {
  await dynamodb.send(
    new DeleteCommand({
      TableName: tableName,
      Key: key,
      ConditionExpression: "#owner = :owner",
      ExpressionAttributeNames: { "#owner": "owner" },
      ExpressionAttributeValues: { ":owner": owner },
    }),
  );
}
Show me the logs! 📜

As you can see from the logs, the locking mechanism works as expected. When there is a task in flight, the lock is not acquired and the processing does not start.

INIT_START Runtime Version: nodejs:24.v29 Runtime Version ARN: arn:aws:lambda:eu-central-1::runtime:58a37e8413ed69058c4ac3b1df642118591f17d40def93d6101f867c72cd03c2
START RequestId: f069e227-6761-4ce9-8c43-4acfc50f3bd8 Version: $LATEST
2026-04-17T12:29:05.154Z f069e227-6761-4ce9-8c43-4acfc50f3bd8 INFO f069e227-6761-4ce9-8c43-4acfc50f3bd8 Job started: 1776428945154
INIT_START Runtime Version: nodejs:24.v29 Runtime Version ARN: arn:aws:lambda:eu-central-1::runtime:58a37e8413ed69058c4ac3b1df642118591f17d40def93d6101f867c72cd03c2
START RequestId: f069e227-a361-4ce9-8c43-4acfc50f3bd8 Version: $LATEST
2026-04-17T12:29:45.576Z f069e227-a361-4ce9-8c43-4acfc50f3bd8 ERROR Cannot acquire lock - another job is running.
2026-04-17T12:29:45.577Z f069e227-a361-4ce9-8c43-4acfc50f3bd8 ERROR The conditional request failed
2026-04-17T12:29:45.849Z f069e227-a361-4ce9-8c43-4acfc50f3bd8 ERROR Invoke Error
{
"errorType": "ConditionalCheckFailedException",
"errorMessage": "The conditional request failed",
"$fault": "client",
"$metadata": {
"httpStatusCode": 400,
"requestId": "7K0R7THCUQBHSLQV5AFL8LJ4MVVV4KQNSO5AEMVJF66Q9ASUAAJG",
"attempts": 1,
"totalRetryDelay": 0
},
"name": "ConditionalCheckFailedException",
"message": "The conditional request failed",
"__type": "com.amazonaws.dynamodb.v20120810#ConditionalCheckFailedException",
"stack": [
"ConditionalCheckFailedException: The conditional request failed",
" at se.handleError (file:///var/task/withLock.mjs:10:2862)",
" at process.processTicksAndRejections (node:internal/process/task_queues:103:5)",
" at async se.deserializeResponse (file:///var/task/chunk-OEFGZRLG.mjs:2:1024)",
" at async file:///var/task/chunk-SFP5YLEI.mjs:2:5938",
" at async file:///var/task/withLock.mjs:10:23329",
" at async file:///var/task/chunk-SFP5YLEI.mjs:4:790",
" at async file:///var/task/chunk-SFP5YLEI.mjs:10:21472",
" at async file:///var/task/chunk-SFP5YLEI.mjs:3:11016",
" at async releaseLock (file:///var/task/withLock.mjs:10:27179)",
" at async BufferedInvokeProcessor.handler (file:///var/task/withLock.mjs:10:26907)"
]
}

END RequestId: f069e227-a361-4ce9-8c43-4acfc50f3bd8
REPORT RequestId: f069e227-a361-4ce9-8c43-4acfc50f3bd8 Duration: 1291.54 ms Billed Duration: 1459 ms Memory Size: 128 MB Max Memory Used: 86 MB Init Duration: 167.20 ms
2026-04-17T12:30:25.307Z f069e227-6761-4ce9-8c43-4acfc50f3bd8 INFO f069e227-6761-4ce9-8c43-4acfc50f3bd8 Job ended. Took: 79999.775335ms
END RequestId: f069e227-6761-4ce9-8c43-4acfc50f3bd8
REPORT RequestId: f069e227-6761-4ce9-8c43-4acfc50f3bd8 Duration: 81355.77 ms Billed Duration: 81546 ms Memory Size: 128 MB Max Memory Used: 85 MB Init Duration: 189.89 ms
START RequestId: f069e227-df61-4ce9-8c43-4acfc50f3bd8 Version: $LATEST
2026-04-17T12:30:44.346Z f069e227-df61-4ce9-8c43-4acfc50f3bd8 INFO f069e227-df61-4ce9-8c43-4acfc50f3bd8 Job started: 1776429044346
START RequestId: f069e227-a361-4ce9-8c43-4acfc50f3bd8 Version: $LATEST
2026-04-17T12:30:48.070Z f069e227-a361-4ce9-8c43-4acfc50f3bd8 ERROR Cannot acquire lock - another job is running.
2026-04-17T12:30:48.070Z f069e227-a361-4ce9-8c43-4acfc50f3bd8 ERROR The conditional request failed
2026-04-17T12:30:48.130Z f069e227-a361-4ce9-8c43-4acfc50f3bd8 ERROR Invoke Error
{
"errorType": "ConditionalCheckFailedException",
"errorMessage": "The conditional request failed",
"$fault": "client",
"$metadata": {
"httpStatusCode": 400,
"requestId": "IV6L9VF0TGI7FN2263LKVQQJCJVV4KQNSO5AEMVJF66Q9ASUAAJG",
"attempts": 1,
"totalRetryDelay": 0
},
"name": "ConditionalCheckFailedException",
"message": "The conditional request failed",
"__type": "com.amazonaws.dynamodb.v20120810#ConditionalCheckFailedException",
"stack": [
"ConditionalCheckFailedException: The conditional request failed",
" at se.handleError (file:///var/task/withLock.mjs:10:2862)",
" at process.processTicksAndRejections (node:internal/process/task_queues:103:5)",
" at async se.deserializeResponse (file:///var/task/chunk-OEFGZRLG.mjs:2:1024)",
" at async file:///var/task/chunk-SFP5YLEI.mjs:2:5938",
" at async file:///var/task/withLock.mjs:10:23329",
" at async file:///var/task/chunk-SFP5YLEI.mjs:4:790",
" at async file:///var/task/chunk-SFP5YLEI.mjs:10:21472",
" at async file:///var/task/chunk-SFP5YLEI.mjs:3:11016",
" at async releaseLock (file:///var/task/withLock.mjs:10:27179)",
" at async BufferedInvokeProcessor.handler (file:///var/task/withLock.mjs:10:26907)"
]
}

END RequestId: f069e227-a361-4ce9-8c43-4acfc50f3bd8
REPORT RequestId: f069e227-a361-4ce9-8c43-4acfc50f3bd8 Duration: 146.00 ms Billed Duration: 146 ms Memory Size: 128 MB Max Memory Used: 86 MB
START RequestId: f069e228-1b61-4ce9-8c43-4acfc50f3bd8 Version: $LATEST
2026-04-17T12:31:44.330Z f069e228-1b61-4ce9-8c43-4acfc50f3bd8 ERROR Cannot acquire lock - another job is running.
2026-04-17T12:31:44.330Z f069e228-1b61-4ce9-8c43-4acfc50f3bd8 ERROR The conditional request failed
2026-04-17T12:31:44.390Z f069e228-1b61-4ce9-8c43-4acfc50f3bd8 ERROR Invoke Error
{
"errorType": "ConditionalCheckFailedException",
"errorMessage": "The conditional request failed",
"$fault": "client",
"$metadata": {
"httpStatusCode": 400,
"requestId": "CI3PP3M3PDTAQKOIHTOLL2PS5RVV4KQNSO5AEMVJF66Q9ASUAAJG",
"attempts": 1,
"totalRetryDelay": 0
},
"name": "ConditionalCheckFailedException",
"message": "The conditional request failed",
"__type": "com.amazonaws.dynamodb.v20120810#ConditionalCheckFailedException",
"stack": [
"ConditionalCheckFailedException: The conditional request failed",
" at se.handleError (file:///var/task/withLock.mjs:10:2862)",
" at process.processTicksAndRejections (node:internal/process/task_queues:103:5)",
" at async se.deserializeResponse (file:///var/task/chunk-OEFGZRLG.mjs:2:1024)",
" at async file:///var/task/chunk-SFP5YLEI.mjs:2:5938",
" at async file:///var/task/withLock.mjs:10:23329",
" at async file:///var/task/chunk-SFP5YLEI.mjs:4:790",
" at async file:///var/task/chunk-SFP5YLEI.mjs:10:21472",
" at async file:///var/task/chunk-SFP5YLEI.mjs:3:11016",
" at async releaseLock (file:///var/task/withLock.mjs:10:27179)",
" at async BufferedInvokeProcessor.handler (file:///var/task/withLock.mjs:10:26907)"
]
}

END RequestId: f069e228-1b61-4ce9-8c43-4acfc50f3bd8
REPORT RequestId: f069e228-1b61-4ce9-8c43-4acfc50f3bd8 Duration: 137.07 ms Billed Duration: 138 ms Memory Size: 128 MB Max Memory Used: 86 MB
2026-04-17T12:32:04.353Z f069e227-df61-4ce9-8c43-4acfc50f3bd8 INFO f069e227-df61-4ce9-8c43-4acfc50f3bd8 Job ended. Took: 80007.510042ms
END RequestId: f069e227-df61-4ce9-8c43-4acfc50f3bd8
REPORT RequestId: f069e227-df61-4ce9-8c43-4acfc50f3bd8 Duration: 80077.90 ms Billed Duration: 80078 ms Memory Size: 128 MB Max Memory Used: 86 MB
START RequestId: f069e228-1b61-4ce9-8c43-4acfc50f3bd8 Version: $LATEST
2026-04-17T12:32:38.605Z f069e228-1b61-4ce9-8c43-4acfc50f3bd8 INFO f069e228-1b61-4ce9-8c43-4acfc50f3bd8 Job started: 1776429158605
START RequestId: f069e227-a361-4ce9-8c43-4acfc50f3bd8 Version: $LATEST
2026-04-17T12:32:44.332Z f069e227-a361-4ce9-8c43-4acfc50f3bd8 ERROR Cannot acquire lock - another job is running.
2026-04-17T12:32:44.333Z f069e227-a361-4ce9-8c43-4acfc50f3bd8 ERROR The conditional request failed
2026-04-17T12:32:44.393Z f069e227-a361-4ce9-8c43-4acfc50f3bd8 ERROR Invoke Error
{
"errorType": "ConditionalCheckFailedException",
"errorMessage": "The conditional request failed",
"$fault": "client",
"$metadata": {
"httpStatusCode": 400,
"requestId": "KU820UIBKNA5GAA5P42H3JH8ABVV4KQNSO5AEMVJF66Q9ASUAAJG",
"attempts": 1,
"totalRetryDelay": 0
},
"name": "ConditionalCheckFailedException",
"message": "The conditional request failed",
"__type": "com.amazonaws.dynamodb.v20120810#ConditionalCheckFailedException",
"stack": [
"ConditionalCheckFailedException: The conditional request failed",
" at se.handleError (file:///var/task/withLock.mjs:10:2862)",
" at process.processTicksAndRejections (node:internal/process/task_queues:103:5)",
" at async se.deserializeResponse (file:///var/task/chunk-OEFGZRLG.mjs:2:1024)",
" at async file:///var/task/chunk-SFP5YLEI.mjs:2:5938",
" at async file:///var/task/withLock.mjs:10:23329",
" at async file:///var/task/chunk-SFP5YLEI.mjs:4:790",
" at async file:///var/task/chunk-SFP5YLEI.mjs:10:21472",
" at async file:///var/task/chunk-SFP5YLEI.mjs:3:11016",
" at async releaseLock (file:///var/task/withLock.mjs:10:27179)",
" at async BufferedInvokeProcessor.handler (file:///var/task/withLock.mjs:10:26907)"
]
}

END RequestId: f069e227-a361-4ce9-8c43-4acfc50f3bd8
REPORT RequestId: f069e227-a361-4ce9-8c43-4acfc50f3bd8 Duration: 194.55 ms Billed Duration: 195 ms Memory Size: 128 MB Max Memory Used: 86 MB
2026-04-17T12:33:58.611Z f069e228-1b61-4ce9-8c43-4acfc50f3bd8 INFO f069e228-1b61-4ce9-8c43-4acfc50f3bd8 Job ended. Took: 80006.59873300002ms
END RequestId: f069e228-1b61-4ce9-8c43-4acfc50f3bd8
REPORT RequestId: f069e228-1b61-4ce9-8c43-4acfc50f3bd8 Duration: 80112.59 ms Billed Duration: 80113 ms Memory Size: 128 MB Max Memory Used: 86 MB

If only there were something better and easier we could do. Let's see if there is in the next section.

tip

If you are using OS-level cron jobs and have problems with concurrency, I recommend the solo utility for locking the jobs. I've been using it for years and it works perfectly!

Smart approach

As always, the best way is to do nothing and let someone else be responsible for the task at hand.

Locking is hard. You can try to create a custom locking mechanism like I did in the previous section, but it might not work as expected every time you need it.

Fortunately, you can set up reserved concurrency in the Lambda service. This is usually used to limit how many invocations your Lambda function can handle concurrently, which is exactly what we want in our scenario. We want only 1 function invocation at a time - so let's set reserved concurrency to 1 and let AWS handle the locking for us.

This is in my opinion the best approach, since you rely on AWS and their smart engineers, who collectively have definitely more experience in the field of locking than you do. It is also super simple, e.g. no complexities[2], just a simple one-line setting. And finally, there is no additional cost of invoking Lambda functions which shouldn't run in the first place, or of hitting a DynamoDB table.

The best part is that we can reuse our naive handler code, as we don't need to make any changes to the implementation.

Show me the code! 👨‍💻

In CDK we've defined a NodejsFunction with a 2-minute timeout and reservedConcurrentExecutions set to 1.

// CDK setup
const smartHandler = new NodejsFunction(stack, "CronHandlerSmart", {
  entry: "src/naive.ts",
  timeout: Duration.minutes(2),
  runtime: Runtime.NODEJS_24_X,
  architecture: Architecture.ARM_64,
  logGroup: new LogGroup(stack, "CronHandlerSmartLogGroup", {
    retention: RetentionDays.ONE_WEEK,
    removalPolicy: RemovalPolicy.DESTROY,
  }),
  reservedConcurrentExecutions: 1,
});

new Schedule(stack, "OneMinuteCronSmart", {
  schedule: ScheduleExpression.rate(Duration.minutes(1)),
  target: new LambdaInvoke(smartHandler),
  enabled: false,
});

We can reuse our naive handler here, as no code changes are needed.

Show me the logs! 📜

The logs only ever show a single function invocation at a time. They contain no information about reserved concurrency being enforced, as that happens at another level.

INIT_START Runtime Version: nodejs:24.v29 Runtime Version ARN: arn:aws:lambda:eu-central-1::runtime:58a37e8413ed69058c4ac3b1df642118591f17d40def93d6101f867c72cd03c2
START RequestId: e137fad9-0884-4aa6-a019-adcbae5a5e8c Version: $LATEST
2026-04-16T16:05:06.554Z e137fad9-0884-4aa6-a019-adcbae5a5e8c INFO e137fad9-0884-4aa6-a019-adcbae5a5e8c Job started: 1776355506535
2026-04-16T16:06:26.654Z e137fad9-0884-4aa6-a019-adcbae5a5e8c INFO e137fad9-0884-4aa6-a019-adcbae5a5e8c Job ended. Took: 80076.438621ms
END RequestId: e137fad9-0884-4aa6-a019-adcbae5a5e8c
REPORT RequestId: e137fad9-0884-4aa6-a019-adcbae5a5e8c Duration: 80135.43 ms Billed Duration: 80252 ms Memory Size: 128 MB Max Memory Used: 74 MB Init Duration: 115.74 ms
2026-04-16T16:06:39.096Z 0549eefb-049d-470d-be42-14fc9ca0beb0 INFO 0549eefb-049d-470d-be42-14fc9ca0beb0 Job started: 1776355599096
START RequestId: 0549eefb-049d-470d-be42-14fc9ca0beb0 Version: $LATEST
2026-04-16T16:07:59.177Z 0549eefb-049d-470d-be42-14fc9ca0beb0 INFO 0549eefb-049d-470d-be42-14fc9ca0beb0 Job ended. Took: 80080.252244ms
END RequestId: 0549eefb-049d-470d-be42-14fc9ca0beb0
REPORT RequestId: 0549eefb-049d-470d-be42-14fc9ca0beb0 Duration: 80082.79 ms Billed Duration: 80083 ms Memory Size: 128 MB Max Memory Used: 74 MB
2026-04-16T16:08:06.143Z 5f98342a-4f1e-40cb-968d-6feda885d1c6 INFO 5f98342a-4f1e-40cb-968d-6feda885d1c6 Job started: 1776355686143
START RequestId: 5f98342a-4f1e-40cb-968d-6feda885d1c6 Version: $LATEST
2026-04-16T16:09:26.181Z 5f98342a-4f1e-40cb-968d-6feda885d1c6 INFO 5f98342a-4f1e-40cb-968d-6feda885d1c6 Job ended. Took: 80037.21447100001ms
END RequestId: 5f98342a-4f1e-40cb-968d-6feda885d1c6
REPORT RequestId: 5f98342a-4f1e-40cb-968d-6feda885d1c6 Duration: 80039.73 ms Billed Duration: 80040 ms Memory Size: 128 MB Max Memory Used: 75 MB
START RequestId: 8dc2078f-8903-45ce-8c8f-921825844e50 Version: $LATEST
2026-04-16T16:09:36.041Z 8dc2078f-8903-45ce-8c8f-921825844e50 INFO 8dc2078f-8903-45ce-8c8f-921825844e50 Job started: 1776355776041
2026-04-16T16:10:56.121Z 8dc2078f-8903-45ce-8c8f-921825844e50 INFO 8dc2078f-8903-45ce-8c8f-921825844e50 Job ended. Took: 80080.25003599998ms
END RequestId: 8dc2078f-8903-45ce-8c8f-921825844e50
REPORT RequestId: 8dc2078f-8903-45ce-8c8f-921825844e50 Duration: 80082.43 ms Billed Duration: 80083 ms Memory Size: 128 MB Max Memory Used: 75 MB
START RequestId: eb7e4432-037d-42f3-9ba5-76d9cbdbf5d6 Version: $LATEST
2026-04-16T16:11:16.537Z eb7e4432-037d-42f3-9ba5-76d9cbdbf5d6 INFO eb7e4432-037d-42f3-9ba5-76d9cbdbf5d6 Job started: 1776355876537
2026-04-16T16:12:36.628Z eb7e4432-037d-42f3-9ba5-76d9cbdbf5d6 INFO eb7e4432-037d-42f3-9ba5-76d9cbdbf5d6 Job ended. Took: 80080.440932ms
END RequestId: eb7e4432-037d-42f3-9ba5-76d9cbdbf5d6
REPORT RequestId: eb7e4432-037d-42f3-9ba5-76d9cbdbf5d6 Duration: 80093.59 ms Billed Duration: 80094 ms Memory Size: 128 MB Max Memory Used: 75 MB

Conclusion

As I already mentioned, sometimes the best way is to rely on your partner, in this case AWS. And usually the simplest solutions are the best.

Let me know what your experience with Lambda functions and cron jobs is 🙂

Footnotes

  1. the same thing could be implemented with EventBridge Rules, which are now considered legacy. ↩

  2. at least it is simple for you as a consumer; I get there is some complex code behind the scenes 🙃 ↩