Running Cron Tasks with Kubernetes

V Ananda Raj
2 min read · May 7, 2024


Before you begin

To interact with your Kubernetes cluster, you’ll need two things in place: a Kubernetes cluster itself and the kubectl command-line tool configured to communicate with that cluster.

Creating a CronJob

A CronJob requires a config file. Here is a manifest for a CronJob that runs the command php artisan update:products every day at midnight for a Laravel project:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: update-products-admin       # put a name of your choice here
  namespace: namespace_abc          # replace with your namespace name
  labels:
    app: app_name                   # replace with your app name
spec:
  schedule: "0 0 * * *"             # cron expression for running at midnight every day
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  concurrencyPolicy: Forbid
  suspend: false
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: app_name           # replace with your app name
        spec:
          containers:
            - name: update-products-admin
              image: image-name-here:latest   # replace with your image name, e.g. alpine
              imagePullPolicy: IfNotPresent
              command: ["php", "artisan", "update:products"]   # depends on the action you want to perform
          restartPolicy: Never
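
For reference, the schedule field uses standard cron syntax with five space-separated fields:

# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of the month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of the week (0 - 6, Sunday to Saturday)
# │ │ │ │ │
# * * * * *
#
# "0 0 * * *"   -> at 00:00 every day (the schedule used above)
# "*/5 * * * *" -> every 5 minutes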

Below is a snapshot of the pods currently running, before adding the CronJob.

$ kubectl get pods -n namespace_abc --show-labels
NAME                                       READY   STATUS    RESTARTS   AGE     LABELS
app-admin-deployment-7dc9254446-vgrlh      1/1     Running   0          68m     app=app_name-admin,pod-template-hash=7dc9254446
app-frontend-deployment-84633664d6-s6tjd   1/1     Running   0          2d18h   app=app_name,pod-template-hash=84633664d6

Here is a manifest from the Kubernetes documentation for a CronJob that runs a simple demonstration task every minute:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox:1.28
              imagePullPolicy: IfNotPresent
              command:
                - /bin/sh
                - -c
                - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure

Now, create the CronJob with the command below:

$ kubectl apply -f cron_update_products.yaml 
cronjob.batch/update-products-admin created
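
Optionally, you can trigger a one-off run from the CronJob to test it without waiting for the schedule. The job name given here is arbitrary:

# Create a one-off Job from the CronJob's template
$ kubectl create job --from=cronjob/update-products-admin manual-update-products -n namespace_abc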

Verifying the CronJob

We can check the CronJob's status as follows:

$ kubectl get cronjob -n namespace_abc
NAME                    SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
update-products-admin   0 0 * * *   False     0        <none>          6s

From the output we can see that the CronJob has not scheduled or run any jobs yet. We can watch for the job to appear with:

$ kubectl get jobs --watch -n namespace_abc
NAME                             COMPLETIONS   DURATION   AGE
update-products-admin-28562970   1/1           4m41s      4m41s

To find the pods that the last scheduled job created, run:

# Replace "update-products-admin-28562970" with the job name on your system
$ kubectl get pods -n namespace_abc --selector=job-name=update-products-admin-28562970 --output=jsonpath={.items[*].metadata.name}

To get the logs from the pod:

$ kubectl logs $pod_name -n namespace_abc
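
For example, the two steps can be chained by capturing the pod name in a shell variable. The job name below is the one from this walkthrough; substitute your own:

# Capture the name of the pod created by the last job run
pod_name=$(kubectl get pods -n namespace_abc \
  --selector=job-name=update-products-admin-28562970 \
  --output='jsonpath={.items[*].metadata.name}')

# Print the logs from that pod
kubectl logs $pod_name -n namespace_abc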

Deleting a CronJob

You can delete the CronJob as shown below. Deleting it removes the jobs and pods it created and stops it from creating further jobs:

$ kubectl delete cronjob update-products-admin -n namespace_abc
cronjob.batch "update-products-admin" deleted
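
If you only want to pause the schedule rather than delete it, you can toggle the suspend field from the manifest in place. A minimal sketch using kubectl patch:

# Pause the schedule without deleting the CronJob
$ kubectl patch cronjob update-products-admin -n namespace_abc -p '{"spec":{"suspend":true}}'

# Resume it later
$ kubectl patch cronjob update-products-admin -n namespace_abc -p '{"spec":{"suspend":false}}'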

Bonus

We can also configure cron inside the image itself if the containers are built with Docker.

Copy the cron entry into a file named crontab_file:

*/5 * * * * cd /var/www/html && source /var/www/html/.env && /usr/local/bin/php artisan update:products ; echo "update ran completed" >> /var/log/cron_update_products.log 2>&1

Now add the lines below to the Dockerfile:

# Install the root crontab from the file copied above
COPY crontab_file /var/spool/cron/crontabs/root
RUN chmod 0644 /var/spool/cron/crontabs/root
RUN chown root:crontab /var/spool/cron/crontabs/root
RUN crontab /var/spool/cron/crontabs/root

# Pre-create the log file written by the cron entry
RUN touch /var/log/cron_update_products.log

# Run cron in the foreground so the container keeps running
CMD ["cron", "-f"]
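
For context, here is a minimal sketch of how these lines might fit into a full Dockerfile. The base image, package names, and paths are assumptions and will differ for your project:

# Hypothetical example: a Debian-based PHP image with cron installed
FROM php:8.2-cli

# Install the cron daemon
RUN apt-get update && apt-get install -y cron && rm -rf /var/lib/apt/lists/*

# Copy the application code
COPY . /var/www/html
WORKDIR /var/www/html

# Install the root crontab and pre-create the log file
COPY crontab_file /var/spool/cron/crontabs/root
RUN chmod 0644 /var/spool/cron/crontabs/root \
    && chown root:crontab /var/spool/cron/crontabs/root \
    && crontab /var/spool/cron/crontabs/root \
    && touch /var/log/cron_update_products.log

# Keep cron in the foreground as the container's main process
CMD ["cron", "-f"]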
