Kubernetes CronJob Only Runs Half the Time


I have a cron job that I want executed every 15 minutes. The problem is that instead of being executed every 15 minutes, it gets executed every half an hour.

What's the issue here?

This is my config file:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  labels:
    type: f-c
  name: f-c-p
  namespace: extract
spec:
  concurrencyPolicy: Forbid
  failedJobsHistoryLimit: 1
  jobTemplate:
    metadata:
      creationTimestamp: null
    spec:
      template:
        metadata:
          creationTimestamp: null
          labels:
            type: f-c
        spec:
          containers:
          - args:
            - /f_c.sh
            image: identifier.amazonaws.com/extract_transform:latest
            imagePullPolicy: Always
            env:
            - name: ENV
              value: prod
            - name: SLACK_TOKEN
              valueFrom:
                secretKeyRef:
                  key: slack_token
                  name: api-tokens
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  key: aws_access_key_id
                  name: api-tokens
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  key: aws_secret_access_key
                  name: api-tokens
            - name: F_ACCESS_TOKEN
              valueFrom:
                secretKeyRef:
                  key: f_access_token
                  name: api-tokens
            name: s-f-c
            resources: {}
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
          dnsPolicy: ClusterFirst
          restartPolicy: Never
          schedulerName: default-scheduler
          securityContext: {}
          terminationGracePeriodSeconds: 30
  schedule: '*/15 * * * *'
  successfulJobsHistoryLimit: 1
  suspend: false
status: {}
asked Sep 18, 2018 in Kubernetes by lina

1 answer to this question.

External circumstances are preventing the jobs from running as intended.

On the cluster where this was originally reported there were ~20k scheduled jobs, and the built-in Kubernetes scheduler is not yet capable of handling that volume consistently.
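
To confirm that runs are being skipped rather than failing, compare the CronJob's last schedule time against the Jobs it actually created, and check its events. A minimal sketch, assuming your kubectl context points at the affected cluster (the name and namespace below are taken from your config):

# Show the schedule and the last time a run was scheduled
kubectl get cronjob f-c-p -n extract

# List the Jobs it created, ordered by creation time, to spot 30-minute gaps
kubectl get jobs -n extract --sort-by=.metadata.creationTimestamp

# Check the event log for warnings about missed or skipped schedules
kubectl describe cronjob f-c-p -n extract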

The maximum number of jobs that can be reliably run (within a minute of the time intended) may depend on the size of your master nodes.
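
If the controller is starting some runs late rather than dropping them outright, one common mitigation (a sketch under that assumption, not something confirmed for this cluster) is to set spec.startingDeadlineSeconds, which lets a missed run still start within a grace window instead of being skipped; the 300-second value here is an illustrative guess:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: f-c-p
  namespace: extract
spec:
  schedule: '*/15 * * * *'
  concurrencyPolicy: Forbid
  # Hypothetical grace window: a run that missed its scheduled time
  # may still start up to 300 seconds late instead of being skipped
  startingDeadlineSeconds: 300
  # jobTemplate and the remaining fields stay as in the original config

Note also that concurrencyPolicy: Forbid skips a scheduled run entirely if the previous Job is still running, so if /f_c.sh takes longer than 15 minutes you would see exactly this every-other-run pattern.
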
answered Sep 18, 2018 by Kalgi
