Posted on 13 December 2020, updated on 21 December 2023.
On the 20th of November, the new Docker Hub rate limit became effective. If you're an anonymous user of Docker, you won't be allowed to make more than 100 container image pull requests (the famous 'docker pull' instruction) in a 6-hour window, or 200 requests if you are an authenticated free-tier user.
If you think you don't pull images that often, you could be surprised and face the following error message:
You have reached your pull rate limit.
You may increase the limit by authenticating and upgrading.
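If you want to know where you stand before hitting the wall, Docker exposes the current limit in HTTP headers on its registry API. A quick check could look like this, assuming curl and jq are installed (ratelimitpreview/test is the image Docker provides for this purpose, and a HEAD request should not count against your quota):

TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
curl -s --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

The ratelimit-limit and ratelimit-remaining headers give you your quota and what's left of it for the current 6-hour window.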
Consequences when you're using Cloud providers' CI/CD tools or managed Kubernetes services
When you're using Cloud Build (GCP) or CodePipeline (AWS) as your CI and/or CD tool, you're actually using shared runners: the code executing your pipelines runs on a VM used by many other pipelines. Since the limit is enforced per IP address for anonymous users, when the runtime pulls the node, php, or whatever alpine Docker image during your image build, it is probably not the first time in 6 hours that this VM's IP has requested Docker Hub anonymously. Your request could be the 5th or the 25th, but also the 100th, in which case it will be rejected. If you run the pipeline again, the code may be executed on another VM and the request will succeed. It works, but it's not a reliable process.
Regarding your Kubernetes clusters, you could face this Docker error when pods are starting if you have several deployments with several replicas. Besides, you may be relying on images you don't even think about, like the Weave Net image if you're using this CNI.
Cloud providers such as GCP use caching to mitigate the issue, but this solution is not bulletproof.
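GCP, for instance, exposes a public Docker Hub mirror, mirror.gcr.io, which caches frequently pulled public images. On a self-managed Docker host you could point the daemon at it yourself; a minimal sketch (careful: this overwrites /etc/docker/daemon.json, so merge by hand if you already have one):

cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "registry-mirrors": ["https://mirror.gcr.io"]
}
EOF
sudo systemctl restart docker

If an image isn't in the mirror's cache, the daemon silently falls back to Docker Hub, which is why this doesn't fully protect you from the limit.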
How to deal with this limit?
The first solution would be to use Docker authentication, but it only doubles the limit if you stay in the free tier, and it implies more credentials to take care of.
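For the record, on a Kubernetes cluster the authentication route means creating a registry secret and referencing it from your pods. A minimal sketch, where dockerhub-creds is a hypothetical name and the credentials are placeholders:

kubectl create secret docker-registry dockerhub-creds --docker-server=https://index.docker.io/v1/ --docker-username=<DOCKER_HUB_USERNAME> --docker-password=<DOCKER_HUB_TOKEN>
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "dockerhub-creds"}]}'

Patching the default service account makes every pod using it pull with these credentials; you can also reference the secret explicitly in each pod spec.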
A better option is to simply host the base images in your Cloud provider account. Let's take the example of an AWS infrastructure with a Node application running:
- Create a new ECR repository called node-alpine, through the console or with Terraform depending on your IaC maturity
- Pull the node image locally, the one at the top of your Dockerfile:
docker pull node:12-alpine
- Tag it with your ECR URL:
docker tag node:12-alpine <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/node-alpine:12
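- Authenticate your Docker client against ECR, a step this walkthrough otherwise leaves implicit; with AWS CLI v2, one way is:
aws ecr get-login-password --region <AWS_REGION> | docker login --username AWS --password-stdin <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com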
- Push this image to your ECR:
docker push <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/node-alpine:12
- Modify the first line of your Dockerfile from
FROM node:12-alpine
to
FROM <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/node-alpine:12
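In context, the top of a hypothetical Node Dockerfile would then look like this (everything below the FROM line is illustrative):

# Base image now served from your own ECR instead of Docker Hub
FROM <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/node-alpine:12
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
CMD ["node", "index.js"]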
Don't hesitate to ask if you want the equivalent GCP procedure!
This solution implies two things. First, you cannot use the latest image anymore, which is actually a great thing. Indeed, using the latest tag can cause bugs or downtime if the version changes and your code is not ready for it.
Secondly, if you're a Kubernetes user and you've decided to move all the Docker images needed for your cluster, you will have more images than before to keep upgraded, and more security issues and patches to watch for.
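That upkeep is easy to script, though. Here is a minimal sketch of a mirror job you could run on a schedule (a nightly CI pipeline, say) to keep the example image from above up to date; the registry URL and image names are placeholders:

#!/usr/bin/env bash
set -euo pipefail

REGISTRY="<AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com"

# Authenticate against ECR (AWS CLI v2)
aws ecr get-login-password --region <AWS_REGION> | docker login --username AWS --password-stdin "$REGISTRY"

# Pull the upstream image, retag it, and push it to your own registry
docker pull node:12-alpine
docker tag node:12-alpine "$REGISTRY/node-alpine:12"
docker push "$REGISTRY/node-alpine:12"

Note that this job itself performs one Docker Hub pull, so it still counts toward the limit; running it from a fixed, authenticated runner keeps that predictable.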