Today I was experiencing some failures in CI with the message "Error response from daemon: all predefined address pools have been fully subnetted". Have I solved the issue? Nope. But my blog, my rules. I found a workaround I want to write down before I forget about it.

While investigating this issue, I found several resources that suggest modifying the Docker daemon config to increase the number of subnets available for networks. I’m not going to pretend I understand it all. I did this for my homelab a while ago, but how do you do it for a Forgejo Actions docker-in-docker setup?
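For reference, on a regular Docker host that change is a default-address-pools entry in /etc/docker/daemon.json. A minimal sketch (the base range and size here are made up, not my actual config):

{
  "default-address-pools": [
    { "base": "10.128.0.0/16", "size": 24 }
  ]
}

That carves the /16 into 256 /24 subnets for the daemon to hand out, far more than the default pools allow. For dind you would presumably have to get this file (or the equivalent --default-address-pool daemon flag) into the dind container, which is the part I never got around to.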

I didn’t have time to dig into the root cause, but I could get into the dind (docker-in-docker) container, run docker network ls, and confirm there were plenty of networks:

NETWORK ID     NAME                                        DRIVER    SCOPE
0aa32f34b641   WORKFLOW-0febb6edf9ce00f972310629804356d4   bridge    local
4d7a7f591a11   WORKFLOW-6ae224aa3687a85d6f25adde9b46c19f   bridge    local
8cccbbba38ee   WORKFLOW-8cf00791dc20765932eaf71b5a5a5d7e   bridge    local
d01eb0dd2c4e   WORKFLOW-9e620af02e1f423ba37887442e7bece7   bridge    local
f8463324d26b   WORKFLOW-19ca2082193c77dc90be2245ec44f17c   bridge    local
[...]
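For the record, you don’t need an interactive session for this; a docker exec from the host does the job. The container name below is hypothetical, substitute whatever your runner calls its dind container:

docker exec forgejo-runner-dind docker network ls -q | wc -l

The -q flag prints only the network IDs, so wc -l gives you the count directly.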

This would be fine if there were as many running jobs as networks, but my Forgejo Actions concurrency is limited to ~5 or so, not 32, so something was definitely off. I ran docker ps within dind to see the running containers:

CONTAINER ID   IMAGE                           COMMAND                  CREATED       STATUS          PORTS     NAMES
c8cb73fdaee5   moby/buildkit:buildx-stable-1   "buildkitd --oci-wor…"   7 days ago    Up 13 minutes             buildx_buildkit_builder-1fb2e2c1-4e5a-4974-a542-653d8344b1590
8ce130d2dc43   moby/buildkit:buildx-stable-1   "buildkitd --oci-wor…"   8 days ago    Up 13 minutes             buildx_buildkit_builder-5262d6e9-4dd9-419d-93f9-14d07fb1ebfd0
eec52d0cc9de   moby/buildkit:buildx-stable-1   "buildkitd --oci-wor…"   8 days ago    Up 13 minutes             buildx_buildkit_builder-86305831-2272-4e52-9bd5-c7d406e46d3e0
e4fba3d99c97   moby/buildkit:buildx-stable-1   "buildkitd --oci-wor…"   9 days ago    Up 13 minutes             buildx_buildkit_builder-f409254c-15ec-4b0a-afeb-3bb1d32ffaa60
3b92ae7c3fa1   moby/buildkit:buildx-stable-1   "buildkitd --oci-wor…"   9 days ago    Up 14 minutes             buildx_buildkit_builder-ee01e6d2-642c-4b55-85e1-84cc0db86cb10
04a54a17bc16   moby/buildkit:buildx-stable-1   "buildkitd --oci-wor…"   9 days ago    Up 13 minutes             buildx_buildkit_builder-51d85866-c4b1-46f8-87f3-523fec1211ad0
ae985b471c3a   moby/buildkit:buildx-stable-1   "buildkitd --oci-wor…"   10 days ago   Up 13 minutes             buildx_buildkit_builder-9c0c35db-5ee2-42ef-b370-b2b7aadb029a0
[...]

and saw that many of them had been created days ago. I don’t know if these containers are kept alive for performance reasons, but they were certainly not being used by my CI jobs at the moment. Note the STATUS is recent only because I had just installed Docker upgrades, which restarted the containers.
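Judging by the buildx_buildkit_builder-* names, these are buildx builder instances, which docker buildx leaves running so their cache can be reused. If I understand the tooling right, the tidy way to remove them is through buildx itself:

docker buildx ls
docker buildx rm <builder-name>

but my builders were created inside ephemeral CI jobs, so there was no buildx client state left to do that from.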

So I stopped the containers with docker ps --format '{{.ID}}' | xargs docker stop, cleared the dangling networks with docker network prune, and then I could rerun my CI jobs.
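A slightly less scorched-earth version, if you only want to hit the buildkit containers, is to filter by the image shown in the docker ps output above. This is a sketch of the same idea, not something I’ve battle-tested:

docker ps --filter ancestor=moby/buildkit:buildx-stable-1 --format '{{.ID}}' | xargs docker stop
docker network prune -f

The -f on the prune just skips the confirmation prompt.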

Note this is just a workaround that worked for me. I’m not endorsing this, nor suggesting it’s a good idea. This is more of a note-to-self than advice for others.

Happy coding!