All actions are now pinned to a commit hash; version updates are handled by
Dependabot, so we keep control over which actions get updated.
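As a rough sketch of what this looks like (the SHA and the pinned step are illustrative, not taken from the actual workflows), a pinned step plus the Dependabot entry that keeps it updated:

```yaml
# Illustrative pinned step — the trailing comment lets Dependabot map the
# full commit SHA back to a release tag:
#   - uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
#
# .github/dependabot.yml — proposes updates to the pinned hashes:
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
```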
Replaces the Trivy scanner with Docker Scout. We recently started analyzing
our images there, and the action will keep the findings in sync with the GHCS
dashboard.
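A minimal sketch of the kind of step involved, assuming the official `docker/scout-action` (the image reference and severity filter are illustrative assumptions, not the actual configuration):

```yaml
- name: Analyze image with Docker Scout
  uses: docker/scout-action@v1
  with:
    command: cves
    image: ghcr.io/searxng/searxng:latest   # illustrative image reference
    only-severities: critical,high          # illustrative filter
    exit-code: true                         # fail the job on findings
```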
Building the container currently does not work properly: rebuilding several
times with `make container` recreates `version_frozen.py`, which wouldn't be
an issue if the file's timestamp were constant. Now `version_frozen.py` is
created with the same timestamp as the commit it was generated from, and it is
moved to a dedicated layer.
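A minimal sketch of the timestamp normalization, assuming GNU `touch`; the generation command is a placeholder, since the actual invocation depends on the Makefile:

```yaml
- name: Freeze version with a reproducible timestamp
  run: |
    # Placeholder: regenerate searx/version_frozen.py
    # (the real command depends on the project's Makefile).
    make version.frozen
    # Reset the mtime to the commit date so rebuilding the same commit
    # produces an identical file and doesn't invalidate the layer cache.
    touch -d "$(git log -1 --format=%cI)" searx/version_frozen.py
```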
Reusing "builder" cache when building "dist" could be slow
(CD reports 2 seconds, but locally I've seen it take up to 10 seconds),
so the Dockerfile is now split and we save a couple steps
by importing the "builder" image directly.
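A hedged sketch of what the split can look like in CI with `docker/build-push-action` (file paths and image references are assumptions): the "builder" image is built and pushed once, then imported directly as a named build context instead of being reconstructed from cache.

```yaml
- name: Build the "builder" image
  uses: docker/build-push-action@v5
  with:
    file: container/builder.dockerfile     # illustrative path
    tags: ghcr.io/searxng/cache:builder    # illustrative reference
    push: true

- name: Build "dist", importing the builder image directly
  uses: docker/build-push-action@v5
  with:
    file: container/dist.dockerfile        # illustrative path
    # Resolves the `builder` stage reference in dist.dockerfile to the
    # prebuilt image, skipping the cache-import steps entirely.
    build-contexts: |
      builder=docker-image://ghcr.io/searxng/cache:builder
    push: true
```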
These last changes made it possible to remove the layer cache in "builder",
since the caching overhead is now greater than the cost of building the layers
from scratch.
Until now, all "dist" layers were squashed into a single layer,
which in most cases is a good idea
(except for storage/delivery pricing/overhead), but in our case,
since we manage the entire pipeline, we can ignore this
and share layers between builds.
This means, for example, that if we change files unrelated to the container
in several consecutive commits (documentation changes), we don't have to push
the entire image to the registry, only the layers that differ
(`version_frozen.py` in this example).
The same applies when pulling, as only the layers that have changed
compared to the local layers will be downloaded (that's the theory,
we'll see if this works as expected or if we need to tweak something else).
The action does not account for all the ways an image can be stored, causing errors like the one below when pulling the image. I'm excluding `base` until I find a solution.
*Error: internal error: unable to copy from source ...: initializing source ...: reading manifest ... in ghcr.io/searxng/base: manifest unknown*
Instead of using Wolfi base images from cgr.dev and making a mess of the Dockerfile, why don't we build the base images ourselves from the Wolfi repos with apko? The intention is to simplify the main Dockerfile and avoid having to patch the base image every time; it also simplifies steps like image ownership management and makes builds extremely fast.
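For illustration only, a minimal apko configuration for such a base image could look like this; the package list, user/group IDs, and file names are assumptions, not the actual config:

```yaml
# base.yml — built with: apko build base.yml ghcr.io/searxng/base:latest base.tar
contents:
  repositories:
    - https://packages.wolfi.dev/os
  keyring:
    - https://packages.wolfi.dev/os/wolfi-signing.rsa.pub
  packages:
    - wolfi-baselayout
    - ca-certificates-bundle
    - python-3.12            # illustrative package set
accounts:
  groups:
    - groupname: searxng
      gid: 977               # illustrative IDs
  users:
    - username: searxng
      uid: 977
  run-as: searxng            # ownership handled at the base-image level
entrypoint:
  command: /bin/sh
```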
Currently, we have ~1,100 cache images uploaded to GHCR weighing more than 300 MB each (most of them are layers from the second stage of the Dockerfile that were uploaded by mistake, read below). To avoid problems, I have set up a new job in a new workflow that runs weekly, purging all images older than one week while always keeping the 100 most recent ones.
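A sketch of such a weekly purge job; the retention action and its parameters (shown here with `snok/container-retention-policy`) are assumptions, not necessarily what ended up in the workflow:

```yaml
name: Cleanup container cache
on:
  schedule:
    - cron: "0 0 * * 1"   # weekly, Monday at 00:00 UTC
jobs:
  purge:
    runs-on: ubuntu-latest
    steps:
      - uses: snok/container-retention-policy@v2
        with:
          account-type: org
          org-name: searxng
          image-names: cache          # illustrative package name
          cut-off: One week ago UTC   # delete only images older than a week
          keep-at-least: 100          # but always keep the 100 most recent
          token: ${{ secrets.GITHUB_TOKEN }}  # may require a PAT instead
```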
Only the builder images should be uploaded to the cache; the current behaviour not only slows down container builds, it also wastes a lot of space in GHCR on large, useless layers that will never be used again.
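One way to express this, sketched with `docker/build-push-action` (the cache reference and stage name are illustrative): only the `builder` stage build exports cache, while the final image build reads from it but never writes.

```yaml
- name: Build "builder" and export its layers to the cache
  uses: docker/build-push-action@v5
  with:
    target: builder
    cache-from: type=registry,ref=ghcr.io/searxng/cache   # illustrative ref
    cache-to: type=registry,ref=ghcr.io/searxng/cache,mode=max

- name: Build the final image without exporting cache
  uses: docker/build-push-action@v5
  with:
    push: true
    cache-from: type=registry,ref=ghcr.io/searxng/cache
    # No cache-to here: the large final-stage layers stay out of GHCR.
```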