I've narrowed the permissions and rotated the token for the deploy account on
the Docker Hub registry. I also replaced the secret reference in GitHub so that
it's available organization-wide. No further action is necessary.
This commit replaces `pip` in container builds with uv's pip-compatible
interface (`uv pip`), which has close to 1:1 CLI parity. The only practical
change is the installation speed of the wheels, which seems considerably
faster, although I haven't been able to properly quantify this yet.
uv also gives us more tools to manage the cache. We can revert the prior cache
changes in `container.yml` as we won't have duplicated wheels anymore.
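As a rough sketch of what the swap looks like (the exact flags in our Dockerfile may differ; `--system` targets the interpreter outside a venv):

```yaml
# illustrative steps; the same applies to RUN lines in the Dockerfile
- run: pip install --no-cache-dir -r requirements.txt   # before
- run: uv pip install --system -r requirements.txt      # after, same CLI surface
# uv can also trim its cache explicitly, e.g. before saving it in CI
- run: uv cache prune --ci
```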
All actions are now pulled using a pinned commit hash; version bumps are
handled by Dependabot, so we keep control over which actions get updated.
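The pattern, sketched (the hash is a placeholder, and the weekly interval is illustrative):

```yaml
# workflow step: pin the action to a full commit hash; Dependabot reads
# and updates the trailing version comment along with the hash
- uses: actions/checkout@<full-commit-sha> # vX.Y.Z

# .github/dependabot.yml: enable bumps for pinned actions
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
```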
Replaces the Trivy scanner with Docker Scout. We have recently begun analyzing
the images there, and the action will keep us in sync with the findings on the
GHCS dashboard.
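A sketch of how I'd expect the wiring to look (input names per docker/scout-action's docs; the SARIF upload and the env name are my assumption of how the dashboard stays in sync):

```yaml
# hypothetical: scan the candidate image and publish the findings
- uses: docker/scout-action@v1
  with:
    command: cves
    image: ${{ env.CANDIDATE_IMAGE }}   # placeholder env
    sarif-file: scout-report.sarif
- uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: scout-report.sarif
```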
Due to current limitations of `actions/cache`, an existing cache entry cannot
be overwritten. In our case, we need to accumulate cached wheels from different
architectures, so we simply delete the key before saving the cache again.
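A minimal sketch of the delete-then-save dance, assuming the gh CLI is available on the runner (the key name and cache path are illustrative; deleting requires `actions: write` permission):

```yaml
# drop the stale entry so the subsequent save isn't skipped as a duplicate;
# "|| true" keeps the first run (no existing key yet) from failing
- run: gh cache delete "$CACHE_KEY" --repo "$GITHUB_REPOSITORY" || true
  env:
    GH_TOKEN: ${{ github.token }}
    CACHE_KEY: wheels-${{ hashFiles('requirements*.txt') }}
- uses: actions/cache/save@v4
  with:
    path: ~/.cache/wheels
    key: wheels-${{ hashFiles('requirements*.txt') }}
```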
Building the container currently does not work properly: when rebuilding
several times with `make container`, `version_frozen.py` is recreated, which
wouldn't be an issue if the file's timestamp were constant; the fresh timestamp
invalidates the layer cache on every rebuild.
Now, when `version_frozen.py` is created, it gets the same timestamp as the
commit it was generated from (`version_frozen.py` is also moved to a dedicated layer).
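Conceptually, the fix amounts to something like this (the generation step is a placeholder for whatever `make container` actually runs):

```yaml
- run: |
    make version_frozen.py   # placeholder for the real generation step
    # stamp the file with the commit date so rebuilds of the same commit
    # produce an identical mtime and the layer cache stays valid
    touch --date="$(git log -1 --format=%cI)" version_frozen.py
```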
Reusing "builder" cache when building "dist" could be slow
(CD reports 2 seconds, but locally I've seen it take up to 10 seconds),
so the Dockerfile is now split and we save a couple steps
by importing the "builder" image directly.
The last changes made it possible to remove the layer cache in "builder",
since the overhead is now greater than building the layers from scratch.
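Sketched with buildah (file names and tags are illustrative): `--from` lets the "dist" build start directly from the image we just built instead of replaying the builder's Dockerfile through the cache:

```yaml
- run: buildah build -f container/builder.dockerfile -t localhost/searxng-builder .
- run: buildah build --from localhost/searxng-builder -f container/dist.dockerfile -t localhost/searxng .
```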
Until now, all "dist" layers were squashed into a single layer,
which in most cases is a good idea
(except for storage/delivery pricing/overhead), but in our case,
since we manage the entire pipeline, we can ignore this
and share layers between builds.
This means, for example, that if we change files unrelated to the container
in several consecutive commits (documentation changes), we don't have to push
the entire image to the registry, only the layers that differ
(`version_frozen.py` in this example).
The same applies when pulling, as only the layers that have changed
compared to the local layers will be downloaded (that's the theory;
we'll see if this works as expected or if we need to tweak something else).
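In buildah terms, the change is roughly dropping the squash (tags illustrative):

```yaml
# before: the whole "dist" filesystem collapsed into one layer
- run: buildah build --squash -t searxng .
# after: build with cached per-step layers that can be shared between builds
- run: buildah build --layers -t searxng .
```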
I'm not too pleased to reverse this, but issues like https://github.com/searxng/searxng/issues/4792 were not foreseen, and we can't just turn away. It has become apparent over the last few weeks that there are still quite a few people with an incompatible CPU, or running SearXNG on some random VM provider, who can't (or won't) modify the configuration of their machines to expose the features needed for the x86-64-v2 microarchitecture level.
As I don't want to trash the work done with apko and the base images, I thought about trying Alpine again now that all the container-related workflows have been refactored.
There will still be the discussion about musl and its drawbacks, but right now I don't know of any other alternative.
The nice part of this is that both Dockerfiles (mainline and legacy) can now be unified under the same umbrella again.
Closes https://github.com/searxng/searxng/issues/4792
Closes https://github.com/searxng/searxng/issues/4753
Instead of using Wolfi base images from cgr.dev and making that mess in the Dockerfile, why don't we build the base images ourselves from the Wolfi repos with apko? The intention is to simplify the main Dockerfile and avoid having to patch the base image every time; it also simplifies steps like image ownership management and provides extremely fast builds.
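For reference, a minimal apko definition along these lines (package list and archs are illustrative, not our real manifest):

```yaml
contents:
  repositories:
    - https://packages.wolfi.dev/os
  keyring:
    - https://packages.wolfi.dev/os/wolfi-signing.rsa.pub
  packages:
    - wolfi-base
archs:
  - x86_64
  - aarch64
```

Built with something like `apko build base.yml searxng-base:latest base.tar`, the image is assembled from packages alone, with no Dockerfile steps involved; that's where the speed comes from.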
`checker.yml` and `integration.yml` are the only workflows that are currently safe to run simultaneously; the others present a risk that runs may not complete in the expected order. The workflows chained from `integration.yml` can be invoked as many times as there are `integration.yml` runs in flight at that moment, and the same applies to the "workflow_dispatch" trigger.
This can be fatal for workflows like `container.yml` that use a centralized cache to store and load the candidate images under a common tag named "searxng-<arch>".
* For example, a `container.yml` run is chained from `integration.yml` (call it "~1"), and seconds later another is triggered because a different PR merged some breaking changes (call it "~2"). While "~1" has already passed the test job successfully and is about to start the release job, "~2" finishes building the container and overwrites the references under the common tag. When "~1" loads the images in the release job using the common tag, it will load the container built by "~2", which has effectively bypassed the whole test job process.
The example covers only the container workflow, but the same can happen to the other workflows.
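One way to serialize these runs is a concurrency group per workflow and ref, queuing instead of cancelling, so a finished build can't be overwritten mid-release (a sketch, not necessarily the final shape):

```yaml
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: false
```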
While reworking the container pipeline, I unknowingly deleted the section where an env with the same name as the secret was defined at the job scope, making it look like the env was originally defined organization-wide.
Since we can't validate the secrets directly in a condition, it's better to let docker/login-action take care of failing the entire job if the credentials are invalid.
Reported in: https://github.com/searxng/searxng/issues/4777
env.DOCKERHUB_USERNAME shouldn't be an empty string, as it's defined and set (I think; I can't see this). Even if it wasn't defined, GitHub org/repo-wide envs/secrets should resolve to an empty string (?)
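The restored shape, roughly (the password secret name is assumed):

```yaml
jobs:
  release:
    env:
      DOCKERHUB_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
    steps:
      # fails the whole job on bad credentials, no condition needed
      - uses: docker/login-action@v3
        with:
          username: ${{ env.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}   # assumed secret name
```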
container.yml will run after integration.yml completes successfully on the master branch.
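In container.yml this looks roughly like the following (the workflow name must match the `name:` in integration.yml, which I'm assuming here):

```yaml
on:
  workflow_run:
    workflows: ["integration"]
    types: [completed]
    branches: [master]

jobs:
  build:
    # workflow_run fires on failure too, so gate on the conclusion
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
```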
Style changes, cleanup, and improved integration with CI by leveraging a
shared cache between all workflows.
* Podman is now supported for building the container images (Docker also received a refactor, merging both build and buildx).
* Container images are being built by Buildah instead of Docker BuildKit.
* Container images are tested before release.
* Splitting "modern" (amd64 & arm64) and "legacy" (armv7) arches into different Dockerfiles, allowing future optimizations.