I've narrowed the permissions and rotated the token for the deploy account on
the DockerHub registry. I replaced the secret ref in GitHub so that it's available
organization-wide. No further actions are necessary.
This commit replaces `pip` in container builds with `uv`'s pip-compatible
interface, with 1:1 parity. The only thing that changes is the installation
speed of the wheels, which seems to be considerably faster, although I haven't
been able to quantify this properly yet.
uv also gives us more tools to manage the cache. We can revert the prior cache
changes in `container.yml` as we won't have duplicated wheels anymore.
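As a rough illustration of the swap (not the actual Dockerfile or `container.yml` content; the requirements path and the `--system` flag are assumptions on my side):

```yaml
- name: Install Python dependencies via uv's pip-compatible interface
  run: |
    pip install uv
    # drop-in replacement for "pip install", just noticeably faster
    uv pip install --system -r requirements.txt

- name: Trim the uv cache before it gets persisted
  # removes pre-built wheels that can simply be re-downloaded, keeping the
  # cache small and free of duplicates
  run: uv cache prune --ci
```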
TypeScript is a superset of JavaScript. Converting the entire theme to
TypeScript gives us much more feedback on possible issues introduced by
package updates or our own typos, and it also lets us transpile properly to
lower spec targets. This PR couldn't be done in smaller commits; a lot of work
was needed to make everything *work properly*:
- A browser baseline has been set requiring at least **Chromium 93, Firefox
  92 and Safari 15** (proper visuals/operation on older browser versions is not
  guaranteed).
- LightningCSS now handles CSS minification and vendor prefixing.
- All hardcoded polyfills and support for previous browser baseline versions
have been removed.
- Convert codebase to TypeScript.
- Convert IIFE to ESM; handling globals with an IIFE is cumbersome, and ESM is
  the standard for virtually any use of JS nowadays.
- Vite now builds the theme without the need for `vite-plugin-static-copy`.
- `searxng.ready` now accepts an array of conditions for the callback to be
executed.
- Replace `leaflet` with `ol` as there were some issues with proper Vite
bundling.
- Merged the `head` script into `main`, as `head` on its own was too small now.
- Add `assertElement` to properly check the existence of critical DOM elements.
- `searxng.on` renamed to `searxng.listen` with some handling improvements.
All actions are now pulled using a pinned commit hash; versions are handled by
Dependabot, and we'll have control over which actions get updated.
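For illustration, the pinning pattern looks roughly like this (the SHA and version comment are placeholders, not the actual pins used in our workflows):

```yaml
steps:
  # pinned to a full commit hash instead of a mutable tag; the trailing
  # comment lets Dependabot map the hash back to a release and open bump PRs
  - uses: actions/checkout@<full-commit-sha>  # vX.Y.Z
```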
Replaces the Trivy scanner with Docker Scout. We have recently begun analyzing
the images there, and the action will keep us in sync with the problems shown
on the GHCS dashboard.
Due to current limitations of `actions/cache`, the cache cannot be overwritten.
In our case, we need to accumulate cached wheels from different architectures.
To solve this, we simply delete the key before storing the cache again.
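A rough sketch of the idea (the key name, path and job layout are assumptions, not the real `container.yml`; deleting a cache entry requires `actions: write` on the token):

```yaml
- name: Delete the shared wheel cache so it can be written again
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  # "|| true" keeps the step green when the key does not exist yet
  run: gh cache delete "python-wheels" --repo "${{ github.repository }}" || true

- name: Save the accumulated wheels under the same key
  uses: actions/cache/save@v4
  with:
    path: ./wheels
    key: python-wheels
```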
Building the container currently does not work properly.
When rebuilding several times with `make container`, `version_frozen.py`
is recreated, which wouldn't be an issue if the file's timestamp were constant.
Now, when `version_frozen.py` is created, it gets the same timestamp as the
commit it was created from (and `version_frozen.py` is moved to a dedicated layer).
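Conceptually, the timestamp part boils down to something like the step below; this is only a sketch (the real change lives in the version-freezing code, and the file path is an assumption):

```yaml
- name: Pin version_frozen.py to the commit timestamp
  run: |
    # use the commit date instead of "now", so rebuilding the same commit
    # yields an identical mtime and the layer cache stays valid
    touch --date="@$(git log -1 --format=%ct)" searx/version_frozen.py
```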
Reusing "builder" cache when building "dist" could be slow
(CD reports 2 seconds, but locally I've seen it take up to 10 seconds),
so the Dockerfile is now split and we save a couple steps
by importing the "builder" image directly.
The last changes made it possible to remove the layer cache in "builder",
since the overhead is now greater than building the layers from scratch.
Until now, all "dist" layers were squashed into a single layer,
which in most cases is a good idea
(except for storage/delivery pricing/overhead), but in our case,
since we manage the entire pipeline, we can ignore this
and share layers between builds.
This means (for example) that if we change files unrelated to the container
in several consecutive commits (documentation changes), we don't have to push
the entire image to the registry, but only the layers that differ
(`version_frozen.py` in this example).
The same applies when pulling, as only the layers that have changed
compared to the local layers will be downloaded (that's the theory,
we'll see if this works as expected or if we need to tweak something else).
On demand, the tracker data is loaded directly into the cache, so that the
maintenance of this data via PRs is no longer necessary.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
The action does not take into account all the ways an image can be stored, causing errors like the one below on image pull. I'm excluding `base` until I find a solution.
*Error: internal error: unable to copy from source ...: initializing source ...: reading manifest ... in ghcr.io/searxng/base: manifest unknown*
I'm not too pleased to reverse this, but issues like https://github.com/searxng/searxng/issues/4792 were not foreseen, and we can't just turn away. It has become apparent over the last few weeks that there are still quite a few people with an incompatible CPU, or with SearXNG on some random VM provider, who can't (or won't) modify the configuration of their machines to expose the features needed for the x86-64-v2 march.
As I don't want to trash the work on apko and the base images, I thought about trying Alpine again now that all the container-related workflows have been refactored.
There will still be the discussion of whether to use musl and its drawbacks, but right now I don't know of any other alternatives.
The nice part of this is that both Dockerfiles (mainline and legacy) can now be unified under the same umbrella again.
Closes https://github.com/searxng/searxng/issues/4792
Closes https://github.com/searxng/searxng/issues/4753
checker.yml
1. The checker is not yet of sufficient quality to allow the results of the
check to be evaluated / we do not evaluate them ourselves.
2. The checker sends hundreds of requests to the search engines and causes
problems there / we either overload small providers or we train their bot
defenses to recognize the SearXNG signature.
documentation.yml
Building the documentation and deploying it to the GH docs of clones (GH forks) is
generally not desirable either --> we have >2k clones, but we only need one
up-to-date documentation, and that is the one from the master branch of the
searxng/searxng repo.
If search engines like Google start linking to the documentation in the clones,
SearXNG users may no longer find the original documentation or be lost in the
flood of options.
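One common way to express "only the upstream repo runs this" is a repository guard on the job, roughly like the sketch below (the job layout is illustrative, not necessarily exactly what was changed here; the same guard works for `checker.yml`):

```yaml
jobs:
  documentation:
    # forks skip this job entirely; only searxng/searxng builds and deploys
    if: github.repository == 'searxng/searxng'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ... build the docs and deploy them from the master branch only
```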
Related:
- https://github.com/searxng/searxng/issues/4847
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Instead of using Wolfi base images from cgr.dev and making that mess in the Dockerfile, why don't we build the base images ourselves from the Wolfi repos with apko? The intention is to simplify the main Dockerfile and avoid having to patch the base image every time; it also simplifies some steps, like image ownership management, and provides extremely fast builds.
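To give an idea of what such an apko definition looks like, here is a minimal sketch; the package list, UID/GID and architectures are assumptions, not the actual base image config:

```yaml
contents:
  repositories:
    - https://packages.wolfi.dev/os
  keyring:
    - https://packages.wolfi.dev/os/wolfi-signing.rsa.pub
  packages:
    - ca-certificates-bundle
    - python-3.13

accounts:
  groups:
    - groupname: searxng
      gid: 977
  users:
    - username: searxng
      uid: 977
      gid: 977
  run-as: searxng

archs:
  - x86_64
  - aarch64
```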
If the workflow is executed with the "workflow_dispatch" trigger, the user who executed the workflow becomes the author of the commit on the PR, which is not intended.
This also reverts the body param so that the default text of the action does not appear.
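Assuming the PR is opened with an action along the lines of peter-evans/create-pull-request (an assumption on my part, as are all the values below), the fix amounts to pinning the author and spelling out the body:

```yaml
- uses: peter-evans/create-pull-request@v7
  with:
    commit-message: "[data] update data files"   # illustrative
    title: "[data] update data files"
    # fixed author, so a workflow_dispatch run is not attributed to whoever
    # clicked the button
    author: "searxng-bot <searxng-bot@users.noreply.github.com>"
    # explicit body, so the action's default placeholder text is not used
    body: "Automated data update."
```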
`checker.yml` and `integration.yml` are the only workflows that are currently safe to run simultaneously; the others carry the risk of finishing in an unexpected order. The workflows chained from `integration.yml` can be invoked as many times as there are `integration.yml` runs in flight at that moment, and the same applies to the "workflow_dispatch" trigger.
This can be fatal for workflows like `container.yml` that use a centralized cache to store and load the candidate images in a common tag called "searxng-<arch>".
* For example, a `container.yml` workflow is executed after being chained from `integration.yml` (call it "~1"), and seconds later it may be triggered again because another PR merged some breaking changes (call it "~2"). While "~1" has already passed the test job successfully and is about to start the release job, "~2" finishes building the container and overwrites the references on the common tag. When "~1" then loads the images in the release job using the common tag, it will load the container from "~2" instead of "~1", i.e. an image that skipped the whole test job.
The example only covers the container workflow, but the same kind of race can occur in the other workflows in a similar way.
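For reference, the usual way to keep two `container.yml` runs from interleaving is a workflow-level `concurrency` group, along these lines (a sketch of the general mechanism, not necessarily what this change does):

```yaml
# at the top level of container.yml
concurrency:
  # one run at a time per ref; a newer run queues up instead of racing the
  # older one for the shared "searxng-<arch>" tag
  group: container-${{ github.ref }}
  cancel-in-progress: false
```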