As stated in .. and other posts, the defaults of uWSGI are not suitable for a
production environment. To give just one example, the workers run indefinitely
and memory leaks accumulate.
- "Configuring uWSGI for Production: The defaults are all wrong" EuroPython 2019 [1]
- "Configuring uWSGI for Production Deployment" [2]
- "When Paul has tested some PR on his instance, we could clearly see a memory
leak over a week: the memory never dropped to the initial value. Same for my
instance using Docker." [3]
[1] https://av.tib.eu/media/44810
[2] https://www.bloomberg.com/company/stories/configuring-uwsgi-production-deployment/
[3] https://github.com/searxng/searxng/pull/3443#issuecomment-2094347004
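For illustration only (these are not the values this commit ships), worker
recycling in uWSGI is typically configured with options like these in the
uwsgi.ini::
[uwsgi]
# restart a worker after it has handled this many requests
max-requests = 1000
# restart a worker after one hour (needs uWSGI >= 2.0.14)
max-worker-lifetime = 3600
# restart a worker when its RSS memory exceeds 512 MB
reload-on-rss = 512
# grace period (seconds) a restarting worker gets to finish its requests
worker-reload-mercy = 60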
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Another trip into the hell of dependencies: docutils tends to put major changes
into minor releases; e.g. the executables have been renamed::
rst2html.py --> rst2html
so we have to use docutils at least from version 0.21.2, but this version of
docutils is only supported by myst-parser from version 3.0.1 on.
Additionally, docutils dropped Python 3.8 support in version 0.21 [1].
Further, linuxdoc needed an update to cope with docutils 0.21 [2].
[1] https://docutils.sourceforge.io/RELEASE-NOTES.html#release-0-21-2024-04-09
[2] https://github.com/return42/linuxdoc/pull/36
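The resulting lower bounds, as a minimal sketch (the project's requirements
files may pin exact versions)::
docutils>=0.21.2
myst-parser>=3.0.1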
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
In the past, some files were linted with the standard profile, others with a
profile in which most of the messages were switched off, and some files were not
checked at all.
- ``PYLINT_SEARXNG_DISABLE_OPTION`` has been abolished
- the ``# lint: pylint`` marker is no longer necessary
- the pylint tasks have been reduced from three to two (see the sketch below)
1. ./searx/engines -> lint engines with additional builtins
2. ./searx ./searxng_extra ./tests -> lint all other Python files
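A rough sketch of the two invocations (the builtin names below are assumptions,
the real ones come from the lint task)::
# 1. lint the engines with additional builtins
pylint --additional-builtins="logger,traits,supported_languages" ./searx/engines
# 2. lint all other python files with the standard profile
#    (in practice ./searx/engines is excluded here)
pylint ./searx ./searxng_extra ./tests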
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Add a leading tag (in square brackets) about the scope/type to commit messages
from automated tasks (commits from CI).
dependabot::
[upd] pypi: Bump .. from .. to ..
[upd] npm: Bump .. from .. to .. in /searx/static/themes/simple
Weblate translation updates::
[l10n] update translations from Weblate
updates of ./data::
[data] update searx.data ...
build commit of gh-pages::
[doc] build from commit ...
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
babel.Locale.parse loads more than 60 MB into RAM. Its only purpose is to get:
- LOCALE_NAMES --> searx.data.LOCALES["LOCALE_NAMES"]
- RTL_LOCALES  --> searx.data.LOCALES["RTL_LOCALES"]
This commit calls babel.Locale.parse when the translations are updated from
Weblate; the results are stored in::
searx/data/locales.json
This file can be built by::
./manage data.locales
By storing these variables in searx.data when the translations are updated, we
save roughly 65 MB per worker (with the usual 4 workers, about 260 MB of RAM).
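To check the result (assuming the JSON carries the two keys named above)::
python -c 'import json; print(sorted(json.load(open("searx/data/locales.json"))))'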
Suggested-by: https://github.com/searxng/searxng/discussions/2633#discussioncomment-8490494
Co-authored-by: Markus Heiser <markus.heiser@darmarit.de>
All the environment variables defined in ./utils/brand.env are now generated on
the fly, so there is no longer a need to define the brand environment in this
file, nor for all the workflows that handle this file.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
This allows reading settings on the fly, even without a virtualenv. The
ultimate goal of this commit is to remove utils/brand.env from the git
repository.
The code includes a tiny YAML parser that **should** be good enough; it reads
searx/settings.yml directly (and ignores the environment variables).
yq [1] is a more reliable alternative, but it requires downloading a binary from
GitHub, which is not great.
[1] https://github.com/mikefarah/yq/#install
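Not the actual implementation, just a sketch of the idea: for the few flat
``key: value`` pairs needed here, a grep/sed one-liner is usually enough (the
key name below is only an example)::
# extract a single "key: value" pair from searx/settings.yml
yaml_get() {
    grep -m1 "^[[:space:]]*$2:" "$1" | sed 's/^[^:]*:[[:space:]]*//'
}
yaml_get searx/settings.yml instance_name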
* Docker: add UWSGI_WORKERS and UWSGI_THREADS.
UWSGI_WORKERS specifies the number of processes.
UWSGI_THREADS specifies the number of threads.
The Docker convention is to specify the whole configuration
through environment variables. While this is not done in SearXNG, these two
additional variables allow admins to skip uwsgi.ini (see the sketch below).
In addition, https://github.com/searxng/preview-environments starts Docker
without additional files through searxng-helm-chart.
Each instance consumes 1 GB of RAM, which is a lot, especially when there are
many instances / pull requests.
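For example (image name and values are illustrative)::
docker run -d --name searxng \
  -p 8080:8080 \
  -e UWSGI_WORKERS=2 -e UWSGI_THREADS=2 \
  searxng/searxng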
* [scripts] add environment variables UWSGI_WORKERS and UWSGI_THREADS
- UWSGI_WORKERS specifies the number of processes.
- UWSGI_THREADS specifies the number of threads.
Templates for uwsgi scripts can be tested by::
UWSGI_WORKERS=8 UWSGI_THREADS=9 \
./utils/searxng.sh --cmd \
eval "echo \"$(cat utils/templates/etc/uwsgi/*/searxng.ini*)\"" \
| grep "workers\|threads"
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
---------
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Co-authored-by: Markus Heiser <markus.heiser@darmarit.de>
Caching files on the client side for more than a day can confuse the end user
when static files are updated [1].
Depending on the way a SearXNG instance is provided via HTTP, there are several
ways to optimize the access to the /static files. However, since we don't know
what optimization an admin has provided for their static files, we should have
moderate settings in the defaults that run robustly in a wide variety of
installations.
In this sense, all caches on the client side should be cleared after one day at
the latest. So far the files were cached for one year on the client side; as
soon as changes are made to the static files (with the option `static_use_hash:
true`), the old static files are kept for one year on the client side, which
amounts to unnecessary caching.
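In terms of the HTTP response header, "one day at the latest" corresponds to
something like (illustrative, not the literal header string the app sends)::
Cache-Control: public, max-age=86400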
[1] https://github.com/searxng/searxng/discussions/2821
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
BTW this forces a modularization of the ./manage script into sub modules:
- utils/lib_sxng_data.sh
- utils/lib_sxng_node.sh
- utils/lib_sxng_static.sh
- utils/lib_sxng_test.sh
- utils/lib_sxng_themes.sh
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Partial reverse engineering of the Google engines, including improved language
and region handling based on the engine.traits_v1 data.
Whenever possible, the implementations of the Google engines try to make use of
the async REST APIs. The get_lang_info() function has been generalized to
get_google_info(); in particular, the region handling has been improved by
adding the cr parameter.
searx/data/engine_traits.json
Add data type "traits_v1" generated by the fetch_traits() functions from:
- Google (WEB),
- Google images,
- Google news,
- Google scholar and
- Google videos
and remove the data of the obsolete data type "supported_languages".
A traits.custom type that maps region codes to *supported_domains* is fetched
from https://www.google.com/supported_domains
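The traits can be (re)generated with the update script (script path assumed)::
python ./searxng_extra/update/update_engine_traits.py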
searx/autocomplete.py:
Reverse engineered the autocomplete from Google WEB. Supports Google's
languages and subdomains. The old API suggestqueries.google.com/complete has
been replaced by the async REST API: https://{subdomain}/complete/search?{args}
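For illustration, a request against the new endpoint looks roughly like this
(the query args are assumptions, not the exact set the implementation sends)::
curl 'https://www.google.com/complete/search?client=gws-wiz&hl=en&q=searxng'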
searx/engines/google.py
Reverse engineering and extensive testing ..
- fetch_traits(): Fetch languages & regions from Google properties.
- always use the async REST API (formerly known as 'use_mobile_ui')
- use *supported_domains* from traits
- improved the result list by fetching './/div[@data-content-feature]'
and parsing the type of the various *content features* --> thumbnails are
added
searx/engines/google_images.py
Reverse engineering and extensive testing ..
- fetch_traits(): Fetch languages & regions from Google properties.
- use *supported_domains* from traits
- if exists, freshness_date is added to the result
- issue 1864: result list has been improved a lot (due to the new cr parameter)
searx/engines/google_news.py
Reverse engineering and extensive testing ..
- fetch_traits(): Fetch languages & regions from Google properties.
*supported_domains* is not needed but a ceid list has been added.
- different region handling compared to Google WEB
- fixed for various languages & regions (due to the new ceid parameter) /
avoids the CONSENT page
- Google News no longer supports time ranges
- result list has been fixed: XPath of pub_date and pub_origin
searx/engines/google_videos.py
- fetch_traits(): Fetch languages & regions from Google properties.
- use *supported_domains* from traits
- add paging support
- implement an async request ('asearch': 'arc' & 'async':
'use_ac:true,_fmt:html')
- simplified code (thanks to '_fmt:html' request)
- issue 1359: fixed XPath of the video length data
searx/engines/google_scholar.py
- fetch_traits(): Fetch languages & regions from Google properties.
- use *supported_domains* from traits
- request(): include patents & citations
- response(): fixed CAPTCHA detection (Scholar has its own CAPTCHA manager)
- hardened the XPath used to iterate over results
- fixed XPath of pub_type (has been changed from the gs_ct1 to the gs_cgt2 class)
- issue 1769 fixed: the new request implementation is no longer incompatible
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
$ make nvm.install
INFO: install (update) NVM at /800GBPCIex4/share/SearXNG/.nvm
INFO: already cloned at: /800GBPCIex4/share/SearXNG/.nvm
|| Fetching origin
INFO: checkout v0.39.1
|| HEAD is now at 9600617 v0.39.1
make: *** [Makefile:96: nvm.install] Error 1
Without this fix we need to set the VERBOSE environment variable to avoid the
'Error 1'::
$ VERBOSE=0 make nvm.install
BTW: fix an issue if there are any leftovers in ${NVM_DIR} from previous
installations
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
There's already precedent for not using GNU-specific sha256sum long options, as
seen in searxng/utils/lib_go.sh, so update lib.sh to not use them either.
A nice side effect is that the sha256sum usage no longer cares whether you're
using BSD sha256sum or GNU sha256sum, which makes this work under FreeBSD.
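Illustration only (not necessarily the exact call in lib.sh): short or no
options keep the call portable::
# behaves the same with GNU coreutils and FreeBSD's sha256sum
sha256sum "${file}" | awk '{ print $1 }'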
settings.yml:
* The default URL was unix:///usr/local/searxng-redis/run/redis.sock?db=0
* The default URL is now "false"
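The new default thus looks like this in settings.yml (sketch; key layout
assumed)::
redis:
  url: false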
The default URL made the logs difficult to deal with:
if the admin didn't install a Redis instance, the logs recorded a spurious error.
It worked before because SearXNG initialized the Redis connection only when the
limiter started.
In this commit, SearXNG initializes Redis in searx/webapp.py
so various components can use Redis without taking care of the initialization step.
Now that ./utils/searxng.sh is implemented, the old installation procedures for
filtron, morty and searx can be removed.
For users who want to upgrade, the procedures for removing old installations
have been retained.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Git v2.35.2 closes a security issue [1]: it is no longer possible for root to
use a git repo that is owned by someone else; the error message is::
fatal: unsafe repository ('/share/darmarit.org/cache/searxng' is owned by someone else)
The fix is to run `git diff --name-only` not as root, but via a sudo command as
an unprivileged user.
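A minimal sketch of that pattern (the user name is just a placeholder, not the
actual account used by the script)::
# run git as the non-root owner of the repo instead of as root
sudo -H -u "${SERVICE_USER}" git diff --name-only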
[1] https://github.blog/2022-04-12-git-security-vulnerability-announced/
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
[1] https://docs.fedoraproject.org/en-US/releases/eol/
[2] https://docs.fedoraproject.org/en-US/releases/f35/
Tested by::
# build the container ..
$ sudo -H ./utils/lxc.sh build searx-fedora35
# open a shell in the container
$ sudo -H ./utils/lxc.sh cmd searx-fedora35 bash
[root@searx-fedora35 SearXNG]#
# install a complete SearXNG suite ..
[root@searx-fedora35 SearXNG]# ./utils/searx.sh install all
...
# install apache to export the SearXNG instance by HTTP
[root@searx-fedora35 SearXNG]# ./utils/searx.sh apache install
...
INFO: got 200 from http://10.174.184.94/searx
To build the wheel, `python3-devel` needs to be added to SEARX_PACKAGES_fedora,
otherwise the build fails::
|searx| × Building wheel for setproctitle (pyproject.toml) did not run successfully.
|searx| │ exit code: 1
...
|searx| In file included from src/spt.h:15,
|searx| from src/setproctitle.c:14:
|searx| src/spt_python.h:16:10: fatal error: Python.h: No such file or directory
|searx| 16 | #include <Python.h>
|searx| | ^~~~~~~~~~
|searx| compilation terminated.
|searx| error: command '/usr/bin/gcc' failed with exit code 1
|searx| [end of output]
...
|searx| ERROR: Failed building wheel for setproctitle
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>