Compare commits

...

6 Commits

Author SHA1 Message Date
Markus Heiser e956b25190
Merge 1c9b28968d into 0f9694c90b 2024-11-24 00:09:11 +01:00
Markus Heiser 0f9694c90b [clean] Internet Archive Scholar search API no longer exists
The engine was added in #2733, but the API no longer exists. Related:

- https://github.com/searxng/searxng/issues/4038

Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
2024-11-23 17:59:38 +01:00
Markus Heiser ccc4f30b20 [doc] update quantities on the intro page
The quantities on the intro page were partly out of date; for example, we
already have 210 engines, not just 70. To avoid having to change the
quantities manually in the future, they are now calculated from the jinja
context.

Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
2024-11-23 16:33:08 +01:00
Markus Heiser c4b874e9b0 [fix] engine Library of Congress: fix API URL loc.gov -> www.loc.gov
Avoids HTTP 404 responses and redirects. Requests to the JSON/YAML API use the base URL [1]:

    https://www.loc.gov/{endpoint}/?fo=json

[1] https://www.loc.gov/apis/json-and-yaml/requests/

Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
2024-11-23 13:02:24 +01:00
Markus Heiser 7c4e4ebd40 [log] warning with URL in case of 'raise_for_httperror'
To implement error handling, it is necessary to know which URL triggered the
exception; until now, that URL was not logged.

Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
2024-11-23 11:33:19 +01:00
Markus Heiser 1c9b28968d [mod] botdetection: HTTP Fetch Metadata Request Headers
HTTP Fetch Metadata Request Headers [1][2] are used to detect bot requests. Bots
with invalid *Fetch Metadata* will be redirected to the intro (`index`) page.

[1] https://www.w3.org/TR/fetch-metadata/
[2] https://developer.mozilla.org/en-US/docs/Glossary/Fetch_metadata_request_header

Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
2024-10-27 13:42:57 +01:00
10 changed files with 101 additions and 103 deletions

View File

@@ -4,26 +4,31 @@ Welcome to SearXNG

 *Search without being tracked.*

-SearXNG is a free internet metasearch engine which aggregates results from more
-than 70 search services. Users are neither tracked nor profiled. Additionally,
-SearXNG can be used over Tor for online anonymity.
+.. jinja:: searx
+
+   SearXNG is a free internet metasearch engine which aggregates results from up
+   to {{engines | length}} :ref:`search services <configured engines>`. Users
+   are neither tracked nor profiled. Additionally, SearXNG can be used over Tor
+   for online anonymity.

 Get started with SearXNG by using one of the instances listed at searx.space_.
 If you don't trust anyone, you can set up your own, see :ref:`installation`.

-.. sidebar:: features
+.. jinja:: searx

-   - :ref:`self hosted <installation>`
-   - :ref:`no user tracking / no profiling <SearXNG protect privacy>`
-   - script & cookies are optional
-   - secure, encrypted connections
-   - :ref:`about 200 search engines <configured engines>`
-   - `about 60 translations <https://translate.codeberg.org/projects/searxng/searxng/>`_
-   - about 100 `well maintained <https://uptime.searxng.org/>`__ instances on searx.space_
-   - :ref:`easy integration of search engines <demo online engine>`
-   - professional development: `CI <https://github.com/searxng/searxng/actions>`_,
-     `quality assurance <https://dev.searxng.org/>`_ &
-     `automated tested UI <https://dev.searxng.org/screenshots.html>`_
+   .. sidebar:: features
+
+      - :ref:`self hosted <installation>`
+      - :ref:`no user tracking / no profiling <SearXNG protect privacy>`
+      - script & cookies are optional
+      - secure, encrypted connections
+      - :ref:`{{engines | length}} search engines <configured engines>`
+      - `58 translations <https://translate.codeberg.org/projects/searxng/searxng/>`_
+      - about 70 `well maintained <https://uptime.searxng.org/>`__ instances on searx.space_
+      - :ref:`easy integration of search engines <demo online engine>`
+      - professional development: `CI <https://github.com/searxng/searxng/actions>`_,
+        `quality assurance <https://dev.searxng.org/>`_ &
+        `automated tested UI <https://dev.searxng.org/screenshots.html>`_

 .. sidebar:: be a part
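
The hard-coded numbers are replaced by expressions evaluated in the `searx`
jinja context at docs build time. A minimal sketch of how such an expression
resolves, using plain Jinja2 outside of Sphinx (the `engines` list here is a
stand-in for the real context variable):

    from jinja2 import Template

    # Stand-in for the list the `searx` context exposes at docs build time;
    # the real list holds the ~210 configured engines.
    engines = ['google', 'duckduckgo', 'wikipedia']

    tmpl = Template("aggregates results from up to {{ engines | length }} search services")
    print(tmpl.render(engines=engines))
    # -> aggregates results from up to 3 search services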

View File

@@ -53,6 +53,9 @@ Probe HTTP headers
 .. automodule:: searx.botdetection.http_user_agent
    :members:

+.. automodule:: searx.botdetection.http_sec_fetch
+   :members:
+
 .. _botdetection config:

 Config

View File

@@ -31,6 +31,9 @@ def dump_request(request: flask.Request):
         + " || Content-Length: %s" % request.headers.get('Content-Length')
         + " || Connection: %s" % request.headers.get('Connection')
         + " || User-Agent: %s" % request.headers.get('User-Agent')
+        + " || Sec-Fetch-Site: %s" % request.headers.get('Sec-Fetch-Site')
+        + " || Sec-Fetch-Mode: %s" % request.headers.get('Sec-Fetch-Mode')
+        + " || Sec-Fetch-Dest: %s" % request.headers.get('Sec-Fetch-Dest')
     )
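
The three added headers join the `||`-separated dump the limiter writes at
debug level. An illustrative sketch of the resulting fragment (header values
are examples, not captured log output):

    headers = {
        'User-Agent': 'Mozilla/5.0 ...',
        'Sec-Fetch-Site': 'same-origin',
        'Sec-Fetch-Mode': 'navigate',
        'Sec-Fetch-Dest': 'document',
    }
    # request.headers.get() returns None for absent headers, so a bot that
    # omits them shows up as e.g. "Sec-Fetch-Mode: None" in the dump.
    print(" || ".join("%s: %s" % (k, v) for k, v in headers.items()))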

View File

@@ -0,0 +1,59 @@
+# SPDX-License-Identifier: AGPL-3.0-or-later
+"""
+Method ``http_sec_fetch``
+-------------------------
+
+The ``http_sec_fetch`` method protects resources from web attacks with `Fetch
+Metadata`_.  A request is filtered out in case of:
+
+- http header Sec-Fetch-Mode_ is invalid
+- http header Sec-Fetch-Dest_ is invalid
+
+.. _Fetch Metadata:
+   https://developer.mozilla.org/en-US/docs/Glossary/Fetch_metadata_request_header
+.. _Sec-Fetch-Dest:
+   https://developer.mozilla.org/en-US/docs/Web/API/Request/destination
+.. _Sec-Fetch-Mode:
+   https://developer.mozilla.org/en-US/docs/Web/API/Request/mode
+
+"""
+# pylint: disable=unused-argument
+
+from __future__ import annotations
+
+from ipaddress import (
+    IPv4Network,
+    IPv6Network,
+)
+
+import flask
+import werkzeug
+
+from . import config
+from ._helpers import logger
+
+
+def filter_request(
+    network: IPv4Network | IPv6Network,
+    request: flask.Request,
+    cfg: config.Config,
+) -> werkzeug.Response | None:
+
+    val = request.headers.get("Sec-Fetch-Mode", "")
+    if val != "navigate":
+        logger.debug("invalid Sec-Fetch-Mode '%s'", val)
+        return flask.redirect(flask.url_for('index'), code=302)
+
+    val = request.headers.get("Sec-Fetch-Site", "")
+    if val not in ('same-origin', 'same-site', 'none'):
+        logger.debug("invalid Sec-Fetch-Site '%s'", val)
+        return flask.redirect(flask.url_for('index'), code=302)
+
+    val = request.headers.get("Sec-Fetch-Dest", "")
+    if val != "document":
+        logger.debug("invalid Sec-Fetch-Dest '%s'", val)
+        return flask.redirect(flask.url_for('index'), code=302)
+
+    return None
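
A minimal sketch of how the new filter behaves, assuming a SearXNG checkout on
the import path; `network` and `cfg` are unused by this particular method, so
placeholders are passed here:

    import flask
    from searx.botdetection import http_sec_fetch

    app = flask.Flask(__name__)

    @app.route('/')
    def index():
        # the redirect target `url_for('index')` must resolve
        return 'intro page'

    # A top-level browser navigation carries valid Fetch Metadata and passes.
    ok = {'Sec-Fetch-Mode': 'navigate', 'Sec-Fetch-Site': 'none', 'Sec-Fetch-Dest': 'document'}
    with app.test_request_context('/search?q=test', headers=ok):
        assert http_sec_fetch.filter_request(None, flask.request, None) is None

    # A request without the headers (typical for a naive bot) is redirected.
    with app.test_request_context('/search?q=test'):
        response = http_sec_fetch.filter_request(None, flask.request, None)
        assert response is not None and response.status_code == 302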

View File

@@ -1,71 +0,0 @@
-# SPDX-License-Identifier: AGPL-3.0-or-later
-"""Internet Archive scholar(science)
-"""
-
-from datetime import datetime
-from urllib.parse import urlencode
-from searx.utils import html_to_text
-
-about = {
-    "website": "https://scholar.archive.org/",
-    "wikidata_id": "Q115667709",
-    "official_api_documentation": "https://scholar.archive.org/api/redoc",
-    "use_official_api": True,
-    "require_api_key": False,
-    "results": "JSON",
-}
-categories = ['science', 'scientific publications']
-paging = True
-
-base_url = "https://scholar.archive.org"
-results_per_page = 15
-
-
-def request(query, params):
-    args = {
-        "q": query,
-        "limit": results_per_page,
-        "offset": (params["pageno"] - 1) * results_per_page,
-    }
-    params["url"] = f"{base_url}/search?{urlencode(args)}"
-    params["headers"]["Accept"] = "application/json"
-    return params
-
-
-def response(resp):
-    results = []
-
-    json = resp.json()
-
-    for result in json["results"]:
-        publishedDate, content, doi = None, '', None
-
-        if result['biblio'].get('release_date'):
-            publishedDate = datetime.strptime(result['biblio']['release_date'], "%Y-%m-%d")
-
-        if len(result['abstracts']) > 0:
-            content = result['abstracts'][0].get('body')
-        elif len(result['_highlights']) > 0:
-            content = result['_highlights'][0]
-
-        if len(result['releases']) > 0:
-            doi = result['releases'][0].get('doi')
-
-        results.append(
-            {
-                'template': 'paper.html',
-                'url': result['fulltext']['access_url'],
-                'title': result['biblio'].get('title') or result['biblio'].get('container_name'),
-                'content': html_to_text(content),
-                'publisher': result['biblio'].get('publisher'),
-                'doi': doi,
-                'journal': result['biblio'].get('container_name'),
-                'authors': result['biblio'].get('contrib_names'),
-                'tags': result['tags'],
-                'publishedDate': publishedDate,
-                'issns': result['biblio'].get('issns'),
-                'pdf_url': result['fulltext'].get('access_url'),
-            }
-        )
-
-    return results

View File

@@ -27,7 +27,7 @@ categories = ['images']
 paging = True
 endpoint = 'photos'

-base_url = 'https://loc.gov'
+base_url = 'https://www.loc.gov'
 search_string = "/{endpoint}/?sp={page}&{query}&fo=json"
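
With the corrected base URL, the engine's `search_string` expands to the
request form documented in [1] from the commit message. A hedged sketch (the
`q=landscape` query parameter is an illustrative guess at the encoded query):

    import httpx

    base_url = 'https://www.loc.gov'
    search_string = "/{endpoint}/?sp={page}&{query}&fo=json"

    url = base_url + search_string.format(endpoint='photos', page=1, query='q=landscape')
    resp = httpx.get(url)   # served directly; no more 404 or redirect to www.loc.gov
    resp.raise_for_status()
    print(resp.json().get('pagination'))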

View File

@@ -111,6 +111,7 @@ from searx.botdetection import (
     http_accept_encoding,
     http_accept_language,
     http_user_agent,
+    http_sec_fetch,
     ip_limit,
     ip_lists,
     get_network,
@@ -178,16 +179,17 @@ def filter_request(request: flask.Request) -> werkzeug.Response | None:
         logger.error("BLOCK %s: matched BLOCKLIST - %s", network.compressed, msg)
         return flask.make_response(('IP is on BLOCKLIST - %s' % msg, 429))

-    # methods applied on /
+    # methods applied on all requests

     for func in [
         http_user_agent,
     ]:
         val = func.filter_request(network, request, cfg)
         if val is not None:
+            logger.debug(f"NOT OK ({func.__name__}): {network}: %s", dump_request(flask.request))
             return val

-    # methods applied on /search
+    # methods applied on /search requests

     if request.path == '/search':
@@ -196,12 +198,15 @@ def filter_request(request: flask.Request) -> werkzeug.Response | None:
             http_accept_encoding,
             http_accept_language,
             http_user_agent,
+            http_sec_fetch,
             ip_limit,
         ]:
             val = func.filter_request(network, request, cfg)
             if val is not None:
+                logger.debug(f"NOT OK ({func.__name__}): {network}: %s", dump_request(flask.request))
                 return val

-    logger.debug(f"OK {network}: %s", dump_request(flask.request))
+    logger.debug(f"OK: {network}: %s", dump_request(flask.request))
     return None

View File

@@ -233,8 +233,7 @@ class Network:
             del kwargs['raise_for_httperror']
         return do_raise_for_httperror

-    @staticmethod
-    def patch_response(response, do_raise_for_httperror):
+    def patch_response(self, response, do_raise_for_httperror):
         if isinstance(response, httpx.Response):
             # requests compatibility (response is not streamed)
             # see also https://www.python-httpx.org/compatibility/#checking-for-4xx5xx-responses
@@ -242,8 +241,11 @@ class Network:

         # raise an exception
         if do_raise_for_httperror:
-            raise_for_httperror(response)
+            try:
+                raise_for_httperror(response)
+            except:
+                self._logger.warning(f"HTTP Request failed: {response.request.method} {response.request.url}")
+                raise
         return response

     def is_valid_response(self, response):
@@ -269,7 +271,7 @@ class Network:
             else:
                 response = await client.request(method, url, **kwargs)
             if self.is_valid_response(response) or retries <= 0:
-                return Network.patch_response(response, do_raise_for_httperror)
+                return self.patch_response(response, do_raise_for_httperror)
         except httpx.RemoteProtocolError as e:
             if not was_disconnected:
                 # the server has closed the connection:
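
The pattern this commit introduces, reduced to its core: log method and URL
before re-raising, so downstream error handling can identify the failing
request. A standalone sketch in which `response.raise_for_status()` stands in
for searx's `raise_for_httperror()`:

    import logging
    import httpx

    logger = logging.getLogger('searx.network')

    def patch_response(response: httpx.Response) -> httpx.Response:
        try:
            response.raise_for_status()  # stand-in for raise_for_httperror()
        except httpx.HTTPStatusError:
            # the warning carries method and URL, e.g. "GET https://example.org/api"
            logger.warning("HTTP Request failed: %s %s", response.request.method, response.request.url)
            raise
        return response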

View File

@@ -137,9 +137,6 @@ class OnlineProcessor(EngineProcessor):
         self.engine.request(query, params)

         # ignoring empty urls
-        if params['url'] is None:
-            return None
-
         if not params['url']:
             return None
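
The deleted branch was redundant: `not params['url']` is already true when the
URL is `None`. A quick illustration:

    for url in (None, '', 'https://example.org/'):
        print(repr(url), '->', 'skipped' if not url else 'requested')
    # None -> skipped, '' -> skipped, 'https://example.org/' -> requested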

View File

@@ -1622,11 +1622,6 @@ engines:
     api_site: 'askubuntu'
     categories: [it, q&a]

-  - name: internetarchivescholar
-    engine: internet_archive_scholar
-    shortcut: ias
-    timeout: 15.0
-
   - name: superuser
     engine: stackexchange
     shortcut: su