dateutil.parser.parse() does not know the Spanish date format, which
leads to a ValueError. Fixes #1870
Traceback (most recent call last):
File "/usr/local/searx/searx/search.py", line 160, in search_one_http_request_safe
search_results = search_one_http_request(engine, query, request_params)
File "/usr/local/searx/searx/search.py", line 97, in search_one_http_request
return engine.response(response)
File "/usr/local/searx/searx/engines/startpage.py", line 102, in response
published_date = parser.parse(date_string, dayfirst=True)
File "/usr/local/searx/searx-ve/lib/python3.6/site-packages/dateutil/parser/_parser.py", line 1358, in parse
return DEFAULTPARSER.parse(timestr, **kwargs)
File "/usr/local/searx/searx-ve/lib/python3.6/site-packages/dateutil/parser/_parser.py", line 649, in parse
raise ValueError("Unknown string format:", timestr)
ValueError: ('Unknown string format:', '24 Ene 2013')
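A minimal sketch, assuming a hand-written month map (the actual fix in
startpage.py may differ), of how a Spanish date string like the one in the
traceback can be parsed without relying on dateutil's locale support::

    from datetime import datetime

    # assumed Spanish month abbreviations; dateutil does not know these
    MONTHS_ES = {'Ene': 1, 'Feb': 2, 'Mar': 3, 'Abr': 4, 'May': 5, 'Jun': 6,
                 'Jul': 7, 'Ago': 8, 'Sep': 9, 'Oct': 10, 'Nov': 11, 'Dic': 12}

    def parse_spanish_date(date_string):
        # '24 Ene 2013' -> datetime(2013, 1, 24)
        day, month, year = date_string.split()
        return datetime(int(year), MONTHS_ES[month], int(day))

    print(parse_spanish_date('24 Ene 2013'))  # 2013-01-24 00:00:00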
When selecting languages other than 'en', bing-video did not handle the language
correctly and gave very bad results. Since the User-Agent is normally rotated in
searx, the behavior of a !biv search was unpredictable and paging was broken.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
The bing_news bug (discussed in #1838) was caused by wrong language tags, which
was fixed in e0c99d9d; there is no need to change the bing_news search string.
closes: https://github.com/asciimoo/searx/issues/1838
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
To get meaningful diffs, the json file has to be sorted. Before applying any
further content patch, the json file needs an initial sort (without changing any
content).
Sorted by::
    import sys, json
    with open('engines_languages.json') as f:
        j = json.load(f)
    with open('engines_languages.json', 'w') as f:
        json.dump(j, f, indent=2, sort_keys=True)
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
When results are fetched from any programming-related documentation site
(like git-scm.com, docs.python.org etc.), the content in the info box is shown
as raw HTML code.
This change addresses the issue by using the "safe" template filter. See
- https://docs.djangoproject.com/en/3.0/ref/templates/builtins/#safe (Django's documentation of the equivalent filter)
- and the Searx issue tracker (issue #1649), for more information.
Resolves: #1649
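A minimal sketch, assuming Jinja2 (which searx's templates are based on; the
Django filter linked above behaves the same), of what marking content as "safe"
changes::

    from jinja2 import Environment

    env = Environment(autoescape=True)
    content = '<code>git checkout</code>'

    # without |safe the markup is escaped and shows up as raw HTML code
    print(env.from_string('{{ content }}').render(content=content))
    # &lt;code&gt;git checkout&lt;/code&gt;

    # with |safe the markup is rendered as-is
    print(env.from_string('{{ content|safe }}').render(content=content))
    # <code>git checkout</code>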
On low-width devices like mobiles and tablets, the info box is shown at the
bottom of the page.
This change addresses the issue by rearranging the column grid for low-width
devices and moving the sidebar to the top of the page. See
- https://getbootstrap.com/docs/3.3/css/#grid-column-ordering
- and the Searx issue tracker (issue #1777), for more information.
Effect: along with the info box, the suggestion and link boxes also move to the
top of the page.
Resolves: #1777
Infinite scroll adds an `hr` tag to split up the sections it loads.
The vim bindings `j` and `k`, which jump to the next and previous result
respectively, search for a **direct** sibling with the class `result`.
With the `hr` between results a direct sibling cannot be found. To fix
this we remove the restriction of it having to be a direct sibling.
Adding a CR in some files but not in others is a good starting point for the
DOS+Unix line-ending mess we have all already seen in many projects.
The patch fixes all matching files (even those coming from grunt's build)::

    find ./searx -exec file {} \; | grep CR

BTW: the same goes for mixing TAB and SPACE indent styles in one and the same
file, so if sources are touched in this patch, that is fixed as well.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Fix this error from the Travis build::

    /home/travis/build/asciimoo/searx/searx/engines/duckduckgo_definitions.py:21:44: E225 missing whitespace around operator
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Add image format and source information to the display - engines need
corresponding changes before anything is actually displayed.
Displays result.source (the website from which the image was taken) and
result.img_format (the image type and size).
The result is styled with the result-format and result-source classes. See PR #1566
for an example of an engine with the necessary changes.
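A hedged sketch of the result fields involved: the 'source' and 'img_format'
keys are the ones described above, the other keys and values are illustrative
only::

    # what an engine's response() could append for one image result
    results = []
    results.append({
        'template': 'images.html',
        'url': 'https://example.org/photo-page',
        'title': 'Example photo',
        'img_src': 'https://example.org/photo.jpg',
        'source': 'example.org',         # website from which the image was taken
        'img_format': 'jpeg 1920x1080',  # image type and size
    })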
Strip <span class="highlight">...</span> in the oscar image template.
This PR fixes the result count from bing, which was throwing a (hidden) error, and adds a validation to avoid reading more results than are available.
For example:
If a search has only 100 results and we try to get results 120 to 130, Bing sends back results 0 to 10 and no error. If we compare the results count with the first parameter of the request, we can discard these "invalid" results.
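A minimal sketch of that validation (the names are illustrative, not the ones
used in the bing engine)::

    def keep_only_valid_results(results, reported_count, first_result_index):
        # Bing silently returns the first page again when the requested
        # offset is beyond the reported result count, so drop those results.
        if first_result_index > reported_count:
            return []
        return results

    print(keep_only_valid_results(['a', 'b'], reported_count=100, first_result_index=120))  # []
    print(keep_only_valid_results(['a', 'b'], reported_count=100, first_result_index=20))   # ['a', 'b']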
The new URL parameter "timeout_limit" sets the timeout limit in seconds.
Example: "timeout_limit=1.5" means the timeout limit is 1.5 seconds.
In addition, the query can start with <[number] to set the timeout limit.
For numbers between 0 and 99, the unit is the second:
Example: "<30 searx" means the timeout limit is 3 seconds
For numbers above 100, the unit is the millisecond:
Example: "<850 searx" means the timeout is 850 milliseconds.
In addition, there is a new optional setting: outgoing.max_request_timeout.
If it is not set, the user timeout can't exceed the searx configuration (as before: the maximum timeout of the selected engines for a query).
If the value is set, the user can set a timeout between 0 and max_request_timeout using
<[number] or the timeout_limit query parameter.
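A minimal sketch of the <[number] prefix handling described above (illustrative
only; the actual query parsing in searx may differ in detail)::

    def parse_timeout_prefix(query):
        # returns (timeout_in_seconds, remaining_query)
        if not query.startswith('<'):
            return None, query
        number, _, rest = query[1:].partition(' ')
        value = float(number)
        if value > 99:            # larger numbers are treated as milliseconds
            value = value / 1000.0
        return value, rest

    print(parse_timeout_prefix('<30 searx'))   # (30.0, 'searx')
    print(parse_timeout_prefix('<850 searx'))  # (0.85, 'searx')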
Related to #1077
Updated version of PR #1413 from @isj-privacore.
Characters that were not ASCII were incorrectly decoded.
Add a helper function: searx.utils.ecma_unescape (a Python implementation of the JavaScript unescape function).
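A hedged sketch of what such a helper can look like (the real
searx.utils.ecma_unescape may differ in detail): %uXXXX and %XX escape
sequences are decoded back to their characters::

    import re

    ECMA_UNESCAPE4_RE = re.compile(r'%u([0-9a-fA-F]{4})')
    ECMA_UNESCAPE2_RE = re.compile(r'%([0-9a-fA-F]{2})')

    def ecma_unescape(string):
        # '%u5409' -> '吉', '%20' -> ' '
        string = ECMA_UNESCAPE4_RE.sub(lambda m: chr(int(m.group(1), 16)), string)
        string = ECMA_UNESCAPE2_RE.sub(lambda m: chr(int(m.group(1), 16)), string)
        return string

    print(ecma_unescape('%u5409%20searx'))  # '吉 searx'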
* The search URL is https://www.wikidata.org/w/index.php?{query}&ns0=1 (with ns0=1 at the end to avoid an HTTP redirection)
* url_detail: remove the deprecated disabletidy=1 parameter
* Add an eval_xpath function: compile each XPath expression only once (see the sketch after this list).
* Add get_id_cache: retrieve all HTML elements that have an id in one pass, avoiding the slow-to-process dynamic xpath '//div[@id="{propertyid}"]'.replace('{propertyid}')
* Create an etree.HTMLParser() instead of using the global one (see #1575)
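A minimal sketch of the eval_xpath idea, assuming lxml and an lru_cache for the
compiled expressions (the actual helpers in the wikidata engine may differ)::

    from functools import lru_cache
    from lxml import etree

    @lru_cache(maxsize=None)
    def compile_xpath(xpath_str):
        # each XPath expression is compiled only once
        return etree.XPath(xpath_str)

    def eval_xpath(element, xpath_str):
        return compile_xpath(xpath_str)(element)

    # a local parser instead of the shared global one (see #1575)
    parser = etree.HTMLParser()
    doc = etree.fromstring('<html><body><div id="P18">img</div></body></html>', parser)
    print(eval_xpath(doc, '//div[@id="P18"]/text()'))  # ['img']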
Fetch complete JSON data block, use legend to extract images.
Unquote urlencoded strings.
Add image description as 'content'.
Add 'img_format' and 'source' data (needs PR #1567 to enable this data to be displayed).
Show images which lack ownerid instead of discarding them.
Use data from the embedded JSON to improve results (e.g. the real page title), add image format and source info (see PR #1567), and improve the paging logic (it now works).