From 3959d43a5c3412704f21b4c69aa723e71e2134e8 Mon Sep 17 00:00:00 2001
From: Michael Peter Christen
Date: Tue, 3 Aug 2021 16:57:24 +0200
Subject: [PATCH] fixed doku link
---
htroot/CrawlStartExpert.html | 4 +-
htroot/IndexFederated_p.html | 2 +-
locales/master.lng.xlf | 2 +-
locales/ru.lng | 2 +-
startYACY.sh | 268 +++++++++---------
.../document/parser/GenericXMLParserTest.java | 100 +++----
6 files changed, 191 insertions(+), 187 deletions(-)
diff --git a/htroot/CrawlStartExpert.html b/htroot/CrawlStartExpert.html
index e3cdb0d25..4093df5a7 100644
--- a/htroot/CrawlStartExpert.html
+++ b/htroot/CrawlStartExpert.html
@@ -217,7 +217,7 @@
#%env/templates/submenuIndexCreate.template%#
-

+
Click on this API button to see the documentation of the POST request parameters for crawl starts.
@@ -228,7 +228,7 @@
You can define URLs as start points for Web page crawling and start crawling here.
"Crawling" means that YaCy will download the given website, extract all links in it and then download the content behind these links.
This is repeated up to the depth specified under "Crawling Depth".
- A crawl can also be started using wget and the post arguments for this web page.
+ A crawl can also be started using wget and the post arguments for this web page.
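The page text above notes that a crawl can also be started with wget and the POST arguments of this page. A minimal sketch of what such an invocation could look like follows; the host/port, the `Crawler_p.html` endpoint, and the parameter names (`crawlingstart`, `crawlingURL`, `crawlingDepth`) are assumptions drawn from the crawl start form, not confirmed by this patch, so check them against your peer's actual API documentation. The command is only printed here, not executed, since no running YaCy peer is assumed.

```shell
#!/bin/sh
# Hypothetical sketch: parameter names and endpoint are assumptions,
# not taken from this patch. Verify against your peer's API docs.
YACY_HOST="localhost:8090"        # default YaCy port is 8090
START_URL="http://example.com/"   # crawl start point
DEPTH=2                           # value for "Crawling Depth"

# Build the wget command line instead of running it (dry run).
CMD="wget --post-data \"crawlingstart=Start&crawlingURL=${START_URL}&crawlingDepth=${DEPTH}\" http://${YACY_HOST}/Crawler_p.html"
echo "$CMD"
```

In a real setup the peer may also require authentication (the admin account), which wget can supply with `--user` and `--password`; that detail is omitted from the sketch.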