From 1b279d7a7e00418f66dc348d6a6bb66f335f39d2 Mon Sep 17 00:00:00 2001
From: Michael Peter Christen
Date: Fri, 27 Jun 2014 15:12:53 +0200
Subject: [PATCH] fixed external link

---
 htroot/CrawlStartExpert.html | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/htroot/CrawlStartExpert.html b/htroot/CrawlStartExpert.html
index d44bffe1a..f38e18b78 100644
--- a/htroot/CrawlStartExpert.html
+++ b/htroot/CrawlStartExpert.html
@@ -204,7 +204,7 @@ #%env/templates/submenuIndexCreate.template%#
-API
+API
 Click on this API button to see a documentation of the POST request parameter for crawl starts.
@@ -215,7 +215,7 @@
 You can define URLs as start points for Web page crawling and start crawling here.
 "Crawling" means that YaCy will download the given website, extract all links in it and then download the content behind these links.
 This is repeated as long as specified under "Crawling Depth".
-A crawl can also be started using wget and the post arguments for this web page.
+A crawl can also be started using wget and the post arguments for this web page.
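The second hunk above says a crawl can also be started using wget with this page's POST arguments. A minimal sketch of what that invocation could look like; the host, port, target servlet (`Crawler_p.html`), and parameter names (`crawlingURL`, `crawlingDepth`) are assumptions for illustration and are not taken from this patch:

```shell
# Sketch: start a YaCy crawl via wget, as the page text suggests.
# Host/port, servlet name, and parameter names are ASSUMPTIONS, not
# confirmed by this patch; check the API button on CrawlStartExpert.html
# for the actual POST parameters of your YaCy version.
YACY_HOST="http://localhost:8090"
START_URL="http://example.org/"
DEPTH=2

# Assemble the POST body from the assumed crawl-start parameters.
POST_DATA="crawlingstart=1&crawlingURL=${START_URL}&crawlingDepth=${DEPTH}"

# Print the command rather than run it, so the sketch works offline.
echo "wget --post-data='${POST_DATA}' ${YACY_HOST}/Crawler_p.html"
```

Running the printed command against a live YaCy instance would submit the same form data the expert crawl-start page sends.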