ScraperWiki: brief description

ScraperWiki is a hosted environment for writing automated processes that scan public websites and extract structured information from their pages. It takes care of the boilerplate code you would normally write to crawl websites, gives you a simple online editor for your Ruby, Python, or PHP scripts, and automatically runs your crawler as a background process.
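
For a sense of what a scraper on the platform looks like, here is a minimal Python sketch in the style of the scraperwiki helper library the service provides. The URL, CSS selector, and field names are placeholders chosen for illustration, not details from this article:

    import scraperwiki   # ScraperWiki's helper library: page fetching plus a SQLite datastore
    import lxml.html     # HTML parsing

    # Fetch a page (placeholder URL) and parse it into an element tree.
    html = scraperwiki.scrape("http://example.com/listings")
    root = lxml.html.fromstring(html)

    # Save one record per table row into the scraper's built-in datastore;
    # the selector and column names are assumptions for this sketch.
    for row in root.cssselect("table.listings tr"):
        cells = [cell.text_content().strip() for cell in row.cssselect("td")]
        if len(cells) >= 2:
            scraperwiki.sqlite.save(
                unique_keys=["name"],
                data={"name": cells[0], "value": cells[1]},
            )

On ScraperWiki itself, a script like this would be run for you on a schedule as a background process, with the extracted rows accumulating in the datastore.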

What I really like, though, is that most scripts are published openly on the site: new users have plenty of existing examples to start from, and when websites change their structure, popular older scrapers can be updated by the community.
