ScraperWiki: brief description

Svetlana Komarova


ScraperWiki is a hosted environment for writing automated processes to scan public websites and extract structured information from the pages they’ve published. It handles all of the boilerplate code that you normally have to write to handle crawling websites, gives you a simple online editor for your Ruby, Python, or PHP scripts, and automatically runs your crawler as a background process.
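A ScraperWiki script is essentially the extraction logic, since the service handles the crawling boilerplate around it. The sketch below illustrates that kind of logic in plain standard-library Python: it pulls structured rows out of an HTML table. The sample page, class name, and field names are made up for illustration; on the real service you would fetch the live page and save the resulting rows to the hosted datastore instead of printing them.

```python
from html.parser import HTMLParser

# Stand-in for a downloaded page. On ScraperWiki you would fetch the
# real URL; here a hard-coded snippet keeps the sketch self-contained.
SAMPLE_PAGE = """
<table>
  <tr><td>Alice</td><td>42</td></tr>
  <tr><td>Bob</td><td>17</td></tr>
</table>
"""

class TableScraper(HTMLParser):
    """Collects the text of each <td> cell, grouped by <tr> row."""

    def __init__(self):
        super().__init__()
        self.rows = []       # finished rows, each a list of cell strings
        self._row = None     # cells of the row currently being parsed
        self._in_td = False  # True while inside a <td> element

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_td = True

    def handle_endtag(self, tag):
        if tag == "tr" and self._row is not None:
            self.rows.append(self._row)
            self._row = None
        elif tag == "td":
            self._in_td = False

    def handle_data(self, data):
        if self._in_td and self._row is not None:
            self._row.append(data.strip())

scraper = TableScraper()
scraper.feed(SAMPLE_PAGE)

# Turn raw cell lists into structured records (hypothetical field names).
records = [{"name": r[0], "value": int(r[1])} for r in scraper.rows]
print(records)
```

In a hosted scraper the `print` call would be replaced by a save into ScraperWiki's datastore, and the background scheduler would re-run the script periodically to pick up new rows.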

What I really like, though, is that most scripts are published openly on the site: new users have plenty of working examples to start from, and when a website changes its structure, the community can update popular older scrapers to match.
