ScraperWiki: brief description

Svetlana Komarova

Author of the article. System administrator, Oracle DBA. Information technology, internet, telecom.

ScraperWiki is a hosted environment for writing automated processes that scan public websites and extract structured information from the pages they publish. It handles all of the boilerplate code you would normally write to crawl websites, gives you a simple online editor for your Ruby, Python, or PHP scripts, and automatically runs your crawler as a background process.
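To make the idea concrete, here is a minimal sketch of the kind of scraper you would host on such a service, written in Python with only the standard library: parse a page's HTML and turn table rows into structured records. The table layout and the field names (`name`, `price`) are hypothetical, chosen purely for illustration; a real scraper would fetch the page over HTTP and save the records to the platform's datastore.

```python
from html.parser import HTMLParser


class TableScraper(HTMLParser):
    """Collects the text of every <td> cell, grouped by table row."""

    def __init__(self):
        super().__init__()
        self.rows = []
        self._row = None
        self._in_cell = False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
            self._row = None
        elif tag == "td":
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell and self._row is not None:
            self._row.append(data.strip())


def scrape(html):
    """Extract structured records from an HTML table (hypothetical schema)."""
    parser = TableScraper()
    parser.feed(html)
    return [{"name": r[0], "price": r[1]} for r in parser.rows if len(r) >= 2]


sample = "<table><tr><td>Widget</td><td>9.99</td></tr></table>"
print(scrape(sample))  # [{'name': 'Widget', 'price': '9.99'}]
```

On a hosted platform the `scrape` step stays the same; what the service adds is the scheduling, the fetching infrastructure, and a place to store the resulting records.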

What I really like, though, is that most of the scripts are published on the site: new users have plenty of existing examples to start from, and when websites change their structure, popular older scrapers can be updated by the community.
