Other answers

The best way, IMHO, to learn web crawling and scraping is to download and run an open-source crawler such as Nutch or Heritrix. They are fairly simple to use, and very shortly you will have some crawled data to play with. For scraping, the best thing to do is to write a simple web agent: a small program that fetches the source HTML of web pages and processes it. Most modern scripting languages (e.g. PHP, Python, Perl) include built-in primitives and libraries for getting source HTML in a single line of code. The easiest way to process the HTML you obtain is to use the regular-expression string-matching facilities of these languages, which are easy to use and quite comprehensive. Regular expressions are only a first step; there are many other ways to do more detailed and faster processing.
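To make the "simple web agent" idea concrete, here is a minimal sketch in Python. It works on an inline HTML snippet (the names and snippet are my own, and the fetch step is shown only in a comment) so it runs without network access:

```python
import re

# In a real agent you would fetch the page first, e.g.:
#   import urllib.request
#   html = urllib.request.urlopen("https://example.com").read().decode("utf-8")
# Here we use an inline snippet instead so the sketch runs offline.
html = """
<html><body>
  <a href="/page1">First page</a>
  <a href="/page2">Second page</a>
</body></html>
"""

# A naive regex that pulls out link targets and their anchor text.
# Regexes break on irregular HTML, so treat this as a first step only.
links = re.findall(r'<a\s+href="([^"]+)"\s*>([^<]+)</a>', html)

for href, text in links:
    print(href, "->", text)
```

This illustrates the single-line fetch plus regex-matching workflow the answer describes; a real scraper would soon outgrow regexes and move to a proper HTML parser.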

Borislav Agapiev

The most important tools in scraping are an HTML parser (NekoHTML, the Mozilla parser) and a query language (XPath, regexp, GATE, Tregex). On this basis, complex rule-based systems can be constructed, with or without machine learning. Many web test frameworks are also useful for building scraping tools, such as HtmlUnit and Selenium.

Yura Koroliov

The best way to learn web crawling is to learn the Python Scrapy framework. It is very simple to use, and for crawling heavy AJAX/JavaScript sites you can use PhantomJS along with Scrapy. Scrapy crawling is faster than most other platforms, since it uses asynchronous operations (on top of Twisted). Scrapy also has good, fast support for parsing (X)HTML on top of libxml2. It is a mature framework with full Unicode support, redirection handling, gzipped responses, odd encodings, an integrated HTTP cache, etc. I suggest reading the Scrapy documentation, which is the best way to learn it from a programmer's perspective. Here is the link: http://scrapy.org/doc/ . For an introduction and some basic ideas, read http://quadloops.com/scrapy-python-web-scrapping-framework/.
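The speed claim comes from asynchrony: Scrapy (via Twisted) keeps many requests in flight at once instead of fetching pages one after another. A toy stdlib sketch of that idea, with `asyncio.sleep` standing in for network latency so it runs offline (the URLs and delays are made up for illustration):

```python
import asyncio
import time

async def fetch(url: str, delay: float) -> str:
    # Stand-in for a real HTTP request; the sleep simulates network latency.
    await asyncio.sleep(delay)
    return f"<html>page for {url}</html>"

async def crawl(urls):
    # All "requests" are in flight at once, so total time is roughly the
    # slowest single fetch, not the sum of all of them.
    return await asyncio.gather(*(fetch(u, 0.1) for u in urls))

start = time.monotonic()
pages = asyncio.run(crawl(["http://a.example", "http://b.example", "http://c.example"]))
elapsed = time.monotonic() - start

print(len(pages), round(elapsed, 2))
```

Three sequential 0.1 s fetches would take about 0.3 s; the concurrent version finishes in roughly 0.1 s. Scrapy applies the same principle to real HTTP requests.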

Tony Paul

I think it depends on how you want to do web scraping and web crawling. You can learn to master a programming language, or master some web scrapers. You may get some ideas from these articles:

http://www.octoparse.com/blog/scraping-websites-what-for/
http://www.octoparse.com/blog/extract-text-from-html-document/

"1. Programming language

For simple HTML documents, people with basic coding knowledge can write a program to remove all HTML tags and retain only the text inside the HTML file, using regular expressions or XPath. There are several widely used programming languages available, such as C#, Java, Python, JS, PHP, Go and NodeJs; you can pick a suitable one to start your project. Some of these languages have their own HTML parsers, available free online, and you can learn more about them here: https://en.wikipedia.org/wiki/Comparison_of_HTML_parsers. It is worth mentioning that the code you write can usually only be used for one type of web page; different types of web pages need different code. Besides, you need to test your code after you have written it, and writing and testing takes longer for those with no coding experience.

2. Web data extraction tools

There are many powerful web extraction tools, such as http://import.io, Mozenda and Octoparse, that let computer users harvest almost everything on a web page, including text, links, images, etc., and convert it into a structured data format. You don't need to write any code, so it's especially good for those who have no coding experience. In most cases you don't need to write regular expressions or XPath, and the visualization lets users interact with the web page directly. It's easy to check and export the data without any IDE."

I found some useful web scraping tools that may help you better fetch what you need. :)

http://www.octoparse.com/?qu
http://Dataextractionservices.com
http://Habiledata.com
http://Computyne.com
http://datahut.co/
http://Datoin.com
http://grepsr.com
http://priceonomics.com
http://promptcloud.com
http://scrapehero.com
http://scrapinghub.com
http://thewebminer.com
http://vnpglobal.com
http://webscraping.com
http://webrobots.io
http://80legs.com
http://apifier.com
http://cloudscrape.com
http://datafiniti.net
http://DataScraping.co
http://diffbot.com
http://fminer.com
http://GooSeeker.com
http://Import.io
http://moreover.com
http://mozenda.com
http://parsehub.com
http://scrape.it
http://spinn3r.com
http://thepriceminer.com
http://uipath.com
http://webcrawling.org
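The quoted article's first suggestion, removing all HTML tags and keeping only the text, can be sketched with Python's stdlib parser (the class name and sample markup are my own):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the text content of a page, discarding all tags."""

    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        # Called for each run of text between tags; skip pure whitespace.
        text = data.strip()
        if text:
            self.chunks.append(text)

extractor = TextExtractor()
extractor.feed("<html><body><h1>Title</h1><p>Some <b>bold</b> text.</p></body></html>")
print(" ".join(extractor.chunks))
```

Unlike a regex, the parser handles nested tags correctly, which is why the article pairs tag-stripping with a parser or XPath rather than string matching alone.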

Daisy Hung

There are hundreds of scraping tools available on the market, but there is one tool I keep recommending because it compares very well with the others: it is easy to understand, keeps updating with new versions, gives the expected output, and has a lot of user-friendly features. It is called "Easy Data Feed" and is available at Easy Data Feed - Web Data Extraction Scraping Software. Some features of this tool: data manipulation, multiple profiles, custom values, and measurement conversion. You can read about how to use it here: http://www.easydatafeed.com/open-source/ They also have developers you can hire to do the job for you; their Skype is "easydatafeed".

Nor Rieh

It depends on what you want to learn and why. If you just want the ability to functionally scrape sites, there isn't much need to learn programming these days: companies like http://import.io (Import•io) and OutWit Hub cover it, and http://support.import.io has some good articles on how to use their tool to get data from websites. If you want to learn how to program, ScraperWiki is good, and I've heard Python is THE language to use. There is no shortage of tutorials out there on navigating XPath, and on why NOT to use regex on HTML: http://blog.codinghorror.com/parsing-html-the-cthulhu-way/

Daniel Cave

Manning, Raghavan, and Schütze's Introduction to Information Retrieval has a couple of interesting chapters on basic web search, including crawling. You can find a copy of the book online at http://nlp.stanford.edu/IR-book/information-retrieval-book.html

Anonymous

Web scraping can be pretty tricky, so it's actually really helpful to use web scraping tools to assist you as well as teach you more about it! http://www.kimonolabs.com is a great free web scraping tool that uses CSS selectors to save the data structure of the properties you wish to scrape, and then uses them to automatically extract the data for you. It also lets you modify your results with a customizable JS function, so you can make adjustments beyond HTML tag delineations. Though Kimono is very user-friendly, it's also super powerful, and people have used it for complex projects like this awesome interactive map of no-fly zones: http://builtwith.kimonolabs.com/post/101375093149/mapping-no-fly-zones-this-awesome-interactive-map P.S. I totally work here, but it really is an awesome way to get more familiar with web scraping and has taught me so much about data structuring on the web!

Laura Nguyen
