Web scraping in 2018 — forget HTML, use XHRs, metadata or JavaScript variables
Web scraping in 2018 is not only about processing HTML. Learn modern ways to extract data from web pages using XHRs, Schema.org Microdata, JSON Linked Data, JavaScript variables and more.
If you want to get data from a website into a structured form, you would probably start by parsing the website’s HTML using CSS selectors. That’s a common approach and will work on most websites. But there are often much easier ways to extract the data, especially on modern dynamic websites. In this article I’m going to describe the four methods I always check for on a website I want to scrape: Schema.org Microdata, JSON Linked Data, internal JavaScript variables, and XHRs.
Let’s start with an example: scraping basic restaurant info, with reviews, from Yelp. A straightforward way is to get the HTML and parse it using libraries like Cheerio (server-side jQuery) or Beautiful Soup (in Python). Alternatively, you can load the page in a headless browser like PhantomJS or headless Chrome and scrape the data by evaluating JavaScript in the context of the page. This lets you use libraries like jQuery to access the data.
To figure out CSS selectors, you can use Chrome DevTools (the easiest way is to right-click the text you want to get and choose Inspect). Take the restaurant’s address as an example.
To scrape this address, you can use a jQuery selector such as
$('.street-address').text().trim()
and get this result:
“3486 Mission StSan Francisco, CA 94110”
This isn’t ideal: the line break between the street and the city gets lost, so you would probably use something like the following.
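Here’s a minimal sketch of one way to keep the two address lines separated (assuming the street and city are separate text nodes inside the element):

// join the element's direct text nodes with a comma
var parts = $('.street-address')
    .contents()
    .filter(function () { return this.nodeType === 3; }) // text nodes only
    .map(function () { return $(this).text().trim(); })
    .get()
    .filter(Boolean);
return parts.join(', '); // e.g. "3486 Mission St, San Francisco, CA 94110"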
Good enough, so we could continue using this method with other selectors. Or we can analyze the website and try to get the same data elsewhere much more easily. Some websites make this easy for us by supporting semantic web approaches such as Schema.org Microdata or JSON for Linked Data.
Schema.org Microdata
If you look for “3486 Mission St” in the example page’s source code, you would also find it in a hidden element annotated with Schema.org Microdata.
Then you can use a generic parser to extract this data.
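Here’s a minimal sketch of what such a parser can look like in jQuery; the name matches the schemaOrgParser() function referenced below, but the traversal details are illustrative assumptions rather than the exact original:

// Microdata annotates elements with itemscope/itemtype/itemprop, e.g.:
//   <div itemscope itemtype="http://schema.org/PostalAddress">
//     <span itemprop="streetAddress">3486 Mission St</span>
//   </div>
function schemaOrgParser() {
    function parseScope($scope) {
        var item = { _type: $scope.attr('itemtype') };
        $scope.find('[itemprop]').each(function () {
            var $prop = $(this);
            // skip properties that belong to a nested itemscope
            if (!$prop.parent().closest('[itemscope]').is($scope)) return;
            var name = $prop.attr('itemprop');
            var value = $prop.is('[itemscope]')
                ? parseScope($prop)
                : ($prop.attr('content') || $prop.text().trim());
            item[name] = name in item ? [].concat(item[name], value) : value;
        });
        return item;
    }
    // parse each top-level itemscope on the page
    return $('[itemscope]').not('[itemscope] [itemscope]').map(function () {
        return parseScope($(this));
    }).get();
}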
If you run this function in the context of our page, you get basic info about the restaurant along with other data. The complete output of the schemaOrgParser() function can be found here; by the way, it also includes the restaurant’s last 20 Yelp reviews.
JSON-LD
Similar to Schema.org Microdata, some websites use JSON for Linking Data (JSON-LD) to structure their content. Just look for a script tag with the application/ld+json type. We’re lucky on Yelp: our example page contains one with the restaurant’s data.
Since the data are already in JSON, we can easily extract them with these two lines of JavaScript code:
// grab the first application/ld+json script tag and parse its content
var jsonLD = $('script[type="application/ld+json"]');
return JSON.parse(jsonLD.html());
And get almost the same data we got using the Schema.org Microdata.
The full result (also including the latest reviews) is available here.
Internal JavaScript variables
This is my favorite. You have to load the page in a headless browser such as PhantomJS or headless Chrome (typically with Puppeteer), but once it’s loaded, you can simply return any JavaScript variable available on the page.
If you look for the restaurant name on our Yelp page, you will also find it in a <script> tag that assigns some custom data to the adroll_custom_data variable when the page is loaded. So, within the page context, you can just call:
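// evaluated in the page context, e.g. via page.evaluate() in Puppeteer
return adroll_custom_data;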
This does not include all the data we got using previous methods, but you also get geographic coordinates, which can’t be found elsewhere on the web page except in the map data.
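Wired up with Puppeteer, the whole thing takes just a few lines. A minimal sketch (the URL is a placeholder for the restaurant’s Yelp page):

const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto('https://www.yelp.com/biz/...'); // placeholder URL
    // the function runs inside the page, where the variable is in scope
    const data = await page.evaluate(() => window.adroll_custom_data);
    console.log(data);
    await browser.close();
})();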
For example, on booli.se (a Swedish portal with real estate offers), you can get hundreds of attributes for every listed offer from a single internal JS variable called property.
All you have to do is:
return property;
And here is the result you would get. Using this approach on Apify, you can scrape all of the portal’s more than 40k listings with all their attributes using a crawler written in just 20 lines of JavaScript code.
This crawler and many others are available at the Apify Store, so you can find the best solution for your use case, copy it to your account, and hit the Run button.
If you want to explore more options for web scraping and crawling in JavaScript, have a look at Apify SDK — an open-source library that enables the development of data extraction and web automation jobs (not only) with headless Chrome and Puppeteer.
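For illustration, a minimal Apify SDK crawler that grabs an internal variable like booli’s property might look something like this (a sketch, not the 20-line crawler mentioned above; the start URL and enqueueing logic are left as placeholders):

const Apify = require('apify');

Apify.main(async () => {
    const requestQueue = await Apify.openRequestQueue();
    await requestQueue.addRequest({ url: 'https://www.booli.se/...' }); // placeholder

    const crawler = new Apify.PuppeteerCrawler({
        requestQueue,
        handlePageFunction: async ({ page }) => {
            // read the internal variable straight from the page context
            const data = await page.evaluate(() => window.property);
            await Apify.pushData(data); // store the result as a dataset item
        },
    });

    await crawler.run();
});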
XHRs
Another favorite, although it typically involves more hacking.
If a website loads data dynamically, it typically uses XMLHttpRequests (XHRs). And when the browser uses an API to get data from the server, why can’t we use that API in just the same way?
Here’s a nice example — topshop.com. This website uses infinite scroll, and every time you scroll down to the last products, it fires an XHR to load the next batch. You can check this in the Network section of Chrome DevTools (XHR tab).
This XHR is a GET request which even works outside of the website. You can try it here.
If you scroll down a category page again and compare the new XHR with the previous one, the only change is in the No parameter (No=20 vs. No=40). This parameter is obviously an offset into the product list. So, if you want to get all the products listed there (more than 4,000 in the clothing category), you can just increment the No parameter by 20 and send a GET request for each value, getting 20 products with all their attributes per request.
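A sketch of that iteration (baseUrl stands in for the XHR address you see in DevTools, node-fetch is an assumed HTTP client, and the response shape is a guess to verify):

const fetch = require('node-fetch'); // assumed HTTP client

async function fetchAllProducts(baseUrl, total) {
    const products = [];
    for (let offset = 0; offset < total; offset += 20) {
        // the No parameter is the offset into the product list
        const response = await fetch(`${baseUrl}&No=${offset}`);
        const data = await response.json();
        products.push(...(data.products || [])); // response shape is an assumption
    }
    return products;
}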
A complete crawler for this is available in the Apify library, and you can check the example output here.
That was easy, but what if the website uses POST requests that only work within the context of the loaded page (secured by cookies, headers, tokens, etc.)? Then you can use a headless browser, load the first page and send the POST requests from there.
This approach can be used to get app reviews from the Google Play Store. In DevTools you can see a POST request, along with its data, which is sent for every 40 reviews you want to view.
To send the POST request, we can use jQuery.ajax() and call it from the context of the page loaded in a headless browser. To get more than 40 reviews, you can iterate the page parameter as we did in the previous example. The XHR in this case returns HTML instead of JSON, so we have to parse it and use selectors.
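A sketch of that call, run in the context of the loaded Google Play page (the endpoint, parameter names and selector are assumptions; copy the real ones from the POST request in DevTools):

// hypothetical endpoint and parameters; take the real ones from DevTools
function getReviewsPage(reviewsUrl, appId, pageNum) {
    return $.ajax({
        url: reviewsUrl,
        method: 'POST',
        data: { id: appId, pageNum: pageNum },
    }).then(function (response) {
        // the XHR returns HTML, so parse it and use selectors
        return $(response).find('.review-text').map(function () {
            return $(this).text().trim();
        }).get();
    });
}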
Using this approach, a complete crawler can extract 4,000 reviews for a given app from the Google Play Store in a few seconds.
BTW, in the browser console you can see the token that is sent within the POST data. The request also works without this token, but if you want to include it, you can scrape it first from the window._uc variable:
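// in the page context; grab the token from the _uc variable mentioned above
return window._uc;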
This approach can also be used for scraping data from Airbnb. You don’t even need to open the page in a headless browser; you just need to get the token first and then use their API.
With this token you can use the Airbnb API to get prices, availability or reviews. For example, a GET request can fetch the first seven reviews for a given listing (note that the original request probably doesn’t work anymore due to the expired token).
Then you can just iterate through the offset and get all the reviews for any listing you want. Here’s a complete crawler for that (example output available here).
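For illustration, a rough sketch of such an offset loop (the endpoint, parameter names and response shape are assumptions to verify against the real request):

const fetch = require('node-fetch'); // assumed HTTP client

async function getAllReviews(endpoint, listingId, token) {
    const reviews = [];
    for (let offset = 0; ; offset += 7) {
        // parameter names are illustrative; copy the real ones from DevTools
        const url = `${endpoint}?listing_id=${listingId}&_offset=${offset}&_limit=7&key=${token}`;
        const batch = (await (await fetch(url)).json()).reviews || [];
        if (batch.length === 0) break; // no more reviews
        reviews.push(...batch);
    }
    return reviews;
}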
BTW, here’s a crawler in the Apify library which scrapes info about all listings from Airbnb for a given area.
Page analyzer
And now here’s a little reward for anyone who has read this far. At the beginning of this post I mentioned that I always check which of the methods described above are available. But I don’t have to check manually anymore. We’ve just launched a tool called Page Analyzer which does this automatically for us. And for you as well — it’s free for everyone! It runs on Apify and uses this Apify Actor (also available on GitHub).
Page Analyzer is available here. Just enter the URL of the page you want to check and click Analyze.