Have you ever wanted to get specific data from another website, but there's no API available for it? That's where web scraping comes in: if the data isn't made available by the website, we can just scrape it from the website itself.
But before we dive in, let's first define what web scraping is. According to Wikipedia:
Web scraping (web harvesting or web data extraction) is a computer software technique of extracting information from websites. Usually, such software programs simulate human exploration of the World Wide Web by either implementing low-level Hypertext Transfer Protocol (HTTP), or embedding a fully-fledged web browser, such as Internet Explorer or Mozilla Firefox.
So yes, web scraping lets us extract information from websites. But there are some legal issues surrounding it: some consider scraping an act of trespassing against the website you are taking data from. That's why it is wise to read the terms of service of the specific website that you want to scrape, because you might otherwise be doing something illegal without knowing it. You can read more about it on the Wikipedia page.
Web Scraping Techniques
There are many web scraping techniques, as mentioned in the Wikipedia article earlier, but I will only discuss the following:
- Document Parsing
- Regular Expressions
Document parsing is the process of converting HTML into a DOM (Document Object Model) tree that we can traverse. Here's an example of how we can scrape data from a public website:
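Here's a minimal sketch of the whole flow, assuming pokemondb.net as the target page (the exact URL, and the h2 structure it implies, are assumptions):

```php
<?php
// Fetch the raw HTML of the page we want to scrape. Warnings are
// suppressed with @; we check the result ourselves below.
$html = @file_get_contents('https://pokemondb.net');

// DOMDocument converts the HTML string into a DOM we can traverse.
$doc = new DOMDocument();

// Buffer libxml errors instead of printing them to the screen.
libxml_use_internal_errors(true);

if (!empty($html)) {
    // Load the fetched HTML into the DOMDocument instance.
    $doc->loadHTML($html);

    // Clear any buffered errors caused by messy markup.
    libxml_clear_errors();

    // DOMXPath lets us query the document, much like selecting
    // elements with jQuery.
    $xpath = new DOMXPath($doc);

    // Select every h2 element that has an id, wherever it appears.
    $elements = $xpath->query('//h2[@id]');

    if (!is_null($elements)) {
        foreach ($elements as $element) {
            // nodeValue holds the text inside the selected element.
            echo $element->nodeValue . "\n";
        }
    }
}
```

The sections below walk through each of these steps in turn.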
In the code above, we first get the HTML returned from the URL of the website that we want to scrape. In this case, the website is pokemondb.net.
Then we declare a new DOMDocument. This is used to convert the HTML string returned from file_get_contents into an actual Document Object Model that we can traverse.
Then we disable libxml errors so that they won't be output to the screen; instead, they are buffered and stored.
Next, we check that actual HTML was returned.
Next, we use the loadHTML() method of the DOMDocument instance we created earlier to load the HTML that was returned, simply passing the HTML string as the argument.
Then we clear the errors, if any. Most of the time, messy HTML causes these errors. Examples of messy HTML are inline styling (style attributes embedded in elements), invalid attributes, and invalid elements. Elements and attributes are considered invalid if they are not part of the HTML specification for the doctype used on the page.
Next, we create a new instance of DOMXPath, which lets us run queries against the document we created. It requires the DOMDocument instance as its constructor argument.
Finally, we write the query for the specific elements that we want to get. If you have used jQuery before, this process is similar to selecting elements from the DOM.
What we're selecting here is every h2 tag that has an id. We make the location of the h2 unspecific by using a double forward slash (//) right before the element that we want to select. The value of the id doesn't matter; as long as an id is present, the element gets selected. The nodeValue property contains the text inside the selected h2.
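To see just the selection step in isolation, here's a self-contained sketch that runs the same query against a small made-up document:

```php
<?php
// A tiny made-up document: two h2 tags with ids, one without.
$doc = new DOMDocument();
libxml_use_internal_errors(true);
$doc->loadHTML('<h2 id="intro">Intro</h2><h2>No id here</h2><h2 id="stats">Stats</h2>');
libxml_clear_errors();

$xpath = new DOMXPath($doc);

// // makes the h2's location in the document irrelevant, and
// [@id] means "has an id attribute", whatever its value.
$elements = $xpath->query('//h2[@id]');

foreach ($elements as $element) {
    echo $element->nodeValue . "\n"; // prints "Intro", then "Stats"
}
```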
This prints the text of each matching h2 element to the screen.
Let's do one more document parsing example before we move on to regular expressions. This time we're going to get a list of all Pokémon along with their types (e.g. Fire, Grass, Water).
First let’s examine what we have on pokemondb.net/evolution so that we know what particular element to query.
As you can see from the screenshot, the information that we want to get is contained within a span element with a class of infocard-tall. Yes, the space there is included: XPath compares attribute values literally, so any spaces in the class must be included in the query, otherwise it won't match.
Converting what we know into an actual query, we come up with this:
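Based on that description, the query looks like this (note the trailing space inside the attribute value):

```
//span[@class="infocard-tall "]
```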
This selects all the span elements that have a class of infocard-tall. It doesn't matter where in the document the span is, because we used the double forward slash before the element name.
Once we're inside the span, we have to get to the elements that directly contain the data we want: the name and the type of the Pokémon. As you can see from the screenshot below, the name of the Pokémon is directly contained within an anchor element with a class of ent-name, and the types are stored within a small element.
We can then use that knowledge to come up with the following code:
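Here's a sketch of that code, assuming the URL and class names described above; since the exact class on the small element isn't specified here, the inner query simply matches any small inside the row:

```php
<?php
// The URL and the class names below come from the description
// above; treat them as assumptions about the page's markup.
$html = @file_get_contents('https://pokemondb.net/evolution');

$doc = new DOMDocument();
libxml_use_internal_errors(true);

if (!empty($html)) {
    $doc->loadHTML($html);
    libxml_clear_errors();

    $xpath = new DOMXPath($doc);

    // Note the trailing space: XPath compares the class
    // attribute literally.
    $rows = $xpath->query('//span[@class="infocard-tall "]');

    foreach ($rows as $row) {
        // Passing $row as the second argument limits the inner
        // query's scope to the current row.
        $names = $xpath->query('.//a[@class="ent-name"]', $row);
        $types = $xpath->query('.//small', $row);

        if ($names->length > 0) {
            echo $names->item(0)->nodeValue . "\n";
        }
        foreach ($types as $type) {
            echo '  ' . $type->nodeValue . "\n";
        }
    }
}
```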
There's nothing new in the code above, except that we also call query inside the loop. That inner query gets the name of the Pokémon; you might notice that we specified a second argument when calling the query method. The second argument is the current row, and it limits the scope of the query to that row.
The result is a list of each Pokémon's name followed by its type or types.
Aside from document parsing, we can also use regular expressions to scrape the data that we want from a specific web page. Regular expressions are useful if we only want to scrape the actual content and not HTML elements, because it's difficult, if not impossible, to match every possible way an HTML element might have been written. Consider the following example:
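For instance, here are some made-up but equivalent ways of writing the same stylesheet link tag:

```html
<link rel="stylesheet" href="style.css">
<link href="style.css" rel="stylesheet">
<link rel='stylesheet' href='style.css'>
<link rel=stylesheet href=style.css>
<link rel="stylesheet"    href="style.css" >
<link rel="stylesheet" type="text/css" href="style.css">
<link rel="stylesheet" href="style.css" />
<LINK REL="stylesheet" HREF="style.css">
```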
The markup above is basically the same tag written in a bunch of different ways. It would be difficult to scrape all the external stylesheets in a page using regular expressions, as we would need to target every possible way the tag can be written. So instead of regular expressions, we use document parsing to get all the external stylesheets. This is just one of the many cases in which regular expressions can't be used in scraping.
The main advantage of regular expressions is speed. The whole process of converting an HTML document into a DOM and then traversing it takes time, especially if many elements match the query that you specify. This is not the case with regular expressions: you're only working with strings and patterns, so no conversion or traversal takes place, and it's very fast.
OK, enough with the explanations. Here's an example of how to use regular expressions in scraping.
Here we are specifically looking for URLs that begin with https://safelinking.net/, followed by any run of uppercase or lowercase letters or digits. Remember that we need to escape forward slashes and periods using a backslash. We then use the preg_match_all() function to get all the matches of the pattern we're looking for. preg_match_all() takes the pattern as its first argument, the string in which to search as its second, and, as its third, a variable that will store the matches.
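Here's a self-contained sketch; the sample string and the URLs in it are made up for illustration:

```php
<?php
// Made-up sample text containing the kind of links we want.
$content = 'Download: https://safelinking.net/Ab3xYz9 '
         . 'Mirror: https://safelinking.net/Qw12Rt5';

// URLs beginning with https://safelinking.net/ followed by any
// run of letters (either case) or digits. Forward slashes and
// the period are escaped with backslashes.
$pattern = '/https:\/\/safelinking\.net\/[a-zA-Z0-9]+/';

// preg_match_all(pattern, subject, matches) stores every match
// of the pattern in $matches.
preg_match_all($pattern, $content, $matches);

foreach ($matches[0] as $url) {
    echo $url . "\n";
}
```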
The code above prints each matched URL on its own line.
Web Scraping Tools
You can also use some web scraping tools to make your life easier. Here are some of the PHP libraries that you can use for scraping.
Simple HTML DOM
To make web scraping easier you can use libraries such as Simple HTML DOM. Here's an example of getting the names of the Pokémon using Simple HTML DOM:
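A sketch using the library's file_get_html() helper and the ent-name class from the earlier example (the selector is an assumption about the page's markup):

```php
<?php
// Requires the Simple HTML DOM library.
include 'simple_html_dom.php';

// file_get_html() fetches the URL and returns a parsed DOM.
$html = file_get_html('https://pokemondb.net/evolution');

// find() takes a CSS-style selector, much like jQuery.
foreach ($html->find('a.ent-name') as $link) {
    echo $link->plaintext . "\n";
}
```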
The syntax is simpler, so there's less code to write, and there are also some convenience functions and attributes you can use. One example is the plaintext attribute, which extracts all the text from a web page:
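For example (same assumed URL as before):

```php
<?php
include 'simple_html_dom.php';

$html = file_get_html('https://pokemondb.net/evolution');

// plaintext strips every tag and returns only the text content.
echo $html->plaintext;
```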
You can also use Ganon for web scraping. It packs features such as HTML5 support, a jQuery-like syntax, and manipulation of elements and their attributes.
Here's an example of how to use Ganon to get all the images inside a table element:
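A sketch based on Ganon's jQuery-like API; the URL is the same assumed page as before:

```php
<?php
// Requires the Ganon library.
include 'ganon.php';

// file_get_dom() fetches and parses the page.
$html = file_get_dom('https://pokemondb.net/evolution');

// The returned object is callable with a CSS selector, jQuery
// style: select every img that sits inside a table.
foreach ($html('table img') as $img) {
    // Attributes are exposed as properties.
    echo $img->src . "\n";
}
```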
That’s it for this tutorial! You have learned the basics of web scraping in PHP. You can take your adventures to the next level by scraping non-public parts of websites or scraping content that is dynamically generated.