In previous posts, I have shown ways of scraping web pages using Beautiful Soup. Beautiful Soup is a brilliant HTML parser and makes parsing HTML easy. I normally use the urllib2 module to open a URL, and then use the resulting handle to create a Beautiful Soup object. The typical script I use is as below.

import urllib2

from bs4 import BeautifulSoup

url = "http://www.regulations.gov/#!docketBrowser;rpp=50;po=0;dct=PS;D=OSHA-2013-0020"'
page = urllib2.urlopen(url) 
soup = BeautifulSoup(page)

The above approach of opening the web page with urllib2 and parsing it with Beautiful Soup works like a charm if the page content is plain HTML, with nothing loaded through JavaScript. But recently I came across regulations.gov, a website where JavaScript is required. This meant that the normal way of opening a web page wouldn't work: the site checks whether JavaScript is enabled and renders the page only after its JavaScript runs on load. When I opened the page with urllib2, the response contained only the message "You need to have javascript enabled to view this page". I was clueless about how to scrape this page. I searched a lot, and there were many possibilities, among them Selenium and PhantomJS. PhantomJS impressed me because it is a headless browser, so I wouldn't need extra drivers or web browsers installed, as is the case with Selenium. So this was the plan:

  1. Use PhantomJS to open the page.
  2. Save it as a local file using the PhantomJS File System module API.
  3. Use this local file to create a BeautifulSoup object and then parse the page.

PhantomJS Script to Load the Web Page

Below is the script I used to load the web page using PhantomJS and save it as a local file.

var page = require('webpage').create();
var fs = require('fs'); // File System module
var output = './temp_htmls/test1.html'; // path for saving the local file
page.open('http://www.regulations.gov/#!docketBrowser;rpp=50;po=0;dct=PS;D=OSHA-2013-0020', function() { // open the page
  fs.write(output, page.content, 'w'); // write the rendered page (page.content) to the local file
  phantom.exit(); // exit PhantomJS
});

Here we have opened the page with PhantomJS and saved it locally. On inspecting the contents of the file, we can see that the JavaScript was run and there is no error message about the JavaScript requirement. Now we can open the local file and scrape it using the code below.

page_name="./temp_htmls/test1.html"
local_page = open(page_name,"r")
soup = bs(local_page,"lxml")
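
To tie the two steps together, here is a minimal sketch that runs the PhantomJS script from Python and then parses the saved file. It assumes phantomjs is on your PATH and that the script above was saved as save_page.js (a name I have made up for illustration):

import subprocess

from bs4 import BeautifulSoup

# Run the PhantomJS script above; it writes the rendered page
# to ./temp_htmls/test1.html (save_page.js is a hypothetical file name)
subprocess.check_call(["phantomjs", "save_page.js"])

# Parse the rendered HTML with Beautiful Soup
with open("./temp_htmls/test1.html", "r") as local_page:
    soup = BeautifulSoup(local_page, "lxml")

print(soup.title)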

If you want to learn and understand more examples and sample code, I have authored a book on Beautiful Soup 4; you can find more details here:

Getting Started with Beautiful Soup

Happy Scraping :-)

Packt have kindly provided 3 free eBook copies of my newly published book, Getting Started with Beautiful Soup.

Getting Started with Beautiful Soup

How to Enter?

All you need to do is head on over to the book page (http://www.packtpub.com/getting-started-with-beautiful-soup/book), look through the product description of the book, and drop a line via the comments below this post to let us know what interests you the most about this book. It's that simple.

The winners will be chosen from the comments below and contacted by email, so please use a valid email address when you post your comment.

The contest ends when I run out of free copies to give away. Good luck!

Update 1 (5th April 2014)

Packtpub has provided 2 more copies to give away, which means 5 copies to be won :-). The contest will end soon.

Update 2 (7th April 2014)

The first winner is Cm. Congratulations! Someone from Packt will be in contact with you shortly.

Update 3 (11th April 2014)

The second winner is Elizabeth.  Congratulations! Someone from Packt will be in contact with you shortly.

Update 4 (16th April 2014)

The third winner is Joel.  Congratulations! Someone from Packt will be in contact with you shortly.

Update 5 (16th April 2014)

The fourth winner is Graham O' Malley. Congratulations! Someone from Packt will be in contact with you shortly.

Update 6 (21st April 2014)

The Final winner is Nick Fiorentini.  Congratulations! Someone from Packt will be in contact with you shortly.

The contest is now closed. Thanks, everyone, for participating.

I was thinking about porting my blog post about scraping a website using Beautiful Soup from Python 2.7 and Beautiful Soup 3 to Python 3 and Beautiful Soup 4. Thanks to Steve for his code, which made it easy for me. In this blog post I will scrape the same website using Beautiful Soup 4.

Task: Extract all US university names and URLs from the University of Texas website in CSV (comma-separated values) format.
Dependencies: Python 3 and Beautiful Soup 4

Script with Explanation

  • Importing Beautiful Soup 4
from bs4 import BeautifulSoup

This is a major difference between Beautiful Soup 3 and 4. In version 3 it was just:

from BeautifulSoup import BeautifulSoup

But in Beautiful Soup 4 the package name is entirely different, as shown above.

Next we will import the urllib.request module for opening the URL:

import urllib.request

We now need to open the page at the above URL:

url="http://www.utexas.edu/world/univ/alpha/"
page = urllib.request.urlopen(url)
  • Creating the Soup
soup = BeautifulSoup(page.read())
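
A quick aside: newer Beautiful Soup 4 releases warn if you do not name a parser explicitly, so it is safer to pass one yourself, for example the html.parser that ships with Python:

# Naming the parser explicitly avoids the "no parser was explicitly
# specified" warning in newer Beautiful Soup 4 releases.
soup = BeautifulSoup(page.read(), "html.parser")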
  • Finding the pattern in the page

Web scraping is effective only if we can find patterns in how a website marks up its content. For example, on the University of Texas website, if you view the source of the page, you can see that all university names follow a common format, as shown in the screenshot below.

[Screenshot: view source of the page, showing each university inside an <a> tag with the CSS class institution]

From the pattern we can see that every university is within an <a> tag with the CSS class institution. So we need to find all the <a> tags whose class is institution. We can use Beautiful Soup 4's find_all() method to accomplish this.

universities = soup.find_all('a', class_='institution')

In the above line of code, we used Beautiful Soup 4's find_all() method to find all the <a> tags with the class institution. In Beautiful Soup 4 we can use the keyword argument class_ to search based on CSS classes (class is a reserved word in Python, hence the trailing underscore). We can iterate over each university using the code below:

for university in universities:
    print(university['href']+","+university.string)

On this page, each university name is stored as the string of its <a> tag, and the URL is stored in the tag's href attribute. So in the above code, university.string gives us each university name and university['href'] gives us the university URL.
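
As an aside, Beautiful Soup 4 also understands CSS selectors through the select() method, so the same lookup and loop can be written as:

# Equivalent lookup using a CSS selector: <a> tags with the class "institution"
for university in soup.select("a.institution"):
    print(university["href"] + "," + university.string)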

  • Putting it all together

The full script for scraping the University of Texas website using Beautiful Soup 4 is below.

from bs4 import BeautifulSoup
import urllib.request

url = "http://www.utexas.edu/world/univ/alpha/"
page = urllib.request.urlopen(url)                       # open the page
soup = BeautifulSoup(page.read(), "html.parser")         # create the soup, naming the parser
universities = soup.find_all('a', class_='institution')  # all <a> tags with class "institution"
for university in universities:
    print(university['href'] + "," + university.string)  # URL,name
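
One caveat with the script above: if a university name ever contains a comma, joining the fields with "," produces broken CSV. Here is a small sketch using Python's csv module, which handles quoting for us (the output file name is my own choice):

import csv
import urllib.request

from bs4 import BeautifulSoup

url = "http://www.utexas.edu/world/univ/alpha/"
page = urllib.request.urlopen(url)
soup = BeautifulSoup(page.read(), "html.parser")

# Write one row per university; csv.writer quotes fields that contain commas
with open("universities.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["url", "name"])  # header row
    for university in soup.find_all("a", class_="institution"):
        writer.writerow([university["href"], university.string])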

If you want to learn and understand more examples and sample code, I have authored a book on Beautiful Soup 4; you can find more details here:

Getting Started with Beautiful Soup

 Update 1: 

You have a chance to win free copies of Getting Started with Beautiful Soup by Packt Publishing. Please follow the instructions in this post.

Happy Scraping :-)