How to perform web scraping with Python!


Guide to Web Scraping with Python

In this blog post we’ll get you started with web scraping and Python. Before we begin, here are some important rules to follow and understand:

  1. Always be respectful and try to get permission to scrape; do not bombard a website with scraping requests, otherwise your IP address may be blocked!
  2. Be aware that websites change often, meaning your code could go from working to totally broken from one day to the next.
  3. Pretty much every web scraping project of interest is a unique and custom job, so try your best to generalize the skills learned here.

OK, let’s get started with the basics!

 


Basic Components of a Website

HTML

HTML stands for Hypertext Markup Language and every website on the internet uses it to display information. Even the Jupyter Notebook system uses it to display content in your browser. If you right click on a website and select “View Page Source” you can see the raw HTML of a web page. This raw HTML is what Python will be looking at to grab information from. Let’s take a look at a simple webpage’s HTML:

 
 
<!DOCTYPE html>  
<html>  
    <head>
        <title>Title on Browser Tab</title>
    </head>
    <body>
        <h1> Website Header </h1>
        <p> Some Paragraph </p>
    </body>
</html>
 
 

Let’s break down these components.

Every tag indicates a specific block type on the webpage:

1. <!DOCTYPE html> HTML documents will always start with this type declaration, letting the browser know it's an HTML file.
2. The component blocks of the HTML document are placed between <html> and </html>.
3. Meta data and script connections (like a link to a CSS file or a JS file) are often placed in the <head> block.
4. The <title> tag block defines the title of the webpage (it's what shows up in the tab of a website you're visiting).
5. Between the <body> and </body> tags are the blocks that will be visible to the site visitor.
6. Headings are defined by the <h1> through <h6> tags, where the number represents the heading level (<h1> is the largest, <h6> the smallest).
7. Paragraphs are defined by the <p> tag; this is essentially just normal text on the website.

There are many more tags than just these, such as <a> for hyperlinks, <table> for tables, <tr> for table rows, and <td> for table cells!
 
 

CSS

CSS stands for Cascading Style Sheets; this is what gives “style” to a website, including colors, fonts, and even some animations! CSS uses attributes such as id and class to connect an HTML element to a style, such as a particular color. An id must be unique within the HTML document, basically a single-use connection, while a class defines a general style that can be linked to multiple HTML tags. Basically, if you only want a single HTML tag to be red, you would give it an id; if you wanted several HTML tags/blocks to be red, you would create a class in your CSS document and then link it to each of those blocks.
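
For example, here is a minimal sketch of how an id and a class might link HTML blocks to CSS styles (the names main-title and warning are invented for illustration):

<style>
    #main-title { color: red; }  /* id selector: styles exactly one element */
    .warning { color: red; }     /* class selector: reusable across many elements */
</style>

<h1 id="main-title">Unique Header</h1>
<p class="warning">First warning paragraph</p>
<p class="warning">Second warning paragraph</p>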

 
 

Scraping Guidelines

Keep in mind you should always have permission for the website you are scraping! Check a website’s terms and conditions for more info. Also keep in mind that a computer can send requests to a website very fast, so a website may block your computer’s IP address if you send too many requests too quickly. Lastly, websites change all the time! You will most likely need to update your code often for long-term web scraping jobs.

 
 

Web Scraping with Python

There are a few libraries you will need; you can go to your command line and install them with conda install (if you are using the Anaconda distribution) or pip install for other Python distributions.

conda install requests
conda install lxml
conda install bs4

If you are not using the Anaconda installation, you can use pip install instead of conda install, for example:

pip install requests
pip install lxml
pip install bs4
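
Once installed, a quick sanity check is to import each library and print its version (a minimal sketch; any reasonably recent versions should work for this guide):

import requests
import bs4
from lxml import etree  # lxml exposes its version through lxml.etree

print(requests.__version__)
print(bs4.__version__)
print(etree.__version__)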

Now let’s see what we can do with these libraries.


 
 

Example Task: Grabbing the title of a page

Let’s start very simple: we will grab the title of a page. Remember that this is the HTML block with the <title> tag. For this task we will use www.example.com, which is a website specifically made to serve as an example domain. Let’s go through the main steps:

 
In [1]:
import requests
 
In [2]:
# Step 1: Use the requests library to grab the page
# Note, this may fail if you have a firewall blocking Python/Jupyter 
# Note sometimes you need to run this twice if it fails the first time
res = requests.get("http://www.example.com")
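
If the request fails intermittently on your network, one option is to wrap requests.get in a small retry helper (a sketch; get_with_retries is a made-up name, and the retry count and delay are arbitrary):

import time
import requests

def get_with_retries(url, attempts=3, delay=1):
    # try the request a few times before giving up
    for attempt in range(attempts):
        try:
            return requests.get(url)
        except requests.exceptions.RequestException:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)

res = get_with_retries("http://www.example.com")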
 
 

This object is a requests.models.Response object and it actually contains the information from the website, for example:

 
In [3]:
type(res)
 
Out[3]:
requests.models.Response
 
In [4]:
res.text
 
Out[4]:
'<!doctype html>\n<html>\n<head>\n    <title>Example Domain</title>\n\n    <meta charset="utf-8" />\n    <meta http-equiv="Content-type" content="text/html; charset=utf-8" />\n    <meta name="viewport" content="width=device-width, initial-scale=1" />\n    <style type="text/css">\n    body {\n        background-color: #f0f0f2;\n        margin: 0;\n        padding: 0;\n        font-family: -apple-system, system-ui, BlinkMacSystemFont, "Segoe UI", "Open Sans", "Helvetica Neue", Helvetica, Arial, sans-serif;\n        \n    }\n    div {\n        width: 600px;\n        margin: 5em auto;\n        padding: 2em;\n        background-color: #fdfdff;\n        border-radius: 0.5em;\n        box-shadow: 2px 3px 7px 2px rgba(0,0,0,0.02);\n    }\n    a:link, a:visited {\n        color: #38488f;\n        text-decoration: none;\n    }\n    @media (max-width: 700px) {\n        div {\n            margin: 0 auto;\n            width: auto;\n        }\n    }\n    </style>    \n</head>\n\n<body>\n<div>\n    <h1>Example Domain</h1>\n    <p>This domain is for use in illustrative examples in documents. You may use this\n    domain in literature without prior coordination or asking for permission.</p>\n    <p><a href="https://www.iana.org/domains/example">More information...</a></p>\n</div>\n</body>\n</html>\n'
 
 

Now we use BeautifulSoup to analyze the extracted page. Technically we could write our own custom script to look for items in the string of res.text, but the BeautifulSoup library already has lots of built-in tools and methods to grab information from a string of this nature (basically an HTML file). Using BeautifulSoup we can create a “soup” object that contains all the “ingredients” of the webpage. Don’t ask me about the weird library names, I didn’t choose them! 🙂

 
In [5]:
import bs4
 
In [6]:
soup = bs4.BeautifulSoup(res.text,"lxml")
 
In [7]:
soup
 
Out[7]:
<!DOCTYPE html>
<html>
<head>
<title>Example Domain</title>
<meta charset="utf-8"/>
<meta content="text/html; charset=utf-8" http-equiv="Content-type"/>
<meta content="width=device-width, initial-scale=1" name="viewport"/>
<style type="text/css">
    body {
        background-color: #f0f0f2;
        margin: 0;
        padding: 0;
        font-family: -apple-system, system-ui, BlinkMacSystemFont, "Segoe UI", "Open Sans", "Helvetica Neue", Helvetica, Arial, sans-serif;
        
    }
    div {
        width: 600px;
        margin: 5em auto;
        padding: 2em;
        background-color: #fdfdff;
        border-radius: 0.5em;
        box-shadow: 2px 3px 7px 2px rgba(0,0,0,0.02);
    }
    a:link, a:visited {
        color: #38488f;
        text-decoration: none;
    }
    @media (max-width: 700px) {
        div {
            margin: 0 auto;
            width: auto;
        }
    }
    </style>
</head>
<body>
<div>
<h1>Example Domain</h1>
<p>This domain is for use in illustrative examples in documents. You may use this
    domain in literature without prior coordination or asking for permission.</p>
<p><a href="https://www.iana.org/domains/example">More information...</a></p>
</div>
</body>
</html>
 
 

Now let’s use the .select() method to grab elements. We are looking for the ‘title’ tag, so we will pass in ‘title’:

 
In [8]:
soup.select('title')
 
Out[8]:
[<title>Example Domain</title>]
 
 

Notice what is returned here: it’s actually a list containing all the title elements (along with their tags). You can use indexing or even looping to grab the elements from the list. Since each item is still a specialized Tag object, we can use method calls to grab just the text.

 
In [9]:
title_tag = soup.select('title')
 
In [10]:
title_tag[0]
 
Out[10]:
<title>Example Domain</title>
 
In [11]:
type(title_tag[0])
 
Out[11]:
bs4.element.Tag
 
In [12]:
title_tag[0].getText()
 
Out[12]:
'Example Domain'
 
 

Example Task: Grabbing all elements of a class

Let’s try to grab all the section headings of the Wikipedia Article on the Enigma Machine from this URL: https://en.wikipedia.org/wiki/Enigma_machine

 
In [13]:
# First get the request
res = requests.get('https://en.wikipedia.org/wiki/Enigma_machine')
 
In [14]:
# Create a soup from request
soup = bs4.BeautifulSoup(res.text,"lxml")
 
 

Now it’s time to figure out what we are actually looking for. Inspect an element on the page to see that the table of contents entries have the class “toctext”. Because this is a class and not a straight tag, we need to adhere to the CSS selector syntax. In this case:

 
 
Syntax to pass to the .select() method | Match Results
soup.select('div')                     | All elements with the <div> tag
soup.select('#some_id')                | The HTML element containing the id attribute of some_id
soup.select('.notice')                 | All the HTML elements with the CSS class named notice
soup.select('div span')                | Any elements named <span> that are within an element named <div>
soup.select('div > span')              | Any elements named <span> that are directly within an element named <div>, with no other element in between
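
To see a few of these selector patterns in action, here is a small sketch run against a made-up HTML string (the id and class names are invented for illustration):

import bs4

html = """
<div id="content">
    <span class="notice">First notice</span>
    <p><span>Nested span</span></p>
</div>
"""
soup_demo = bs4.BeautifulSoup(html, "lxml")

print(soup_demo.select("#content"))    # the element with id="content"
print(soup_demo.select(".notice"))     # elements with class="notice"
print(soup_demo.select("div span"))    # both spans (anywhere inside the div)
print(soup_demo.select("div > span"))  # only the first span (a direct child of the div)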
 
In [15]:
# note: Wikipedia's layout changes over time (and can vary by region),
# so this class may be named differently when you run this
soup.select(".toctext")
 
Out[15]:
[<span class="toctext">History</span>,
 <span class="toctext">Breaking Enigma</span>,
 <span class="toctext">Design</span>,
 <span class="toctext">Electrical pathway</span>,
 <span class="toctext">Rotors</span>,
 <span class="toctext">Stepping</span>,
 <span class="toctext">Turnover</span>,
 <span class="toctext">Entry wheel</span>,
 <span class="toctext">Reflector</span>,
 <span class="toctext">Plugboard</span>,
 <span class="toctext">Accessories</span>,
 <span class="toctext"><i>Schreibmax</i></span>,
 <span class="toctext"><i>Fernlesegerät</i></span>,
 <span class="toctext"><i>Uhr</i></span>,
 <span class="toctext">Mathematical analysis</span>,
 <span class="toctext">Operation</span>,
 <span class="toctext">Basic operation</span>,
 <span class="toctext">Details</span>,
 <span class="toctext">Indicator</span>,
 <span class="toctext">Additional details</span>,
 <span class="toctext">Example enciphering process</span>,
 <span class="toctext">Models</span>,
 <span class="toctext">Commercial Enigma</span>,
 <span class="toctext">Enigma A (1923)</span>,
 <span class="toctext">Enigma B (1924)</span>,
 <span class="toctext">Enigma C (1926)</span>,
 <span class="toctext">Enigma D (1927)</span>,
 <span class="toctext">"Navy Cipher D"</span>,
 <span class="toctext">Enigma H (1929)</span>,
 <span class="toctext">Enigma K</span>,
 <span class="toctext">Military Enigma</span>,
 <span class="toctext">Funkschlüssel C</span>,
 <span class="toctext">Enigma G (1928–1930)</span>,
 <span class="toctext">Wehrmacht Enigma I (1930–1938)</span>,
 <span class="toctext">M3 (1934)</span>,
 <span class="toctext">Two extra rotors (1938)</span>,
 <span class="toctext">M4 (1942)</span>,
 <span class="toctext">Surviving machines</span>,
 <span class="toctext">Derivatives</span>,
 <span class="toctext">Simulators</span>,
 <span class="toctext">See also</span>,
 <span class="toctext">Notes</span>,
 <span class="toctext">References</span>,
 <span class="toctext">Bibliography</span>,
 <span class="toctext">Further reading</span>,
 <span class="toctext">External links</span>]
 
In [16]:
for item in soup.select(".toctext"):
    print(item.text)
 
 
History
Breaking Enigma
Design
Electrical pathway
Rotors
Stepping
Turnover
Entry wheel
Reflector
Plugboard
Accessories
Schreibmax
Fernlesegerät
Uhr
Mathematical analysis
Operation
Basic operation
Details
Indicator
Additional details
Example enciphering process
Models
Commercial Enigma
Enigma A (1923)
Enigma B (1924)
Enigma C (1926)
Enigma D (1927)
"Navy Cipher D"
Enigma H (1929)
Enigma K
Military Enigma
Funkschlüssel C
Enigma G (1928–1930)
Wehrmacht Enigma I (1930–1938)
M3 (1934)
Two extra rotors (1938)
M4 (1942)
Surviving machines
Derivatives
Simulators
See also
Notes
References
Bibliography
Further reading
External links
 
 

Example Task: Getting an Image from a Website

Let’s attempt to grab the image from this Wikipedia article: https://en.wikipedia.org/wiki/Extreme_ironing

 
In [25]:
res = requests.get("https://en.wikipedia.org/wiki/Extreme_ironing")
 
In [26]:
soup = bs4.BeautifulSoup(res.text,'lxml')
 
In [27]:
image_info = soup.select('.thumbimage')
 
In [28]:
image_info
 
Out[28]:
[<img alt="" class="thumbimage" data-file-height="1280" data-file-width="960" decoding="async" height="293" src="//upload.wikimedia.org/wikipedia/commons/thumb/d/dc/Extermeironingrivelin.jpg/220px-Extermeironingrivelin.jpg" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/d/dc/Extermeironingrivelin.jpg/330px-Extermeironingrivelin.jpg 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/d/dc/Extermeironingrivelin.jpg/440px-Extermeironingrivelin.jpg 2x" width="220"/>,
 <img alt="" class="thumbimage" data-file-height="371" data-file-width="494" decoding="async" height="165" src="//upload.wikimedia.org/wikipedia/commons/thumb/3/37/Highlander411_extreme_ironing.jpg/220px-Highlander411_extreme_ironing.jpg" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/3/37/Highlander411_extreme_ironing.jpg/330px-Highlander411_extreme_ironing.jpg 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/3/37/Highlander411_extreme_ironing.jpg/440px-Highlander411_extreme_ironing.jpg 2x" width="220"/>]
 
In [29]:
len(image_info)
 
Out[29]:
2
 
In [30]:
our_image = image_info[0]
 
In [31]:
type(our_image)
 
Out[31]:
bs4.element.Tag
 
 

You can make dictionary-like calls for parts of the Tag; in this case, we are interested in the src, or “source”, of the image, which should be its own .jpg or .png link:

 
In [32]:
our_image['src']
 
Out[32]:
'//upload.wikimedia.org/wikipedia/commons/thumb/d/dc/Extermeironingrivelin.jpg/220px-Extermeironingrivelin.jpg'
 
 

We can actually display it in a markdown cell with the following:

<img src='//upload.wikimedia.org/wikipedia/commons/thumb/d/dc/Extermeironingrivelin.jpg/220px-Extermeironingrivelin.jpg'>
 
 

 
 

Now that you have the actual src link, you can grab the image with requests.get() along with the .content attribute. Note how we had to add https: in front of the link; if you don’t do this, requests will complain (but it gives you a pretty descriptive error message).

 
In [38]:
image_link = requests.get('https://upload.wikimedia.org/wikipedia/commons/thumb/d/dc/Extermeironingrivelin.jpg/220px-Extermeironingrivelin.jpg')
 
In [39]:
# The raw content (it's a binary file, meaning we will need to use binary read/write methods for saving it)
# image_link.content
 
 

Let’s write this to a file; note the ‘wb’ argument, which denotes writing the file in binary mode.

 
In [40]:
f = open('my_new_file_name.jpg','wb')
 
In [41]:
f.write(image_link.content)
 
Out[41]:
11033
 
In [42]:
f.close()
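
Equivalently, you can use a with statement so the file is closed automatically even if the write fails, which is the more idiomatic pattern:

# same write as above, but the file closes itself
with open('my_new_file_name.jpg', 'wb') as f:
    f.write(image_link.content)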
 
 

Now we can display this file right here in the notebook as markdown using:

<img src="my_new_file_name.jpg">

Just write the above line in a new markdown cell and it will display the image we just downloaded!

 
 
 

Example Project – Working with Multiple Pages and Items

Let’s show a more realistic example of scraping a full site. The website http://books.toscrape.com/index.html is specifically designed for people to scrape it. Let’s try to get the title of every book that has a two-star rating, and at the end have a Python list with all their titles.

We will do the following:

  1. Figure out the URL structure to go through every page
  2. Scrape every page in the catalogue
  3. Figure out what tag/class represents the Star rating
  4. Filter by that star rating using an if statement
  5. Store the results to a list
 
 

We can see that the URL structure is the following:

http://books.toscrape.com/catalogue/page-1.html
 
In [35]:
base_url = 'http://books.toscrape.com/catalogue/page-{}.html'
 
 

We can then fill in the page number with .format()

 
In [36]:
res = requests.get(base_url.format('1'))
 
 

Now let’s grab the products (books) from the get request result:

 
In [37]:
soup = bs4.BeautifulSoup(res.text,"lxml")
 
In [ ]:
soup.select(".product_pod")
 
 

Now we can see that each book has the product_pod class. We can select any tag with this class, and then further filter it by its rating.

 
In [39]:
products = soup.select(".product_pod")
 
In [40]:
example = products[0]
 
In [41]:
type(example)
 
Out[41]:
bs4.element.Tag
 
In [42]:
example.attrs
 
Out[42]:
{'class': ['product_pod']}
 
 

Now by inspecting the site we can see that the class we want is class=“star-rating Two”. A space inside a class attribute actually separates two classes (star-rating and Two), and in a CSS selector each class is prefixed with a dot, so that means we want to search for “.star-rating.Two”.

 
In [43]:
list(example.children)
 
Out[43]:
['\n', <div class="image_container">
 <a href="a-light-in-the-attic_1000/index.html"><img alt="A Light in the Attic" class="thumbnail" src="../media/cache/2c/da/2cdad67c44b002e7ead0cc35693c0e8b.jpg"/></a>
 </div>, '\n', <p class="star-rating Three">
 <i class="icon-star"></i>
 <i class="icon-star"></i>
 <i class="icon-star"></i>
 <i class="icon-star"></i>
 <i class="icon-star"></i>
 </p>, '\n', <h3><a href="a-light-in-the-attic_1000/index.html" title="A Light in the Attic">A Light in the ...</a></h3>, '\n', <div class="product_price">
 <p class="price_color">£51.77</p>
 <p class="instock availability">
 <i class="icon-ok"></i>
     
         In stock
     
 </p>
 <form>
 <button class="btn btn-primary btn-block" data-loading-text="Adding..." type="submit">Add to basket</button>
 </form>
 </div>, '\n']
 
In [44]:
example.select('.star-rating.Three')
 
Out[44]:
[<p class="star-rating Three">
 <i class="icon-star"></i>
 <i class="icon-star"></i>
 <i class="icon-star"></i>
 <i class="icon-star"></i>
 <i class="icon-star"></i>
 </p>]
 
 

But we are looking for 2 stars, so it looks like we can just check whether anything was returned:

 
In [45]:
example.select('.star-rating.Two')
 
Out[45]:
[]
 
 

Alternatively, we could just quickly check whether the string “star-rating Two” appears in the tag’s raw text. Either approach is fine (and there are many other alternative approaches!).
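
For instance, a quick sketch of that string-based check on our example tag:

# True only if the tag's raw HTML mentions the two-star class
'star-rating Two' in str(example)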

Now let’s see how we can get the title if we have a 2-star match:

 
In [46]:
example.select('a')
 
Out[46]:
[<a href="a-light-in-the-attic_1000/index.html"><img alt="A Light in the Attic" class="thumbnail" src="../media/cache/2c/da/2cdad67c44b002e7ead0cc35693c0e8b.jpg"/></a>,
 <a href="a-light-in-the-attic_1000/index.html" title="A Light in the Attic">A Light in the ...</a>]
 
In [47]:
example.select('a')[1]
 
Out[47]:
<a href="a-light-in-the-attic_1000/index.html" title="A Light in the Attic">A Light in the ...</a>
 
In [48]:
example.select('a')[1]['title']
 
Out[48]:
'A Light in the Attic'
 
 

Okay, let’s give it a shot by combining all the ideas we’ve talked about! (This should take about 20-60 seconds to run. Be aware a firewall may prevent this script from running, and if you are getting a no-response error, try adding a pause with time.sleep(1).)

 
In [49]:
two_star_titles = []

for n in range(1,51):

    scrape_url = base_url.format(n)
    res = requests.get(scrape_url)
    
    soup = bs4.BeautifulSoup(res.text,"lxml")
    books = soup.select(".product_pod")
    
    for book in books:
        if len(book.select('.star-rating.Two')) != 0:
            two_star_titles.append(book.select('a')[1]['title'])
 
In [ ]:
two_star_titles
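
If you do hit no-response errors, a variant of the same loop simply pauses between page requests (a sketch; the one-second delay is arbitrary):

import time

two_star_titles = []

for n in range(1, 51):
    res = requests.get(base_url.format(n))
    soup = bs4.BeautifulSoup(res.text, "lxml")
    for book in soup.select(".product_pod"):
        if book.select('.star-rating.Two'):
            two_star_titles.append(book.select('a')[1]['title'])
    time.sleep(1)  # be polite: wait a second between requests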
 
 

Excellent! You should now have the tools necessary to scrape any websites that interest you! Keep in mind, the more complex the website, the harder it will be to scrape. Always ask for permission!
