Web Scraping Examples



For example, I sometimes have to copy and paste a table from a web page into Google Sheets, or fetch an article title or product name from a web page into a cell. Since I spend a lot of my time in Google Sheets anyway, I wanted to figure out whether I could scrape data from websites and extract it directly into spreadsheet cells.

Web scraping (also called web data extraction or data scraping) provides a solution for those who want access to structured web data in an automated fashion. Web scraping is useful if the public website you want to get data from doesn’t have an API, or it does but provides only limited access to the data. Note that web scraping is considered malicious when data is extracted without the permission of website owners; the two most common malicious use cases are price scraping and content theft.

Introduction to web scraping

Web scraping is one of the tools at a developer’s disposal when looking to gather data from the internet. While consuming data via an API has become commonplace, most websites don’t have an API for delivering data to consumers. In order to access the data they’re looking for, web scrapers and crawlers read a website’s pages and feeds, analyzing the site’s structure and markup language for clues. Generally speaking, information collected from scraping is fed into other programs for validation, cleaning, and input into a datastore, or it’s fed into other processes such as natural language processing (NLP) toolchains or machine learning (ML) models. There are a few Python packages we could use for web scraping, but we’ll focus on Scrapy for these examples. Scrapy makes it very easy for us to quickly prototype and develop web scrapers with Python.

Scrapy vs. Selenium and Beautiful Soup

If you’re interested in getting into Python’s other packages for web scraping, we’ve laid them out in a separate comparison.

Scrapy concepts

Before we start looking at specific examples and use cases, let’s brush up a bit on Scrapy and how it works.

Spiders: Scrapy uses Spiders to define how a site (or a bunch of sites) should be scraped for information. Scrapy lets us determine how we want the spider to crawl, what information we want to extract, and how we can extract it. Specifically, Spiders are Python classes where we’ll put all of our custom logic and behavior.

Selectors: Selectors are Scrapy’s mechanisms for finding data within the website’s pages. They’re called selectors because they provide an interface for “selecting” certain parts of the HTML page, and they can be written as either CSS or XPath expressions.
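As a quick illustration, here is how the two styles compare (a sketch, assuming a response object inside a spider callback and a page with an h1 title):

```python
# CSS selector: extract the text of the first <h1> on the page
title = response.css('h1::text').get()

# The equivalent XPath selector
title = response.xpath('//h1/text()').get()
```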

Items: Items are the data extracted from selectors, mapped into a common data model. Since our goal is a structured result from unstructured inputs, Scrapy provides an Item class which we can use to define how our scraped data should be structured and what fields it should have.
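For instance, a minimal Item for image URLs (the class and field names here are illustrative) could be as simple as:

```python
import scrapy

class ImageItem(scrapy.Item):
    # One field per attribute we want in the structured result
    url = scrapy.Field()
```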

Reddit-less front page

Suppose we love the images posted to Reddit, but don’t want any of the comments or self posts. We can use Scrapy to make a Reddit Spider that will fetch all the photos from the front page and put them on our own HTML page which we can then browse instead of Reddit.

To start, we’ll create a RedditSpider which we can use to traverse the front page and handle custom behavior.
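Here’s a minimal sketch of that starting point:

```python
import scrapy

class RedditSpider(scrapy.Spider):
    name = 'reddit'
    start_urls = ['https://www.reddit.com/']
```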

Above, we’ve defined a RedditSpider, inheriting Scrapy’s Spider. We’ve named it reddit and have populated the class’ start_urls attribute with a URL to Reddit from which we’ll extract the images.

At this point, we’ll need to begin defining our parsing logic. We need to figure out an expression that the RedditSpider can use to determine whether it’s found an image. If we look at Reddit’s robots.txt file, we can see that our spider can’t crawl any comment pages without violating it, so we’ll need to grab our image URLs without following through to the comment pages.

By looking at Reddit, we can see that external links are included on the homepage directly next to the post’s title. We’ll update RedditSpider to include a parser to grab this URL. Reddit includes the external URL as a link on the page, so we should be able to just loop through the links on the page and find URLs that are for images.
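A first pass at that parser might look like the following sketch:

```python
import scrapy

class RedditSpider(scrapy.Spider):
    name = 'reddit'
    start_urls = ['https://www.reddit.com/']

    def parse(self, response):
        # Grab the href attribute of every link on the page
        links = response.xpath('//a/@href')
        for link in links:
            ...
```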

In a parse method on our RedditSpider class, I’ve started to define how we’ll be parsing our response for results. To start, we grab all of the href attributes from the page’s links using a basic XPath selector. Now that we’re enumerating the page’s links, we can start to analyze the links for images.

To actually access the text information from the link’s href attribute, we use Scrapy’s .get() function which will return the link destination as a string. Next, we check to see if the URL contains an image file extension. We use Python’s any() built-in function for this. This isn’t all-encompassing for all image file extensions, but it’s a start. From here we can push our images into a local HTML file for viewing.
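In code, that check might look like this (the extension list here is a starting set, not exhaustive):

```python
def parse(self, response):
    links = response.xpath('//a/@href')
    for link in links:
        # .get() returns the link destination as a string
        url = link.get()
        # Check the URL for common image file extensions
        if any(extension in url for extension in ['.jpg', '.jpeg', '.png', '.gif']):
            ...
```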

To start, we begin collecting the HTML file contents as a string, which will be written to a file called frontpage.html at the end of the process. You’ll notice that instead of pulling the image location from ‘//a/@href’, we’ve updated our links selector to use the image’s src attribute: ‘//img/@src’. This will give us more consistent results, and select only images.
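Here’s a sketch of the updated spider; the HTML template and image sizing are illustrative choices:

```python
import scrapy

class RedditSpider(scrapy.Spider):
    name = 'reddit'
    start_urls = ['https://www.reddit.com/']

    def parse(self, response):
        html = ''
        # Select image sources directly, rather than anchor hrefs
        links = response.xpath('//img/@src')
        for link in links:
            url = link.get()
            if any(extension in url for extension in ['.jpg', '.jpeg', '.png', '.gif']):
                # Build a preview link for each image found
                html += '<a href="{u}"><img src="{u}" width="33%"></a>'.format(u=url)
        # Open (or create) the local file and overwrite its contents
        page = open('frontpage.html', 'w')
        page.write(html)
        page.close()
```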

As our RedditSpider’s parser finds images, it builds a link with a preview image and dumps the string to our html variable. Once we’ve collected all of the images and generated the HTML, we open the local HTML file (or create it) and overwrite it with our new HTML content before closing the file again with page.close(). If we run scrapy runspider reddit.py, we can see that this file is built properly and contains images from Reddit’s front page.

But it looks like this contains all of the images from Reddit’s front page, not just user-posted content. Let’s update our parse method a bit to blacklist certain domains from our results.

If we look at frontpage.html, we can see that most of Reddit’s assets come from redditstatic.com and redditmedia.com. We’ll just filter those results out and retain everything else. With these updates, our RedditSpider class now looks like this:
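A sketch of the parse method with that filter in place:

```python
def parse(self, response):
    html = ''
    links = response.xpath('//img/@src')
    for link in links:
        url = link.get()
        if any(extension in url for extension in ['.jpg', '.jpeg', '.png', '.gif']):
            # Skip Reddit's own static assets; keep everything else
            if not any(domain in url for domain in ['redditstatic.com', 'redditmedia.com']):
                html += '<a href="{u}"><img src="{u}" width="33%"></a>'.format(u=url)
    page = open('frontpage.html', 'w')
    page.write(html)
    page.close()
```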

We’re simply adding our domain blacklist to an exclusionary any() expression. These statements could be tweaked to read from a separate configuration file, local database, or cache, if need be.


Extracting Amazon price data

If you’re running an ecommerce website, intelligence is key. With Scrapy we can easily automate the process of collecting information about our competitors, our market, or our listings.

For this task, we’ll extract pricing data from search listings on Amazon and use the results to provide some basic insights. If we visit Amazon’s search results page and inspect it, we notice that Amazon stores the price in a series of divs, most notably using a class called .a-offscreen. We can formulate a CSS selector that extracts the price from the page.
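Inside a spider callback, a minimal sketch of that extraction looks like:

```python
# Pull the text of every element carrying the price class
prices = response.css('.a-offscreen::text').getall()
```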

With this CSS selector in mind, let’s build our AmazonSpider.
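Here’s a sketch consistent with the notes below; the search URL is a hypothetical placeholder:

```python
import re
from decimal import Decimal

import scrapy


def convert_money(money_str):
    # Keep only digits and the decimal point, so no locale-specific
    # symbol like '$' ever appears in the regular expression itself
    matches = re.findall(r'[\d.]+', money_str)
    return Decimal(matches[0]) if matches else None


class AmazonSpider(scrapy.Spider):
    name = 'amazon'
    # Hypothetical search URL; substitute your own query
    start_urls = ['https://www.amazon.com/s?k=paint']

    def parse(self, response):
        # .getall() returns every matching price string as a list
        prices = response.css('.a-offscreen::text').getall()
        for price in prices:
            value = convert_money(price)
            if value is not None:
                yield {'price': value}
```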

A few things to note about our AmazonSpider class:

convert_money(): This helper simply takes strings formatted like ‘$45.67’ and casts them to a Python Decimal type, which can be used for computations. It avoids issues with locale by not including a ‘$’ anywhere in the regular expression.

getall(): The .getall() function is a Scrapy function that works similarly to the .get() function we used before, but it returns all of the extracted values as a list.

Running the command scrapy runspider amazon.py in the project folder will dump the extracted price data to the console.

It’s easy to imagine building a dashboard that allows you to store scraped values in a datastore and visualize data as you see fit.

Considerations at scale

As you build more web crawlers and continue to follow more advanced scraping workflows, you’ll likely notice a few things:

  1. Sites change, now more than ever.
  2. Getting consistent results across thousands of pages is tricky.
  3. Performance considerations can be crucial.

Sites change, now more than ever

On occasion, AliExpress, for example, will return a login page rather than search listings. Sometimes Amazon will decide to raise a Captcha, or Twitter will return an error. While these errors can sometimes simply be flickers, others will require a complete re-architecture of your web scrapers. Nowadays, modern front-end frameworks are oftentimes pre-compiled for the browser, which can mangle class names and ID strings, and sometimes a designer or developer will change an HTML class name during a redesign. It’s important that our Scrapy crawlers are resilient, but keep in mind that changes will occur over time.

Getting consistent results across thousands of pages is tricky

Slight variations in user-inputted text can really add up. Think of all of the different spellings and capitalizations you may encounter in just usernames. Pre-processing, normalizing, and standardizing text before performing an action or storing the value is best practice, especially ahead of NLP or ML processes.
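For example, even a tiny normalization helper (illustrative; not part of the spiders above) can prevent near-duplicate records:

```python
def normalize_username(raw):
    # Trim surrounding whitespace and lowercase, so 'JaneDoe ' and
    # 'janedoe' resolve to the same stored value
    return raw.strip().lower()
```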

Performance considerations can be crucial

You’ll want to make sure you’re operating at least moderately efficiently before attempting to process 10,000 websites from your laptop one night. As your dataset grows, it becomes more and more costly to manipulate in terms of memory or processing power. In a similar regard, you may want to extract the text from one news article at a time, rather than downloading all 10,000 articles at once. As we’ve seen in this tutorial, performing advanced scraping operations is actually quite easy using Scrapy’s framework. Some advanced next steps might include loading selectors from a database and scraping using very generic Spider classes, or using proxies or modified user-agents to see if the HTML changes based on location or device type. Scraping in the real world becomes complicated because of all the edge cases; Scrapy provides an easy way to build this logic in Python.

This post is a part of Kite’s new series on Python. You can check out the code from this and other posts on our GitHub repository.


Web scraping can be incredibly powerful. So much so that many businesses use web scraping technologies to fuel their operations.

After all, having access to the right data can provide powerful insights about an industry or competitor.

Today, we will review some common uses of web scraping by companies in many different sectors.

What is Web Scraping?

If you are wondering what web scraping is in the first place, let us break it down.


Web scraping refers to the extraction of web data into a format that is more useful for the user. For example, you might scrape product information from an ecommerce website into an Excel spreadsheet.

Although web scraping can be done manually, in most cases you might be better off using an automated tool. After all, automated tools are usually faster and less expensive than scraping data manually.

Want to learn more? Read our in-depth guide on web scraping.

Web Scraping Examples: Business Uses

Real Estate Listing Scraping

Many real estate agents use web scraping to populate their database of available properties for sale or for rent.

For example, a real estate agency might scrape MLS listings to build an API that directly populates this information onto their website. This way, they get to act as the agent for the property when someone finds the listing on their site.

Most listings that you will find on a real estate website are automatically generated via an API.

Industry Statistics and Insights

Many companies use web scraping to build massive databases and draw industry-specific insights from them. These companies can then sell access to those insights to businesses in the relevant industries.

For example, a company might scrape and analyze tons of data about oil prices, exports and imports in order to sell their insights to oil companies across the world.

A notable example of the value that can be unlocked from this use is that of HiQ Labs.

This company was caught scraping public data from LinkedIn, which resulted in them being banned from scraping LinkedIn data. However, the courts upheld HiQ’s argument that scraping publicly available data is not illegal.

Comparison Shopping Sites


There are several websites and applications that can help you to easily compare pricing between several retailers for the same product.

One way that these websites work is by using web scrapers to scrape product data and pricing from each retailer on a daily basis. This way, they can provide their users with the comparison data they need.

Lead Generation

One incredibly popular use of web scraping is lead generation. This use is so popular in fact, that we have written an entire guide on using web scraping for lead generation.


In short, web scraping is used by many companies to collect contact information about potential customers or clients. This is incredibly common in the business-to-business space, where potential customers will post their business information publicly online.

Website Transitions

Sometimes, companies with incredibly large websites are tasked with transitioning their site to a more modern environment. Think of large and outdated websites that hold a lot of critical information (such as most government websites).

In these cases, companies might want to use a web scraper to quickly and easily export data from their legacy website onto their new platform.

Social Media Sentiment Analysis

We don’t mean to freak you out, but if you ever tweeted during a Game of Thrones episode, your tweet might have been scraped and analyzed by HBO to understand how the show was being received on social media.

Many different social media platforms can be scraped for further sentiment analysis about specific topics. Not only is this useful for many companies, but also for individuals such as politicians. They can use this type of analysis to understand the perception of their campaigns on social media.

Closing Thoughts

As you might already realize, there are many ways in which web scraping can be used. In fact, the uses we have highlighted are just the tip of the iceberg.

If you’re wondering what’s the best web scraper for your next project, read our guide on what makes the best web scraper.

What will you use web scraping for?




