How to Scrape Leads For Your Business
Robert Munceanu on Apr 24 2021
If you have got an amazing product or a revolutionary idea to share with the world, wouldn’t it be a shame for people not to know about its existence?
Hello and welcome to today’s special! Lead generation will be the main dish of this article, and we will find out why using a web scraping tool can help us gather leads efficiently.
If you stick around until the end, you will see a quick example of a web scraper in action and how it helps businesses find potential clients. Without further ado, let’s sharpen our knives and start with the appetiser!
How web scraping can help you gather leads
Creating a great pool of leads can take some time: you need quite a lot of contacts to grow your business, and manually searching every website is time-consuming. You don’t want everyone’s phone number either, only the contacts relevant to your business, so selecting for quality will take up even more of your time.
What can we do about that?
Some people build entire businesses around this: gathering leads and selling them to other companies. That sounds like a fast and easy solution, but the quality of their lead pool may not be up to your standards. Plus, there’s no doubt that this option will mean spending some money.
So how can you do it yourself in a fast and efficient way? That’s where a web scraping tool comes in to save the day. This way, you can select only the leads you are interested in, for example, separating them by their reviews.
A step-by-step guide to generating leads with Web Scraping API
Let’s imagine we are a new microbrewery in town and want people to know of our products. To achieve this goal, we need to contact local restaurants or pubs and ask if they are interested in selling our goodies within their business.
How you choose to do that is up to you. We will help you out with the scraping, not the talking!
First, we need to know where to look for the needed information, and the Internet is the best place to do it. There are directory websites with lists of businesses organized by niche, location, activity, and even size. In this example, we will be using Yell.
Up next, we will create our WebScrapingAPI account and proceed with the example.
Create your WebScrapingAPI account
This step is easy, as creating an account is pretty straightforward, and don’t worry, it’s free! After confirming your registration via email, we can continue to the next step.
Use your access key
After logging in, you will be redirected to the dashboard, where you will find information that will help you scrape the web. In the playground section, you can test out results with different parameters, and if you wish to know more about how to use the API, you can have a look at the documentation.
For now, we are interested in the access key. This key will be used as a parameter in our project in order to authenticate with the API.
Be careful not to share it with anyone, as it is your little secret. If you think the access key has been compromised, you can reset it anytime from your dashboard.
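If you prefer not to paste the key directly into your code, a minimal sketch like the one below reads it from an environment variable instead; the variable name WEBSCRAPINGAPI_KEY is just our own example, not something the API requires.
// Hypothetical setup: keep the access key in an environment variable
// (the name WEBSCRAPINGAPI_KEY is our own choice) instead of hard-coding it.
const apiKey = process.env.WEBSCRAPINGAPI_KEY
if (!apiKey) {
    throw new Error('Please set the WEBSCRAPINGAPI_KEY environment variable')
}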
Integrate WebScrapingAPI into your project
In this step, we will have to prepare our project for scraping. You can use whichever IDE and programming language suits you best, but for this example, we are going to use WebStorm as the IDE, and the code will be written in NodeJS.
1. Install the following packages:
- got: used to make HTTP requests
- jsdom: helpful when it comes to HTML parsing
- csv-writer: to store the extracted data into a csv file
To install the packages above, simply run the following command in your project’s terminal: npm install got jsdom csv-writer
2. Set parameters for your requests
Here we specify the URL of the page we want to scrape, in our case a Yell search results page, and, of course, the access key that WebScrapingAPI needs to authenticate our requests.
// The Yell search results page we want to scrape
const url = "https://www.yell.com/ucs/UcsSearchAction.do?keywords=restaurants%26location=United+Kingdom%26scrambleSeed=1024089043"
// Parameters sent to WebScrapingAPI: the access key and the target URL
const params = {
    api_key: "XXXXX", // replace with your own access key
    url: url
}
3. Make the request
const response = await got('https://api.webscrapingapi.com/v1', {searchParams: params})
The request will be made to WebScrapingAPI along with the parameters we set earlier, and we will receive a response containing the raw HTML of the scraped page. Next, we’ll have to see how to locate the information we need within the HTML.
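Network calls can fail, so it may be worth wrapping the request in some basic error handling. The snippet below is only a sketch of the same call inside a try/catch block, so a failed request logs an error instead of crashing the script.
// A minimal sketch: the same request with basic error handling.
try {
    const response = await got('https://api.webscrapingapi.com/v1', {searchParams: params})
    console.log(response.statusCode) // 200 means the page was scraped successfully
} catch (error) {
    console.error('The request failed:', error.message)
}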
4. Inspect elements
Here we browse the website we wish to scrape and use the browser’s Developer Tools to search for the elements that hold each business’s information. In our case, each listing sits in an element with the class “businessCapsule--mainRow”.
Digging deeper into the business item element, we can observe that the name of each business is located in the tag with the class “businessCapsule--name”.
Repeating this process, we find the phone number in the element with the class “business--telephoneNumber”, while the average rating and the total number of ratings are in the elements with the classes “starRating--average” and “starRating--total” respectively.
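If you want to double-check these selectors before writing any code, you can paste a short snippet like the one below into the DevTools console while viewing a Yell results page; it simply counts the matching listings and prints the first business name.
// Run in the browser console on a Yell results page to verify the selectors.
const rows = document.querySelectorAll('.businessCapsule--mainRow')
console.log('Listings found on this page:', rows.length)
if (rows.length > 0) {
    console.log('First business name:', rows[0].querySelector('.businessCapsule--name')?.textContent.trim())
}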
5. Parsing the HTML
JSDOM will help us parse the response provided by WebScrapingAPI, since the API returns the full page as raw HTML.
const {document} = new JSDOM(response.body).window
6. Filtering results
In this step, we iterate through all the elements with the class “businessCapsule--mainRow” and extract the information from the elements we identified earlier. The extracted data is then added as an object to a list of leads.
const leads = []
const relatedElements = document.querySelectorAll('.businessCapsule--mainRow')
relatedElements.forEach(el => {
    const businessName = el.querySelector('.businessCapsule--name')
    const businessRatingAverage = el.querySelector('.starRating--average')
    const businessRatingTotal = el.querySelector('.starRating--total span')
    const businessContact = el.querySelector('.business--telephoneNumber')
    leads.push({
        businessName: businessName ? businessName.innerHTML : 'No business name',
        businessRatingAverage: businessRatingAverage ? businessRatingAverage.innerHTML : 'No ratings',
        businessRatingTotal: businessRatingTotal ? businessRatingTotal.innerHTML : 'No ratings',
        businessContact: businessContact ? businessContact.innerHTML : 'No phone number'
    })
})
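A quick note: innerHTML returns the raw markup inside each element, so the values can contain nested tags and extra whitespace. If you prefer clean text, one possible variation is to use textContent together with trim(), as in the sketch below.
// Optional variation of the same push, using textContent instead of innerHTML
// so the extracted values are plain text without nested markup or stray whitespace.
leads.push({
    businessName: businessName ? businessName.textContent.trim() : 'No business name',
    businessRatingAverage: businessRatingAverage ? businessRatingAverage.textContent.trim() : 'No ratings',
    businessRatingTotal: businessRatingTotal ? businessRatingTotal.textContent.trim() : 'No ratings',
    businessContact: businessContact ? businessContact.textContent.trim() : 'No phone number'
})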
7. Store the data
Storing the data in a csv file sounds like a good solution, and this is where csv-writer helps us out. We need to specify the path and name of the csv file in the path parameter, while the header parameter takes a list of objects, each representing a column of our csv file. The title property of these objects is the title of each column, while the id property needs to match the properties of the objects in our list of leads.
Now, if we wrap the whole code in an async function and add a loop to scrape the first 5 pages of businesses, the code should look like this:
const {JSDOM} = require("jsdom");
const got = require("got");
const createObjectCsvWriter = require("csv-writer").createObjectCsvWriter;

(async () => {
    const leads = []
    const nrPages = 5

    // Scrape the first 5 pages of Yell search results
    for (let page = 1; page <= nrPages; page++) {
        const url = "https://www.yell.com/ucs/UcsSearchAction.do?keywords=restaurants%26location=United+Kingdom%26scrambleSeed=1024089043%26pageNum=" + page
        const params = {
            api_key: "XXX", // replace with your own access key
            url: url
        }

        // Ask WebScrapingAPI to fetch the page and return its raw HTML
        const response = await got('https://api.webscrapingapi.com/v1', {searchParams: params})

        // Parse the returned HTML so we can query it like a regular DOM
        const {document} = new JSDOM(response.body).window

        // Each business listing lives in an element with this class
        const relatedElements = document.querySelectorAll('.businessCapsule--mainRow')
        if (relatedElements.length > 0) {
            relatedElements.forEach(el => {
                const businessName = el.querySelector('.businessCapsule--name')
                const businessRatingAverage = el.querySelector('.starRating--average')
                const businessRatingTotal = el.querySelector('.starRating--total span')
                const businessContact = el.querySelector('.business--telephoneNumber')
                leads.push({
                    businessName: businessName ? businessName.innerHTML : 'No business name',
                    businessRatingAverage: businessRatingAverage ? businessRatingAverage.innerHTML : 'No ratings',
                    businessRatingTotal: businessRatingTotal ? businessRatingTotal.innerHTML : 'No ratings',
                    businessContact: businessContact ? businessContact.innerHTML : 'No phone number'
                })
            })
        }
    }

    // Write the collected leads to a csv file
    const csvWriter = createObjectCsvWriter({
        path: 'leads.csv',
        header: [
            {id: 'businessName', title: 'Business Name'},
            {id: 'businessRatingAverage', title: 'Business Average Rating'},
            {id: 'businessRatingTotal', title: 'Business No. of Ratings'},
            {id: 'businessContact', title: 'Business Phone Number'},
        ]
    })
    csvWriter.writeRecords(leads).then(() => console.log('Success!!'))
})();
If everything went well, the console will print Success!! and a leads.csv file containing the extracted businesses will appear in your project folder.
Good job! You have finished collecting information about potential leads.
Let WebScrapingAPI become your business’s best friend
This is a fast way to create your own pool of leads and potential business partners. Beyond lead generation, WebScrapingAPI can help you in plenty of other situations as well. You can learn more about those on our blog.
Depending on your project, WebScrapingAPI has several packages to fit your needs. If you are not yet convinced of how our product complements your business, why not try the free plan first? It offers 1000 free API calls for you to get started.