
This article was published on March 24, 2016

Marketing the TNW Way #11: Collecting 500 million Google search results



In this series of blog posts, I’ve enjoyed shedding some light on how we approach marketing at The Next Web through web analytics, Search Engine Optimization (SEO), Conversion Rate Optimization (CRO), social media and more. This piece focuses on what data we use from Google Search Console and how we process it at a bigger scale.

500 million, WHUT!?

For the past three years, we have been tracking over 30,000 keywords on a weekly basis. For every keyword we track, we save the top 100 results plus the AdWords results in a database, so we have historical data on our performance over time.
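That number sounds big, but it roughly checks out. Here’s a quick back-of-envelope calculation (assuming the ~110 results per SERP mentioned below, and weekly snapshots over three years):

```python
# Back-of-envelope check on the "500 million" figure.
keywords = 30_000        # keywords tracked every week
results_per_serp = 110   # ~100 organic results plus AdWords results
weeks = 52 * 3           # roughly three years of weekly snapshots

total_rows = keywords * results_per_serp * weeks
print(f"{total_rows:,}")  # 514,800,000 -- in the ballpark of 500 million
```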

Why are you tracking keywords weekly and not daily?

In the publishing industry, we see two kinds of content that do well in search engines. The first is obviously viral news – once a company announces a big feature or an acquisition, coverage of it will quickly rise to the top of the Google search results.

The second is evergreen content – content that isn’t focused on a news topic but is always being searched for, for example: ‘The ultimate guide to becoming a product manager.’ This is content that can rank well for months and, based on our analytics, is very stable in the search results.

Tracking the data on a daily basis would mean handling seven times as much data, which would create more scalability issues and increase the costs enormously.

What are we tracking?

We’re currently tracking approximately 12,500 unique keywords in our dataset; the total of over 30,000 comes from tracking certain keywords in multiple regions (mostly US, UK and Japan, for some out-of-the-box data).


Since The Next Web is a global publication, we want to make sure that the data we’re using is significant across all the regions we get traffic from. We wouldn’t want to see our rankings and traffic go down in the UK and not have a clue why, because we’re only tracking US keywords, for example.

The keywords are divided into three groups:

  • Baseline: we track multiple groups of several thousand keywords each to get a baseline on our rankings. If averages go up, we know we’re doing better across the board (see the sketch after this list).
  • Focus: groups that are focused around a certain topic, for example events: SXSW, Apple.
  • Types: types or categories of content that do well: companies, people.
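To make the baseline idea concrete, here’s a minimal sketch of how an average position per keyword group could be computed from one week’s rankings. The keywords and positions below are hypothetical – this is an illustration, not our actual tool:

```python
from statistics import mean

# Hypothetical weekly rankings: keyword -> best position for our domain.
rankings = {
    "sxsw 2016": 4,
    "apple event": 12,
    "product manager guide": 7,
}

# Keyword groups as described above (illustrative members only).
groups = {
    "baseline": ["sxsw 2016", "apple event", "product manager guide"],
    "focus:events": ["sxsw 2016", "apple event"],
}

for name, keywords in groups.items():
    avg = mean(rankings[kw] for kw in keywords if kw in rankings)
    print(f"{name}: average position {avg:.1f}")
```

If that baseline average drops week over week, we know the change is broad rather than tied to a single keyword.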

How do we choose what to track?

Reading the previous section on how we’ve divided the groups probably already gives some insight into our approach to tracking keywords.

The baseline group consists of keywords taken from the lists of top keywords in our Google Search Console account. It contains a mix of news-related keywords and keywords related to evergreen content.

The other groups contain more long-tail keywords, sometimes without much traffic. Since we sometimes want to ‘dominate’ a certain area with our content, we track our progress across a whole set of related keywords.

What are we doing with the data?

For every keyword, we collect around 110 results on a weekly basis, with the data for the current Search Engine Results Page (SERP). The data is saved in our databases and made available via our own tools, which handle most of the reporting/dashboarding.
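For illustration, here’s a minimal sketch of what storing those weekly snapshots could look like, using SQLite. The table layout and column names are assumptions for the example, not our actual schema:

```python
import sqlite3

conn = sqlite3.connect("serps.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS serp_results (
        keyword  TEXT NOT NULL,     -- tracked keyword
        region   TEXT NOT NULL,     -- e.g. 'us', 'uk', 'jp'
        week     TEXT NOT NULL,     -- ISO week, e.g. '2016-W12'
        position INTEGER NOT NULL,  -- 1..110 (organic plus ads)
        url      TEXT NOT NULL,     -- the ranking URL
        is_ad    INTEGER NOT NULL   -- 1 for AdWords results, 0 otherwise
    )
""")
conn.execute(
    "INSERT INTO serp_results VALUES (?, ?, ?, ?, ?, ?)",
    ("sxsw 2016", "us", "2016-W12", 1, "https://thenextweb.com/example", 0),
)
conn.commit()
```

Keeping the rows this raw is what makes the historical analysis possible later on.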

We’re using the results for two different purposes: creating baselines for our groups of keywords, which give an overall impression of how The Next Web is ranking in search engines; and, for the other two groups, making sure we can keep track of specific keywords from ‘SERP competitors’.
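The ‘SERP competitors’ part boils down to seeing which other domains keep showing up in the results for a keyword group. A minimal sketch, again with hypothetical URLs:

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical top results for one keyword group in one week.
serp_urls = [
    "https://thenextweb.com/apple/example-post",
    "https://www.theverge.com/example-post",
    "https://thenextweb.com/insider/example-post",
    "https://techcrunch.com/example-post",
]

# Count how often each domain appears across the tracked SERPs.
domains = Counter(urlparse(u).netloc for u in serp_urls)
for domain, count in domains.most_common():
    print(domain, count)
```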

Having our own tools

Why not go with one of the dozens of tools that are available to do the same job? The reasons are simple:

  • Costs: we were able to save thousands of dollars on a monthly basis at our volume.
  • Raw data: probably the biggest reason – we wanted to make sure that the tools we use can provide us with the raw data at all times, so we can connect it to other tools and sources and get new insights by combining them.

What’s next?

  • New features: lately we’ve been talking to SEO agencies, vendors and in-house folks to see how they’re working with their SERP data and how they’re using it for bigger analyses. Based on that, we’ll be working with these people to improve the tool even further.
  • More keywords: as we speak, we’re setting up even more groups of keywords to monitor their performance for the projects we’re working on and the optimization we do. 30,000 keywords is a good start, but we would like to track even more to have more significant data in certain markets.
  • More regions: an almost logical step as TNW grows – we’ll probably start tracking more keywords in regions like India, Canada and even the Netherlands (where we’re based).

What would you like to see/What would you build?

So what kind of tools are you using to get access to your SERP data? Do you have any cool ideas on what kind of data you would like to see visualized or made available? What would it be!? Let us know by tweeting to @MartijnSch or @Jaijal.

If you missed the previous posts in this series, don’t forget to check them out: #1: Heat maps, #2: Deep dive on A/B testing, #3: Learnings from our A/B tests, #4: From Marketing Manager to Recruiter, #5: Running ScreamingFrog in the Cloud, #6: What tools do we use?, #7: We track everything!, #8: Google Tag Manager, #9: A/B Testing with Google Tag Manager and #10: Google Search Console.

This is a #TNWLife article, a look into the lives of those that work at The Next Web.
