Wednesday, April 17, 2019

How Do I Improve My Domain Authority (DA)?

Posted by Dr-Pete

The Short Version: Don't obsess over Domain Authority (DA) for its own sake. Domain Authority shines at comparing your overall authority (your aggregate link equity, for the most part) to other sites and determining where you can compete. Attract real links that drive traffic, and you'll improve both your Domain Authority and your rankings.

Unless you've been living under a rock, over a rock, or really anywhere rock-adjacent, you may know that Moz has recently invested a lot of time, research, and money in a new-and-improved Domain Authority. People who use Domain Authority (DA) naturally want to improve their score, and this is a question that I admit we've avoided at times, because like any metric, DA can be abused if taken out of context or viewed in isolation.

I set out to write a how-to post, but what follows can only be described as a belligerent FAQ ...

Why do you want to increase DA?

This may sound like a strange question coming from an employee of the company that created Domain Authority, but it's the most important question I can ask you. What's your end-goal? Domain Authority is designed to be an indicator of success (more on that in a moment), but it doesn't drive success. DA is not used by Google and will have no direct impact on your rankings. Increasing your DA solely to increase your DA is pointless vanity.

So, I don't want a high DA?

I understand your confusion. If I had to over-simplify Domain Authority, I would say that DA is an indicator of your aggregate link equity. Yes, all else being equal, a high DA is better than a low DA, and it's ok to strive for a higher DA, but high DA itself should not be your end-goal.

So, DA is useless, then?

No, but like any metric, you can't use it recklessly or out of context. Our Domain Authority resource page dives into more detail, but the short answer is that DA is very good at helping you understand your relative competitiveness. Smart SEO isn't about throwing resources at vanity keywords, but about understanding where you realistically have a chance at competing. Knowing that your DA is 48 is useless in a vacuum. Knowing that your DA is 48 and the sites competing on a query you're targeting have DAs from 30-45 can be extremely useful. Likewise, knowing that your would-be competitors have DAs of 80+ could save you a lot of wasted time and money.
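That kind of relative-competitiveness check is easy to make concrete. Here's a minimal sketch of the comparison the paragraph describes; the queries, DA values, and the `worth_competing` helper are all invented for illustration and are not a Moz API.

```python
# Hypothetical sketch: decide which queries are worth targeting by
# comparing our DA to the DAs of the sites already ranking for them.
# All names and numbers here are invented for illustration.

our_da = 48

# query -> DAs of currently ranking sites (made-up data)
serp_competitors = {
    "best hiking boots": [30, 38, 41, 45],
    "buy insurance online": [82, 85, 88, 91],
    "waterproof boot care": [25, 33, 36, 40],
}

def worth_competing(our_da, competitor_das, margin=5):
    """Call a query winnable if our DA is within `margin` of the
    strongest competitor's DA, or above it."""
    return our_da + margin >= max(competitor_das)

targets = [q for q, das in serp_competitors.items()
           if worth_competing(our_da, das)]
print(targets)  # the DA-80+ insurance query drops out
```

The same filter run the other way is what saves you money: any query whose competitors sit far above your DA gets deprioritized before you spend on it.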

But Google says DA isn't real!

This topic is a blog post (or eleven) in and of itself, but I'm going to reduce it to a couple points. First, Google's official statements tend to define terms very narrowly. What Google has said is that they don't use a domain-level authority metric for rankings. Ok, let's take that at face value. Do you believe that a new page on a low-authority domain (let's say DA = 25) has an equal chance of ranking as a high-authority domain (DA = 75)? Of course not, because every domain benefits from its aggregate internal link equity, which is driven by the links to individual pages. Whether you measure that aggregate effect in a single metric or not, it still exists.

Let me ask another question. How do you measure the competitiveness of a new page that has no Page Authority (or PageRank or whatever metrics Google uses)? This question is a big part of why Domain Authority exists — to help you understand your ability to compete on terms you haven't targeted and for content you haven't even written yet.


Seriously, give me some tips!

I'll assume you've read all of my warnings and taken them seriously. You want to improve your Domain Authority because it's the best authority metric you have, and authority is generally a good thing. There are no magical secrets to improving the factors that drive DA, but here are the main points:

1. Get more high-authority links

Shocking, I know, but that's the long and short of it. Links from high-authority sites and pages still carry significant ranking power, and they drive both Domain Authority and Page Authority. Even if you choose to ignore DA, you know high-authority links are a good thing to have. Getting them is the topic of thousands of posts and more than a couple of full-length novels (well, ok, books — but there's probably a novel and feature film in the works).

2. Get fewer spammy links

Our new DA score does a much better job of discounting bad links, as Google clearly tries to do. Note that "bad" doesn't mean low-authority links. It's perfectly natural to have some links from low-authority domains and pages, and in many cases it's both relevant and useful to searchers. Moz's Spam Score is pretty complex, but as humans we intuitively know when we're chasing low-quality, low-relevance links. Stop doing that.

3. Get more traffic-driving links

Our new DA score also factors in whether links come from legitimate sites with real traffic, because that's a strong signal of usefulness. Whether or not you use DA regularly, you know that attracting links that drive traffic is a good thing that indicates relevance to searches and drives bottom-line results. It's also a good reason to stop chasing every link you can at all costs. What's the point of a link that no one will see, that drives no traffic, and that is likely discounted by both our authority metrics and Google?


You can't fake real authority

Like any metric based on signals outside of our control, it's theoretically possible to manipulate Domain Authority. The question is: why? If you're using DA to sell DA 10 links for $1, DA 20 links for $2, and DA 30 links for $3, please, for the love of all that is holy, stop (and yes, I've seen that almost verbatim in multiple email pitches). If you're buying those links, please spend that money on something more useful, like sandwiches.

Do the work and build the kind of real authority that moves the needle both for Moz metrics and Google. It's harder in the short-term, but the dividends will pay off for years. Use Domain Authority to understand where you can compete today, cost-effectively, and maximize your investments. Don't let it become just another vanity metric.


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

Wednesday, March 27, 2019

MozCon 2019: The Initial Agenda

Posted by cheryldraper

We’ve got three months and some change before MozCon 2019 splashes onto the scene (can you believe it?!). Today, we’re excited to give you a sneak preview of the first batch of 19 incredible speakers to take the stage this year.

With a healthy mix of fresh faces joining us for the first time and fan favorites making a return appearance, our speaker lineup this year is bound to make waves. While a few details are still being pulled together, topics range from technical SEO, content marketing, and local search to link building, machine learning, and way more — all with an emphasis on practitioners sharing tactical advice and real-world stories of how they’ve moved the needle (and how you can, too).

Still need to snag your ticket for this sea of actionable talks? We've got you covered:

Register for MozCon

The Speakers

Take a gander at who you'll see on stage this year, along with some of the topics we've already worked out:

Sarah Bird

CEO — Moz

Welcome to MozCon 2019 + the State of the Industry

Our vivacious CEO will be kicking things off early on the first day of MozCon with a warm welcome, laying out all the pertinent details of the conference, and getting us in the right mindset for three days of learning with a dive into the State of the Industry.


Casie Gillette

Senior Director, Digital Marketing — KoMarketing

Making Memories: Creating Content People Remember

We know that only 20% of people remember what they read, but 80% remember what they saw. How do you create something people actually remember? You have to think beyond words and consider factors like images, colors, movement, location, and more. In this talk, Casie will dissect what brands are currently doing to capture attention and how everyone, regardless of budget or resources, can create the kind of content their audience will actually remember.


Ruth Burr Reedy

Director of Strategy — UpBuild

Human > Machine > Human: Understanding Human-Readable Quality Signals and Their Machine-Readable Equivalents

The push and pull of making decisions for searchers versus search engines is an ever-present SEO conundrum. How do you tackle industry changes through the lens of whether something is good for humans or for machines? Ruth will take us through human-readable quality signals and their machine-readable equivalents and how to make SEO decisions accordingly, as well as how to communicate change to clients and bosses.


Wil Reynolds

Founder & Director of Digital Strategy — Seer Interactive

Topic: TBD

A perennial favorite on the MozCon stage, we’re excited to share more details about Wil’s 2019 talk as soon as we can!


Dana DiTomaso

President & Partner — Kick Point

Improved Reporting & Analytics within Google Tools

Covering the intersections between some of our favorite free tools — Google Data Studio, Google Analytics, and Google Tag Manager — Dana will be deep-diving into how to improve your reporting and analytics, even providing downloadable Data Studio templates along the way.


Paul Shapiro

Senior Partner, Head of SEO — Catalyst, a GroupM and WPP Agency

Redefining Technical SEO

It’s time to throw the traditional definition of technical SEO out the window. Why? Because technical SEO is much, much bigger than just crawling, indexing, and rendering. Technical SEO is applicable to all areas of SEO, including content development and other creative functions. In this session, you’ll learn how to integrate technical SEO into all aspects of your SEO program.


Shannon McGuirk

Head of PR & Content — Aira Digital

How to Supercharge Link Building with a Digital PR Newsroom

Everyone who’s ever tried their hand at link building knows how much effort it demands. If only there was a way to keep a steady stream of quality links coming in the door for clients, right? In this talk, Shannon will share how to set up a "digital PR newsroom" in-house or agency-side that supports and grows your link building efforts. Get your note-taking hand ready, because she’s going to outline her process and provide a replicable tutorial for how to make it happen.


Russ Jones

Marketing Scientist — Moz

Topic: TBD

Russ is planning to wow us with a talk he’s been waiting years to give — we’re still hashing out the details and can’t wait to share what you can expect!


Dr. Pete Meyers

Marketing Scientist — Moz

How Many Words is a Question Worth?

Traditional keyword research is poorly suited to Google's quest for answers. One question might represent thousands of keyword variants, so how do we find the best questions, craft content around them, and evaluate success? Dr. Pete dives into three case studies to answer these questions.


Cindy Krum

CEO — MobileMoxie

Fraggles, Mobile-First Indexing, & the SERP of the Future

Before you ask: no, this isn’t Fraggle Rock, MozCon edition! Cindy will cover the myriad ways mobile-first indexing is changing the SERPs, including progressive web apps, entity-first indexing, and how "fraggles" are indexed in the Knowledge Graph and what it all means for the future of mobile SERPs.


Ross Simmonds

Digital Strategist — Foundation Marketing

Keywords Aren't Enough: How to Uncover Content Ideas Worth Chasing

Many marketers focus solely on keyword research when crafting their content, but it just isn't enough these days if you want to gain a competitive edge. Ross will share a framework for uncovering content ideas leveraged from forums, communities, niche sites, good old-fashioned SERP analysis, and more, tools and techniques to help along the way, and exclusive research surrounding the data that backs this up.


Britney Muller

Senior SEO Scientist — Moz

Topic: TBD

Last year, Britney rocked our socks off with her presentation on machine learning and SEO. We’re still ironing out the specifics of her 2019 talk, but suffice to say it might be smart to double-up on socks.


Mary Bowling

Co-Founder — Ignitor Digital

Brand Is King: How to Rule in the New Era of Local Search

Get ready for a healthy dose of all things local with this talk! Mary will deep-dive into how the Google Local algorithm has matured in 2019 and how marketers need to mature with it; how the major elements of the algo (relevance, prominence, and proximity) influence local rankings and how they affect each other; how local results are query dependent; how to feed business info into the Knowledge Graph; and how brand is now "king" in Local Search.


Darren Shaw

Founder — Whitespark

From Zero to Local Ranking Hero

From zero web presence to ranking hyper-locally, Darren will take us along on the 8-month-long journey of a business growing its digital footprint and analyzing what worked (and didn’t) along the way. How well will they rank from a GMB listing alone? What about when citations were added, and later indexed? Did having a keyword in the business name help or harm, and what changes when they earn a few good links? Buckle up for this wild ride as we discover exactly what impact different strategies have on local rankings.


Andy Crestodina

Co-Founder / Chief Marketing Officer — Orbit Media

What’s the Most Effective Content Strategy?

There’s so much advice out there on how to craft a content strategy that it can feel scattered and overwhelming. In his talk, Andy will cover exactly which tactics are the most effective and pull together a cohesive story on just what details make for an effective and truly great content strategy.


Luke Carthy

Digital Lead — Excel Networking

Killer CRO and UX Wins Using an SEO Crawler

CRO, UX, and an SEO crawler? You read that right! Luke will share actionable tips on how to identify revenue wins and impactful low-hanging fruit to increase conversions and improve UX with the help of a site crawler typically used for SEO, as well as a generous helping of data points from case studies and real-world examples.


Joy Hawkins

Owner — Sterling Sky Inc.

Factors that Affect the Local Algorithm that Don't Impact Organic

Google’s local algorithm is a horse of a different color when compared with the organic algo most SEOs are familiar with. Joy will share results from a Sterling Sky study on how proximity varies greatly when comparing local and organic results, how reviews impact ranking (complete with data points from testing), how spam is running wild (and how it negatively impacts real businesses), and more.


Heather Physioc

Group Director of Discoverability — VMLY&R

Mastering Branded Search

Doing branded search right is complicated. “Branded search” isn't just when people search for your client’s brand name — instead, think brand, category, people, conversation around the brand, PR narrative, brand entities/assets, and so on. Heather will bring the unique twists and perspectives that come from her enterprise and agency experience working on some of the biggest brands in the world, providing different avenues to go down when it comes to keyword research and optimization.

See you at MozCon?

We hope you’re as jazzed as we are for July 15th–17th to hurry up and get here. And again, if you haven’t grabbed your ticket yet, we’ve got your back:

Grab your MozCon ticket now!

Has speaking at MozCon been on your SEO conference bucket list? If so, stay tuned — we’ll be starting our community speaker pitch process soon, so keep an eye on the blog in the coming weeks!


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

Wednesday, February 27, 2019

14 SEO Predictions for 2019 & Beyond, as Told by Mozzers

Posted by TheMozTeam

With the new year in full swing and an already busy first quarter, our 2019 predictions for SEO in the new year are hopping onto the scene a little late — but fashionably so, we hope. From an explosion of SERP features to increased monetization to the key drivers of search this year, our SEO experts have consulted their crystal balls (read: access to mountains of data and in-depth analyses) and made their predictions. Read on for an exhaustive list of fourteen things to watch out for in search from our very own Dr. Pete, Britney Muller, Rob Bucci, Russ Jones, and Miriam Ellis!

1. Answers will drive search

People Also Ask boxes exploded in 2018, and featured snippets have expanded into both multifaceted and multi-snippet versions. Google wants to answer questions, it wants to answer them across as many devices as possible, and it will reward sites with succinct, well-structured answers. Focus on answers that naturally leave visitors wanting more and establish your brand and credibility. [Dr. Peter J. Meyers]

Further reading:

2. Voice search will continue to be utterly useless for optimization

Optimizing for voice search will still be no more than optimizing for featured snippets, and conversions from voice will remain a dark box. [Russ Jones]

Further reading:

3. Mobile is table stakes

This is barely a prediction. If your 2019 plan is to finally figure out mobile, you're already too late. Almost all Google features are designed with mobile-first in mind, and the mobile-first index has expanded rapidly in the past few months. Get your mobile house (not to be confused with your mobile home) in order as soon as you can. [Dr. Peter J. Meyers]

Further reading:

4. Further SERP feature intrusions in organic search

Expect Google to find more and more ways to replace organic with solutions that keep users on Google’s property. This includes interactive SERP features that replace, slowly but surely, many website offerings in the same way that live scores, weather, and flights have. [Russ Jones]

Further reading:

5. Video will dominate niches

Featured Videos, Video Carousels, and Suggested Clips (where Google targets specific content in a video) are taking over the how-to spaces. As Google tests search appliances with screens, including Home Hub, expect video to dominate instructional and DIY niches. [Dr. Peter J. Meyers]

Further reading:

6. SERPs will become more interactive

We’ve seen the start of interactive SERPs with People Also Ask Boxes. Depending on which question you expand, two to three new questions will generate below that directly pertain to your expanded question. This real-time engagement keeps people on the SERP longer and helps Google better understand what a user is seeking. [Britney Muller]

Further reading:

7. Local SEO: Google will continue getting up in your business — literally

Google will continue asking more and more intimate questions about your business to your customers. Does this business have gender-neutral bathrooms? Is this business accessible? What is the atmosphere like? How clean is it? What kind of lighting do they have? And so on. If Google can acquire accurate, real-world information about your business (your percentage of repeat customers via geocaching, price via transaction history, etc.) they can rely less heavily on website signals and provide more accurate results to searchers. [Britney Muller]

Further reading:

8. Business proximity-to-searcher will remain a top local ranking factor

In Moz’s recent State of Local SEO report, the majority of respondents agreed that Google’s focus on the proximity of a searcher to local businesses frequently emphasizes distance over quality in the local SERPs. I predict that we’ll continue to see this heavily weighting the results in 2019. On the one hand, hyper-localized results can be positive, as they allow a diversity of businesses to shine for a given search. On the other hand, with the exception of urgent situations, most people would prefer to see best options rather than just closest ones. [Miriam Ellis]

Further reading:

9. Local SEO: Google is going to increase monetization

Look to see more of the local and maps space monetized uniquely by Google both through AdWords and potentially new lead-gen models. This space will become more and more competitive. [Russ Jones]

Further reading:

10. Monetization tests for voice

Google and Amazon have been moving towards voice-supported displays in hopes of better monetizing voice. It will be interesting to see their efforts to get displays in homes and how they integrate the display advertising. Bold prediction: Amazon will provide sleep-mode display ads similar to how Kindle currently displays them today. [Britney Muller]

11. Marketers will place a greater focus on the SERPs

I expect we’ll see a greater focus on the analysis of SERPs as Google does more to give people answers without them having to leave the search results. We’re seeing more and more vertical search engines like Google Jobs, Google Flights, Google Hotels, Google Shopping. We’re also seeing more in-depth content make it onto the SERP than ever in the form of featured snippets, People Also Ask boxes, and more. With these new developments, marketers are increasingly going to want to report on their general brand visibility within the SERPs, not just their website ranking. It’s going to be more important than ever for people to be measuring all the elements within a SERP, not just their own ranking. [Rob Bucci]

Further reading:

12. Targeting topics will be more productive than targeting queries

2019 is going to be another year in which we see the emphasis on individual search queries start to decline, as people focus more on clusters of queries around topics. People Also Ask queries have made the importance of topics much more obvious to the SEO industry. With PAAs, Google is clearly illustrating that they think about searcher experience in terms of a searcher’s satisfaction across an entire topic, not just a specific search query. With this in mind, we can expect SEOs to more and more want to see their search queries clustered into topics so they can measure their visibility and the competitive landscape across these clusters. [Rob Bucci]

Further reading:

13. Linked unstructured citations will receive increasing focus

I recently conducted a small study in which there was a 75% correlation between organic and local pack rank. Linked unstructured citations (the mention of partial or complete business information + a link on any type of relevant website) are a means of improving organic rankings which underpin local rankings. They can also serve as a non-Google dependent means of driving traffic and leads. Anything you’re not having to pay Google for will become increasingly precious. Structured citations on key local business listing platforms will remain table stakes, but competitive local businesses will need to focus on unstructured data to move the needle. [Miriam Ellis]
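For readers curious what a correlation figure like that means mechanically, here's a small sketch of computing a Pearson correlation between organic and local pack ranks. The rank data below is invented for illustration, not the study's actual dataset.

```python
# Illustrative only: Pearson correlation between organic rank and
# local pack rank for a handful of businesses (invented data).
from statistics import mean

organic_rank = [1, 2, 3, 5, 4, 7, 6, 9]
local_pack_rank = [1, 3, 2, 4, 6, 5, 8, 7]

def pearson(xs, ys):
    """Standard Pearson r: covariance over the product of std devs."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(organic_rank, local_pack_rank)
print(round(r, 2))  # strongly positive for this toy data
```

An r near 1 means sites that rank well organically also tend to rank well in the local pack, which is exactly the relationship the study points at.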

Further reading:

14. Reviews will remain a competitive difference-maker

A Google rep recently stated that about one-third of local searches are made with the intent of reading reviews. This is huge. Local businesses that acquire and maintain a good and interactive reputation on the web will have a critical advantage over brands that ignore reviews as fundamental to customer service. Competitive local businesses will earn, monitor, respond to, and analyze the sentiment of their review corpus. [Miriam Ellis]

Further reading:

We’ve heard from Mozzers, and now we want to hear from you. What have you seen so far in 2019 that’s got your SEO Spidey senses tingling? What trends are you capitalizing on and planning for? Let us know in the comments below (and brag to friends and colleagues when your prediction comes true in the next 6–10 months). ;-)


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

Thursday, February 21, 2019

The Influence of Voice Search on Featured Snippets

Posted by TheMozTeam

This post was originally published on the STAT blog.


We all know that featured snippets provide easy-to-read, authoritative answers and that digital assistants love to say them out loud when asked questions.

This means that featured snippets have an impact on voice search — bad snippets, or no snippets at all, and digital assistants struggle. By that logic: Create a lot of awesome snippets and win the voice search race. Right?

Right, but there’s actually a far more interesting angle to examine — one that will help you nab more snippets and optimize for voice search at the same time. In order to explore this, we need to make like Doctor Who and go back in time.

From typing to talking

Back when dinosaurs roamed the earth and queries were typed into search engines via keyboards, people adapted to search engines by adjusting how they performed queries. We pulled out unnecessary words and phrases, like “the,” “of,” and, well, “and,” which created truncated requests — robotic-sounding searches for a robotic search engine.

The first ever dinosaur to use Google.

Of course, as search engines have evolved, so too has their ability to understand natural language patterns and the intent behind queries. Google’s 2013 Hummingbird update helped pave the way for such evolution. This algorithm rejigging allowed Google’s search engine to better understand the whole of a query, moving it away from keyword matching to conversation having.

This is good news if you’re a human person: We have a harder time changing the way we speak than the way we write. It’s even greater news for digital assistants, because voice search only works if search engines can interpret human speech and engage in chitchat.

Digital assistants and machine learning

By looking at how digital assistants do their voice search thing (what we say versus what they search), we can see just how far machine learning has come with natural language processing and how far it still has to go (robots, they’re just like us!). We can also get a sense of the kinds of queries we need to be tracking if voice search is on the SEO agenda.

For example, when we asked our Google Assistant, “What are the best headphones for $100,” it queried [best headphones for $100]. We followed that by asking, “What about wireless,” and it searched [best wireless headphones for $100]. And then we remembered that we’re in Canada, so we followed that with, “I meant $100 Canadian,” and it performed a search for [best wireless headphones for $100 Canadian].

We can learn two things from this successful tête-à-tête: Not only does our Google Assistant manage to construct mostly full-sentence queries out of our mostly full-sentence asks, but it’s able to accurately link together topical queries. Despite us dropping our subject altogether by the end, Google Assistant still knows what we’re talking about.

Of course, we’re not above pointing out the fumbles. In the string of: “How to bake a Bundt cake,” “What kind of pan does it take,” and then “How much do those cost,” the actual query Google Assistant searched for the last question was [how much does bundt cake cost].

Just after we finished praising our Assistant for being able to maintain the same subject all the way through our inquiry, we needed it to be able to switch tracks. And it couldn’t. It associated the “those” with our initial Bundt cake subject instead of the most recent noun mentioned (Bundt cake pans).

In another important line of questioning about Bundt cake-baking, “How long will it take” produced the query [how long does it take to take a Bundt cake], while “How long does that take” produced [how long does a Bundt cake take to bake].

They’re the same ask, but our Google Assistant had a harder time parsing which definition of “take” our first sentence was using, spitting out a rather awkward query. Unless we really did want to know how long it’s going to take us to run off with someone’s freshly baked Bundt cake? (Don’t judge us.)

Since Google is likely paying out the wazoo to up the machine learning ante, we expect there to be fewer awkward failures over time. Which is a good thing, because when we asked about Bundt cake ingredients (“Does it take butter”) we found ourselves looking at a SERP for [how do I bake a butter].

Not that that doesn’t sound delicious.

Snippets are appearing for different kinds of queries

So, what are we to make of all of this? That we’re essentially in the midst of a natural language renaissance. And that voice search is helping spearhead the charge.

As for what this means for snippets specifically? They’re going to have to show up for human speak-type queries. And wouldn’t you know it, Google is already moving forward with this strategy, and not simply creating more snippets for the same types of queries. We’ve even got proof.

Over the last two years, we’ve seen an increase in the number of words in a query that surfaces a featured snippet. Long-tail queries may be a nuisance and a half, but snippet-having queries are getting longer by the minute.

When we bucket and weight the terms found in those long-tail queries by TF-IDF, we get further proof of voice search’s sway over snippets. The term “how” appears more than any other word and is followed closely by “does,” “to,” “much,” “what,” and “is” — all words that typically compose full sentences and are easier to remove from our typed searches than our spoken ones.
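The bucket-and-weight step described above can be sketched roughly like this: treat each snippet-triggering query as a tiny document and sum each term's TF-IDF weight across all of them. The queries here are invented stand-ins, and the smoothing choice is ours, not STAT's actual methodology.

```python
# Minimal TF-IDF sketch over snippet-triggering queries.
# Queries are invented examples, not the STAT dataset.
import math
from collections import Counter

queries = [
    "how much does a passport cost",
    "how to renew a passport",
    "what is the cheese capital of the world",
    "how much does it cost to ship a car",
]

docs = [q.split() for q in queries]
n_docs = len(docs)
# document frequency: how many queries contain each term
df = Counter(term for doc in docs for term in set(doc))

totals = Counter()
for doc in docs:
    tf = Counter(doc)
    for term, count in tf.items():
        idf = math.log(n_docs / df[term]) + 1  # smoothed IDF
        totals[term] += (count / len(doc)) * idf

print(totals.most_common(5))  # question words like "how" rank high
```

Even in this toy sample, conversational function words ("how," "does," "to") accumulate weight because spoken-style queries keep them in, which is the pattern the real data showed.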

This means that if we want to snag more snippets and help searchers using digital assistants, we need to build out long-tail, natural-sounding keyword lists to track and optimize for.

Format your snippet content to match

When it’s finally time to optimize, one of the best ways to get your content into the ears of a searcher is through the right snippet formatting, which is a lesson we can learn from Google.

Taking our TF-IDF-weighted terms, we found that the words “best” and “how to” brought in the most list snippets of the bunch. We certainly don’t have to think too hard about why Google decided they benefit from list formatting — it provides a quick comparative snapshot or a handy step-by-step.
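As a toy illustration of that formatting analysis, you can tally which snippet format each keyword modifier most often earns. The (modifier, format) observations below are invented for illustration, not our actual data.

```python
# Hypothetical tally: which featured snippet format does each keyword
# modifier win most often? Observations are made up for illustration.
from collections import Counter, defaultdict

observed = [
    ("best", "list"), ("best", "list"), ("best", "table"),
    ("how to", "list"), ("how to", "paragraph"), ("how to", "list"),
    ("what is", "paragraph"), ("what is", "paragraph"),
]

formats_by_modifier = defaultdict(Counter)
for modifier, fmt in observed:
    formats_by_modifier[modifier][fmt] += 1

for modifier, counts in formats_by_modifier.items():
    top_format, n = counts.most_common(1)[0]
    total = sum(counts.values())
    print(f"{modifier}: mostly {top_format} snippets ({n} of {total})")
```

The point of a tally like this is the runner-up column: if paragraphs and tables still show up for a modifier, formatting every page as a list leaves those snippets on the table.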

From this, we may be inclined to format all of our “best” and “how to” keyword content into lists. But, as you can see in the chart above, paragraphs and tables are still appearing here, and we could be leaving snippets on the table by ignoring them. If we have time, we’ll dig into which keywords those formats are a better fit for and why.

Get tracking

You could be the Wonder Woman of meta descriptions, but if you aren’t optimizing for the right kind of snippets, then your content’s going to have a harder time getting heard. Building out a voice search-friendly keyword list to track is the first step to lassoing those snippets.

Want to learn how you can do that in STAT? Say hello and request a tailored demo.

Need more snippets in your life? We dug into Google’s double-snippet SERPs for you — double the snippets, double the fun.


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

SEO Channel Context: An Analysis of Growth Opportunities

Posted by BrankoK

Too often do you see SEO analyses and decisions being made without considering the context of the marketing channel mix. Equally as often do you see large budgets being poured into paid ads in ways that seem to forget there's a whole lot to gain from catering to popular search demand.

Both instances can lead to leaky conversion funnels and missed opportunities for long-term traffic flows. This article walks through an SEO context analysis we used to determine the importance and role of SEO.

This analysis was one of our deliverables for a marketing agency client who hired us to inform SEO decisions which we then turned into a report template for you to get inspired by and duplicate.

Case description

The included charts show real, live data. You can see the whole SEO channel context analysis in this Data Studio SEO report template.

The traffic analyzed is for a monetizing blog, whose marketing team also happens to be one of the most fun to work for. For the sake of this case study, we're giving them a spectacular undercover name — "The Broze Fellaz."

For context, this blog started off with content for the first two years before they launched their flagship product. Now, they sell a catalogue of products highly relevant to their content and, thanks to one of the most entertaining Shark Tank episodes ever aired, they have acquired investments and a highly engaged niche community.

As you’ll see below, organic search is their biggest channel in many ways. Facebook also runs both as organic and paid, and the team spends many an hour inside the platform. Email has elaborate automated flows that strive to leverage subscribers that come from the stellar content on the website. We therefore chose the three — organic search, Facebook, and email — as a combination that would yield a comprehensive analysis with insights we can easily act on.

Ingredients for the SEO analysis

This analysis is a result of a long-term retainer relationship with "The Broze Fellaz" as our ongoing analytics client. A great deal was required in order for data-driven action to happen, but we assure you, it's all doable.

From the analysis best practice drawer, we used:

  • 2 cups of relevant channels for context and analysis via comparison.
  • 3 cups of different touch points to identify channel roles — bringing in traffic, generating opt-ins, closing sales, etc.
  • 5 heads of open-minded lettuce and readiness to change current status quo, for a team that can execute.
  • 457 oz of focus on finding what is going on with organic search, why it is going on, and what we can do about it (otherwise, we’d end up with another scorecard export).
  • Imperial units used in arbitrary numbers that are hard to imagine and thus feel very large.
  • 1 to 2 heads of your analyst brain, baked into the analysis. You're not making an automated report — even a HubSpot intern can do that. You're being a human and you're analyzing. You're making human analysis. This helps avoid having your job stolen by a robot.
  • Full tray of Data Studio visualizations that appeal to the eye.
  • Sprinkles of benchmarks, for highlighting significance of performance differences.

From the measurement setup and stack toolbox, we used:

  • Google Analytics with tailored channel definitions, enhanced e-commerce and Search Console integration.
  • Event tracking for opt-ins and adjusted bounce rate via MashMetrics GTM setup framework.
  • UTM routine for social and email traffic implemented via Google Sheets & UTM.io.
  • Google Data Studio. This is my favorite visualization tool. Despite its flaws and gaps (as it’s still in beta), I’d say it is better than its paid counterparts, and it keeps getting better. For data sources, we used the native connectors for Google Analytics and Google Sheets, then Facebook community connectors by Supermetrics.
  • Keyword Hero. Thanks to semantic algorithms and data aggregation, you are indeed able to see 95 percent of your organic search queries (check out Onpage Hero, too, you'll be amazed).
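The UTM routine mentioned above boils down to appending consistent campaign parameters to every shared URL. A minimal sketch with the standard library; the parameter values are illustrative, not the client's actual taxonomy:

```python
from urllib.parse import urlencode, urlparse

def utm_url(base, source, medium, campaign):
    """Append the three standard UTM parameters to a landing-page URL."""
    sep = "&" if urlparse(base).query else "?"
    params = urlencode({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    return f"{base}{sep}{params}"

link = utm_url("https://example.com/blog/post",
               "facebook", "paid_social", "spring_launch")
```

Keeping this logic in one place (a Sheet, a script, or a tool like UTM.io) is what makes channel attribution trustworthy later on.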

Inspiration for my approach comes from Lea Pica, Avinash, the Google Data Studio newsletter, and Chris Penn, along with our dear clients and the questions they have us answer for them.

Ready? Let's dive in.

Analysis of the client's SEO in the context of their channel mix

1) Insight: Before the visit

What's going on and why is it happening?

Organic search traffic volume blows the other channels out of the water. This is normal for sites with quality regular content; yet, the difference is stark considering the active effort that goes into Facebook and email campaigns.

The CTR of organic search is up to par with Facebook's. That's saying a lot when comparing an organic channel to a channel with a high level of targeting control.

Email flows are the clear winner in terms of CTR to the website. This client has a highly engaged community of users who return fairly often and advocate passionately, plus a product and content that are incredibly relevant to their users, which few other companies appear to be good at.

A high CTR on search engine results pages often indicates that organic search may support funnel stages beyond just the top.

As well, email flows are sent to a very warm audience — interested users who went through a double opt-in. It is to be expected for this CTR to be high.

What's been done already?

There's an active effort and budget allocation being put towards Facebook Ads and email automation. A content plan has been put in place and is being executed diligently.

What we recommend next

  1. Approach SEO as systematically as you do Facebook and email flows.
  2. Optimize meta titles and descriptions via testing tools such as Sanity Check. The organic search CTR may become consistently higher than that of Facebook ads.
  3. Assuming you've worked on improving CTR for Facebook ads, have the same person work on the meta text and titles. Most likely, there'll be patterns you can replicate from social to SEO.
  4. Run a technical audit and optimize accordingly. Knowing that you haven’t done that in a long time, and seeing how much traffic you get anyway, there’ll be quick, big wins to enjoy.

Results we expect

You can easily increase the organic CTR by at least 5 percent. You could also clean up the technical state of your site in the eyes of crawlers; you’ll then see faster indexing by search engines when you publish new content, and increased impressions for existing content. As a result, you may enjoy a major spike within a month.

2) Insight: Engagement and options during the visit

With over 70 percent of traffic coming to this website from organic search, the metrics in this analysis will be heavily skewed towards organic search. So, comparing the rate for organic search to the site-wide rate is sometimes conclusive and sometimes not.

Adjusted bounce rate — via GTM events in the measurement framework used, we do not count a visit as a bounce if the visit lasts 45 seconds or longer. We prefer this approach because such an adjusted bounce rate is much more actionable for content sites. Users who find what they were searching for often read the page they land on for several minutes without clicking to another page. However, this is still a memorable visit for the user. Further, staying on the landing page for a while, or keeping the page open in a browser tab, are both good indicators for distinguishing quality, interested traffic, from all traffic.
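To make the adjustment concrete, here's a small Python sketch of the rule with made-up session data. The real implementation fires as a GTM timer event, not a script, but the arithmetic is the same:

```python
# Each session: (pages_viewed, seconds_on_landing_page). Made-up sample data.
sessions = [(1, 12), (1, 80), (2, 30), (1, 44), (3, 200), (1, 45)]

THRESHOLD = 45  # seconds, matching the rule described above

def is_adjusted_bounce(pages, seconds):
    # A bounce only if the visit was a single page AND shorter than 45 seconds.
    return pages == 1 and seconds < THRESHOLD

standard_bounce_rate = sum(p == 1 for p, _ in sessions) / len(sessions)
adjusted_bounce_rate = sum(is_adjusted_bounce(p, s) for p, s in sessions) / len(sessions)
```

Note how the long single-page reads stop counting as bounces, which is exactly why the adjusted number is more actionable for content sites.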

We included all Facebook traffic here, not just paid. We know from the client’s data that the majority is from paid content, and they have a solid UTM routine in place. But due to boosted posts, we’ve experienced big inaccuracies when splitting paid and organic Facebook for the purposes of channel attribution.

What's going on and why is it happening?

It looks like organic search has a bounce rate worse than the email flows — that's to be expected and not actionable, considering that the emails are only sent to recent visitors who have gone through a double opt-in. What is meaningful, however, is that organic has a better bounce rate than Facebook. It is safe to say that organic search visitors will be more likely to remember the website than the Facebook visitors.

Opt-in rates for Facebook are right above site average, and those for organic search are right below; yet organic still brings in the majority of email opt-ins despite its lower opt-in rate.

Google's algorithms and the draw of the content on this website are doing better at winning users' attention than the detailed targeting applied on Facebook. The organic traffic will have a higher likelihood of remembering the website and coming back. Across all of our clients, we find that organic search can be a great retargeting channel, particularly if you consider that the site will come up higher in search results for its recent visitors.

What's been done already?

The Facebook ad campaigns of "The Broze Fellaz" have been built and optimized for driving content opt-ins. The site content that ranks in organic search wasn't built with that same intention.

Opt-in placements have been tested on some of the biggest organic traffic magnets.

Thorough, creative and consistent content calendars have been in place as a foundation for all channels.

What we recommend next

  1. It's great to keep using organic search as a way to introduce new users to the site. Now, you can try to be more intentional about using it for driving opt-ins. It’s already serving both stages of the funnel.
  2. Test and optimize opt-in placements on more traffic magnets.
  3. Test and optimize opt-in copy for top 10 traffic magnets.
  4. Once your opt-in rates have improved, focus on growing the channel. Add to the content work with a 3-month sprint of an extensive SEO project.
  5. Assign Google Analytics goal values to non-e-commerce actions on your site. The current opt-ins have different roles and levels of importance and there’s also a handful of other actions people can take that lead to marketing results down the road. Analyzing goal values will help you create better flows toward pre-purchase actions.
  6. Facebook campaigns seem to be at a point where you can pour more budget into them and expect proportionate increase in opt-in count.

Results we expect

Growth in your opt-ins from Facebook should be proportionate to increase in budget, with a near-immediate effect. At the same time, it’s fairly realistic to bring the opt-in rate of organic search closer to site average.

3) Insight: Closing the deal

For channel attribution with money involved, you want to make sure that your Google Analytics channel definitions, view filters, and UTMs are in top shape.

What's going on and why is it happening?

Transaction rate, as well as per session value, is higher for organic search than it is for Facebook (paid and organic combined).

Organic search contributes to far more last-click revenue than Facebook and email combined. For its relatively low volume of traffic, email flows are outstanding in the volume of revenue they bring in.

Thanks to the integration of Keyword Hero with Google Analytics for this client, we can see that about 30 percent of organic search visits are from branded keywords, which tends to drive the transaction rate up.

So, why is this happening? Most of the product on the site is highly relevant to the information people search for on Google.

Multi-channel reports in Google Analytics also show that people often discover the site in organic search, then come back by typing in the URL or clicking a bookmark. That makes organic a source of conversions where, very often, no other channels are even needed.

We can conclude that Facebook posts and campaigns of this client are built to drive content opt-ins, not e-commerce transactions. Email flows are built specifically to close sales.

What’s been done already?

There is dedicated staff for Facebook campaigns and posts, as well as a thorough system dedicated to automated email flows.

A consistent content routine is in place, with experienced staff at the helm. A piece has been published every week for the last few years, with the content calendar filled with ready-to-publish content for the next few months. The community is highly engaged, reading times are high, comment counts are soaring, and the content is outstandingly useful. This, along with partnerships with influencers, helps "The Broze Fellaz" take up half of the first page of the SERP for several lucrative topics. They’ve been achieving this even without a comprehensive SEO project. Content seems to be king indeed.

Google Shopping has been tried. The campaign looked promising but didn't yield incremental sales. There’s much more search demand for informational queries than there is for product.

What we recommend next

  1. Organic traffic is ready to grow. If there is no budget left, resource allocation should be considered. In paid search, you can often simply increase budgets. Here, with stellar content already performing well, a comprehensive SEO project is begging for your attention. Focus can be put into structure and technical aspects, as well as content that better caters to search demand. Think: optimizing the site’s information architecture, interlinking content into a cornerstone structure, log analysis and technical cleanup, meta text testing for CTR gains that would also lead to ranking gains, strategic ranking for long-tail topics, and intentional growth of the backlink profile.
  2. Three- or six-month intensive sprint of comprehensive SEO work would be appropriate.

Results we expect

Increasing last click revenue from organic search and direct by 25 percent would lead to a gain as high as all of the current revenue from automated email flows. Considering how large the growth has been already, this gain is more than achievable in 3–6 months.

Wrapping it up

Organic search presence of "The Broze Fellaz" should continue to play the number-one role in bringing new people to the site and bringing people back to the site. Doing so supports sales that happen with the contribution of other channels, e.g. email flows. The analysis points out that organic search is also effective at playing the role of the last-click channel for transactions, oftentimes without the help of other channels.

We’ve worked with this client for a few years, and, based on our knowledge of their marketing focus, this analysis points us to a confident conclusion that a dedicated, comprehensive SEO project will lead to high incremental growth.

Your turn

In drawing analytical conclusions and acting on them, there’s always more than one way to shoe a horse. Let us know what conclusions you would’ve drawn instead. Copy the layout of our SEO Channel Context Comparison analysis template and show us what it helped you do for your SEO efforts — create a similar analysis for a paid or owned channel in your mix. Whether it’s comments below, tweeting our way, or sending a smoke signal, we’ll be all ears. And eyes.



Wednesday, February 20, 2019

Make sense of your data with these essential keyword segments

Posted by TheMozTeam

This blog post was originally published on the STAT blog.


The first step to getting the most out of your SERP data is smart keyword segmentation — it surfaces targeted insights that will help you make data-driven decisions.

But knowing what to segment can feel daunting, especially when you’re working with thousands of keywords. That’s why we’re arming you with a handful of must-have tags.

Follow along as we walk through the different kinds of segments in STAT, how to create them, and which tags you’ll want to get started with. You’ll be a fanciful segment connoisseur by the time we’re through!

Segmentation in STAT

In STAT, keyword segments are called “tags” and come as two different types: standard or dynamic.

Standard tags are best used when you want to keep specific keywords grouped together because of shared characteristics — like term (brand, product type, etc.), location, or device. Standard tags are static, so the keywords that populate those segments won’t change unless you manually add or remove them.

Dynamic tags, on the other hand, are a fancier kind of tag based on filter criteria. Just like a smart playlist, dynamic tags automatically populate with all of the keywords that meet said criteria, such as keywords with a search volume over 500 that rank on page one. This means that the keywords in a dynamic tag aren’t forever — they’ll filter in and out depending on the criteria you’ve set.
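A dynamic tag behaves like a saved filter that's re-evaluated on demand. A minimal sketch of the search-volume-over-500, page-one example above (the field names and keyword data are invented):

```python
# Each keyword carries the metrics the filter criteria need.
keywords = [
    {"term": "blinds", "search_volume": 2400, "rank": 3},
    {"term": "white blinds near me", "search_volume": 320, "rank": 1},
    {"term": "curtain ideas", "search_volume": 900, "rank": 14},
]

def dynamic_tag(kws, min_volume=500, max_rank=10):
    """Keywords with volume over the floor that rank on page one."""
    return [k["term"] for k in kws
            if k["search_volume"] > min_volume and k["rank"] <= max_rank]

page_one_winners = dynamic_tag(keywords)
```

Because the filter runs against fresh data each time, keywords drift in and out of the segment as their metrics change, just like a smart playlist.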

How to create a keyword segment

Tags are created in a few easy steps. At the Site level, pop over to the Keywords tab, click the down arrow on any table column header, and then select Filter keywords. From there, you can select the pre-populated options or enter your own metrics for a choose-your-own-filter adventure.

Once your filters are in place, simply click Tag All Filtered Keywords, enter a new tag name, and then pick the tag type best suited to your needs — standard or dynamic — and voila! You’ve created your very own segment.

Segments to get you started

Now that you know how to set up a tag, it’s time to explore some of the different segments you can implement and the filter criteria you’ll need to apply.

Rank and rank movement

Tracking your rank and ranking movements with dynamic tags will give you eyeballs on your keyword performance, making it easy to monitor and report on current and historical trends.

There’s a boatload of rank segments you can set up, but here’s just a sampling to get you started:

  • Keywords ranking in position 1–3; this will identify your top performing keywords.
  • Keywords ranking in position 11–15; this will suss out the low-hanging, top of page two fruit in need of a little nudge.
  • Keywords with a rank change of 10 or more (in either direction); this will show you keywords that are slipping off or shooting up the SERP.
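The three segments above reduce to simple rank-bucket checks. A sketch with invented keyword data:

```python
def rank_segments(kws):
    """Bucket keywords into the three rank segments described above."""
    tags = {"top_performers": [], "page_two_fruit": [], "big_movers": []}
    for kw in kws:
        if 1 <= kw["rank"] <= 3:
            tags["top_performers"].append(kw["term"])
        if 11 <= kw["rank"] <= 15:
            tags["page_two_fruit"].append(kw["term"])
        if abs(kw["rank"] - kw["prev_rank"]) >= 10:  # movement in either direction
            tags["big_movers"].append(kw["term"])
    return tags

sample = [
    {"term": "blinds", "rank": 2, "prev_rank": 4},
    {"term": "roman shades", "rank": 12, "prev_rank": 25},
    {"term": "curtains", "rank": 7, "prev_rank": 6},
]
tags = rank_segments(sample)
```

A keyword can land in more than one segment at once, which is exactly how overlapping tags work in practice.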

Appearance and ownership of SERP features

Whether they’re images, carousels, or news results, SERP features have significantly altered the search landscape. Sometimes they push you down the page and other times, like when you manage to snag one, they can give you a serious leg up on the competition and drive loads more traffic to your site.

Whatever industry-related SERP features you want to keep apprised of, you can create dynamic tags that show you their prevalence and movement within your keyword set. Segment even further for tags that show which keywords own those features and which have fallen short.

Below are a few segments you can set up for featured snippets and local packs.

Featured snippets

Everyone’s favourite SERP feature isn’t going anywhere anytime soon, so it wouldn’t be a bad idea to outfit yourself with a snippet tracking strategy. You can create as many tags as there are snippet options to choose from:

  • Keywords with a featured snippet.
  • Keywords with a paragraph, list, table, and/or carousel snippet.
  • Keywords with an owned paragraph, list, table, and/or carousel snippet.
  • Keywords with an unowned paragraph, list, table, and/or carousel snippet.

The first two will allow you to see over-arching snippet trends, while the last two will chart your ownership progress.

If you want to know the URL that’s won you a snippet, just take a peek at the URL column.

Local packs

If you’re a brick-and-mortar business, we highly advise creating tags for local packs since they provide a huge opportunity for exposure. These two tags will show you which local packs you have a presence in and which you need to work on:

  • Keywords with an owned local pack.
  • Keywords with an unowned local pack.

Want all the juicy data squeezed into a local pack, like who’s showing up and with what URL? We created the Local pack report just for that.

Landing pages, subdomains, and other important URLs

Whether you’re adding new content or implementing link-building strategies around subdomains and landing pages, dynamic tags allow you to track and measure page performance, see whether your searchers are ending up on the pages you want, and match increases in page traffic with specific keywords.

For example, are your informational intent keywords driving traffic to your product pages instead of your blog? To check, a tag that includes your blog URL will pull in each post that ranks for one of your keywords.

Try these three dynamic tags for starters:

  • Keywords ranking for a landing page URL.
  • Keywords ranking for a subdomain URL.
  • Keywords ranking for a blog URL.

Is a page not indexed yet? That’s okay. You can still create a dynamic tag for its URL and keywords will start appearing in that segment when Google finally gets to it.

Location, location, location

Google cares a lot about location and so should you, which is why keyword segments centred around location are essential. You can tag in two ways: by geo-modifier and by geo-location.

For these, it’s better to go with the standard tag as the search term and location are fixed to the keyword.

Geo-modifier

A geo-modifier is the geographical qualifier that searchers manually include in their query — like in [sushi near me]. We advocate for adding various geo-modifiers to your keywords and then incorporating them into your tagging strategy. For instance, you can segment by:

  • Keywords with “in [city]” in them.
  • Keywords with “near me” in them.

The former will show you how you fare for city-wide searches, while the latter will let you see if you’re meeting the needs of searchers looking for nearby options.
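Both geo-modifier segments can be approximated with simple pattern matches; the city list and keywords below are invented:

```python
import re

def geo_modifier_tags(keywords, cities):
    """Split keywords into the two geo-modifier segments described above."""
    in_city = [k for k in keywords
               if any(re.search(rf"\bin {re.escape(c)}\b", k) for c in cities)]
    near_me = [k for k in keywords if re.search(r"\bnear me\b", k)]
    return in_city, near_me

kws = ["sushi near me", "sushi in vancouver", "best sushi rolls"]
in_city, near_me = geo_modifier_tags(kws, cities=["vancouver", "toronto"])
```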

Geo-location

Geo-location is where the keyword is being tracked. More tracked locations mean more searchers’ SERPs to sample. And the closer you can get to searchers standing on a street corner, the more accurate those SERPs will be. This is why we strongly recommend you track in multiple pin-point locations in every market you serve.

Once you’ve got your tracking strategy in place, get your segmentation on. You can filter and tag by:

  • Keywords tracked in specific locations; this will let you keep tabs on geographical trends.
  • Keywords tracked in each market; this will allow for market-level research.

Search volume & cost-per-click

Search volume might be a contentious metric thanks to Google’s close variants, but having a decent idea of what it’s up to is better than a complete shot in the dark. We suggest at least two dynamic segments around search volume:

  • Keywords with high search volume; this will show which queries are popular in your industry and have the potential to drive the most traffic.
  • Keywords with low search volume; this can actually help reveal conversion opportunities — remember, long-tail keywords typically have lower search volumes but higher conversion rates.

Tracking the cost-per-click of your keywords will also bring you and your PPC team tonnes of valuable insights — you’ll know if you’re holding the top organic spot for an outrageously high CPC keyword.

As with search volume, tags for high and low CPC should do you just fine. High CPC keywords will show you where the competition is the fiercest, while low CPC keywords will surface your easiest point of entry into the paid game — queries you can optimize for with less of a fight.
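These volume and CPC segments amount to threshold filters. A sketch with made-up metrics; the cut-offs are arbitrary for illustration, since neither STAT nor Google defines a universal "high" or "low":

```python
# Invented keyword metrics.
kws = [
    {"term": "blinds", "volume": 9900, "cpc": 4.20},
    {"term": "blackout curtain liner", "volume": 210, "cpc": 0.35},
    {"term": "custom shutters", "volume": 1300, "cpc": 6.80},
]

HIGH_VOLUME, HIGH_CPC = 1000, 3.00  # assumed thresholds

high_volume = [k["term"] for k in kws if k["volume"] >= HIGH_VOLUME]
long_tail   = [k["term"] for k in kws if k["volume"] < HIGH_VOLUME]
fierce_cpc  = [k["term"] for k in kws if k["cpc"] >= HIGH_CPC]
cheap_entry = [k["term"] for k in kws if k["cpc"] < HIGH_CPC]
```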

Device type

From screen size to indexing, desktops and smartphones produce substantially different SERPs from one another, making it essential to track them separately. So, filter and tag for:

  • Keywords tracked on a desktop.
  • Keywords tracked on a smartphone.

Similar to your location segments, it’s best to use the standard tag here.

Go crazy with multiple filters

We’ve shown you some really high-level segments, but you can actually filter down your keywords even further. In other words, you can get extra fancy and add multiple filters to a single tag. Go as far as high search volume, branded keywords triggering paragraph featured snippets that you own for smartphone searchers in the downtown core. Phew!
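Stacking filters just means AND-ing the criteria together. A sketch of that extra-fancy example, with an invented brand term and field names:

```python
def fancy_tag(kw):
    """One dynamic tag stacking several filters, per the example above.
    The brand term and threshold are invented for illustration."""
    return (kw["volume"] >= 1000
            and "acme" in kw["term"]              # branded keyword
            and kw["snippet"] == "paragraph"      # triggers a paragraph snippet
            and kw["snippet_owned"]               # ...that we own
            and kw["device"] == "smartphone")     # smartphone SERP

kws = [
    {"term": "acme blinds", "volume": 5400, "snippet": "paragraph",
     "snippet_owned": True, "device": "smartphone"},
    {"term": "acme blinds", "volume": 5400, "snippet": "paragraph",
     "snippet_owned": False, "device": "smartphone"},
]
matches = [k for k in kws if fancy_tag(k)]
```

Every extra condition narrows the segment, so the more filters you stack, the more targeted (and smaller) the resulting tag.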

Want to make talk shop about segmentation or see dynamic tags in action? Say hello (don’t be shy) and request a demo.



Monday, February 18, 2019

Build a Search Intent Dashboard to Unlock Better Opportunities

Posted by scott.taft

We've been talking a lot about search intent this week, and if you've been following along, you’re likely already aware of how “search intent” is essential for a robust SEO strategy. If, however, you’ve ever laboured for hours classifying keywords by topic and search intent, only to end up with a ton of data you don’t really know what to do with, then this post is for you.

I’m going to share how to take all that sweet keyword data you’ve categorized, put it into a Power BI dashboard, and start slicing and dicing to uncover a ton of insights — faster than you ever could before.

Building your keyword list

Every great search analysis starts with keyword research and this one is no different. I’m not going to go into excruciating detail about how to build your keyword list. However, I will mention a few of my favorite tools that I’m sure most of you are using already:

  • Search Query Report — What better place to look first than the search terms already driving clicks and (hopefully) conversions to your site.
  • Answer The Public — Great for pulling a ton of suggested terms, questions and phrases related to a single search term.
  • InfiniteSuggest — Like Answer The Public, but faster and allows you to build based on a continuous list of seed keywords.
  • MergeWords — Quickly expand your keywords by adding modifiers upon modifiers.
  • Grep Words — A suite of keyword tools for expanding, pulling search volume and more.
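The modifiers-upon-modifiers expansion that a tool like MergeWords performs is essentially a cross join, which you can approximate in a few lines:

```python
from itertools import product

def merge_words(*word_lists):
    """Cross-join word lists into keyword phrases, MergeWords-style."""
    return [" ".join(combo) for combo in product(*word_lists)]

expanded = merge_words(["best", "cheap"], ["wooden", "vinyl"], ["blinds"])
```

Three short lists multiply out quickly, which is why the cleaning step mentioned below matters.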

Please note that these tools are a great way to scale your keyword collecting but each will come with the need to comb through and clean your data to ensure all keywords are at least somewhat relevant to your business and audience.

Once I have an initial keyword list built, I’ll upload it to STAT and let it run for a couple days to get an initial data pull. This allows me to pull the ‘People Also Ask’ and ‘Related Searches’ reports in STAT to further build out my keyword list. All in all, I’m aiming to get to at least 5,000 keywords, but the more the merrier.

For the purposes of this blog post I have about 19,000 keywords I collected for a client in the window treatments space.

Categorizing your keywords by topic

Bucketing keywords into categories is an age-old challenge for most digital marketers but it’s a critical step in understanding the distribution of your data. One of the best ways to segment your keywords is by shared words. If you’re short on AI and machine learning capabilities, look no further than a trusty Ngram analyzer. I love to use this Ngram Tool from guidetodatamining.com — it ain’t much to look at, but it’s fast and trustworthy.

After dropping my 19,000 keywords into the tool and analyzing by unigram (or 1-word phrases), I manually select categories that fit with my client’s business and audience. I also make sure the unigram accounts for a decent amount of keywords (e.g. I wouldn’t pick a unigram that has a count of only 2 keywords).
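A unigram count like the one described needs nothing more than the standard library; the keyword sample, stopword list, and threshold here are invented stand-ins for the 19,000-keyword dataset:

```python
from collections import Counter

# A tiny stand-in for the full keyword list.
keywords = [
    "blackout curtains for bedroom",
    "curtain rod installation",
    "wooden blinds for kitchen",
    "cordless blinds safety",
]

stopwords = {"for", "a", "the", "to"}
unigrams = Counter(w for kw in keywords for w in kw.split() if w not in stopwords)

# Keep only unigrams frequent enough to anchor a category (threshold of 2 here).
candidates = [(w, n) for w, n in unigrams.most_common() if n >= 2]
```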

Using this data, I then create a Category Mapping table and map a unigram, or “trigger word”, to a Category like the following:

You’ll notice that for “curtain” and “drapes” I mapped both to the Curtains category. For my client’s business, they treat these as the same product, and doing this allows me to account for variations in keywords but ultimately group them how I want for this analysis.

Using this method, I create a Trigger Word-Category mapping based on my entire dataset. It’s possible that not every keyword will fall into a category and that’s okay — it likely means that keyword is not relevant or significant enough to be accounted for.
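The trigger word-to-category mapping can be sketched as a simple lookup; the triggers, categories, and keywords below are illustrative:

```python
# Note "curtain" and "drapes" both map to Curtains, as in the example above.
category_map = {"curtain": "Curtains", "drapes": "Curtains", "blinds": "Blinds"}

def categorize(keyword):
    """Return the first category whose trigger word appears in the keyword,
    or None when no trigger matches (which, as noted above, is okay)."""
    for trigger, category in category_map.items():
        if trigger in keyword.split():
            return category
    return None

labels = {kw: categorize(kw) for kw in
          ["blackout drapes", "wooden blinds", "shark tank episode"]}
```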

Creating a keyword intent map

Similar to identifying common topics by which to group your keywords, I’m going to follow a similar process but with the goal of grouping keywords by intent modifier.

Search intent is the end goal of a person using a search engine. Digital marketers can leverage these terms and modifiers to infer what types of results or actions a consumer is aiming for.

For example, if a person searches for “white blinds near me”, it is safe to infer that this person is looking to buy white blinds, as they are looking for a physical location that sells them. In this case I would classify “near me” as a “Transactional” modifier. If, however, the person searched “living room blinds ideas”, I would infer their intent is to see images or read blog posts on the topic of living room blinds. I might classify this search term as being at the “Inspirational” stage, where a person is still deciding what products they might be interested in and, therefore, isn’t quite ready to buy yet.

There is a lot of research on some generally accepted intent modifiers in search, and I don’t intend to reinvent the wheel. This handy guide (originally published in STAT) provides a good review of intent modifiers you can start with.

I followed the same process as building out categories to build out my intent mapping and the result is a table of intent triggers and their corresponding Intent stage.
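The intent mapping works the same way as the category mapping: a table of modifiers and the stage each one implies. A sketch with a few commonly cited modifiers; the list and the default stage are assumptions, not a definitive taxonomy:

```python
# Illustrative modifier-to-stage table.
intent_map = {
    "near me": "Transactional",
    "buy": "Transactional",
    "ideas": "Inspirational",
    "how to": "Informational",
}

def intent_stage(keyword, default="Informational"):
    """First matching modifier wins; substring matching is crude but fast."""
    for modifier, stage in intent_map.items():
        if modifier in keyword:
            return stage
    return default

stages = {kw: intent_stage(kw) for kw in
          ["white blinds near me", "living room blinds ideas", "blinds"]}
```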

Intro to Power BI

There are tons of resources on how to get started with the free tool Power BI, one of which is our own founder Wil Reynolds’ video series on using Power BI for Digital Marketing. This is a great place to start if you’re new to the tool and its capabilities.

Note: it’s not about the tool necessarily (although Power BI is a super powerful one). It’s more about being able to look at all of this data in one place and pull insights from it at speeds which Excel just won’t give you. If you’re still skeptical of trying a new tool like Power BI at the end of this post, I urge you to get the free download from Microsoft and give it a try.

Setting up your data in Power BI

Power BI’s power comes from linking multiple datasets together based on common “keys." Think back to your Microsoft Access days and this should all start to sound familiar.

Step 1: Upload your data sources

First, open Power BI and you’ll see a button called “Get Data” in the top ribbon. Click that and then select the data format you want to upload. All of my data for this analysis is in CSV format so I will select the Text/CSV option for all of my data sources. You have to follow these steps for each data source. Click “Load” for each data source.

Step 2: Clean your data

In the Power BI ribbon menu, click the button called “Edit Queries." This will open the Query Editor where we will make all of our data transformations.

The main things you’ll want to do in the Query Editor are the following:

  • Make sure all data formats make sense (e.g. keywords are formatted as text, numbers are formatted as decimals or whole numbers).
  • Rename columns as needed.
  • Create a domain column in your Top 20 report based on the URL column.

Close and apply your changes by hitting the “Close & Apply” button in the Query Editor ribbon.

Step 3: Create relationships between data sources

On the left side of Power BI is a vertical bar with icons for different views. Click the third one to see your relationships view.

In this view, we are going to connect all data sources to our ‘Keywords Bridge’ table by clicking and dragging a line from the field ‘Keyword’ in each table and to ‘Keyword’ in the ‘Keywords Bridge’ table (note that for the PPC Data, I have connected ‘Search Term’ as this is the PPC equivalent of a keyword, as we’re using here).

The last thing we need to do for our relationships is double-click on each line to ensure the following options are selected for each so that our dashboard works properly:

  • The cardinality is Many to 1
  • The relationship is “active”
  • The cross filter direction is set to “both”
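Conceptually, a many-to-1 relationship is a validated join: many rows in a fact table (like the PPC data) map back to one row per keyword in the bridge table. A hedged pandas sketch of the same idea, with made-up table and field names standing in for the Power BI tables:

```python
import pandas as pd

# Hypothetical stand-ins for the 'Keywords Bridge' and PPC tables
keywords_bridge = pd.DataFrame({"Keyword": ["blackout blinds", "mini blinds"]})
ppc_data = pd.DataFrame({
    "Search Term": ["blackout blinds", "blackout blinds", "mini blinds"],
    "Conversions": [3, 1, 0],
})

# Many-to-1 join: many PPC rows map to one bridge keyword.
# validate="m:1" raises an error if the bridge table has duplicate keys,
# which is the same guarantee the Many-to-1 cardinality gives you in Power BI.
joined = ppc_data.merge(
    keywords_bridge,
    left_on="Search Term",
    right_on="Keyword",
    how="left",
    validate="m:1",
)
```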

We are now ready to start building our Intent Dashboard and analyzing our data.

Building the search intent dashboard

In this section I’ll walk you through each visual in the Search Intent Dashboard:

Top domains by count of keywords

Visual type: Stacked Bar Chart visual

Axis: I’ve nested URL under Domain so I can drill down to see this same breakdown by URL for a specific Domain

Value: Distinct count of keywords

Legend: Result Types

Filter: Top 10 filter on Domains by count of distinct keywords

Keyword breakdown by result type

Visual type: Donut chart

Legend: Result Types

Value: Count of distinct keywords, shown as Percent of grand total

Metric Cards

Sum of Distinct MSV

Because the Top 20 report shows each keyword 20 times, we need to create a calculated measure in Power BI to only sum MSV for the unique list of keywords. Use this formula for that calculated measure:

Sum Distinct MSV = SUMX(DISTINCT('Table'[Keywords]), FIRSTNONBLANK('Table'[MSV], 0))
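If you want to sanity-check what that DAX measure is doing, here is an equivalent calculation in pandas. The column names mirror the formula and the sample values are invented; the point is that a naive sum would count each keyword’s MSV once per ranking row:

```python
import pandas as pd

# Hypothetical Top 20 rows: each keyword repeats once per ranking position,
# so summing the MSV column directly would overcount it many times over.
top20 = pd.DataFrame({
    "Keywords": ["blinds", "blinds", "shades", "shades"],
    "MSV": [1000, 1000, 500, 500],
})

# Equivalent of SUMX(DISTINCT(...), FIRSTNONBLANK(...)): one MSV per unique keyword
sum_distinct_msv = top20.drop_duplicates(subset="Keywords")["MSV"].sum()
```

Here `sum_distinct_msv` comes out to 1,500 rather than the naive 3,000.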

Keywords

This is just a distinct count of keywords.

Slicer: PPC Conversions

Visual type: Slicer

Drop your PPC Conversions field into a slicer and set the format to “Between” to get this nifty slider visual.

Tables

Visual type: Table or Matrix (a matrix allows for drilling down similar to a pivot table in Excel)

Values: Here I have Category or Intent Stage and then the distinct count of keywords.

Pulling insights from your search intent dashboard

This dashboard is now a Swiss Army knife of data that allows you to slice and dice to your heart’s content. Below are a couple examples of how I use this dashboard to pull out opportunities and insights for my clients.

Where are competitors winning?

With this data we can quickly see who the top competing domains are, but what’s more valuable is seeing who the competitors are for a particular intent stage and category.

I start by filtering to the “Informational” stage, since it represents the most keywords in our dataset. I also filter to the top category for this intent stage, which is “Blinds.” Looking at my Keyword Count card, I can now see that I’m looking at a subset of 641 keywords.

Note: To filter multiple visuals in Power BI, you need to press and hold the “Ctrl” button each time you click a new visual to maintain all the filters you clicked previously.

The top competing subdomain here is videos.blinds.com with visibility in the top 20 for over 250 keywords, most of which are for video results. I hit ctrl+click on the Video results portion of videos.blinds.com to update the keywords table to only keywords where videos.blinds.com is ranking in the top 20 with a video result.

From all this I can now say that videos.blinds.com is ranking in the top 20 positions for about 30 percent of keywords that fall into the “Blinds” category and the “Informational” intent stage. I can also see that most of the keywords here start with “how to,” which tells me that people searching for blinds in the informational stage are most likely looking for how-to instructions, and that video may be a desired content format.

Where should I focus my time?

Whether you’re in-house or at an agency, time is always a hot commodity. You can use this dashboard to quickly identify the opportunities you should prioritize first — the ones most likely to deliver bottom-line results.

To find these bottom-line results, we’re going to filter our data using the PPC conversions slicer so that our data only includes keywords that have converted at least once in our PPC campaigns.

Once I do that, I can see I’m working with a pretty limited set of keywords that have been bucketed into intent stages, but I can continue by drilling into the “Transactional” intent stage because I want to target queries that are linked to a possible purchase.

Note: Not every keyword will fall into an intent stage if it doesn’t meet the criteria we set. These keywords will still appear in the data, but this is the reason why your total keyword count might not always match the total keyword count in the intent stages or category tables.

From there I want to focus on those “Transactional” keywords that are triggering answer boxes to make sure I have good visibility, since they are converting for me on PPC. To do that, I filter to only show keywords triggering answer boxes. Based on these filters I can look at my keyword table and see most (if not all) of the keywords are “installation” keywords and I don’t see my client’s domain in the top list of competitors. This is now an area of focus for me to start driving organic conversions.

Wrap up

I’ve only just scratched the surface — there’s tons that can be done with this data inside a tool like Power BI. Having a solid dataset of keywords and visuals that I can revisit repeatedly for a client and continuously pull opportunities from to help fuel our strategy is, for me, invaluable. I can work efficiently without having to go back to keyword tools whenever I need an idea. Hopefully you find this makes building an intent-based strategy more efficient and sound for your business or clients.


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

Detecting Link Manipulation and Spam with Domain Authority

Posted by rjonesx.

Over 7 years ago, while still an employee at Virante, Inc. (now Hive Digital), I wrote a post on Moz outlining some simple methods for detecting backlink manipulation by comparing one's backlink profile to an ideal model based on Wikipedia. At the time, I was limited in the research I could perform because I was a consumer of the API, lacking access to deeper metrics, measurements, and methodologies to identify anomalies in backlink profiles. We used these techniques in spotting backlink manipulation with tools like Remove'em and Penguin Risk, but they were always handicapped by the limitations of consumer-facing APIs. Moreover, they didn't scale. It is one thing to collect all the backlinks for a site, even a large site, and judge every individual link for source type, quality, anchor text, etc. Reports like these can be accessed from dozens of vendors if you are willing to wait a few hours for the report to complete. But how do you do this for 30 trillion links every single day?

Since the launch of Link Explorer and my residency here at Moz, I have had the luxury of far less filtered data, giving me a far deeper, clearer picture of the tools available to backlink index maintainers to identify and counter manipulation. While I in no way intend to say that all manipulation can be detected, I want to outline just some of the myriad surprising methodologies to detect spam.

The general methodology

You don't need to be a data scientist or a math nerd to understand this simple practice for identifying link spam. While there certainly is a great deal of math used in the execution of measuring, testing, and building practical models, the general gist is plainly understandable.

The first step is to get a good random sample of links from the web, which you can read about here. But let's assume you have already finished that step. Then, for any property of those random links (DA, anchor text, etc.), you figure out what is normal or expected. Finally, you look for outliers and see if those correspond with something important - like sites that are manipulating the link graph, or sites that are exceptionally good. Let's start with an easy example, link decay.

Link decay and link spam

Link decay is the natural occurrence of links either dropping off the web or changing URLs. For example, if you get links after you send out a press release, you would expect some of those links to eventually disappear as the pages are archived or removed for being old. And, if you were to get a link from a blog post, you might expect to have a homepage link on the blog until that post is pushed to the second or third page by new posts.

But what if you bought your links? What if you own a large number of domains and all the sites link to each other? What if you use a PBN? These links tend not to decay. Exercising control over your inbound links often means that you keep them from ever decaying. Thus, we can create a simple hypothesis:

Hypothesis: The link decay rate of sites manipulating the link graph will differ from sites with natural link profiles.

The methodology for testing this hypothesis is just as we discussed before. We first figure out what is natural. What does a random site's link decay rate look like? Well, we simply get a bunch of sites and record how fast links are deleted (we visit a page and see a link is gone) vs. their total number of links. We then can look for anomalies.

In this case of anomaly hunting, I'm going to make it really easy. No statistics, no math, just a quick look at what pops up when we first sort by Lowest Decay Rate and then sort by Highest Domain Authority to see who is at the tail-end of the spectrum.
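As a rough sketch of that sort-and-eyeball approach (every domain and number below is invented for illustration — this is not Moz's actual pipeline), the anomaly hunt amounts to computing a decay rate per domain and sorting the tails to the top:

```python
import pandas as pd

# Hypothetical per-domain link stats; decay rate = deleted links / total links
sites = pd.DataFrame({
    "domain": ["network-a.example", "normal-blog.example", "spammed.example"],
    "domain_authority": [52, 41, 38],
    "deleted_links": [0, 120, 4800],
    "total_links": [9000, 1000, 5000],
})
sites["decay_rate"] = sites["deleted_links"] / sites["total_links"]

# Eyeball the tail of the distribution: lowest decay first,
# ties broken by highest Domain Authority
suspects = sites.sort_values(
    ["decay_rate", "domain_authority"], ascending=[True, False]
)
```

A high-DA domain with a decay rate of exactly zero floats straight to the top of this sort — which is precisely the signature described below.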

[Image: spreadsheet of sites with high deleted link ratios]

Success! Every example we see of a good DA score but zero link decay appears to be powered by a link network of some sort. This is the “Aha!” moment of data science that is so fun. What is particularly interesting is that we find spam on both ends of the distribution — that is to say, sites with 0% decay and sites with near-100% decay rates both tend to be spammy. The first type tends to be part of a link network; the second type tends to drop its links on pages that others are spamming too, so those links quickly shuffle off to other pages.

Of course, now we do the hard work of building a model that actually takes this into account and accurately reduces Domain Authority relative to the severity of the link spam. But you might be asking...

These sites don't rank in Google — why do they have decent DAs in the first place?

Well, this is a common problem with training sets. DA is trained on sites that rank in Google so that we can figure out who will rank above who. However, historically, we haven't (and no one to my knowledge in our industry has) taken into account random URLs that don't rank at all. This is something we're solving for in the new DA model set to launch in early March, so stay tuned, as this represents a major improvement on the way we calculate DA!

Spam Score distribution and link spam

One of the most exciting new additions to the upcoming Domain Authority 2.0 is the use of our Spam Score. Moz's Spam Score is a link-blind (we don't use links at all) metric that predicts the likelihood a domain will be penalized or banned by Google. The higher the score, the worse the site.

Now, we could just ignore any links from sites with Spam Scores over 70 and call it a day, but it turns out that common link manipulation schemes leave fascinating patterns behind. We can discover them with the same simple methodology: use a random sample of URLs to establish what a normal backlink profile looks like, then check whether Spam Score is distributed anomalously among the backlinks to a site. Let me show you just one.

It turns out that acting natural is really hard to do. Even the best attempts often fall short, as did this particularly pernicious link spam network. This network had haunted me for 2 years because it included a directory of the top million sites, so if you were one of those sites, you could see anywhere from 200 to 600 followed links show up in your backlink profile. I called it "The Globe" network. It was easy to look at the network and see what they were doing, but could we spot it automatically so that we could devalue other networks like it in the future? When we looked at the link profile of sites included in the network, the Spam Score distribution lit up like a Christmas tree.

[Image: spreadsheet with distribution of spam scores]

Most sites get the majority of their backlinks from low Spam Score domains, and fewer and fewer as the Spam Score of the linking domains goes up. But this link network couldn't hide, because Spam Score flagged the sites in the network as having quality issues. If we had relied only on ignoring individual bad Spam Score links, we would never have discovered this issue. Instead, we found a great classifier for finding sites that are likely to be penalized by Google for bad link building practices.
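One hedged way to turn that “Christmas tree” observation into an automatic check is to measure how far a site's Spam Score histogram sits from the baseline distribution observed in the random sample. This is not Moz's actual model — just a chi-square-style sketch, and every number in it is invented:

```python
import numpy as np

# Hypothetical baseline: share of backlinks per Spam Score decile (0-10, ..., 90-100)
# for a random sample of normal sites — most links come from low-spam domains.
baseline = np.array([0.40, 0.22, 0.13, 0.09, 0.06, 0.04, 0.03, 0.02, 0.007, 0.003])

def spam_profile_distance(counts, baseline=baseline):
    """Chi-square-style distance between a site's Spam Score histogram and the baseline."""
    observed = counts / counts.sum()
    return float(((observed - baseline) ** 2 / baseline).sum())

# A site whose backlink profile roughly matches the baseline...
normal_site = np.array([400, 210, 140, 90, 60, 40, 30, 20, 7, 3])
# ...versus a link network with its mass piled up at high Spam Scores
network_site = np.array([20, 15, 10, 10, 10, 60, 200, 300, 250, 125])
```

Under these made-up numbers, the network's distance score dwarfs the normal site's, so sorting sites by this distance surfaces the anomalies without ever looking at an individual link.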

DA distribution and link spam

We can find similar patterns among sites with the distribution of inbound Domain Authority. It's common for businesses seeking to increase their rankings to set minimum quality standards on their outreach campaigns, often DA30 and above. An unfortunate outcome of this is that what remains are glaring examples of sites with manipulated link profiles.

Let me take a moment and be clear here. A manipulated link profile is not necessarily against Google's guidelines. If you do targeted PR outreach, it is reasonable to expect that such a distribution might occur without any attempt to manipulate the graph. However, the real question is whether Google wants sites that perform such outreach to perform better. If not, this glaring example of link manipulation is pretty easy for Google to dampen, if not ignore altogether.

[Image: spreadsheet with distribution of domain authority]

A normal link graph for a site that is not targeting high link equity domains will have the majority of its links coming from DA 0–10 sites, slightly fewer from DA 10–20, and so on, until there are almost no links from DA 90+. This makes sense, as the web has far more low-DA sites than high. But all the sites above have abnormal link distributions, which makes it easy to detect them and correct the value of their links — at scale.
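A crude version of that “decreasing buckets” heuristic can be written in a few lines. This is purely illustrative — not how DA actually dampens link value — and the bucket counts are invented:

```python
def da_buckets_look_natural(bucket_counts):
    """True if links-per-DA-bucket is (weakly) decreasing from DA 0-10 up to DA 90+."""
    return all(a >= b for a, b in zip(bucket_counts, bucket_counts[1:]))

# Hypothetical link counts per DA decile, low to high
natural = [5200, 3100, 1800, 900, 400, 150, 60, 25, 10, 2]
outreach_heavy = [300, 250, 200, 1500, 2200, 1900, 800, 300, 90, 10]  # bulge at DA 30+
```

The outreach-heavy profile fails the check because of the bulge in the DA 30–60 buckets — the telltale footprint of a "DA 30 and above" outreach policy.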

Now, I want to be clear: these are not necessarily examples of violating Google's guidelines. However, they are manipulations of the link graph. It's up to you to determine whether you believe Google takes the time to differentiate between how the outreach was conducted that resulted in the abnormal link distribution.

What doesn't work

For every type of link manipulation detection method we discover, we scrap dozens more. Some of these are actually quite surprising. Let me write about just one of the many.

The first surprising example was the ratio of nofollow to follow links. It seems pretty straightforward that comment, forum, and other types of spammers would end up accumulating lots of nofollowed links, thereby leaving a pattern that is easy to discern. Well, it turns out this is not true at all.

The ratio of nofollow to follow links turns out to be a poor indicator, as popular sites like facebook.com often have a higher ratio than even pure comment spammers. This is likely due to the use of widgets and beacons and the legitimate usage of popular sites like facebook.com in comments across the web. Of course, this isn't always the case. There are some sites with 100% nofollow links and a high number of root linking domains. These anomalies, like "Comment Spammer 1," can be detected quite easily, but as a general measurement the ratio does not serve as a good classifier for spam or ham.

So what's next?

Moz is continually traversing the link graph looking for ways to improve Domain Authority, using everything from basic linear algebra to complex neural networks. The goal is simple: we want to make the best Domain Authority metric ever. We want a metric users can trust in the long run to root out spam just like Google does (and help you determine when you or your competitors are pushing the limits), while at the same time maintaining or improving correlations with rankings. Of course, we have no expectation of rooting out all spam — no one can do that. But we can do a better job. Led by the incomparable Neil Martinsen-Burrell, our metric will stand alone in the industry as the canonical method for measuring the likelihood a site will rank in Google.


We're launching Domain Authority 2.0 on March 5th! Check out our helpful resources here, or sign up for our webinar this Thursday, February 21st for more info on how to communicate changes like this to clients and stakeholders:


