Google’s 2023 Search quality rater guidelines update: Here’s what changed

It’s been nearly a year since Google last updated its Search Quality Rater guidelines. 

Unlike previous edits, which introduced significant new concepts (like the new E for Experience last year), the latest updates to the Search Quality Guidelines seem much more focused on user intent and needs met. 

Google is:

Refining what it means to provide high-quality search results.

Helping quality raters understand why certain results are more helpful than others. 

This level of nuance can help explain why we see certain volatility during core updates (as well as periods outside of announced algorithm updates).

If search quality raters have given Google ample evidence that its results are not meeting user expectations, this can lead to substantial intent shifts during core updates. 

Looking at search results for the same query, before and after major Google updates, makes a lot more sense when you understand how granularly Google approaches the intent behind a query, and what it means to have high-quality, helpful content.

Here are high-level insights into what has changed in the latest search quality rater guidelines update.

More guidance around rating page quality for forums & Q&A pages

Google added a new block of text to instruct raters on how to rate the quality of forum and Q&A pages, specifically in situations where the discussions are either brand new, or drifting into “combative,” “misleading” or “spammy content.” 

A forum page defaults to “medium” if it’s simply a new page that hasn’t had time to collect answers. But older posts without answers should be rated as low quality. 

Google mentions “decorum” a few times in this section, indicating that combative discussions that show a lack of respect should be rated as low quality. 

This is a good reminder that the quality of comments on a given page can impact the overall quality of that page’s content, assuming Google can crawl and index the content in the comments. Often, comment sections are neglected or unmoderated, and if they become problematic, insulting or disrespectful, this can negatively impact an otherwise good-quality page.

(Page 76)

Additionally, the new version of the Search Quality Guidelines includes a visual example of what a “medium” quality forum page looks like on Reddit. The question is only 9 hours old and has no answers, so it defaults to a score of “medium.”

It’s worth noting that Google implies there are no other low-quality characteristics on this page that otherwise could lean the score towards “low quality” for new discussions. 

(Page 77)


A quick addition about the importance of a location to a query

Google added a short snippet about the importance of user location to understanding a query. For searches looking for nearby places, location is important, whereas generic questions like “how does gravity work” have the same answer, regardless of the user’s location. 

(Page 84)

Interestingly, Google felt this nuance was worth adding to the rater guidelines. While it seems self-explanatory, the extent to which a search query carries built-in "local intent" can have a major impact on the types of results that best answer that query. 

Expanding on “Minor Interpretations”

Google added a deeper explanation about its definitions for “Minor interpretations.”

Minor interpretations apply when a query can have multiple meanings; they are the meanings least likely to be what users commonly expect from the query.

Within minor interpretations, Google introduced:

“Reasonable minor interpretations,” which help “fewer users” but are still helpful for search results. 

“Unlikely minor interpretations,” which are theoretically possible but highly unlikely. 

“No chance interpretations,” which are interpretations the user is almost certainly not looking for. Google provides the example of an “overheated pet” when the searcher types “hot dog” (although I feel that this interpretation is more plausible than a “no chance” rating!).

(Page 87)

Google also added some new visual examples of how to interpret these definitions. For example, an “unlikely minor interpretation” of the search query “Apple” would be the U.S. city, Apple, Oklahoma.

(Page 88)

Further defining ‘Know Simple’ and ‘Do’ queries

Google added several new examples of what types of queries are not “Know Simple” queries.

“Know Simple” queries are defined as queries that seek a very specific answer, like a fact or a diagram, that can be answered in a small amount of space, like one or two sentences. 

Google added three new examples of queries that are not Know Simple queries: when users want to browse or explore a topic, find inspiration related to a topic, or are seeking personal opinions and perspectives from real people.

(Page 90)

What makes this addition interesting: The language used here is quite similar to the language Google uses when describing the value of SGE (Search Generative Experience). For example, the following language comes from the main SGE page:

“Dive deeper on a topic in a conversational way.”

“Access the high-quality results and perspectives that you expect from Google.”

“I want to know what people think to help me make a decision”

Perhaps – and this is purely speculation – the feedback Google gets from quality raters about whether queries can be classified as “Know Simple” (or not) can help it understand when to trigger SGE.

Along the same lines, Google added two new examples of “Know Simple Queries” and “Know Queries” – the bottom two rows of the below table – to provide additional context about when a query is easily answered or when the answer is more open ended.

(Page 91)

Google also added more examples of “Do” queries to the table below, starting with [shape of you video] and all queries below it. The three new examples represent queries that would be best answered with videos, images or how-to guides.

(Page 91)

Further refining user intent

Google introduced new language around user intent by defining what counts as an unlikely user intent for a given query. 

In the table below, Google added a second column to explain unlikely intents for the keywords Harvard and Walmart. This limits the intent of these keywords to a more reasonable user intent, rather than a completely open-ended set of possible answers.

Users searching for “Harvard” could be looking for various details about the university, but are probably not looking for a specific course. 

(Page 95)

Examples of Google’s SERP features that highly meet user needs

Google provides a table with various examples of search queries, the user’s location, and the user’s intent. It then shows the search result, and rates the extent to which the result met the expectations of the user (“Needs met”).

Google also offers an explanation about why these particular results are ranked as “highly meeting” the expectations of users. 

In this new version of the quality rater guidelines, Google added and adjusted some of the examples in the “Highly Meets (HM)” results, the highest possible rating of meeting user needs “for most queries.” 

One new example Google added to this list involves a user looking for “nearby coffee shops.” Google provides a screenshot of a Google Maps local pack with three coffee shops listed, and explains why this result highly, but not fully, meets the user’s expectation (it doesn’t list every possible coffee shop). 

(Page 114)

Google even added a TikTok video as an example of a result that highly meets the needs of users looking for an “around the world tutorial” for soccer. 

These are just some of the new examples, which seem to provide a more modern view of different results, both from external sites as well as Google’s own SERP features.


The post Google’s 2023 Search quality rater guidelines update: Here’s what changed appeared first on Search Engine Land.