I noticed that an emphatic comment by Google’s VP of Search, Hyung-Jin Kim, at SMX Next in November 2022 has, to this day, passed without much discussion in the SEO community.
He said (my emphasis):
“E-A-T is a template for how we rate an individual site. We do it to every single query and every single result. It’s pervasive throughout every single thing we do.” He added, “E-A-T is a core part of our metrics.”
E-A-T, and subsequently E-E-A-T, are discussed by SEOs all the time. Most are quick to say that they are not part of any Google ranking system, and Google spokespeople confirm those statements. They are quality concepts conveyed to the human quality raters, whose reports are used to confirm that the ranking systems are delivering the best results to the SERPs. The raters are given a copy of the Search Quality Rater Guidelines.
I shared the SMX Next quote in about five forums and chat groups, each with a potential audience ranging from hundreds to thousands. I focused on the second quote, that E-A-T is a core part of Google’s metrics.
How could it be applied to “every single query and every single result” if it is not part of a ranking system?
I posited that it must be a quality assurance process done after a SERP is delivered. The process might be the following:
An AI process examines evidence of expertise, authoritativeness and trustworthiness on each page in the index. Perhaps they have added Experience by now.
This assessment is conducted continuously during crawls of the site and other sites that cite it or link to it.
Each factor is given a numeric score that can change with each crawl.
Each element of a SERP — snippet, carousel image and URL result — would have such a score. Each result’s score should be high relative to the results that follow below it and on subsequent “pages” in the continuous scroll.
SERP results are obviously selected by separate ranking systems, so I speculate that E-E-A-T serves as a quality check after the fact.
Consequently, it does not slow down the delivery of the SERPs. If an adverse trend is noted, it is analyzed in detail and a ranking system is amended, or the E-E-A-T factors are tweaked.
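The speculative pipeline above can be sketched as a toy model. To be clear, everything in it — the factor names as score keys, the equal weighting, the tolerance threshold and the flagging logic — is my own illustration of the idea, not anything Google has disclosed.

```python
# Toy sketch of the speculated post-SERP E-E-A-T quality check.
# All factor names, weights and thresholds are hypothetical
# illustrations of the idea, not anything Google has disclosed.

FACTORS = ("experience", "expertise", "authoritativeness", "trustworthiness")

def eeat_score(page_signals: dict) -> float:
    """Combine per-factor scores (0-1, refreshed on each crawl) into one value."""
    return sum(page_signals.get(f, 0.0) for f in FACTORS) / len(FACTORS)

def flag_adverse_positions(results: list, tolerance: float = 0.1) -> list:
    """After a SERP is delivered, flag positions where a result scores
    notably lower than the result ranked immediately below it."""
    scores = [eeat_score(r) for r in results]
    return [i for i in range(len(scores) - 1)
            if scores[i] + tolerance < scores[i + 1]]

serp = [
    {"experience": 0.9, "expertise": 0.8, "authoritativeness": 0.9, "trustworthiness": 0.9},
    {"experience": 0.2, "expertise": 0.3, "authoritativeness": 0.2, "trustworthiness": 0.3},
    {"experience": 0.8, "expertise": 0.7, "authoritativeness": 0.8, "trustworthiness": 0.7},
]
print(flag_adverse_positions(serp))  # [1]: position 1 scores far below position 2
```

In this toy run, position 1 would be flagged for detailed analysis, which in turn could trigger an amendment to a ranking system or a tweak to the factor scoring — exactly the after-the-fact loop speculated above.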
I might be totally off the mark there, but interpreting the Googler’s words isn’t the point of this article.
Of the hundreds of SEOs who might have noted my invitation for discussion, about five replied with a serious response. They are old friends, and I have met three in person many times. Other responses included weak humor, sarcasm or skepticism of anything said by Google. Then, crickets.
Where is SEO curiosity headed?
I’m shocked that such an uncommon statement from a Googler hasn’t sparked any discussion, even when I tried to raise the topic again recently.
What happened to the legendary SEO curiosity in guessing the “200 ranking factors”? More than one author would poll the SEO community to identify and rank the major ranking factors. We loved to add our own observations to the knowledge pool.
A lot of energy and curiosity is expended on building shiny tools with Python, particularly with a good dose of AI. Some of that work seems to be reinventing wheels.
There is a vigorous discussion of SEO tools every day. Can there be a better tool for keyword research, as the marketers would have us believe? Can AI writing tools really benefit all niches of SEO?
There is no shortage of self-styled experts growing their mailing lists by inviting us to steal their “secrets”. A lot of misinformation is being passed on as fact.
We have lost the early explorers who dissected every search engine patent and tried to correlate it with their observations of the SERPs. I miss pioneers such as Ted Ulle and Bill Slawski, who would analyze algorithm updates and try to pinpoint ways to avoid being caught up in Google’s net.
Be more curious
SEO curiosity isn’t completely dead. Take the E-E-A-T example: many say those factors are not part of Google’s ranking systems, and it’s OK to have a healthy skepticism of anything put out by search engine spokespeople.
Channel your curiosity into an investigation. You might not want to share the results if they are not meaningful. Then again, we just saw Cyrus Shepard examining 50 sites for correlations between features found on websites and Google algorithm update winners and losers.
Shepard found “experience” to be one of the features on “winner” websites. But haven’t SEOs been echoing the mantra that E-E-A-T is not a part of the ranking algorithms?
Maybe not in a direct way, but whatever algorithm looks at experience is sending a positive signal to a ranking algorithm. Since relatively few pages are product or place reviews, it makes sense to keep an Experience algorithm separate from a ranking algorithm.
I’m privileged to watch a curious SEO, Daniel K. Cheung, build a matrix of E-E-A-T attributes for auditing a page. So far, he has found it necessary to give each attribute a numeric weight so that some can be shown to have a larger impact on a page than others.
For example, an attribute might be the presence of a video of the author using the reviewed product. This might have a greater impact than a still image of the same scene. It doesn’t matter if the real method used by Google is far more nuanced. Such curiosity gives us ideas to test.
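A weighted attribute matrix of this kind could be mocked up as follows. The attribute names and weights here are invented purely for illustration — Cheung’s actual matrix is surely different, and far more nuanced.

```python
# Hypothetical mock-up of a weighted E-E-A-T attribute audit.
# Attribute names and weights are invented for illustration only;
# they are not Cheung's matrix or Google's method.

AUDIT_WEIGHTS = {
    "author_video_with_product": 5,   # weighted above a still image
    "author_photo_with_product": 3,
    "author_bio_present": 2,
    "cited_external_sources": 2,
}

def audit_page(observed: set) -> tuple:
    """Score a page by summing the weights of the attributes it exhibits,
    and report that score as a share of the maximum possible."""
    score = sum(w for attr, w in AUDIT_WEIGHTS.items() if attr in observed)
    return score, score / sum(AUDIT_WEIGHTS.values())

score, share = audit_page({"author_video_with_product", "author_bio_present"})
print(score, round(share, 2))  # 7 0.58
```

The point of such a sketch is not accuracy but testability: with explicit weights, two pages can be audited the same way twice, and the weights themselves become hypotheses to revise against observed rankings.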
Be skeptical
You might argue that Shepard’s sample of 50 isn’t large enough. Fair enough. One of the large SEO tool makers might get their crawlers to examine a million websites and tell us whether they agree with him.
Don’t wait for a tool company to conduct the study — you pick 100 or more sites and do your own tests. Rinse and repeat until you are ready to announce your findings.