
March Core Update & Spam Updates: Four Major Trends

by Ohio Digital News


When it announced the March Core and Spam updates, Google said it expected to reduce unhelpful content in search results by 40%. After the March Core Update finished rolling out on April 19 (a conclusion Google announced seven days after the fact), the company revised that figure, clarifying that it had actually reduced unhelpful content by 45%.

One of the main drivers of this drop in unhelpful content was likely the “pure spam” manual actions issued to thousands of sites during the March Spam Update, which began rolling out on March 5 and concluded on March 20.

Soon after the conclusion of the March Spam Update, lists of affected sites and analyses began circulating within the SEO industry, such as this Originality.ai article that provided a deep dive into the role of AI content on affected websites. According to this study, many affected websites used generative AI to mass-produce content, which is precisely what Google aimed to target with its new policy on “scaled content abuse.”

For over a year, sites were able to get away with publishing AI-generated content with little editing or oversight, often with great SEO success. The March Spam Update sent a clear signal about how Google intends to treat this type of content and the websites misusing AI to generate low-quality content at scale.

The penalized sites were often using generative AI to create content answering popular questions, such as the net worth of celebrities, high-volume queries about hairstyles or fashion trends, or rumors and news about popular games. These sites were usually filled with aggressive advertising and showed no indication of real human authors or editorial involvement. Many of them also showed a publishing velocity that would be difficult for most smaller blogs relying on human writers to match, such as tens or hundreds of new articles per day, which could have been one of many flags Google used to identify scaled content abuse.
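To illustrate what a publishing-velocity check might look like in practice, here is a minimal Python sketch that counts how many URLs a site's XML sitemap reports per day. This is not Google's detection method; the sitemap URL and the 100-articles-per-day threshold are hypothetical values chosen for the example.

```python
# Rough illustration only: estimate a site's publishing velocity from its
# XML sitemap. The sitemap URL and threshold below are hypothetical.
from collections import Counter
from urllib.request import urlopen
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://example.com/sitemap.xml"  # hypothetical site
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}


def daily_publish_counts(sitemap_url: str) -> Counter:
    """Count sitemap URLs per <lastmod> date."""
    with urlopen(sitemap_url) as resp:
        tree = ET.parse(resp)
    counts = Counter()
    for lastmod in tree.getroot().iterfind(".//sm:url/sm:lastmod", NS):
        day = lastmod.text.strip()[:10]  # keep only the YYYY-MM-DD part
        counts[day] += 1
    return counts


if __name__ == "__main__":
    counts = daily_publish_counts(SITEMAP_URL)
    if counts:
        busiest_day, peak = counts.most_common(1)[0]
        print(f"Peak output: {peak} URLs on {busiest_day}")
        # Per the article, tens or hundreds of new articles per day would be
        # hard for a small blog with human writers to sustain.
        if peak >= 100:
            print("Publishing velocity looks implausible for a small editorial team.")
```

A real audit would look at more than one signal (author pages, ad density, content quality), but even this simple count makes it easy to spot the kind of output spike the article describes.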



