
srsltid: Massive New Bug in Google Search
E-commerce Sites at Risk

17 September 2024

We're seeing a new bug in Google Search that appends ?srsltid= parameters to most ecommerce site URLs in the organic results. What was intended as a way for merchants to gain insight into their traffic sources is now creating increasingly serious problems as these URLs get indexed.

 

Original Intention for srsltid Parameter

In 2021, Google introduced the srsltid parameter as an opt-in feature within Merchant Center's Conversion settings. Referred to as auto-tagging, it allowed site owners to differentiate between organic search result clicks and organic product clicks by dynamically appending an srsltid parameter to the URL. Here is a Google Merchant Center Help article on the subject. For example, store.com/product would turn into store.com/product?srsltid=123xyz

Google Merchant Center Conversion Settings Auto-tagging

 

Now it's Everywhere…

Beginning in late July 2024, we've been seeing the srsltid parameter added to more and more ecommerce URLs within organic search. This completely undermines its original purpose and can cause these new parameter-laden URLs to be indexed. Each day, the chance of inbound links to your site inadvertently including this parameter increases dramatically. The most concerning consequences include:

  • Duplication and Dilution: Most platforms, such as Shopify, BigCommerce, and WooCommerce, do not properly define the canonical URL within their code. We are already seeing these URLs getting crawled and indexed by Google, in some cases replacing the ranking product URL, in others diluting its strength.
  • Crawl Budget Bloat: As Google pushes out more and more URLs with srsltid parameters, its crawlers will in turn begin visiting these URLs. Revisiting the same pages multiple times under different URLs consumes unnecessary resources, and this bloated crawl budget means less efficient indexing of your actual content.
  • Analytics Compromised: Organic search result clicks are now being wrongly attributed to product links in Google Shopping, mixing two distinct types of traffic and skewing your analytics data. This compromises your ability to accurately track organic traffic and make informed decisions.
  • Rank Tracking Tools Compromised: Rank tracking tools that monitor your site's position in search results now see auto-tagged URLs as holding rank. This artificial data creates a misleading picture of your site's performance and ranking stability. 

This issue is only getting worse. More and more ecommerce sites are reporting the problem, and Google is doing nothing to help. Initially it was intermittent and limited to mobile devices, but over the past few weeks it has grown to include desktop search results, and for some sites it appears on 100% of the results. Even major stores like Columbia Sportswear and Black+Decker are affected:

Google search results showing srsltid parameters on Columbia.com

Google search results showing srsltid parameters on BlackAndDecker.com

 

How to check your site

  1. Perform a site: search in Google like so:
    site:yourdomain.com
  2. Look for ?srsltid= parameters appended to your URLs in the search results (you may need to click a result to see the full address with the parameter)
  3. If you do see the parameters, check whether any have already been indexed using this search:
    site:yourdomain.com inurl:srsltid

 

Solutions to try

Google has been ignoring requests to fix the problem, and unfortunately the fix is not as straightforward as one would like. Here are a few things to try:

  • Merchant Center Settings: The most obvious step is to disable auto-tagging in Merchant Center's Conversion settings; however, we've seen reports of e-commerce sites seeing srsltid parameters without ever opening a Merchant Center account! If you do have an account, disable "Auto-tagging" by clicking the Settings cog in the upper right and then visiting Conversion settings. From there, disable the first toggle switch, the one related to auto-tagging.

    UPDATE: You may need to keep an eye on this setting, as we're seeing this option revert to the ON position as Google rolls out updates.

    Shopify users: Conversion tracking settings within Shopify will turn auto-tagging back on in Merchant Center. If you keep seeing the parameters even after turning the setting off in Merchant Center, it's probably being triggered by Shopify. If you rely on the Merchant Center data, you'll have to live with the parameters; in that case it's important to use the conditional noindex solution below to prevent Google from indexing those URLs and impacting your rankings and traffic.
     
  • Canonical Tags: Implement canonical tags to indicate the definitive version of each URL. While a vital part of maintaining a clean index, we have repeatedly seen Google not respect user-defined canonicals. Still, we recommend making use of canonicals and hoping Google follows its own guidelines.
     
  • Robots.txt: Update your robots.txt file to prevent Google from crawling URLs with the srsltid parameter. This won't help if it's already indexed, but you can try the following syntax: 
    User-agent: *
    Disallow: /*?*srsltid
  • Contacting Experts: Given the complexity and evolving nature of this issue, it may be best to seek professional help. We can guide you through more advanced techniques for addressing problems like this. Contact us to learn more about how we can assist you.

 

More evidence of a Broken Google

The emergence of this issue is disappointing, reflecting yet another broken aspect of Google's system that needs attention. If you're experiencing issues related to this bug or need help implementing solutions, don't hesitate to seek expert assistance. Stay vigilant, and remember that addressing this problem promptly can help safeguard your site's SEO health and performance.

example of Google Search Console filled with srsltid parameters

Screenshot of a compromised site. Source: Google Support Forum

 



 

More Background On This Problem

Google's new parameter, srsltid, is part of a broader effort by Google to improve the clarity and integrity of referral traffic data, which is particularly important for accurately tracking user behavior and conversions. The parameter was originally appended only to URLs when users were redirected through Google Shopping search results, allowing merchants to easily distinguish between a visitor clicking an organic search result in Google Search and one clicking an organic product listing within Google Shopping.

 

The new Google bug, however, adds the srsltid parameter to organic search results as well. In addition to invalidating your analytics and conversion tracking, this can have a massive negative impact on a website's SEO. The crawl budget in particular is at risk. Crawl budget refers to the number of pages a search engine like Google is willing to crawl on a site within a specific time frame. Here's how srsltid can affect it negatively:

1. Increased Number of URLs for Crawling

  • The srsltid parameter generates multiple unique URLs for the same content. Search engines could potentially see these URLs as separate pages, leading to a significant increase in the number of URLs that need to be crawled. This can dilute the crawl budget, meaning that search engines might spend more time crawling these parameterized URLs and less time on the core, canonical content of the site.

2. Duplicate Content Issues

  • With multiple URLs pointing to the same content due to the srsltid parameter, search engines may end up crawling and indexing multiple versions of the same page. This duplication can confuse search engines about which version of the page is the most relevant, possibly leading to lower rankings or even exclusion of certain pages from the index.

3. Wasted Crawl Budget

  • Since the srsltid parameter doesn’t add unique content but merely tracks referral data, search engines crawling these URLs are essentially wasting their resources on non-essential pages. This waste of crawl budget means fewer critical pages may get crawled and indexed, which can hinder a site's overall SEO performance.

4. Complexity in URL Parameter Management

  • To mitigate the impact of srsltid on crawl budget, webmasters need to manage these URLs correctly, for example with robots.txt rules or meta robots directives (Google Search Console no longer offers a URL Parameters tool). However, improper configuration can lead to significant problems, such as accidentally blocking important pages from being crawled or indexed.

5. Longer Time for New Content to be Discovered

  • If the crawl budget is heavily consumed by crawling URLs with srsltid parameters, it might take longer for search engines to discover and index new or updated content. This delay can negatively affect the visibility of fresh content, particularly for news or time-sensitive material, where rapid indexing is crucial.

6. Increased Load on Servers

  • The additional data carried by srsltid parameters may marginally increase the size of HTTP requests, particularly when compounded across millions of users. This could lead to slightly increased load times and greater server processing requirements, especially for high-traffic websites. While the impact might be small individually, it could accumulate to a noticeable degree in large-scale operations.

7. Compatibility Issues with Legacy Systems

  • Not all systems or platforms may be prepared to handle the srsltid parameter, especially older or custom-built solutions. This could lead to compatibility issues, where URLs with srsltid are not properly processed, leading to broken links or failed redirects. Businesses relying on legacy systems may face significant challenges in adapting to this change, requiring potentially costly updates or replacements.

 

The srsltid parameter can negatively impact a website's crawl budget by generating multiple, often redundant URLs that search engines may crawl and index. This not only wastes valuable crawl budget but can also lead to duplicate content issues, longer indexing times for new content, and overall inefficiencies in how search engines interact with the site. Proper management of URL parameters and careful configuration of crawling rules are essential to mitigate these effects and ensure that critical content remains prioritized in the crawl budget.

Google Search Console explosive crawl budget

 

Canonical Problems

The introduction of the srsltid parameter can exacerbate issues related to canonical tags, potentially leading to the unintended indexing of redundant content. Here's how this problem arises and its consequences:

1. Canonical Tags Being Ignored

  • Canonical tags are HTML elements that tell search engines which version of a page should be considered the "primary" or canonical version when multiple URLs lead to the same content. However, when URLs with the srsltid parameter are generated in large numbers, search engines might not always respect the canonical tags. This can happen because search engines might perceive the srsltid URLs as distinct enough to warrant separate indexing, particularly if the parameter is not recognized or understood by the search engine's algorithms.

2. Indexing of Redundant Content

  • If search engines start indexing the URLs with srsltid despite canonical tags pointing to the original URL, it can lead to the indexing of redundant or duplicate content. This means that instead of consolidating the ranking signals (like backlinks, content relevance, etc.) to the canonical URL, these signals might get split across multiple URLs. As a result, none of the URLs, including the canonical one, might perform as well as they should in search rankings.

3. Dilution of Page Authority

  • The splitting of ranking signals across multiple indexed versions of the same content (due to srsltid) dilutes the authority of the page. For example, if several versions of a page with different srsltid parameters are indexed, the backlinks pointing to these different versions won't contribute fully to the canonical page's authority, weakening its potential to rank well.

4. Increased Risk of Duplicate Content Penalties

  • Search engines like Google aim to provide users with the most relevant and unique content. If they detect what they perceive as duplicate content across different URLs (due to srsltid), they might decide to devalue or even exclude some of these URLs from the index. In the worst-case scenario, this could lead to penalties that affect the site's overall visibility.

5. Higher Crawl Budget Consumption

  • As search engines encounter multiple URLs with the srsltid parameter, they may spend more time crawling and indexing these redundant pages, further straining the site's crawl budget. This not only leads to redundant content being indexed but also means that more critical pages might be crawled less frequently or not at all.

6. Impact on Content Discovery and Ranking

  • The indexing of redundant content can slow down the discovery and ranking of new or updated content. Since search engines might waste resources on crawling and indexing srsltid-affected URLs, they may delay or miss the indexing of more important pages. This could hurt the site's overall SEO performance, particularly for pages that need to be discovered quickly, such as news articles or product launches.

 

Mitigation Strategies

To address these issues, website owners can:

  • Disable Auto-tagging in Merchant Center: Visit the Settings > Conversion settings page and turn off auto-tagging to instruct Google to stop adding srsltid parameters to your URLs. This may take a couple of days to take effect. There is also a risk that Google's system is so broken that this step has no effect, in which case we recommend implementing all of the mitigation strategies to ensure you're protected.
  • Ensure Proper Canonicalization: Make sure that canonical tags are correctly implemented and that they consistently point to the preferred version of the URL without the srsltid parameter (a minimal sketch follows this list).
  • Use Robots.txt: Block the crawling of URLs with srsltid parameters using robots.txt if appropriate, to prevent them from being indexed. Do not combine robots.txt blocking with a conditional noindex: if Google can't crawl the page, it will never see the noindex directive.
  • Conditional Noindex: Add custom code to conditionally add a meta robots "noindex" directive to URLs containing srsltid (see the Conditional NoIndex section below).
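
To illustrate the canonicalization point above, here is a minimal sketch for a PHP-based storefront (a self-hosted WooCommerce site, for example) that outputs a canonical link pointing at the current URL with its query string, including srsltid, stripped off. The scheme and host detection here are simplifying assumptions; adapt them to your own templates:

<?php
// Minimal sketch: build a self-referencing canonical URL with the
// query string (including ?srsltid=...) removed.
// The scheme/host detection below is an assumption; adjust it to
// match your server configuration.
$scheme = (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off') ? 'https' : 'http';
$host   = $_SERVER['HTTP_HOST'];
$path   = strtok($_SERVER['REQUEST_URI'], '?'); // drop everything after "?"
$canonical = $scheme . '://' . $host . $path;
?>
<link rel="canonical" href="<?php echo htmlspecialchars($canonical, ENT_QUOTES); ?>">

This way every parameterized variation of a page declares the clean URL as its preferred version.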

The srsltid parameter can lead to the unintentional indexing of redundant content if canonical tags are ignored by search engines. This can dilute page authority, consume valuable crawl budget, and negatively impact SEO by causing duplicate content issues. To mitigate these risks, it's crucial to implement strong canonicalization practices, manage URL parameters effectively, and consider blocking unnecessary crawling of parameterized URLs.

 

Conditional NoIndex

Implementing a conditional robots meta noindex directive to prevent the indexing of URLs with parameters like srsltid is a smart approach to manage your site's SEO and avoid the negative consequences associated with parameterized URLs. Here’s how you can go about implementing this:

1. Understanding Conditional Meta Tags

  • A conditional robots meta noindex tag allows you to instruct search engines not to index certain pages based on specific conditions, such as the presence of particular URL parameters like srsltid. This ensures that only the canonical versions of your pages are indexed, while URLs with tracking parameters are excluded.

2. Implementing the Conditional noindex

  • To implement this, you'll need to dynamically generate the robots meta tag based on the presence of the srsltid parameter in the URL. This can be achieved through server-side scripting (using PHP, Python, JavaScript, etc.) depending on your website's backend technology.

 

If the only problem is URLs with the parameter "srsltid" being indexed:

Example in PHP

Here’s a basic example of how you could implement this in PHP:

example code for php conditional noindex
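
A minimal sketch along those lines, assuming the page is rendered by PHP and this snippet is placed inside the <head> of your template, could look like this:

<?php
// If the requested URL contains the srsltid parameter,
// emit a robots meta tag so this duplicate URL is not indexed.
// "follow" keeps the links on the page crawlable.
if (isset($_GET['srsltid'])) {
    echo '<meta name="robots" content="noindex,follow">' . "\n";
}
?>

The clean URL without the parameter is served without the tag and remains indexable as usual.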

 

Example in JavaScript (great for Shopify stores)

If you prefer to handle this client-side using JavaScript, you can use the code below.
For Shopify, you would add this, wrapped in a <script> tag, into the <head> section of theme.liquid.


// Parse the current page URL.
const currentURL = new URL(window.location);
// Check whether the srsltid parameter is present.
const URLContainsSrsltid = currentURL.searchParams.get('srsltid');

if (URLContainsSrsltid) {
    // Add <meta name="robots" content="noindex,follow"> to the <head>
    // so the parameterized URL is not indexed.
    const newMetaRobotsTag = document.createElement("meta");
    newMetaRobotsTag.name = "robots";
    newMetaRobotsTag.content = "noindex,follow";
    document.head.appendChild(newMetaRobotsTag);
}

 

A Bigger Problem:

One of the more significant problems we're seeing is that the indexing of "srsltid" URLs has opened the door to the indexing of URLs with other parameters, polluting Google's index with URLs that should never be indexed. Using a rel canonical tag may not prevent this, and once a URL is indexed it can't easily be removed. This problem can persist even after "srsltid" is no longer present on the indexed URLs!

If the indexing of the "srsltid" parameter has triggered other indexing issues involving other parameters - for example, all the paginated URLs where "page", "color", "size", etc. is the parameter are now indexed:

Example of a solution that noindexes ALL URLs with parameters, in JavaScript (for any platform)

This will prevent ANY parameterized URL from being indexed:

document.addEventListener("DOMContentLoaded", function() {
    // Check if the current URL contains a query string (parameters)
    if (window.location.search) {
        // Create a new meta element
        var metaTag = document.createElement('meta');
        metaTag.name = "robots";
        metaTag.content = "noindex";

        // Append the meta tag to the head of the document
        document.head.appendChild(metaTag);
    }
});

 

3. Testing the Implementation

  • After implementing the conditional noindex tag, thoroughly test it across different scenarios to ensure that it only affects URLs with the srsltid parameter and does not inadvertently apply to other important pages. You can manually check the page source in your browser's developer tools to verify that the meta tag is correctly applied.

4. Monitoring with Search Console

  • Use Google Search Console to monitor how your site is being crawled and indexed. Look for any issues that might arise due to the implementation, such as legitimate pages being accidentally marked as noindex.

5. Combining with Other Techniques

  • In addition to the conditional noindex tag, you might consider combining this approach with other techniques like canonical tags and robots.txt directives for more comprehensive control over how your site's content is crawled and indexed.

 

Implementing a conditional robots meta noindex directive is an effective way to prevent the indexing of URLs with the srsltid parameter. By dynamically adding this meta tag based on the presence of specific URL parameters, you can ensure that only canonical content is indexed, protecting your site from duplicate content issues and optimizing your crawl budget. Remember to test the implementation thoroughly and monitor its impact using tools like Google Search Console. If you're having difficulties implementing these solutions, or are worried these parameters have already been indexed on your site, don't hesitate to reach out to us for assistance!

 


 

Google is clueless about this.

Here is John Mueller's bonkers post, which demonstrates a complete misunderstanding of the actual problem, claiming that it's intentional and not a problem:

Google's John Mueller responds

"We've now expanded it to traditional search results."

"This doesn't affect crawling, indexing, or ranking "

"While not necessary, you can use the link rel=canonical element pointing at the preferred URL"

Unless he's just trying to cover up the problem, this official response does not make sense! Auto-tagging can be a helpful feature, but merchants do not want it extended to organic search.

 


 
