Case study: The unexpected impact of blocking GoogleOther on SEO performance

What happened (the short version)

When my client’s IT team blocked GoogleOther to relieve their overloaded translation database, they unintentionally triggered a spike in server connectivity errors reported in Search Console.

Google officially maintains that GoogleOther has no effect on Google Search indexing or rankings, but our findings, along with reports from other SEO professionals, indicate that there may be more connections between these systems than Google publicly acknowledges.

This gave us a chance to examine internally whether blocking GoogleOther can indirectly affect indexation, crawl rates, or organic traffic, despite Google’s assurances.

The problem: Translation database overload

Initial observations

In January 2025, my client’s team noticed their translations database experiencing unusual strain.

Upon investigation, they discovered that traces from one of Google’s bots had increased significantly starting January 23, 2025.

In Azure monitoring, these “traces” represent logged events that track a specific bot’s interactions with the application and are used for performance monitoring and debugging.

Google was responsible for approximately 80% of all traces.
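If you want to run a similar audit yourself, a rough tally of requests per user agent is usually enough to spot a disproportionate crawler. Below is a minimal sketch in Python, assuming a standard combined-format access log; the file path and format are illustrative assumptions, and the client’s actual numbers came from Azure trace queries rather than raw logs:

# Minimal sketch: tally requests by user-agent string from a
# combined-format access log to see which client dominates.
# LOG_PATH and the log format are assumptions for illustration.
import re
from collections import Counter

LOG_PATH = "access.log"  # hypothetical export of your server logs

# In the combined log format, the user agent is the last quoted field.
UA_PATTERN = re.compile(r'"([^"]*)"\s*$')

counts = Counter()
with open(LOG_PATH, encoding="utf-8") as log:
    for line in log:
        match = UA_PATTERN.search(line)
        if match:
            counts[match.group(1)] += 1

total = sum(counts.values()) or 1  # guard against an empty log
for user_agent, count in counts.most_common(10):
    print(f"{count:8d}  {count / total:6.1%}  {user_agent}")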

Upon further digging, it turned out that the culprit was a relatively new bot called “GoogleOther,” which Google officially introduced in April 2023.

According to Google’s Gary Illyes, it was designed as a “generic crawler that may be used by various product teams for fetching publicly accessible content from sites” for internal research and development purposes.

Prior to GoogleOther’s introduction, these non-search related crawls were handled by the regular Googlebot.

The creation of GoogleOther was intended to “take some strain off of Googlebot” by ensuring that “Googlebot’s crawl jobs are only used internally for building the index that’s used by Search.”

Google’s official documentation states it is specifically for “non-Search index related crawls.”

Working from this information, the IT team decided on its own, without SEO input, to address the issue by blocking GoogleOther using an Azure Front Door WAF rule.

Unexpected consequences

Shortly after blocking GoogleOther, Search Console reported a significant increase in connectivity errors.

The screenshot above shows one of the least affected domains, but some of the client’s domains experienced up to 40% of server requests failing.

This raised an important question: if blocking GoogleOther affects Search Console functionality, could it be connected to search operations despite Google’s documentation suggesting otherwise?

Investigation findings

The connection between GoogleOther and Googlebot

  • IT identified that the IP address most frequently blocked by their “BlockGoogleOther” rule was 66.249.70.66 (this address was blocked 123,384 times in a single week).
  • When examining other requests, we found this same IP address was also being used by regular Googlebot (over 10,000 requests in the same week).

Key discovery: Google uses identical IP addresses for both GoogleOther and regular Googlebot operations.

This is not breaking news, as it was discussed in the WebmasterWorld forum and other SEO communities, but it is somewhat obscure information that isn’t clearly stated in Google’s official documentation.

While SEOs might be aware of it, most IT teams aren’t.
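If you need to confirm whether an IP belongs to Google at all, the reliable method is the one Google documents: a reverse DNS lookup followed by a forward-confirming lookup. Crucially, this only proves Google ownership; it is the user-agent string, not the IP, that distinguishes GoogleOther from Googlebot. A minimal sketch in Python, using only the standard library:

# Google's documented two-step check: reverse-resolve the IP, require a
# googlebot.com / google.com hostname, then forward-resolve the hostname
# and require the original IP. This proves Google ownership only: the
# same verified IP can carry both Googlebot and GoogleOther, which are
# distinguished solely by their user-agent strings.
import socket

def is_google_crawler_ip(ip: str) -> bool:
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)  # reverse DNS lookup
    except socket.herror:
        return False
    if not hostname.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        # Forward confirmation: the hostname must resolve back to the IP.
        return ip in socket.gethostbyname_ex(hostname)[2]
    except socket.gaierror:
        return False

# The IP from our WAF logs:
print(is_google_crawler_ip("66.249.70.66"))

Running it against 66.249.70.66 should print True, which is precisely why an IP-level block cannot separate GoogleOther from Googlebot.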

The solution: A more effective approach

Based on our findings, we implemented a more elegant solution:

  1. We removed the Front Door WAF rule that was blocking GoogleOther by IP.
  2. Instead, we added this directive to the robots.txt file:
User-agent: GoogleOther 
Disallow: /
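Before relying on the directive, it’s worth sanity-checking that it blocks exactly the agent you intend and nothing else. Python’s standard-library parser applies conventional robots.txt matching (close to, though not identical to, Google’s own parser), so a quick check might look like this, with a placeholder domain standing in for the client’s site:

# Quick sanity check of the directive with Python's standard-library
# robots.txt parser. The domain is a placeholder for the client's site.
from urllib import robotparser

parser = robotparser.RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")  # placeholder URL
parser.read()  # fetches and parses the live file

# GoogleOther should be disallowed everywhere; Googlebot stays untouched.
print(parser.can_fetch("GoogleOther", "https://www.example.com/any-page"))
print(parser.can_fetch("Googlebot", "https://www.example.com/any-page"))

The first call should print False (GoogleOther is disallowed everywhere) and the second True (regular Googlebot remains unaffected).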

Results after implementation

After switching to the robots.txt approach:

  • Search Console connectivity errors returned to normal levels.
  • Website performance stabilized.
  • Server resource usage normalized to 3-4 instances.
  • Most importantly: No negative impact on search rankings or indexing was observed.

Lessons learned

  1. Don’t blindly trust Google’s documentation – While Google states that GoogleOther is a separate crawler (which might lead you to think that everything is separate), both GoogleOther and Googlebot can share identical IP addresses.
  2. Be careful with IP blocking – Block one bot’s IP and you might accidentally block something important.
  3. Robots.txt is usually your best bet – GoogleOther, at least, follows the rules laid out in robots.txt perfectly.
  4. Always watch Search Console’s crawl report after technical changes – This saved us from a potential SEO disaster.

What this means for SEO

IT teams often prefer to use firewall rules to block unwanted traffic (which is entirely reasonable from a server management perspective), but this approach can have unintended consequences for search visibility.

Blocking GoogleOther genuinely has no effect on search, but only when the block is done via robots.txt, a solution that may not be top of mind for IT teams focused on firewall-level security.

The resulting connectivity issues in Search Console and failed requests demonstrate how technical decisions made without SEO input can impact search performance.

Be proactive about getting involved in any technical decisions that affect crawlers’ access to your website.
