3 Common Issues Found in Recent SEO Audits

I have spent the past year building my SEO Audit and Site Review business. In doing so, I have had the opportunity to work with everyone from entrepreneurs running a company single-handedly right through to Fortune 500 companies.

Amazingly, many of the mistakes or issues I come across are consistent across the board – regardless of who’s responsible or how large the site is.

Three of the most common issues I’ve found so far in 2013 include:

robots.txt Typos

The robots.txt file is an incredibly powerful resource for site owners. For SEO purposes, it also serves as the instruction set that (well-behaved) bots and crawlers should follow. Because it can open up or close down entire sections of your site, you really need to be careful about the directives you place in it.

One client was having tremendous difficulty getting their content spidered regularly. When I took a glance at their robots.txt file, I found that they had added a crawl-delay directive with a value of 3000. When I uncovered this and brought it up for discussion, they told me they had intended it to cap crawling at 3,000 URLs on their site per day.

Unfortunately, what they had actually done was tell GoogleBot and the other spiders listening that they could make only one request every 3,000 seconds. If you do the math, that meant crawlers could get one page every 50 minutes… or about 29 pages a day. That’s a far cry from the 3,000 they were looking to have grabbed.
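
As a rough reconstruction, the offending file probably looked something like the sketch below (the layout and the user-agent line are assumptions on my part):

    # Hypothetical reconstruction of the problematic robots.txt
    User-agent: *
    Crawl-delay: 3000
    # Crawl-delay is read as "wait this many seconds between requests"
    # by the bots that honor it, not as "crawl this many URLs per day".
    #
    # 3,000 seconds per request = one page every 50 minutes
    # 86,400 seconds in a day / 3,000 = roughly 29 pages a day

A single-digit or low double-digit value (seconds between requests) is the kind of number the directive actually expects, if you use it at all.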

Another time, a robots.txt file used a wildcard rule to block every page with a .html extension, because the CMS in use was rewriting those pages to vanity URLs. Unfortunately, those vanity URLs also carried canonical tags pointing back to the .html pages.

You can probably imagine the difficulties that created both for SEO as well as for tracking link values.
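
A hedged sketch of that combination, with made-up paths, looks something like this:

    # Hypothetical wildcard rule blocking the .html versions
    User-agent: *
    Disallow: /*.html$

    # Meanwhile, the vanity page the CMS actually served, say /great-widgets/,
    # carried a canonical tag pointing at a blocked URL:
    #   <link rel="canonical" href="https://www.example.com/great-widgets.html">
    #
    # Crawlers are told the .html page is the preferred version, but they are
    # also told never to fetch it, so ranking signals and link value end up
    # pointed at pages the engines cannot crawl.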

Lack of Webmaster Tools Integration

For some reason, people love visitor analytics… But don’t care much about domain analytics. Webmaster Tools (both for Google and for Bing) are incredibly useful. From diagnosing crawler or indexing issues to controlling key display characteristics of your content in the SERPs, Webmaster Tools is your friend.
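
The basic integration step is small, too. Here is a minimal sketch, assuming meta-tag verification and placeholder tokens:

    <!-- Hypothetical verification tags in the site's <head>; the tokens are placeholders -->
    <meta name="google-site-verification" content="YOUR_GOOGLE_TOKEN" />
    <meta name="msvalidate.01" content="YOUR_BING_TOKEN" />

    <!-- Then let both tools find your XML sitemap, e.g. via a line in robots.txt: -->
    <!-- Sitemap: https://www.example.com/sitemap.xml -->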

When it comes to integration though, no one takes it seriously – and if they do, it’s because they’ve adopted the Ron Popeil approach to SEO… “Set it and forget it.”

WMT interfaces are real-time centers of information you need to monitor to make sure your site is performing as it should. More recently, we’ve also seen Google improve the messaging features in the wake of Penguin 2.0 to help police unethical link profile development.

Building Exclusively for Engines

At the risk of sounding like a broken record here, we should all be clear that content needs to be created (and attributed) by someone… And it should be created for a real reader – not a robot or algorithm.

The number of sites I’ve seen over-engineered for SEO is almost ridiculous. Some recent examples include:

  • A CMS that whips together Mad Libs-style page titles and headings. Best of all? You can’t change those variables… You can only toggle them on and off. Need a real H1 with actual content? Sorry. Not possible. Need one that jams in your targeted keyword phrase, physical location and nearest DMA? Sure, that’s just a toggle away.
  • A “Search Engine Friendly Redirects” module in a homegrown CMS. The module was fantastic if you ever had to edit your content’s URL. It would leave the old URL in place, apply a 302 to the new page once, then 302 back to a “node/xyz” style URL, and then drop a 301 back to the new URL with a canonical reference against the old, changed URL. I’m glad you were trying to get a 301 in place from the old URL to the new one, but you can’t just make this stuff up as you go. It’s got to work, kids. (A saner, one-hop version is sketched just after this list.)
  • Automatic internal link dropping… Listen, internal links are great. Necessary, even. But a tool that scans your content and drops internal links across every single instance of “keyword 1” pointing to “URL”, with link titles matching the content title, is a little excessive… Particularly when you’ve managed to identify keywords like “technology” and magically end up with some 300+ internal links (some of which loop from a page back to itself) with zero diversity.

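For the record, a URL change really only needs one hop. Here is a minimal sketch of what that redirects module should have been doing, assuming an Apache setup and made-up paths:

    # Hypothetical .htaccess rule: a single permanent redirect, old URL to new URL
    Redirect 301 /old-page.html /new-page/

    # The new page then declares itself canonical in its own <head>:
    #   <link rel="canonical" href="https://www.example.com/new-page/">

One hop, one status code, and the link value the old URL earned carries straight through.
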
At the end of the day though, I love audit work. It keeps me on my toes, helps me understand things that don’t initially make sense, and when you make that connection with someone who’s genuinely limited by their existing web presence – it’s very rewarding.
