Friday 6 January 2017

5 Silly Yet Harmful SEO Mistakes to Avoid


#1 Close Your Site from Indexing in .htaccess

If you do SEO for a living, you have most likely heard of .htaccess. Basically, it is a configuration file that stores directives which block or grant access to a site’s document directories.

I cannot stress enough how important the .htaccess file is. If you know how to manage it, you can:
  • Create a more detailed sitemap
  • Generate cleaner URLs
  • Adjust caching to improve load time
In short, .htaccess is a crucial tool for polishing your site’s indexing process and, eventually, ranking higher in the SERPs.

However, you need to be a true pro to set up an .htaccess file correctly. A single mistake can have dire consequences. For instance, you can block your entire site from indexing, like this:

# Returns 403 Forbidden to any user agent that starts with "Google" or contains "Bing"
RewriteCond %{HTTP_USER_AGENT} ^Google.* [OR]
RewriteCond %{HTTP_USER_AGENT} ^.*Bing.*
RewriteRule ^/dir/.*$ - [F]

If you see these lines of code in your site’s .htaccess file, search bots won’t be able to crawl or index it. Ask a developer to delete the code or do it yourself.


Make sure that you check .htaccess every time you start a new project. Some SEOs promote and optimize sites for months, not realizing that all their efforts are in vain. You don’t want to be like them.
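
If you want a quick smoke test before digging into the file itself, here is a minimal sketch (Python standard library only; the URL is a placeholder) that requests a page while sending a Googlebot-style User-Agent and flags the 403 Forbidden response that the [F] rule above would produce. A 403 can have other causes, so treat it as a hint to inspect .htaccess, not as proof.

import urllib.error
import urllib.request

URL = "https://example.com/"  # placeholder: the page you want to test
GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")

request = urllib.request.Request(URL, headers={"User-Agent": GOOGLEBOT_UA})
try:
    with urllib.request.urlopen(request, timeout=10) as response:
        print(f"{URL} answered {response.status} for a Googlebot user agent")
except urllib.error.HTTPError as error:
    if error.code == 403:
        print("403 Forbidden: the server appears to block search bots; check .htaccess")
    else:
        print(f"HTTP error {error.code}: inspect the server configuration")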

#2 Discourage Search Engines from Indexing in CMS

CMS platforms like WordPress, Joomla, and Drupal can jeopardize your optimization efforts: they have built-in settings that tell search engines not to crawl or index the website. In WordPress, for example, it only takes ticking the ‘Discourage search engines from indexing this site’ box under Settings → Reading to hide the whole site from search bots.

Make sure you check this setting at least once a week. After all, anyone who has access to the CMS might accidentally tick the box, which would undoubtedly hurt your campaign.

Note: search engines may keep indexing your site even if you tick the ‘Discourage’ box. So, if you really need to keep the site out of the index, you’d better do it in the .htaccess or robots.txt files.
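
If you would rather not rely on memory, the hedged sketch below fetches a page and warns when it carries a robots “noindex” meta tag, which is roughly what the WordPress setting emits; the exact markup varies by CMS and version, and the URL is a placeholder, so treat the check as indicative rather than exhaustive.

import urllib.request
from html.parser import HTMLParser

URL = "https://example.com/"  # placeholder: your site's homepage

class RobotsMetaFinder(HTMLParser):
    """Records whether any <meta name="robots"> tag contains "noindex"."""

    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attributes = dict(attrs)
        name = (attributes.get("name") or "").lower()
        content = (attributes.get("content") or "").lower()
        if name == "robots" and "noindex" in content:
            self.noindex = True

with urllib.request.urlopen(URL, timeout=10) as response:
    html = response.read().decode("utf-8", errors="replace")

finder = RobotsMetaFinder()
finder.feed(html)
print("robots noindex found - check your CMS settings"
      if finder.noindex else "no robots noindex meta tag on this page")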

#3 Leave Your robots.txt File Entirely Open for Crawling

This one is the worst. Remember: never leave your robots.txt file entirely open for crawling, because it can result in serious privacy issues and, in the worst case, cost you the site through a data breach.

If you are a beginner, make sure you take the time to learn as much as possible about setting up and managing robots.txt files. Act immediately if you check robots.txt and see something like this:

User-Agent: *
Allow: /

This means that search bots can access and crawl every page on your site, including admin, login, cart, and dynamic pages (search results and filters). Keep your customers’ personal pages closed and protected. You also don’t want to be penalized for dozens of spammy dynamic pages.


In short, ensure that you disallow the pages that should stay hidden while allowing the pages that should be indexed, along the lines of the sketch below. It sounds simple, but it takes time to learn.
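
Before publishing a robots.txt, you can sanity-check a draft offline. The sketch below uses Python’s built-in urllib.robotparser; the disallowed paths (/wp-admin/, /cart/) and the example.com URLs are assumptions for a typical WordPress-style shop, so swap in the directories that are actually sensitive on your site.

from urllib.robotparser import RobotFileParser

# A draft robots.txt: block the admin area and cart pages, allow everything else.
ROBOTS_TXT = """\
User-agent: *
Disallow: /wp-admin/
Disallow: /cart/
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Expected crawlability for a handful of representative URLs.
checks = {
    "https://example.com/": True,
    "https://example.com/blog/some-post/": True,
    "https://example.com/wp-admin/": False,
    "https://example.com/cart/": False,
}

for url, should_be_crawlable in checks.items():
    crawlable = parser.can_fetch("*", url)
    status = "OK " if crawlable == should_be_crawlable else "FIX"
    print(f"{status} {url} -> crawlable: {crawlable}")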

#4 Forget Adding the “nofollow” Attribute to Outbound Links

SEOs know that links are still an important ranking factor. Unfortunately, many focus on backlinks and completely forget that their own sites pass link juice to other sites. What you should do is attract high-quality backlinks while keeping the link power on your own site.
So, your strategy is simple:
  • Scan your site using a site scanner tool (I use Xenu)
  • Sort links by address to locate outbound ones
  • Create an Excel file with all outbound links (or download a standard HTML report)
  • Review every link in the list and add the “nofollow” attribute where necessary (a quick single-page sketch follows this list)
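
As an alternative for a quick single-page check, here is a hedged sketch using only the Python standard library: it lists outbound links on one URL and flags those without rel="nofollow". The page URL is a placeholder, and a crawler such as Xenu remains the better choice for auditing a whole site.

import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

PAGE = "https://example.com/blog/some-post/"  # placeholder: the page to audit

class OutboundLinkAuditor(HTMLParser):
    """Collects outbound <a href> links that lack a rel="nofollow" attribute."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.base_host = urlparse(base_url).netloc
        self.missing_nofollow = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attributes = dict(attrs)
        href = attributes.get("href") or ""
        rel = (attributes.get("rel") or "").lower()
        absolute = urljoin(self.base_url, href)
        host = urlparse(absolute).netloc
        # Outbound = different host; anchors, mailto: and internal links are skipped.
        if host and host != self.base_host and "nofollow" not in rel:
            self.missing_nofollow.append(absolute)

with urllib.request.urlopen(PAGE, timeout=10) as response:
    auditor = OutboundLinkAuditor(PAGE)
    auditor.feed(response.read().decode("utf-8", errors="replace"))

for link in auditor.missing_nofollow:
    print('outbound link without rel="nofollow":', link)
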
Don’t be obsessed with the “nofollow” attribute, though. By keeping all the link juice for yourself, you provoke other SEO professionals to nofollow your links in return. In short, don’t abuse it.

#5 Fail to Check the Code in a Validator

Your website consists of code, and the cleaner that code is, the higher your site can potentially rank in the SERPs. This is because neat and valid code allows search crawlers to scan and index your site more efficiently, without leaving a single page behind.

So, every time a new project is assigned to you to promote and optimize, make sure you check the code. You don’t have to be a developer: just copy your site’s URL and paste it into the address field of the W3C Markup Validation Service, then ask a developer to fix the errors it reports. The image below demonstrates a typical validator report:

A typical W3C Markup Validation Service report

While Google doesn’t penalize websites for having invalid bits of HTML and CSS, you’re better off running the validator tool anyway. After all, it doesn’t take much time but improves your site’s performance for both users and crawlers.
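
If you prefer to automate that check, the W3C also runs the Nu Html Checker, which can return its report as JSON. The sketch below queries it for a single URL; the endpoint and parameters reflect the checker’s web-service interface as I understand it, so verify them against the current documentation (and be gentle with request volume). The page URL and contact address are placeholders.

import json
import urllib.parse
import urllib.request

PAGE = "https://example.com/"  # placeholder: the page you want validated
ENDPOINT = ("https://validator.w3.org/nu/?out=json&doc="
            + urllib.parse.quote(PAGE, safe=""))

# A descriptive User-Agent is good manners for automated requests.
request = urllib.request.Request(
    ENDPOINT,
    headers={"User-Agent": "seo-audit-sketch/0.1 (contact: you@example.com)"},
)
with urllib.request.urlopen(request, timeout=30) as response:
    report = json.loads(response.read().decode("utf-8"))

errors = [m for m in report.get("messages", []) if m.get("type") == "error"]
print(f"{len(errors)} validation error(s) reported")
for message in errors[:10]:  # show the first few; hand the full list to a developer
    print("-", message.get("message"))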

Conclusion

Search engine optimization is ever-changing, and you have to work hard to keep up with new tactics and algorithm updates. Keeping your finger on the pulse of SEO is great (a must, actually), but don’t forget the basics either. After all, silly mistakes are the most harmful ones.

 Author: Sergey Grybniak
 https://www.searchenginejournal.com/5-silly-seo-mistakes-even-professional-marketers-make/181283/?ver=181283X2

