Duplicate content has always been one of the most common issues a website can have. Website owners tend to forget its long-term effects. A site might rank well despite having pages with duplicate content, but once an algorithmic penalty (such as Google Panda) hits, that is when owners realise they should have taken action long before it was too late.
Listed below are three ways to eliminate duplicate content on your website and stay clear of future algorithm penalties:
1) 301 Redirection – Most websites have similar versions of a web page showing on different URLs; this is the most common duplication problem. A typical scenario is when the non-www version of the homepage does not redirect to the www version, or vice versa, creating two instances of the homepage. To solve this, choose your preferred domain and apply a 301 (permanent) redirect on pages that serve exactly the same content.
On Apache servers, 301 redirection is typically applied through your website’s .htaccess file. You can do this by going to your site’s root folder, downloading the file to your local computer, and editing it with a plain text editor such as Notepad.
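As a minimal sketch, the non-www-to-www redirect described above might look like this in .htaccess, assuming Apache with mod_rewrite enabled and example.com standing in for your own domain:

```apache
# Permanently (301) redirect all non-www requests to the www version.
# Assumes mod_rewrite is enabled; replace example.com with your domain.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```

The `R=301` flag tells browsers and search engines the move is permanent, so link equity is consolidated onto the preferred version.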
2) Page/URL Canonicalization – This is another option for eliminating duplicate content. Some duplicate pages are created during website development, especially on sites with dynamic URLs, which tend to produce several URL variations presenting the same content. E-commerce websites commonly have this issue because they give visitors multiple ways to reach a particular page by navigating through products and categories, producing different URL paths for the same product or content.
In this case a 301 redirect is unsuitable, since it creates a poor user experience when users search for a product or page and get redirected elsewhere. Instead, this is solved by placing a rel="canonical" link tag on the pages with the same content, pointing to your preferred URL.
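A canonical tag goes in the page's head section. As a sketch, using a placeholder product URL on the hypothetical example.com domain:

```html
<!-- Placed in the <head> of every duplicate/variant page.
     Points search engines to the single preferred URL
     (the URL below is a placeholder for illustration). -->
<link rel="canonical" href="https://www.example.com/products/blue-widget" />
```

Every URL variation of the product page carries the same canonical tag, so search engines consolidate ranking signals onto the one preferred URL while users can still reach the page by any navigation path.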
3) Adding the noindex, follow tag – This is a robots meta tag that tells search engines to crawl all the links on a page but exclude the page itself from their index. So whenever you have similar pages that need to exist, this is another option you can apply.
This works much the same as canonicalization, in that it lets you keep similar content pages published without worrying about duplicate content penalties.
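The tag itself is a single line in the page's head section, for example:

```html
<!-- Placed in the <head>: asks search engines not to index this page,
     while still crawling and following the links it contains. -->
<meta name="robots" content="noindex, follow">
```

Note that `noindex` removes the page from search results entirely, whereas a canonical tag keeps one preferred version indexed, so choose based on whether the duplicate page should ever appear in search.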
Now that we have enumerated the solutions for avoiding duplicate content penalties from the major search engines, it is time to evaluate your website and see whether you have duplicate content issues that should be resolved as soon as possible.
Results-oriented Search Engine Optimisation (SEO) and Online Marketing Strategist with over 10 years of professional experience. Has helped numerous businesses across Australia leverage their SEO / Internet Marketing campaigns, guiding them to success using effective, white hat methodologies.