Web developers need to understand 301 and 302 redirects, yet plenty of them mix the two up. There are several ways to get this wrong, so here is a quick rundown. A 301 redirect is usually used in SEO when a page has moved or been taken down and you want to send users and search engines to an appropriate new page.

If you have created duplicate pages and want them removed from Google's index by redirecting them to the main canonical version, a 301 redirect will usually pass nearly all of the link juice and equity across to the URL it points to. Although it is less SEO friendly, there are genuine reasons for an SEO to use a 302 redirect: a page may only be temporarily unavailable, or you may want to test a move to a new domain and gather customer feedback without damaging the old domain's history and rankings.

In those cases a 302 redirect is the right choice, because you are telling search engines the move isn't permanent: Google will not pass link juice across the redirect, nor will it remove the old URL from its index. This is why mixing up 301s and 302s can hurt your SEO performance. SEOs and developers often get this wrong because users won't notice any difference either way, since they are redirected regardless. The search engines will notice the difference, though, so pay special attention to it.
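
Here is a minimal sketch of the difference, using Node's built-in http module; the paths and destinations are made up for illustration. A page that has moved for good gets a 301, while a page that is only temporarily unavailable gets a 302.

```typescript
import { createServer } from "node:http";

const server = createServer((req, res) => {
  if (req.url === "/old-page") {
    // Permanent move: search engines pass equity across and index /new-page.
    res.writeHead(301, { Location: "/new-page" });
    res.end();
  } else if (req.url === "/summer-sale") {
    // Temporary move: the original URL keeps its place in the index.
    res.writeHead(302, { Location: "/holding-page" });
    res.end();
  } else {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("Regular page content");
  }
});

server.listen(3000);
```

The only thing that changes between the two branches is the status code, which is exactly why the mistake is so easy to make and so invisible to users.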

Redirecting every retired URL back to the homepage is another common problem. Sending all pages to the homepage, or even to a single top-level category, is bad for users, can look manipulative, and fails to pass much-needed link juice to the deep pages within your site that need it.
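
One way to avoid this is a one-to-one redirect map, where each retired deep URL points to the most relevant live page rather than the homepage. This is just a hedged sketch; all paths here are hypothetical.

```typescript
const redirectMap: Record<string, string> = {
  "/shop/blue-widgets": "/products/widgets/blue",
  "/shop/red-widgets": "/products/widgets/red",
  "/guides/widget-care": "/blog/widget-care-guide",
};

// Returns the mapped destination, or null so the caller can serve a 404 or 410
// instead of funnelling everything to "/".
function resolveRedirect(path: string): string | null {
  return redirectMap[path] ?? null;
}

console.log(resolveRedirect("/shop/blue-widgets")); // "/products/widgets/blue"
console.log(resolveRedirect("/unknown-page"));      // null
```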

Crawler access matters too: controlling what crawlers can reach and optimizing your crawl allowance is an often overlooked part of SEO. To do it well, you first need to be comfortable with the concept of crawl allowance. Don't assume Google will automatically crawl and index every page on your site; it has limited resources and must be selective about which pages it revisits.

Robots.txt is the first file a search engine requests when it crawls your site; it checks this file for any areas or specific URLs it should not crawl. The action to take here is to look at your site carefully and decide which sections you don't want search engines to crawl. Use some caution, though, as you don't want to block pages by accident.
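
To illustrate how a Disallow rule takes effect, here is a deliberately simplified, hypothetical checker for prefix-style rules. Real robots.txt matching is richer (wildcards, Allow precedence, per-agent groups), so treat this purely as a sketch of the idea, not a drop-in validator.

```typescript
const robotsTxt = `
User-agent: *
Disallow: /admin/
Disallow: /search
`;

// Collects the Disallow paths from a robots.txt string.
function disallowedPrefixes(robots: string): string[] {
  return robots
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.toLowerCase().startsWith("disallow:"))
    .map((line) => line.slice("disallow:".length).trim())
    .filter((path) => path.length > 0);
}

// True if the path falls under any disallowed prefix.
function isBlocked(path: string, robots: string): boolean {
  return disallowedPrefixes(robots).some((prefix) => path.startsWith(prefix));
}

console.log(isBlocked("/admin/users", robotsTxt));          // true  - crawlers skip it
console.log(isBlocked("/products/blue-widget", robotsTxt)); // false - crawlable
```

Running your important URLs through a check like this before deploying a new robots.txt is a cheap way to catch accidental blocks.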

JavaScript can be a great thing: it adds functionality to your website and enhances the user experience. Search engines still struggle to understand JavaScript, although they are getting better all the time and actively try to execute it so they can access more content. Even so, avoid placing valuable content inside JavaScript, since you want to be sure the search engines can read everything you have produced.
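
A quick sanity check is to fetch the raw HTML and look for your key copy, since that is roughly what a crawler sees if it does not render your JavaScript. The URL and phrase below are hypothetical, and the global fetch assumes Node 18+ or a browser.

```typescript
async function copyIsInRawHtml(url: string, phrase: string): Promise<boolean> {
  const html = await (await fetch(url)).text();
  // If the phrase is only injected by client-side JavaScript, it will be
  // missing from this response body.
  return html.toLowerCase().includes(phrase.toLowerCase());
}

copyIsInRawHtml("https://example.com/products/blue-widget", "hand-made blue widgets")
  .then((found) => console.log(found ? "Present in the HTML source" : "Only added by JavaScript?"));
```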

Author: Azra Jovicic