Even though Google harps on about high-quality content, there are other aspects of your website and hosting environment that can contribute to your rankings in search engines. Some of what we cover below is easy (if occasionally tricky) to implement, while other issues may require revisiting the entire design and architecture of your website.
Load time refers to the amount of time it takes for your server/webhost to serve a page of your website requested by a user or search engine spider. Ideally, load time is measured in milliseconds; if your server cannot serve pages quickly enough, it will cause problems with both rankings and bounce rate (people leaving a site after viewing only one page, in this case assuming they even wait long enough for your page to load). So it is important to test how quickly your hosting service is able to serve up your web pages. Keep in mind that the problem may not be your host but the way your site is set up.
You can use a multitude of tools to check the speed with which the pages of your site load, or utilize the Google Webmaster Tools to find out how quickly Google is able to load your website’s pages. You will find this information at Webmaster Tools > Crawl > Crawl Stats. You can also use Google’s PageSpeed Insights developer tool to pinpoint some of the issues which may be causing the slowness of your site.
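If you would rather measure from your own machine, here is a minimal timing sketch using only the Python standard library. The local throwaway server stands in for your real host so the example runs anywhere; in practice you would point `measure_load_time()` at one of your own pages.

```python
# Minimal sketch: time how long it takes to fetch a page in full.
# A throwaway local server stands in for your real web host so the
# example runs without touching the network.
import http.server
import threading
import time
import urllib.request

def measure_load_time(url: str) -> float:
    """Seconds elapsed between sending the request and reading
    the entire response body (transfer time included)."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    # Local stand-in server; replace the URL with your own page.
    server = http.server.HTTPServer(
        ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    url = f"http://127.0.0.1:{server.server_address[1]}/"
    print(f"Loaded in {measure_load_time(url) * 1000:.1f} ms")
    server.shutdown()
```

Run it a few times and against several pages: a single measurement can be skewed by caching or a momentary network hiccup, so the trend matters more than any one number.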
Once you have checked to see how quickly your web pages load, you will have to use that information to figure out your next step. If the pages are loading in a reasonable amount of time (less than one second per page), then you do not have much to do, unless you would like to improve your site’s speed even more. However, if your pages are taking a few seconds to load, then you certainly have some work to do to get the load time down to below one second per page. Here are some of the things you can do to improve load time:
- Minimize HTTP requests
- Optimize your images so you are not serving unnecessarily large image files for no gain in visual quality
- Use server side caching & Gzip to reduce render time and file size
- Reduce 301 redirects
- If possible, use a content delivery network
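To see why the Gzip suggestion above pays off, here is a rough illustration using Python's standard `gzip` module: it compresses a sample HTML payload and compares sizes. Real servers do this on the fly (for example via mod_deflate in Apache or the gzip directive in nginx); the HTML below is just made-up filler.

```python
# Rough illustration of why Gzip helps: compress a sample HTML
# payload and compare sizes. HTML is highly repetitive markup,
# so it typically compresses very well.
import gzip

html = ("<html><head><title>Example page</title></head><body>"
        + "<p>Repetitive markup compresses very well.</p>" * 200
        + "</body></html>").encode("utf-8")

compressed = gzip.compress(html)
print(f"original:   {len(html)} bytes")
print(f"compressed: {len(compressed)} bytes")
print(f"savings:    {100 * (1 - len(compressed) / len(html)):.0f}%")
```

Because the compressed payload is what travels over the wire, every byte saved here directly shortens the transfer portion of your load time.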
The robots.txt file, defined by the Robots Exclusion Protocol, is stored in a website's root directory (e.g., example.com/robots.txt) and is used to provide crawl instructions to the automated web crawlers (including search engine spiders) that visit your website.
Webmasters use the robots.txt file to instruct crawlers which parts of their site they would like to disallow from crawls. It can also set a crawl-delay parameter and point out the location of the sitemap file(s).
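Putting those directives together, a small robots.txt might look like the following (the paths and domain are placeholders for your own):

```
User-agent: *
Disallow: /admin/
Disallow: /tmp/
Crawl-delay: 10

Sitemap: https://example.com/sitemap.xml
```

Here `User-agent: *` applies the rules to all crawlers, the `Disallow` lines keep them out of the listed directories, and the `Sitemap` line tells them where to find your sitemap.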
Note: Not all web spiders follow robots.txt directions. Malicious bots can be programmed to ignore directions from the robots.txt file.
Users land on a 404 page when they have tried to locate or reach a page that no longer exists, often because they have either clicked a broken link or mistyped the URL. By default, a 404 page is essentially empty (save for a small notice stating that the requested page could not be found), which usually means that the user (and search engine spiders) will simply turn back and leave the site, and this is certainly not something you want happening.
To solve this issue, and avoid 301 redirecting all error pages to your home page or some other destination on your website, you can utilize a custom 404 page which gives users (and search engine spiders) links and navigational elements to follow into your site. A custom 404 page could simply be a smaller version of your website's sitemap which directs the user or search engine spider to the main sections of your site.
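How you wire up a custom 404 page depends on your server. As a sketch, assuming an Apache host that honors .htaccess files, a single directive pointing at a hypothetical `/404.html` page is enough:

```
ErrorDocument 404 /404.html
```

On nginx the equivalent would be `error_page 404 /404.html;` in your server block. Either way, the server keeps returning the 404 status code (which is what you want, so search engines know the page is gone) while serving your helpful custom page instead of the bare default.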
Of course, not every missing URL should be redirected somewhere, and that is where a custom 404 page comes into play. The bare default page will be shown otherwise, so creating a customized 404 page is highly recommended.