I have borrowed this content from Web Distortion's very good SEO guide.
Everyone's favourite (and probably most misunderstood) on-page HTML element, meta descriptions have their place in a webmaster's kit bag: they increase the number of clicks you receive from Google. Yes, that's right – they don't play any part in the ranking of your website.
Repeat after me: meta descriptions don't help ranking; meta descriptions increase clickthroughs.
You should take care to craft unique meta descriptions for each page and, if you can (in natural language), use the keywords you would like to target for that page – not for a ranking boost, but because they will be bolded in the meta description when they match the query. This, in itself, increases the clickthrough to your website.
The importance of using unique meta descriptions is simply this: they can be an indicator to Google that the page containing them is also unique. Unique content = more pages in the index = more traffic.
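As a sketch, a unique, hand-written description sits in the `<head>` of each page (the business name and wording below are invented for illustration):

```html
<head>
  <title>Hand-made Oak Furniture | Acme Joinery</title>
  <!-- A unique, natural-language summary of *this* page; target keywords
       are bolded in the SERP snippet when they match the searcher's query -->
  <meta name="description"
        content="Browse our range of hand-made oak furniture, crafted in Belfast and delivered across the UK.">
</head>
```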
Again, a misunderstood meta tag: meta keywords are not used by Google, because they have been spammed to death over time. Meta keywords, like descriptions, play no part in the ranking of your website.
However, it's worth mentioning that a) they will do your site no harm if included, and b) Google isn't the only search engine out there on the web. You may find that other search engine bots do use meta keywords to determine some degree of relevance to your page, particularly the less sophisticated ones.
One example might be a student writing a basic crawler that looks for them. If your goal is to gain as many links as possible and increase the reach of your website, something as simple as meta keywords might see your site turn up in places that others do not.
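To illustrate that last point, a minimal crawler-style parser that picks out meta keywords might look like the sketch below (the sample page content is invented; Python's standard-library `HTMLParser` stands in for a real crawler's fetch-and-parse loop):

```python
from html.parser import HTMLParser

class MetaKeywordParser(HTMLParser):
    """Collects the content of any <meta name="keywords"> tags it sees."""
    def __init__(self):
        super().__init__()
        self.keywords = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if attrs.get("name", "").lower() == "keywords" and attrs.get("content"):
            # Keywords are conventionally a comma-separated list
            self.keywords.extend(k.strip() for k in attrs["content"].split(","))

page = '<html><head><meta name="keywords" content="seo, meta tags, sitemaps"></head></html>'
parser = MetaKeywordParser()
parser.feed(page)
print(parser.keywords)  # ['seo', 'meta tags', 'sitemaps']
```

A bot this simple has no spam filtering at all, which is exactly why the tag remains cheap insurance for reaching less sophisticated crawlers.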
Webmaster Tools, first introduced by Google (waaaay back with fat Matt), are a way for you to check a number of search-related factors. You can check indexing status (how many of your pages have made their way into the index), backlinks (places where search engines have found links to your website), robots (to test your robots.txt file), keyword searches (including positions and clickthrough data), and any errors that Googlebot finds with your content.
This additional information is available from both Google and Bing, and you might as well set up accounts with both, as you may find information that correlates between the two. Google Webmaster Tools firstly requires a Google account. Once you've set up and logged in, you will have to verify your site with a meta tag, or by uploading a file that proves you are the owner of the site.
It's well worth doing this for both search engines at the start of a project, as robots.txt and sitemaps are verified through this tool. Bing Webmaster Tools has recently been updated to introduce a couple of new features, and runs on Microsoft's Silverlight technology; a Windows Live account is required before setting that up.
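For reference, both verification meta tags live in the `<head>` of your home page and look like the sketch below (the content tokens are invented placeholders – each service generates your real token during setup):

```html
<head>
  <!-- Google Webmaster Tools verification -->
  <meta name="google-site-verification" content="PLACEHOLDER-TOKEN-FROM-GOOGLE">
  <!-- Bing Webmaster Tools verification -->
  <meta name="msvalidate.01" content="PLACEHOLDER-TOKEN-FROM-BING">
</head>
```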
Yahoo Site Explorer Verification
Yahoo Site Explorer and Open Site Explorer are both great ways of finding out who is linking to you and your competitors. To obtain additional data from Yahoo on the number of backlinks from various sites around the web, and to get other information on the data they've detected, there is another verification tag you should be setting up before a site goes live, for SEO and marketing purposes.
To a degree, Yahoo's Site Explorer behaves somewhat like Google's and Bing's Webmaster Tools, with a much more accurate indicator of backlinks.
SearchMonkey Meta
As well as the link information Yahoo provide, Site Explorer also lists the data they have recognised for SearchMonkey – this technology is likely to be integrated into Bing's search technology going forward, so get your coding hats on, folks.
The primary reason for making sure your markup uses some of these suggestions is that it is likely to increase clickthroughs from the SERPs – particularly the video markup, which will show a thumbnail alongside your result on Yahoo. There is some crossover here with the data you can use for Facebook’s Open Graph Protocol.
Sometimes, it makes sense to tell Google which parts of your website aren't really worth offering to visitors via their search engine. Typically, this would include things that are sensitive (private documents), or things such as login pages which you don't want naughty robots sniffing around. Even if you want your entire site indexed, it's still worthwhile adding a robots.txt file to prevent 404s from showing up in your raw server logs (as robots request this file).
Have you told the search engines what you don't want included in their results? Without a robots.txt file, you aren't likely to get what you want. Google provide good documentation that is worth reading – create the file in the root of your website domain, and check, check and check it again. Failure to get this basic file right could see your entire site deindexed!
Webmaster Tools for both Google and Bing include testing tools for robots.txt to make sure everything is kosher, so assuming you've set this up, there's really no excuse for getting it wrong.
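A minimal robots.txt might look like this sketch (the paths are invented examples – substitute your own, and remember a disallowed path is merely hidden from well-behaved crawlers, not secured):

```text
# robots.txt – must live at the root of the domain
User-agent: *
Disallow: /login/
Disallow: /private-documents/

# Point crawlers at your sitemap while you're here
Sitemap: http://www.example.com/sitemap.xml
```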
Sitemaps not only give crawlers a comprehensive list of URLs to check frequently, they also result in faster indexing of your site, meaning new content reaches the search engines quicker. I've touted the benefits of that before when it comes to generating traffic.
If you think about it, it makes sense to provide an easier, more structured way for Google and others to find your new content, rather than having them parse through tag-soup HTML (even though they are pretty darn good at that by now). The faster search engines are, the more profitable they are: they catch waves of temporal traffic and save money on processing the information you provide to them.
Sitemaps are supported by all of the major engines, and they managed to agree on the sitemap protocol, so they are here to stay for the foreseeable future. Sitemaps can (and should) be specified in your robots.txt file, and directly in Webmaster Tools.
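A bare-bones sitemap following the protocol looks like this (the URLs and dates are invented; `lastmod`, `changefreq` and `priority` are optional hints):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2010-08-01</lastmod>
    <changefreq>weekly</changefreq>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>http://www.example.com/seo-guide/</loc>
  </url>
</urlset>
```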
If you are on WordPress, you can generate a sitemap automatically with plugins. If you are on any other platform and have a small(ish) site, you might get away with a web-based sitemap generator. For everything else – including sites with complex, large architectures – desktop sitemap generators and a little programming will do the business for you.
There are also a number of different sitemap 'types' that many webmasters miss; they are worth examining if your site's niche happens to fit one of them.
Proper heading tags are important to help define relevance for organic keywords in the search engines, and specifying a range of headings helps to segment the page for engines to determine ‘sections’ on the page. Every document on your site should concentrate on a particular topic or feature, and heading tags contain the keywords which describe this. Combined with keywords in the title tag, they strongly indicate what a particular web page is about.
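A sketch of the kind of heading hierarchy that segments a page (the topic and wording here are invented):

```html
<h1>Hand-made Oak Furniture</h1>   <!-- one h1: the page's main topic -->

<h2>Oak Dining Tables</h2>         <!-- h2s segment the page into sections -->
<p>…</p>

<h2>Care and Maintenance</h2>
<h3>Oiling the Wood</h3>           <!-- h3s subdivide a section -->
<p>…</p>
```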
So how do bloggers decide what keywords go in a heading? Here is a bit of my own strategy for choosing them: really, it's a delicate balancing act between creating attention-grabbing headlines and making sure your post has relevance for future high-traffic searches.
Titles are probably the most important tags in your HTML for marketing your website. You should treat them as such and, as with heading tags, balance interesting titles against titles that contain the keywords which describe the content of that particular page.
Many brands miss a trick by putting their company name in every page title on their website, when in actual fact the only page that arguably needs it is the home page. The reason you should leave it off elsewhere? If you have a strong and unique brand name, your site will get found anyway from natural search – unique phrases (such as brand names) generally rank at number one naturally. Putting the company name in every title only dilutes the other keywords Google has to crunch on.
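Putting that together, a sketch of the pattern (brand and keywords invented for illustration):

```html
<!-- Home page: the one place the brand name arguably belongs -->
<title>Acme Joinery – Hand-made Oak Furniture in Belfast</title>

<!-- Inner page: lead with the keywords that describe this page -->
<title>Oak Dining Tables – Sizes, Prices and Delivery</title>
```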
Google Places Submitted
Previously known as Google Local Business Centre, 'Places' is a service Google offer for location-based businesses, allowing you to submit details about your business, including telephone number, website address and, importantly, your exact location.
Submitting data here lets your business appear on Google Maps and show up in location-sensitive searches. You will have to verify the data you submit with an automated telephone call, or by waiting for a postcard – details on the how, why and where of Google local are all available at Google's help centre. If you are wondering how to appear higher in the results once you have submitted this, I've blogged at length about other ways to improve your local search position.
Interestingly, alongside the rebrand of this service and the recent publication of an API for check-ins, this hints strongly at a move towards Foursquare territory, and I wouldn't be at all surprised to see Google using a mobile game to crowdsource their business data.
Contact Details Markup
Contact details such as your telephone number, email address and physical address should be marked up in RDFa. This gives you the best chance of associating your website with a physical location, and of an increase in appropriate local searches. Multiple business addresses should be listed on separate pages, according to the 2010 local search ranking factors.
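A sketch of contact details in RDFa, using Google's data-vocabulary terms of the time (the business name, number and address below are all invented):

```html
<div xmlns:v="http://rdf.data-vocabulary.org/#" typeof="v:Organization">
  <span property="v:name">Acme Joinery</span> –
  Telephone: <span property="v:tel">+44 28 9012 3456</span>
  <span rel="v:address">
    <span typeof="v:Address">
      <span property="v:street-address">1 Example Street</span>,
      <span property="v:locality">Belfast</span>,
      <span property="v:postal-code">BT1 1AA</span>
    </span>
  </span>
</div>
```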
Google Profile Created
Your Google profile allows you to create a homepage about yourself and include (followed) links to the websites that you manage. A full and complete profile will make its way into the search engines, providing an additional place to capture visitors to your site(s). You can list YouTube, Picasa and Flickr photos, and posts that you create on Google Buzz are also syndicated here. If you use Google Sidewiki, your Sidewiki entries also make their way on automatically, providing additional content for the search engines to find you.
If you haven't already created a profile, now is definitely the time to do so. Rumours abound that Google are about to launch a social network of their own, 'Google Me'. A full and complete Google profile is undoubtedly going to be the home of this service if and when it launches, so it makes sense to at least have it complete now – and if you really want to jump the gun, to start marketing it around the web on your other social profiles.
Having an RSS feed is paramount for syndication, and providing one can also help get your content into the search engines that bit quicker. In much the same way that sitemaps improve discoverability, RSS, being a structured format, is used by search engines to speed up indexing in many instances.
The other obvious benefit of RSS is that you are providing visitors with a way to pull your content in, and use it in their own way. Some will use it to automatically tweet your content, some will use it to populate their blog, others will use it programmatically in ways you can’t even imagine. Bottom line, RSS is a must if you are publishing regular content.
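To make the feed discoverable to browsers and crawlers alike, advertise it in the `<head>` of every page (the `/feed/` path below is the common WordPress default, used here as an example):

```html
<link rel="alternate" type="application/rss+xml"
      title="Example Blog RSS Feed" href="http://www.example.com/feed/">
```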
Every image on your site should be optimised for maximum search benefit, but if your site is particularly image-rich, it can be difficult to provide relevance to Google. Sites such as Picocool, 9GAG and ImgFave have this sorted – they introduce a social element to their services and crowdsource tags and descriptions.
A previous post of mine discusses some of the things you can do to optimise your images for search. One extra tip to add to this is the image tag inside the sitemap protocol; further information on how to go about that is available at the official Google Webmaster blog.
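As a sketch, the image extension adds an image namespace and per-URL image entries to an ordinary sitemap (the URLs and caption below are invented):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>http://www.example.com/oak-tables/</loc>
    <image:image>
      <image:loc>http://www.example.com/images/oak-table.jpg</image:loc>
      <image:caption>A hand-made oak dining table</image:caption>
    </image:image>
  </url>
</urlset>
```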
It is also worth pointing out that high-impact photos on your site will greatly enhance the chances of your content being shared, so invest time and attention in polishing your content.
Your website URLs should, if at all possible, be rewritten to include keywords that underline the main focus of the page. You have probably come across links on the web that look like this: http://www.example.com/?p=72
Thing is, search engines aren't really able to work out what is going on with those – nothing tells them what exactly p=72 means. If, on the other hand, your URL looks like this: http://www.example.com/seo-guide/
Google has a pretty good idea what the focus of that page will be before even parsing it. Take this advice and apply it sparingly – if you already have a structure that looks like the former, it may not be worth the effort to create friendly URLs, as existing links out there will break!
Some people would instinctively redirect any broken URLs that occur, but remember, folks: 301 redirects do not carry all the juice, so be prepared to wait a while to rebuild your authority – it will probably be worth the effort in the long run. If you are starting a project from scratch, take the time and effort to consult with developers to ensure they are thinking this way and following best practice from the get-go.
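On Apache, for example, friendly URLs and legacy redirects are typically handled with mod_rewrite. The rules below are an illustrative sketch only – the paths, the slug scheme and the p=72 mapping are invented for this example:

```apache
# .htaccess sketch – assumes Apache with mod_rewrite enabled
RewriteEngine On

# Serve a keyword-rich URL from the underlying script, invisibly to visitors
RewriteRule ^guides/([a-z0-9-]+)/?$ /index.php?slug=$1 [L,QSA]

# 301 an old query-string URL to its friendly equivalent
# (remember: a 301 passes most, but not all, of the link juice)
RewriteCond %{QUERY_STRING} ^p=72$
RewriteRule ^$ /guides/seo-basics/? [R=301,L]
```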
Your website architecture is an important consideration when developing a site. You may choose a flat architecture with no directories, or multiple directories. Google suggest that deep directory structures don't work quite as well from an SEO point of view. Many people suggest that having dates in URLs is a no-no, as they indicate to visitors when content is old and add unnecessary information to your URLs.
The flip side is that dates enable much deeper directory diving in Google Analytics! I can, for example, work out that content in the month of May has attracted more pageviews than February, or that 2008 was a more successful year than 2007, because Google automagically categorise content according to pseudo-directories. Just another thing to think about when deciding on your site architecture.
If your site has multiple URLs which serve the same purpose and deliver identical content, you may be suffering from 'duplicate content' issues. Simply put, this is where search engines can't work out which version is the most important one to show in the results, so they choose for themselves. This may not be what you intended, and is commonly the fault of the content management system or site architecture in question.
Thankfully, Google and the other engines have agreed to support a tag known as the canonical link element, which can specify which page is the preferred one. This results in link juice flowing correctly (if people have linked to the wrong version), and helps Google figure out which page you intended to show in the SERPs.
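The tag itself is a single line in the `<head>` of each duplicate variant, pointing at the preferred URL (the URL below is an invented example):

```html
<!-- Placed on every variant: print view, tracking-parameter URLs, etc. -->
<link rel="canonical" href="http://www.example.com/oak-tables/">
```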
Yoast has developed a canonical plugin for WordPress, but if you don't know what you are doing, seek advice before implementing it – always better to be safe than sorry when dealing with a tag that influences a search robot's behaviour.
Social media and SEO go hand in hand, contrary to what others would have you believe. It’s just another tool in your marketing kit to get the job done. Social media makes SEO that bit easier, as it facilitates the sharing of your content.
You can however make things easier on yourself, by weaving a social media strategy around your content. Have you tried turning comments off and letting the debate happen on twitter with a hashtag for example? What about running a competition on Facebook which requires people to enter by commenting on your site? Mix things up a little, and you may find that your content reaches a bigger audience as a result.
At the crux of it, more shares = more potential for secondary links. This is the same reason it is worthwhile making it to the home page of any of the major social voting sites – sure the traffic is bouncy, and not really worth much as regards conversions, but it will put your content in front of the eyeballs that may give links in the future.
Not even the greatest attention to detail for on-page SEO can mask poor content. Far too many people build brochure sites that stagnate over time, and then wonder why they are being outranked by the people developing fresh content.
When you are launching a site for a client, you should really be selling them the benefits of blogging, or creating something with a user generated element to it. If you haven’t an ongoing commitment to crafting great content online, forget about it.
Other useful titbits
Google's SEO Report Card
Optimisation Guide from Google