How to optimize your Blog for Google?

Search engine optimization for blogs does not differ much from SEO for any other site. There are three things that make blogs distinct and that can be helpful in your efforts to optimize your website.

  • Timestamp: Because blog posts show the date they were published, they can be treated as newsworthy content and rank in Google News and Google Blog Search.
  • Comments: Blogs offer big opportunities for user-generated content, and they can make an impact on the social web relatively easily by engaging in discussions on other blogs.
  • Syndication: RSS is a standard blog feature that makes it very easy for people to keep up to date with your site, or even to link to your site automatically.

What is effective link building?

Incoming links or back links are extremely important for SEO. If there are many links pointing to your site, search engines will think your content must be relevant. That is why it is important to get people to link to your site. There are many ways to effectively build links. You can ask for them or place them yourself in the social web (forum posts, blog comments, directory listings, social bookmarks etc.), but the most effective way of building links is by creating link bait. Make sure people like your content, product or tool so much that they start linking to it without being asked.

Is it best to use absolute or relative links?

There is no clear answer to the question: is it best to use absolute or relative links? For page size you would prefer relative links, because they are shorter and take up fewer bytes; if you have a lot of internal links on your site this can make a difference of over 1 KB. On the other hand, as an SEO you want to use absolute links, because that way you have a chance of getting back links from people who ‘syndicate’ your content. When you publish your content via RSS, Atom or JSON, you should definitely use absolute links to make sure that the links in your syndicated content do not break when it is published on another website.
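Resolving relative links to absolute ones before syndication can be sketched with Python's standard library. This is a minimal illustration, not part of any particular blog platform; the URLs and the `absolutize` helper are assumptions for the example.

```python
from urllib.parse import urljoin

def absolutize(base_url, href):
    """Resolve a (possibly relative) href against the page's base URL.

    urljoin leaves already-absolute URLs untouched, so it is safe to
    apply to every link before the content goes out via RSS or Atom.
    """
    return urljoin(base_url, href)

# A relative link would break when the content is republished elsewhere:
print(absolutize("http://www.example.com/blog/post.html", "/images/logo.png"))
# -> http://www.example.com/images/logo.png

# An absolute link passes through unchanged:
print(absolutize("http://www.example.com/blog/post.html",
                 "http://www.example.com/about.html"))
# -> http://www.example.com/about.html
```

Running every href in a post through a helper like this before generating the feed is one way to guarantee that syndicated copies keep working links back to your site.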

Posted via web from Coding Strategist

What to do with duplicate content?

Make sure you notice it and make sure to tell Google about it, because duplicate content is an SEO issue. Search engines like to show every piece of content only once. To be able to do this they need to decide which of the duplicates is the original. To prevent Google from thinking your content originally came from the person who ‘syndicated’ it from you, you should make sure that it is easy for Googlebot to find your content.

How to make an ‘advanced segment’ in Google Analytics for long tail organic traffic?

How to find out if “Google MayDay” affected your rankings?

There have been some algorithmic changes to Google Search around the 1st of May, hence the name ‘Google MayDay’. These changes could have affected your long tail positions in Google. Matt Cutts reacts to questions about Google MayDay in the video below. If you run Google Analytics on your website, the easiest way to find out if the changes affected your site's rankings is to create an ‘advanced segment’. With advanced segments you can make Analytics display only data for traffic generated by long tail search phrases (i.e. search phrases containing 4 or more words).

How to make an ‘advanced segment’ for long tail organic results?

  1. Log in to your Google Analytics account.
  2. If you have multiple websites configured, select the website profile you want to work with.
  3. In the left menu choose ‘Advanced segments’ under ‘My customizations’.
  4. Click ‘+ Create new custom segment’ in the top right corner of the content area.
  5. Add the ‘dimension’ ‘Medium’ from ‘Traffic Sources’ by dragging it from the left menu. (see: figure 1)
  6. Set the condition to ‘matches exactly’ ‘organic’ to make sure only organic search traffic will show up.
  7. Add an “and” statement.
  8. In the “and” statement, add the ‘dimension’ ‘Keyword’ from ‘Traffic Sources’ by dragging it from the left menu.
  9. Set the ‘condition’ to ‘matches regular expression’.
  10. Set the ‘value’ to: ^([a-z]+)([\s+]+)([a-z]+)([\s+]+)([a-z]+)([\s+]+)([a-z]+) This regular expression matches strings that start with 4 words (containing only the characters a to z), separated by one or more whitespace characters or + signs.
  11. Give the segment a name in the field indicated with ‘name segment’ and save it. (Your screen should now look like figure 2)
  12. To start using the segment go to the dashboard and click ‘Advanced Segments: All Visits’ in the top right corner of the content area. Check the new segment and uncheck ‘all visits’ and apply the changes. (see: figure 3)
  13. You are now ready to view your stats with only long tail data.
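As a sanity check, the regular expression from step 10 can be tried out in Python. This is just a sketch to show which keyword phrases the segment would and would not capture; the example queries are made up.

```python
import re

# The pattern from step 10: four words of a-z characters, separated by
# whitespace or literal + signs (as they appear in URL-encoded queries).
LONG_TAIL = re.compile(r"^([a-z]+)([\s+]+)([a-z]+)([\s+]+)([a-z]+)([\s+]+)([a-z]+)")

queries = [
    "buy cheap red shoes online",   # 5 words: matches (starts with 4)
    "how to tie a tie",             # matches
    "red shoes",                    # only 2 words: no match
    "red+shoes+for+men",            # + as separator also matches
]
for q in queries:
    print(q, "->", bool(LONG_TAIL.match(q)))
```

Note that phrases with digits or uppercase letters fall outside `[a-z]+`, so such keywords would be excluded from the segment as well.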

Google image search results: universal vs images

You would expect the image results that show up in Google's universal search to be the same as those in Google's image search, but there is a clear difference between the two. Most of the time the first X images from Google image search will show up identically for the same search phrase in universal search, but not always. And even when the images look the same, they are not always really the same. Below you see screenshots of the search results for “Jack Herer” in universal search and in image search.

google universal search result for jack herer

Universal search: http://www.google.com/search?q=jack+herer

Image search: http://images.google.com/images?q=jack%20herer

It looks like exactly the same image results are displayed in both searches, but if you look more closely you will find out this is not the case.

My theory is that in universal search, the page on which the image appears needs to contain textual content about the subject that the image's filename and alt tag suggest it is about. In image search you seem to get more results from image galleries and pages that contain several images.

What is duplicate content?

Duplicate content is exactly what the term suggests: content of which a duplicate exists somewhere on the web. The term applies to all kinds of content: text, video and images, but in the context of SEO it mainly refers to textual content. Every page, article or even a snippet of text can have a duplicate somewhere on the web. The existence of duplicate content has three main causes:

  1. One page in a website can be accessed via multiple URLs
    The most common example is a home page that can be reached via www.example.com, example.com, example.com/index.html and www.example.com/index.html. This can be prevented by using 301 (permanent) redirects.
  2. The same or very similar content is shown in different ways on different URLs in one website
    The most common examples are lists of items that can be sorted in multiple ways, with every sort state displayed on another URL.
  3. Copies of your content exist on other websites
    The most common cause is other webmasters hijacking your content. There are many lazy webmasters out there who would rather steal content from others than write their own. Not all copied content is stolen, though. YouTube videos, for instance, are offered to everyone to embed on their own website, and RSS feeds are a way for authors to syndicate their content.
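Cause 1 can be sketched in code. The normalization below maps the four home page variants mentioned above to one canonical form; the host name and the two rules are illustrative assumptions, and in practice you would issue the 301 redirects at the web-server level rather than in application code.

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_url(url):
    """Map duplicate URL variants to one canonical form:
    force the www host and strip a trailing index.html."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    if not netloc.startswith("www."):
        netloc = "www." + netloc            # example.com -> www.example.com
    if path.endswith("/index.html"):
        path = path[: -len("index.html")]   # /index.html -> /
    if path == "":
        path = "/"
    return urlunsplit((scheme, netloc, path, query, fragment))

# All four variants collapse to the same canonical address:
for u in ["http://www.example.com",
          "http://example.com",
          "http://example.com/index.html",
          "http://www.example.com/index.html"]:
    print(canonical_url(u))  # -> http://www.example.com/ each time
```

Any request whose URL differs from its canonical form would then get a 301 redirect to the canonical one, so spiders only ever index a single address per page.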

To find out if there are copies of your content out there, you could use a tool like the one offered at copyscape.com.
On-site issues with duplicate content can easily be found with free tools like the one available from virante.com.
To see how similar two pages are, you could use the tool provided at duplicatecontent.net.
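The kind of similarity check such tools perform can be approximated with Python's standard library. This is a rough sketch only; real duplicate-content tools work on rendered pages and use more robust shingling techniques, and the sample snippets here are made up.

```python
from difflib import SequenceMatcher

def similarity(text_a, text_b):
    """Return a similarity ratio between 0.0 and 1.0 for two text snippets."""
    return SequenceMatcher(None, text_a, text_b).ratio()

original = "Duplicate content is content of which a duplicate exists on the web."
copied   = "Duplicate content is content of which a copy exists on the web."

# Near-duplicates score close to 1.0, unrelated text much lower:
print(round(similarity(original, copied), 2))
print(round(similarity(original, "A completely unrelated sentence."), 2))
```

A threshold on the ratio (say, 0.8) gives a crude but workable flag for "this page is probably a duplicate of that one".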

Duplicate content issues can be fixed by making only one version of the content indexable by spiders. This can be accomplished by:

  • Deleting all duplicate pages and making sure that you fix all links that might break. (This can be a lot of work.)
  • Using robots.txt and nofollow and noindex tags to prevent spiders from indexing duplicates.
  • Using 301 redirects to redirect all traffic from the duplicate content to the original content.
  • Using the canonical tag can also be very useful to prevent penalties from Google for duplicate content.

See what more Matt Cutts has to say about duplicate content and the canonical tag on mattcutts.com. As you can see there too, the people at Google make a lot of fuss about duplicate content. Whether you can be penalized for having duplicate content on your site is not clear, but I think Google cannot do that. Since big news agencies like the BBC and CNN duplicate content provided by agencies like Reuters and Associated Press, it seems very unlikely that Google can punish you for republishing content. For further reading see the post “SEO: There is no duplicate content penalty” on practicalecommerce.com.
