Irrelevant Keywords Can Be Costly

How Effective is Google’s Session-Based Broad Match?

Some Google AdWords advertisers are not pleased with what they are finding in Google’s Search Query Performance reports for their campaigns. These reports show advertisers which search queries are triggering their ads, and some of those queries look questionable.

Are you losing money on clicks from questionable keywords? Let WebProNews know.

You might think that an ad impression is an ad impression, but when you’re paying Google for each click, you want those clicks to come from people who are likely to buy what you’re selling.

A Wall Street Journal piece has put the spotlight on some of these advertisers, including a New York dentist who claims irrelevant keywords have cost him nearly $3,000 over the last year or so. The problem allegedly stems from Google’s session-based broad match feature, which shows ads to users not only for a single query, but also for subsequent queries in the user’s same search session.

Google explains the feature in the AdWords Help Center:

“When determining which ads to show on a Google search result page, the AdWords system evaluates some of the user’s previous queries during their search session as well as the current search query. If the system detects a relationship, it will show ads related to these other queries, too.”

“The system considers the previous queries in order to better understand the intent of the user’s current query. The added information allows the system to deliver more relevant ads.”

“This feature is an enhancement of broad match. It works by generating similar terms for each search query based on the content of the current query and, if deemed relevant, the previous queries in a user’s search session. Your ad will potentially show if one of your broad-matched keywords matches any of these similar terms.”

It sounds good in theory, but the complaining advertisers appear to disagree with what Google considers relevant. The dentist from the WSJ story cited ” ” and “[Chinese characters] in Chinatown” as examples – not exactly dentist-related. The story also cites a plastic surgeon, who counted “olivia newton john photos” among the questionable keywords.

The WSJ spoke with Google’s Nick Fox:

Mr. Fox acknowledged there are “edge” cases in which a search query “does not appear to be relevant to the ads, but the context of previous queries indicated that the user would have a strong interest in that advertiser’s ad.” In addition, he said, “a user must be interested enough in an ad to want to click on it.” He said a very small percentage of ad clicks are session-based and that advertisers can limit the scope of their campaigns to halt session-based clicks.

…

Google’s Mr. Fox said: “It has to be the case that the users, in the very recent history, searched for terms he’s advertising on.”

It’s worth noting that Google says that whenever an ad is served based on the associated keyword’s relevance to the previous search queries, the ad’s performance has no effect on that keyword’s Quality Score.

It’s also worth noting that not everyone is unhappy with the session-based clicks. Jordan McClements, commenting on a Clixmarketing post on session-based broad match, says, “If you are in a niche where there is not much search traffic, and a new client/sale is worth a lot of money to you then it is probably a good idea to keep all your ‘broad’ options open.”

John Lee, who wrote that post, says, “I want advertisers to be aware that in the case of session-based broad match – you can’t turn it off. My recommendation is to remain vigilant in reporting, primarily with Search Query Reports to ensure that the session-based query matches that do come through are relevant. If they aren’t, roll that knowledge (and those queries) into your negative keyword lists.”

Probably good advice.

Perhaps the real question is how much of the problem is Google and how much is the advertiser?

Speaking of negative keywords, Google actually just released a new feature this week to manage negative keywords across multiple campaigns with negative keyword lists.

About the Author:
Chris Crum has been a part of the WebProNews team and the iEntry Network of B2B Publications since 2003. Follow WebProNews on Facebook or Twitter. Twitter: @CCrum237


The Google Duplicate Content Penalty: the Truth

Here’s an interesting article by Peter Nisbet of Article-Writing.com on Google’s duplicate content penalty (or lack thereof). Go figure … :>

======================================================

The truth of the Google duplicate content penalty is quite simply that there is none! If that confuses you, then you have been reading too many misinformed forums or blogs, where people latch onto some popular term they don’t understand and then profess to be experts.

The only authority on the Google duplicate content penalty, and the only party qualified to define it, is Google itself, and in Google’s own words: “There is no such thing as a duplicate content penalty.” This comes directly from Google’s Webmaster Central Blog.

That should be the end of this article, at precisely 96 words excluding the title as I define my word count. But it is not. Why? Because even though this blog is operated by Google, and even though much the same has been stated by Matt Cutts, the head of Google’s webspam team, and other Google experts, people still argue and complain about the Google ‘duplicate content penalty’.

So here is the truth. You might ask who I am to know the truth, but I read all the Google blogs and their official statements, and by applying what I learn I achieve excellent results for my web pages in the search listings of Google, as well as those of Yahoo, MSN and Bing. So I am arguing from a sound base that my results can prove.

As a professional article writer whose customers trust me to get them the best results from the articles I write, I have to be very aware of each major search engine’s policies and the way its algorithms work, so I am as qualified as anybody to comment on myths such as this.

The Truth of the Google Duplicate Content Penalty

There is no duplicate content penalty. Google’s main function as a search engine is to provide a customer with the best possible results for a search, based upon the search term (keywords) that the customer has used in the Google search box.

Google’s customers are not:

1. You, who use it to get your web pages listed.

2. Advertisers who use AdWords to promote their products.

3. Corporations or individuals that use it to have their web pages listed.

4. Internet marketers who recommend that others use Google for advertising or searching.

Google’s customers are those seeking information, whether that is to solve a problem, to find where to purchase a product at the cheapest price, to check a sports result or to get directions to a specific location. Everybody that uses Google uses a search term to find some information that they need. That search term is what you and I refer to as a keyword.

If Google detects several web pages offering exactly the same content, its algorithms will select the page that best offers the information required and list it. It might also list one or two other pages offering exactly the same content if there are good reasons for doing so (e.g. more links to other relevant websites, more relevant pages on the same domain, and so on).

So not all duplicate content pages will be refused a listing. If these duplicates are articles, the algorithms will take into consideration the links from each copy, the authority of the directory on which it is published, and other factors before deciding which should be listed. It is wrong to believe that this decision has a chronological factor. But if you include a link in your article’s resource section to the page on your own site that contains the same article, your page is liable to be listed above the others: partly because of the greater number of links back to it from the other copies, and partly because your entire site is liable to be more relevant than the others to the information being sought by Google’s customer.

This is not because yours was created first, but because it better meets Google’s criterion for authoritative back-links. However, if the rest of your website is not equally authoritative, your page might be listed behind another with the same content or even not listed at all.

All of this is designed by Google so that its customer is offered the most relevant range of results for the keywords they used. That is what Google is for, and it is its ultimate objective. Google will not penalize any individual or any website for publishing what you refer to as ‘duplicate content’, and it will take your version into consideration for publication just as it would any other version.

What counts in the long run is which version Google’s algorithms believe to be most likely to provide the best possible information to the person seeking it, and if that means not publishing a whole host of duplicate information, then that is only fair, isn’t it? If you used Google to find some information, you wouldn’t want to find page after page saying exactly the same thing, would you?

No, and neither does Google. A Google listing comes from its indexing of billions of web pages that contain the keywords used by the searcher: both in relation to the entire phrase and to the individual words used in the search term. If you want your copy to be different, make some minor changes and perhaps change the form of the keywords, but most importantly, change the title and the introductory paragraph, to which the crawlers pay special attention.

You then have a better chance of your version being listed along with some of the others, but remember: the next time you use the term ‘duplicate content penalty’ you are using a term that does not exist in Google’s vocabulary for any reason other than to deny its existence. The Google Duplicate Content Penalty does not exist: that is the truth!

About The Author
For more information on the mythical duplicate content penalty visit www.article-services.com where Peter Nisbet will also explain how to earn money using article marketing.


Google’s SEO Report Card… Information Nuggets or Fool’s Gold?

While the SEO Report Card is ostensibly aimed at helping Google target potential weaknesses in its own product pages, and is of no direct use to SEOs, there is nonetheless more than a little gold to be found here if one just examines the document in a little more depth. So while the post at Google’s Webmaster Central Blog is already beginning to bristle with comments lamenting the fact that this isn’t a clear treasure map to the search-ranking mother lode, it’s worth sifting through the Report Card to see what informational nuggets are hidden inside.

Subject I: Search Result Presentation

It’s easy to see why some readers simply dismissed this document out of hand, as the first section starts off as little more than a rehash of the standard “Use Page Titles, Use Meta Descriptions” advice found in any SEO-101 manual. Only by persevering to the part about Google Sitelink triggering does one begin to suspect that there may be a little more to the report card than meets the eye. Here the authors throw out a couple of crumbs about categorizing a website and its link structure, and consolidating a site’s URLs to maximize its informational focus, with the aim of increasing the chances of Google generating Sitelinks.

Even so, it’s nothing most professionals haven’t heard before, and I suspect that by this time a lot of readers had given up, thinking that nothing interesting was in store.

Subject II: URLs and Redirects

This is where we see a little glitter among the rubble, as the section starts off with the statement: “Google products’ URLs take many different forms. Most larger products use a subdomain, while smaller ones usually use a directory form…”

In itself this is not an exceptional statement, and the chapter continues to give handy, but hardly unique, information about canonicalization, URL structure, and redirects until Page 10, where we find the following declaration:

“Subdomains require an extra DNS lookup, slightly affecting latency, which is very important at Google.”

Page load-speeds are an important factor to Google. There’s been talk and speculation about this ever since Matt Cutts dropped the first hints last year, and these days most SEOs are busily proclaiming that slow websites are now a handicap.

Haven’t they always been?

Be that as it may, this fact is not common knowledge among average webmasters, as demonstrated by a question I’m regularly confronted with over at the Google Webmaster Help Forum:

“Which is a better way to categorize my site, subdomains or folders?”

The standard answer to this question used to be “Whichever you prefer” before load times became an issue. Now, however, we find a clear indicator that a folder-based approach is much preferable, unless a category actually contains enough information to merit its own site, which is effectively what a subdomain turns it into.
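
To make that concrete, here is a simple illustration of the two URL forms in question (example.com is a stand-in domain):

    Folder form:    http://www.example.com/widgets/
    Subdomain form: http://widgets.example.com/

Per the report, the folder form spares the visitor’s browser the extra DNS lookup that the subdomain form requires.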

Subject III: On-Page Optimizations

While at first glance this chapter is more standard SEO-101 fodder, it’s where we find a sizable nugget, as the report talks about semantic markup, and how Google uses it to gauge a page’s content.

“Nothing new here; we all use H1 tags,” you might say, but you’d only be partially right, because this issue not only runs much deeper than H1 headings, it runs beyond heading tags altogether, as I’ll explain shortly. For the moment, however, let’s stay with them.

In the past few years, a great many Optimizers have reached the conclusion that only H1, and, to a degree, H2 are of any promotional value, and that lesser headings (H3 – H6) carry practically no weight at all. But let’s take a look at the following statement, taken from Page 38 of the Report:

“Most product main pages have an opportunity to use one <h1> tag, like the example above, but they’re currently only using other heading tags (<h3> in this case) or larger font styling. While styling your text so it appears larger might achieve the same visual presentation, it does not provide the same semantic meaning to the search engine that an <h1> tag does.”

For starters, it’s obvious that the lesser headings are alive and well, and being used by Google. We’re also told that Google does not, or cannot, judge the visual-context meaning of CSS-styled text. The conclusion is to use heading tags instead of CSS styles wherever your content calls for them. However, there’s more to it still. Let’s take another look at part of that statement:

“…but they’re currently only using other heading tags…”

It would appear that Google still places greater value on other semantic markup tags (em, strong, blockquote, etc.) than many professionals give them credit for these days. Otherwise why would the author specifically note the fact that Google only uses headings and font styles?

I personally know quite a few professionals who have long since abandoned most semantic markup tags in favour of CSS styles, since the prevailing attitude of designers and SEOs has been that making text bold or italic no longer carries much promotional weight, following widespread abuses in the mid-2000s and Google’s consequent algorithm updates.
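
To illustrate the difference the report describes, here is a minimal sketch (the product name and styles are invented for the example):

    <!-- Semantic markup: conveys meaning to the search engine -->
    <h1>Acme Widget 3000</h1>
    <p>The Widget 3000 is our <strong>best-selling</strong> model.</p>

    <!-- Pure CSS styling: looks much the same, but carries no semantic signal -->
    <div style="font-size: 2em; font-weight: bold;">Acme Widget 3000</div>
    <p>The Widget 3000 is our <span style="font-weight: bold;">best-selling</span> model.</p>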

And although the above statement may be a tentative one, it might just point the way back to a more HTML-based approach to web design. Indeed, if it can be taken at face-value, it’s entirely possible that those SEOs and designers advocating CSS-based, table-less design as the way forward are barking up the wrong tree. Whatever the case may be, there is undoubtedly more to the SEO Report Card than first meets the eye, and at the very least, there is a little gold to be extracted from the mass of standard information. Only by reading the full document will you be able to make an assessment yourself.

What should also be remembered is that the SEO Report Card is not aimed at high-flying SEOs or E-lebrity industry pundits, but at the intermediate webmaster for whom even the report’s basic information is of immense value, if read alongside Google’s SEO Starter Guide.

About The Author
Sasch Mayer is a writer and consultant with a career spanning well over a decade and a half. Over the years, his web design and promotion advice and Professional Keyword Research have helped countless clients diagnose and solve a wide range of site issues.


Who’s Linking to Your Web Site and What Does That Say About You to Google?

Linking is the mechanism that connects all the pages on the Internet. You’ve got links throughout your web site to let people navigate their way around. You may have links going out to other web sites that you think will be useful for your visitors. And hopefully you have links coming into your web site from independent sources.

All types of links can impact your search engine optimization results, helping determine where your web site shows up online. Though the hardest to control, inbound links pointing to your site can make the biggest impact.

At its most basic, the concept is that if several high-quality sites are linking to your web site, then Google and other search engines figure your site must be a popular, valuable resource – and they will be more likely to show it higher in their search results. In effect, your site receives “link juice” from other web pages that link to it.

However, it’s not enough to secure a couple links and then sit still. The Google PageRank algorithm looks at the pattern of links to your site as they build over time.

Building the right kind of links can bring a major payoff, while a wrong turn could get you penalized – and the Google Sandbox is not easy to dig out of.

Armed with a bit of knowledge and some creativity, you can build up valuable incoming links naturally and powerfully, avoiding the traps that plague amateurs.

Spice Up Your Links With Some Variety

There are all kinds of link farming schemes for growing links, and you need to run the other way from these. One common variety is mass reciprocal linking, where you exchange links with large numbers of other web sites that will then link back to you. Warning: Google is onto this.

While it’s perfectly advantageous to link to high-quality sites that also link to you, the key here is to cultivate a natural mix of links over time.

Is it natural to suddenly have 100 links pointing to your site, all with the same text? Of course not. When people link to you naturally, they might use your business name (SEO Advantage) or some variation on a descriptive phrase (search optimization company). If too many similar links exist, it can signal that those links were generated artificially and potentially result in penalties.

Also consider which pages on your site inbound links point to. Your home page is probably going to get the most, but it’s natural to have links pointing to specific pages inside your web site, too. Cultivate links to your services, your blog, your news pages, your articles, etc., to help those pages get indexed and build their own PageRank. Called deep links, these can help bolster your site’s overall performance.

Some links also carry a title attribute, which is set in the source code. This is a little too technical to go into in detail here, but if you can influence it, you’ll want both the link text and the title to vary a bit among the links pointing to your site. Once again, the key is to grow your links in a natural pattern.
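
For reference, here is a rough sketch of what such a link looks like in the source code (the URL, title and anchor text are hypothetical):

    <a href="http://www.example.com/services/"
       title="Search engine optimization services">search optimization company</a>

A second link pointing at the same page might use different anchor text and a different title, which keeps the overall pattern looking natural.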

Not Every Link Carries The Same Value

Links from popular, established web sites usually carry the greatest value. That’s because they have high PageRank from plenty of other people already linking to them. A link from CNN.com, for example, will carry much more weight than a link from a free press release distribution site that few people know of. Likewise, a link from www.sbdpro.com will have a greater impact than a link from a directory that uses nofollow tags.

Nofollow tags are the bane of naive link builders. It’s tempting to think you can just link to pages on your site from your Twitter tweets, Facebook and other social media applications. However, many of these sites, as well as online ads and some directories, employ “nofollow” tags that prevent the search engines from following a link to your site. In this case, it’s as if the link doesn’t exist in the eyes of the search engines. (That doesn’t mean the links aren’t valuable to people who find you and follow them; they just aren’t helping your web site show up in Google.)
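
In the source code, the difference is just a rel attribute on the link; a hypothetical example:

    <!-- The search engines are told not to count this link -->
    <a href="http://www.example.com/" rel="nofollow">Example Site</a>

    <!-- A plain link, which can pass link juice -->
    <a href="http://www.example.com/">Example Site</a>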

So, How Can A Business Build Incoming Links Naturally?

The mix of links created out on the web pointing back at your web site should avoid skewing toward any particular type. A good mix that you can influence may include:

• Directories – Professional organizations, online communities and forums, business directories, etc. can all potentially provide good links to your site. There are several premium directories that are staples in an SEO firm’s link building toolkit, like DMOZ.org. Keep in mind that your listing itself should be optimized in order to reap the full link juice benefits.

• Press Releases – Writing and submitting press releases online can help you get your news in front of more people and build links to your site. (Be sure to use best practices for writing, and evaluate your outlets carefully for good links.)

• Blogs – Link to relevant pages on your site from your blog. Build relationships online with other bloggers, too, and they may want to link back to you! Active blogs with high visibility and large followings are going to be your best bet, but you can mix it up over time targeting lesser known bloggers, too. Keep in mind that as other sites grow in PageRank, the value passed to your site will also grow.

• Create Some Link Bait – Make sure your content is so fascinating or funny that people will want to tell others about it. This is the ultimate way to build naturally growing incoming links, but of course it’s hard to do.

A sample schedule could mean every month you list your site in two good directories, link to interior site pages from a couple relevant posts in your blog, distribute one press release to news sites, and write one great article that other people may want to link to and then let them know about it.

A word about selecting outlets is in order, too. You’ll need to carefully assess each place you target in order to determine the link value it can pass on to you. For example, different press release submission sites and directories can offer a wide variety of link value. This can be time-consuming to determine, but worth it when your site’s PageRank starts to climb. (Find some information on how to evaluate outlets in this article on press release optimization.)

See Who’s Linking To Your Web Site

You can see all the links pointing to your site via a couple handy tools online. Go to Google.com to see who Google is crediting with a link to you. Enter in the search box [link:www.yourwebaddress.com] without the brackets.

Not all your links are going to show here, but you can use Google’s free webmaster tools for more in-depth research if you’re so inclined. You can also use the free Yahoo! Site Explorer to see what links Yahoo! shows pointing at your site.

Every month, make it a part of your link-building strategy to check for any new links and build relationships with more web properties. After all, a link is a compliment and a great way to network in addition to an important way to build value for your web site.

About The Author
Stone Reuning is president of SEO Advantage, a search engine optimization company that helps businesses harness the revenue generation potential of their websites. Referenced in books such as “Writing Web-Based Advertising Copy to Get the Sale” and the BusinessWeek bestseller “The New Rules of Marketing and PR”, SEO Advantage (http://www.seo-advantage.com) offers information to help small businesses succeed online.


Ways to Get Fresh Links to Old Content for Better Search Rankings

Google Doesn’t Care if You USED to Get Links

You may have gotten some good links in the past, but don’t count on them helping you forever. Old links go stale in the eyes of Google.

Do you still get links to old content? Tell us why you think that is.

Google’s Matt Cutts responded to a user-submitted question asking if Google removes PageRank coming from links on pages that no longer exist (for example, GeoCities pages that have been shut down). The answer to this question is unsurprisingly yes, but Cutts makes a statement within his response that may not be so obvious to everybody.

“In order to prevent things from becoming stale, we tend to use the current link graph, rather than a link graph of all of time,” he says. (Emphasis added)

Now, this isn’t exactly news, and to the seasoned search professional, probably not much of a revelation. However, to the average business owner looking to improve search engine performance (and not necessarily adapting to the ever-changing ways of SEO), it could be something that really hasn’t resonated. Businesses have always been told about the power of links, but even if you got a lot of significant links a year or two ago, that doesn’t mean your content will continue to perform well based on that. WebProNews has discussed the value of “link velocity” and Google’s need for freshness in the past:

Link velocity refers to the speed at which new links to a webpage are formed, and this term may give us some new and vital insight. Historically, great bursts of new links to a specific page have been considered a red flag, the quickest way to identify a spammer trying to manipulate the results by creating the appearance of user trust. This led to Google’s famous assaults on link farms and paid link directories.

But the Web has changed, become more of a live Web than a static document Web. We have the advent of social bookmarking, embedded videos, links, buttons, and badges, social networks, real-time networks like Twitter and Friendfeed. Certainly the age of a website is still an indication of success and trustworthiness, but in an environment of live, real time updating, the age of a link as well as the slowing velocity of incoming links may be indicators of stale content in a world that values freshness.

Do you think link freshness should play a role in search engine rankings? Let Chris and WebProNews know.

So how do you keep getting “fresh” links?

If you want fresh links, there are a number of things you can do. For one, keep putting out content. Write content that has staying power. You can link to your old content when appropriate. Always promote the sharing of your content. Include buttons to make it easy for people to share your content on their social network of choice. You may want to make sure your old content is presented in the same template as your new content so it has the same sharing features. People still may find their way to that old content, and they may want to share it if encouraged.

Go back over old content, and look for stuff that is still relevant. You can update stories with new posts adding a fresher take, linking to the original. Encourage readers to follow the link and read the original article, which they may then link to themselves.

Leave commenting on for ongoing discussion. This can keep an old post relevant. Just because you wrote an article a year ago does not mean people won’t still add to it, and sometimes people will link to articles based on the comments that are left.

Share old posts through social networks if they are still about relevant topics. You don’t want to just start flooding your Twitter account with tweets to all of your old content, but if you have an older article that is relevant to a current discussion, you may share it, as your take on the subject. A follower who has not seen it before, or perhaps has forgotten about it, may find it worth linking to themselves. Can you think of other ways to get more link value out of old content?

Do you get fresh links for old content? Why do you think that is? Share your thoughts with WebProNews.

Related Articles:

> Google’s Treatment of Twitter and Facebook Links
> How Press Releases Can Be Great For Search
> Link Building for Bing Rankings: Dos and Don’ts

About the author:
Chris Crum has been a part of the WebProNews team and the iEntry Network of B2B Publications since 2003. Follow WebProNews on Facebook or Twitter. Twitter: @CCrum237


Customer Connections Now Important for Google Results

Google Puts Social Results in the Mix

If you are one of those people still skeptical about the business uses of social media, you may be interested to know that Google’s Social Search is no longer just an experiment. Though it does have the beta tag on it, it is now mainstream. This is something we’ve all known would come sooner or later, but now it’s here.

Have you established enough connections to do well in social search? Leave your comment at WebProNews.

If people were already seeing different search results from one another before, that is going to be even more true now that Google is plugging results based on the individual’s social circle into any given SERP. This is one of the many ways SEO is changing, and it would appear that any business looking to get some play in Google search would do well to have as many connections established as possible, via various social networking sites and tools.

Keep in mind that the social circle is based upon information that Google has about you from your Google account. You can see your list of connections anytime from here (assuming you have a Google account). It pulls connections from your Google Contacts and any services you have listed on your Google profile. If you have Twitter listed, for example (Facebook connections are not public), anyone you are connected to through that service is fair game for potential search results.

Google’s thinking is that if the user is connected to certain people, results from those people will have relevance because you know and trust them. Google says, “You can improve social search results for your friends and contacts by linking to content you have created such as blogs, photos and videos on your Google profile.”

“We’ve been having a lot of fun with Social Search. It’s baby season here on our team — two of us just had little ones, and a third is on the way,” the company says in the announcement. “We’re all getting ready to be parents for the first time and we have lots of questions. So, what do we do? We search Google, of course! With Social Search, when we search for [baby sleep patterns], [swaddling] or [best cribs], not only do we get the usual websites with expert opinions, we also find relevant pages from our friends and contacts. For example, if one of my friends has written a blog where he talks about a great baby shop he found in Mountain View, this might appear in my social results. I could probably find other reviews, but my friend’s blog is more relevant because I know and trust the author.”

To improve your chances of appearing in social search results:

1. Make sure you have all of your important links on your Google Profile.
2. Make as many connections as possible.
3. Encourage customers to follow you via social networks.
4. Participate in social media so people will engage with you.
5. Encourage sharing of content (plenty of social media buttons are available).
6. Include social network info on business cards/signage, etc.
7. Include social network info in your online advertising.
8. There are probably many more worthwhile tips (if you have any, share them in the comments at WebProNews).

Google’s social search doesn’t end with regular web search. They’re adding it to image search, and who knows what else. Look for a lot more features to become part of social search, as Google leaves that Beta tag on. Let’s not forget that Gmail only left beta last year, and I don’t have to tell you they’ve added a lot to that over the years.

Just remember that social results will always be clearly marked as such on Google’s SERPs. They will be accompanied by a heading that says “Results from your social circle”. Still, for traditional SEO it is just one more thing to compete with as far as page real estate goes. That’s why social is a more important part of search than ever.

Google has been making many moves over the last couple of years that seem to slowly turn it into its own social network. Now that its profiles have a direct impact on search results, how people view Google in this light is likely to change significantly. Once average users start to realize the social features are being integrated into their everyday searches, they may find themselves getting sucked into using Google as more of a social tool, as opposed to just search.

What are your thoughts on Google’s social search? Let WebProNews know.

Related Articles:

> Google Profiles Go to the SERPs
> Google Launches Social Search Experiment
> Can Search Engine Optimization Survive Google?

About the Author:
Chris Crum has been a part of the WebProNews team and the iEntry Network of B2B Publications since 2003. Follow WebProNews on Facebook or Twitter. Twitter: @CCrum237


A Markup That Could Have Big Implications for SEO

RDFa Could Play an Increasingly Big Role in Search

RDFa, which stands for Resource Description Framework in attributes, is a W3C recommendation that adds a set of attribute-level extensions to XHTML for embedding rich metadata within web documents. While not everyone believes that W3C standards are incredibly necessary to operate a successful site, some see a great deal of potential for search engine optimization in RDFa.

In fact, this is the topic of a current WebProWorld thread, which was started by Dave Lauretti of MoreStar, who asks, “Are you working the RDFa Framework into your SEO campaigns?” He writes, “Now under certain conditions and with certain search strings on both Google and Yahoo we can find instances where the RDFa framework integrated within a website can enhance their listing in the search results.”

Lauretti refers to an article from last summer at A List Apart, by Mark Birbeck, who said that Google was beginning to process RDFa and microformats as it indexes sites, using the parsed data to enhance the display of search results with “rich snippets”. The result is Google listings like this:

[Image: RDFa in play]

“It’s a simple change to the display of search results, yet our experiments have shown that users find the new data valuable — if they see useful and relevant information from the page, they are more likely to click through,” Google said upon the launch of rich snippets.

Google says it is experimenting with markup for business and location data, but that it doesn’t currently display this information, unless the business or organization is part of a review (hence the results in the above example). But when review information is marked up in the body of a web page, Google can identify it and may make it available in search results. When review information is shown in search results, this can of course entice users to click through to the page (one of the many reasons to treat customers right and monitor your reputation).

Currently Google uses RDFa for reviews, and this search also displays the date of the review, the star rating, the author and the price range of an iPod, as Lauretti points out.
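
To give a sense of what this markup looks like, here is a minimal review snippet modeled on Google’s rich snippets examples of the period; it uses the data-vocabulary.org namespace, and the product, reviewer and values are invented:

    <div xmlns:v="http://rdf.data-vocabulary.org/#" typeof="v:Review">
      <span property="v:itemreviewed">iPod nano</span> reviewed by
      <span property="v:reviewer">Jane Doe</span> on
      <span property="v:dtreviewed" content="2010-01-15">January 15</span>.
      Rating: <span property="v:rating">4.5</span> out of 5.
    </div>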

Best Buy’s lead web development engineer reported that by adding RDFa the company saw improved rankings for the respective pages. They saw a 30% increase in traffic, and Yahoo evidently observed a 15% increase in click-through rates (via Steven Pemberton).

Implications for SEO

I’m not going to get into the technical side of RDFa here (see resources listed later in the article), but I would like to get into some of the implications that Google’s use of RDFa could have on SEO practices. For one, rich snippets can show specific information related to products that are searched for. For example, a result for a movie search could bring up information like:

– Run time
– Release Date
– Rating
– Theaters that are showing it

“The implementation of RDFa not only gives more information about products or services but also increases the visibility of these in the latest generations of search engines, recommender systems and other applications,” Lauretti tells WebProNews. “If accuracy is an issue when it comes to search and search results then pages with RDFa will get better rankings as there would be little to question regarding the page theme.” (Source) He provides the following chart containing examples of the types of data that could potentially be displayed with RDFa:

[Chart: RDFa Implications]

“It is obvious that search marketers and SEOs will be utilizing this ability for themselves and their clients,” says Lauretti. Take contact information specifically. “Using RDFa in your contact information clarifies to the search engine that the text within your contact block of code is indeed contact information.” He says in this same light, “people information” can be displayed in the search results (usually social networking info). You could potentially show manufacturer information or author information.
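
As a sketch of the contact-information case Lauretti describes, marked-up contact details might look something like this, again using the data-vocabulary.org vocabulary (the organization details are fictional, and the exact property names should be checked against Google’s documentation):

    <div xmlns:v="http://rdf.data-vocabulary.org/#" typeof="v:Organization">
      <span property="v:name">Acme Widgets</span>,
      <span property="v:address">123 Main Street, Springfield</span>,
      Tel: <span property="v:tel">555-0100</span>
    </div>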

RDFa actually has implications beyond just Google’s regular web search. With respect to Google’s Image search, the owner of images can also use RDFa to provide license information about the images they own. Google currently allows image searchers to have images displayed based on license type, and using RDFa with your images lets the search bots know under which licenses you are making your images available (via Mark Birbeck). There is also RDFa support for video.
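
As an illustration of the idea (the file name and license URL are placeholders), RDFa lets you make the image itself the subject of the license statement:

    <span about="/photos/sunset.jpg">
      <img src="/photos/sunset.jpg" alt="Sunset over the bay" />
      This photo is licensed under a
      <a rel="license" href="http://creativecommons.org/licenses/by/3.0/">Creative
      Commons Attribution license</a>.
    </span>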

Following are some resources where you can learn more about RDFa and how to implement it:

Google Introduces Rich Snippets
Introduction to RDFa
RDFa Primer
About RDFa (Google Webmaster Central)
RDFa to Provide Image License Info
RDFa Microformat Tagging For Your Website
For Businesses and Organizations
About Review Data (Google Webmaster Central)

Google’s Matt Cutts has said in the past that Google has been kind of “whitelisting” sites to get rich snippets, as Google feels they are appropriate, but as it grows more confident that such snippets don’t hurt the user experience, Google will likely roll the ability out more and more broadly. This is one thing to keep an eye on as the year progresses, and is why those in the WebProWorld thread believe RDFa will become a bigger topic of discussion in 2010.

WebProNews would like to thank Dave Lauretti, who contributed some findings to this piece.

Update: As I pieced together this article, Google coincidentally announced support for rich snippets for Events.

Related Articles:

> Get Your Breadcrumbs in Google for More Links in Results
> Google Makes it Easier to Tell Where Results Originate From
> Get More Links in Your Actual Google Results

About the author:
Chris Crum has been a part of the WebProNews team and the iEntry Network of B2B Publications since 2003. Follow WebProNews on Facebook or Twitter. Twitter: @CCrum237


Get Your Breadcrumbs in Google for More Links in Results

Google Talks About Getting Your Breadcrumbs In

Last summer it was discovered that Google was testing breadcrumbs in search results (breadcrumbs being the hierarchical display commonly used in site navigation; for example: Home Page > Product Page > Product A Page). Then in mid-November, Google announced that it was rolling out the use of breadcrumbs in search results on a global basis. What this means for webmasters is that if you can get your breadcrumbs into Google’s results, you essentially have more links on the results page. You have a separate link for each page in the breadcrumb trail.

Do your site’s breadcrumbs show up in Google’s results? Leave a comment at WebProNews.

The company said they would only be used in place of some URLs, mainly ones that don’t give the added context of a link the way that breadcrumbs do. Interestingly, there seems to be an incentive for those who go the breadcrumb route because of the multiple links that you just don’t get with regular search results.

[Image: Google Breadcrumbs display]

Google’s move was generally well received. This was reflected in the comments from WebProNews readers on our past coverage. For example, a commenter going by the handle Stupidscript said, “It’s definitely a good time to start wrapping your head around the notion of ‘providing context’, because the web is heading into its “semantic” period … where each link will be more or less valuable based on its relationships with and context to information found behind other links.”

Google’s use of breadcrumbs in search results is the focus of a recently submitted question to the Google Webmaster Central team. The question was, “Google is showing breadcrumb URLs in SERPs now. Does the kind of delimiter matter? Is there any best practice? What character to use is best? > or | or / or???” Google’s Matt Cutts responded:

Matt says you should have a set of delimited links on your site that accurately reflect your site’s hierarchy. He also notes, however, that it is still in the “early days” for breadcrumbs.

“Think about the situation with sitelinks,” he says. “Whenever we started out with sitelinks, it took a while before…for example, we added the ability in Google Webmaster Tools where you could remove a sitelink that you didn’t like or that you thought was bad. So we started out, and we did a lot of experiments, and we’ve changed the way that sitelinks look several times. And we have different types of sitelinks (within a page, and the standard ones you’re familiar with). So we’ve iterated over time.”

In this same way, he says, Google is in the early stage with breadcrumbs and he has seen different experiments with them. For example, there have been prototypes where the breadcrumbs were in the rich snippet gray line, above the regular snippet. “Having it in the URL is kind of nice, but it could still change over time,” he says.

He says the best advice he can give is to make sure you have a set of delimited links that accurately reflect your site’s hierarchy; that will give you the best chance of getting breadcrumbs to show up in Google, and Google will continue to work on ways to improve breadcrumbs. He says any new announcements about it will likely be made on the Google Webmaster blog.

While Matt doesn’t exactly lean one way or another on which character to use, as asked about in the submitted question, all of the examples I have seen highlighted show the “>” used. That includes examples from Google’s original announcement on the inclusion of breadcrumbs (if you see other ways, please point them out in the comments). Based on that, if I were going to choose one, I’d go with it.
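
For illustration, a delimited set of links reflecting a site’s hierarchy might look like this in the page (the URLs are hypothetical, and &gt; renders as the “>” delimiter):

    <div class="breadcrumbs">
      <a href="http://www.example.com/">Home</a> &gt;
      <a href="http://www.example.com/products/">Products</a> &gt;
      Product A
    </div>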

There are three types of breadcrumbs (as described here): path, location, and attribute. Path breadcrumbs show the path that the user has taken to arrive at a page, while location breadcrumbs show where the page is located in the website hierarchy. Attribute breadcrumbs give information that categorizes the current page. Obviously, location breadcrumbs would be the ones Google is using (although with personalized search becoming more of a factor, who knows in the future?).

About the author:
Chris Crum has been a part of the WebProNews team and the iEntry Network of B2B Publications since 2003. Follow WebProNews on Facebook or Twitter. Twitter: @CCrum237


Google Reveals Factors for Ranking Tweets

Things You Should Know About Real-Time SEO

It’s OK to say “no” to Twitter if it’s just not your thing. There’s a chance that it doesn’t fit into your strategy or help you achieve your goals. That’s cool. However, if Twitter is your thing, and search marketing is too, you may be interested in how Google ranks tweets.

Do you see Twitter as important to an effective search marketing campaign? Share your thoughts with WebProNews.

Google and Microsoft almost simultaneously announced deals with Twitter a few months back that give the companies access to tweets in real time to fuel their respective search engines’ real-time results. Microsoft immediately launched their version, but it was separate from the regular Bing search engine. Google waited a while, but eventually started incorporating real-time results right into regular Google SERPs (including not only tweets, but various other sources).

After the Twitter deals were announced, Bing came out and said, “If someone has a lot of followers, his/her Tweet may get ranked higher. If a tweet is exactly the same as other Tweets, it will get ranked lower.”

[Image: Amit Singhal]

Google was not as vocal about how it would rank tweets and other real-time results, but the company has now shed a bit of light on that via an interview with MIT’s Technology Review. David Talbot interviewed Google “Fellow” Amit Singhal, who has led development of real-time search at the company. According to him, Google also ranks tweets by followers to an extent, but it’s not just about how many followers you get. It’s about how reputable those followers are.

Singhal likens the system to the well-known Google system of link popularity. Getting good links from reputable sources helps your content in Google, so having followers with that same kind of authority theoretically helps your tweets rank in Google’s real-time search.

“One user following another in social media is analogous to one page linking to another on the Web. Both are a form of recommendation,” Singhal says. “As high-quality pages link to another page on the Web, the quality of the linked-to page goes up. Likewise, in social media, as established users follow another user, the quality of the followed user goes up as well.”

But that’s only one factor.

Do you commonly use hashtags in your tweets? If your goal is to rank in Google’s real-time search index, you may want to cut down on that practice, because according to Singhal, that is a big red flag for a lower quality tweet. This seems to be part of Google’s spam control strategy.

Another noteworthy excerpt from the interview:

Another problem: how, if someone is searching for “Obama,” to sift through White House press tweets and thousands of others to find the most timely and topical information. Google scans tweets to find the “signal in the noise,” he says. Such a “signal” might include a new onslaught of tweets and other blogs that mention “Cambridge police” or “Harry Reid” near mentions of “Obama.” By looking out for such signals, Google is able to furnish real-time hits that contain the freshest subject matter even for very common search terms.

Well, we certainly know more about Google’s strategy for tweet ranking now, but there are still plenty of questions about it. What is Google’s stance on ghost tweeting? Are Google’s ranking factors a good reason to create and follow more Twitter lists in hopes of gaining more reputable industry followers?

The factors mentioned aren’t the only ones Google employs. It’s not like Google is going to tell us everything. It also helps to keep in mind that real-time search spans far beyond just tweets. Still, Twitter is clearly a big part of it, and even the significance of tweets themselves will evolve in time.

Google says it hopes to factor geo-location data from tweets into the real-time search results at some point. Google and Twitter engineers frequently collaborate on real-time search, which Google itself says is evolving.

By the way, it stands to reason that Google’s strategy for ranking tweets probably shares similarities with how it ranks content from the other sources drawn upon for real-time search.

About the Author:
Chris Crum has been a part of the WebProNews team and the iEntry Network of B2B Publications since 2003. Follow WebProNews on Facebook or Twitter. Twitter: @CCrum237


Creating Google Ready Video Site Maps

So back in December of 2007, Google’s Video team announced changes to their sitemap protocol; specifically, changes to the way Google uses sitemaps to index your video content. This can be done for a number of different sites that have videos, but it would be an absolute killer change for sites that contain mostly video content, and we all know there are a lot of those out there.

How do I create my video sitemap according to the standards?

To create your video sitemap, even if you just have a dozen or so videos on your site and just want to test out the service to see what kind of exposure your videos get now that Google can crawl them and pick up metadata about each video, such as aspect ratio, runtime, etc., I would take a look at the step-by-step guide to creating video sitemaps here.
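
For a rough idea of the format, here is a minimal video sitemap sketch based on Google’s published schema (the URLs, title and description are placeholders; verify the current tag set against the guide above):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
            xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">
      <url>
        <loc>http://www.example.com/videos/grilling-steaks.html</loc>
        <video:video>
          <video:content_loc>http://www.example.com/video123.flv</video:content_loc>
          <video:thumbnail_loc>http://www.example.com/thumbs/123.jpg</video:thumbnail_loc>
          <video:title>Grilling steaks for summer</video:title>
          <video:description>How to grill the perfect steak every time.</video:description>
          <video:duration>600</video:duration>
        </video:video>
      </url>
    </urlset>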

How do I submit my Video Sitemap to Google?

Again, you can use the guide set up on Google’s Webmaster Guide for this topic. I have also added a screenshot below of where you would go to do the submission.

[Image: Google Video Sitemap Submission]

If anyone feels like they have more to add to this quick guide on video sitemap creation and submission, feel free to leave comments, tips and suggestions, and I will give appropriate credit to whoever submits a valid tip that I use. As always, feel free to leave comments about general SEO, SEM and local search as well. Questions are usually answered within one business day, and if a question can’t be answered as a quick FAQ, I’ll let you know that as well.

Another note worth mentioning: when your videos do end up getting indexed by Google and end up on YouTube, you will have the ability to easily embed them into certain websites, so in essence this makes your video content “portable”, which is huge.

About the Author:
Geoff Simon is currently a Web Production Assistant with Disney Interactive Media Group, working on Disney Family’s website portfolio, which includes Family.com, FamilyFun.com, CelebrityParents, iParenting and others. He also maintains a personal blog at http://simon-searchmarketing.com, home of a small boutique firm specializing in local search, new media and online public relations. Geoff has over 10 years of database, direct and web marketing experience. He can be found on Twitter @geoff_simon.

About DevWebProUK
DevWebProUK is for professional developers: those who build and manage applications and sophisticated websites. DevWebProUK delivers news and expert advice on new strategies in development.
