Posts Tagged ‘SEO’

How Schema.org Will Change Your Search Results & What it Means for Marketers

30 Jun


Jeff Ente is the director of Who’s Blogging What, a weekly e-newsletter that tracks over 1,100 social media, web marketing and user experience blogs to keep readers informed about key developments in their field and highlight useful but hard to find posts. Mashable readers can subscribe for free here.

Algorithms aren’t going away anytime soon now that websites have a better way to directly describe their content to major search engines. Earlier this month, Google, Bing and Yahoo came together to announce support for Schema.org, a semantic markup protocol with its own vocabulary that could provide websites with valuable search exposure. Nothing will change overnight, but Schema.org is important enough to bring the three search giants together. Websites would be wise to study the basics and come up with a plan to give the engines what they want.

Schema.org attempts to close a loophole in the information transfer from website data to presentation as search results. As they note on their homepage: “Many sites are generated from structured data, which is often stored in databases. When this data is formatted into HTML, it becomes very difficult to recover the original structured data.”

Simply put, Schema.org hopes to create a uniform method of putting the structure back into the HTML where the spiders can read it. The implications go beyond just knowing if a keyword like “bass” refers to a fish, a musical instrument or a brand of shoes. The real value is that websites can provide supporting data that will be valuable to the end user, and they can do so in a way that most search engines can read and pass along.


How Schema.org Works


Schema.org was born out of conflict between competing standards. Resource Description Framework (RDF) is the semantic standard accepted by the World Wide Web Consortium (W3C). The Facebook Open Graph is based on a variant of RDF, which was one reason that RDF seemed poised to emerge as the dominant standard.

Until this month, that is. Schema.org went with a competing standard called microdata, which is part of HTML5.

Microdata, true to its name, embeds itself deeply into the HTML. Simplicity was a key attribute used by the search engines to explain their preference for microdata, but simplicity is a relative term. Here is a basic example of how microdata works:

<div itemscope itemtype="http://data-vocabulary.org/Person">
  <span itemprop="name">Abraham Lincoln</span> was born on
  <span itemprop="birthDate">Feb. 12, 1809</span>.
  He became known as <span itemprop="nickname">Honest Abe</span> and later
  served as <span itemprop="jobTitle">President of the United States</span>.
  Tragically, he was assassinated and died on
  <span itemprop="deathDate">April 15, 1865</span>.
</div>

A machine fluent in microdata would rely on three main attributes to understand the content:

  • itemscope delineates the content that is being described.
  • itemtype classifies the type of “thing” being described, in this case a person.
  • itemprop provides details about the person, in this case birth date, nickname, job title and date of death.

Meanwhile, a person would only see:

“Abraham Lincoln was born on Feb. 12, 1809. He became known as Honest Abe and later served as President of the United States. Tragically, he was assassinated and died on April 15, 1865.”
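To see how little work that machine has to do, here is a toy reader for the flat Lincoln example, sketched in Python. A real microdata consumer would use an HTML parser and honor itemscope nesting; the regex here only handles this simple, flat case.

```python
import re

# A trimmed copy of the Person example above, as a single string.
snippet = (
    '<div itemscope itemtype="http://data-vocabulary.org/Person">'
    '<span itemprop="name">Abraham Lincoln</span> was born on '
    '<span itemprop="birthDate">Feb. 12, 1809</span>.'
    '</div>'
)

# Pull out every itemprop/value pair from the inline <span> tags.
props = dict(re.findall(r'itemprop="([^"]+)">([^<]+)</span>', snippet))
print(props)  # {'name': 'Abraham Lincoln', 'birthDate': 'Feb. 12, 1809'}
```

The structured data the spider recovers is exactly what the database held before it was flattened into prose.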

Fast forward to the web economy of 2011 and restaurants can use the same technology to specify item properties such as acceptsReservations, menu, openingHours, priceRange, address and telephone.

A user can compare menus from nearby inexpensive Japanese restaurants that accept reservations and are open late. Schema.org’s vocabulary already describes a large number of businesses, from dentists to tattoo parlors to auto parts stores.
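As a sketch of how a content management system might emit that restaurant markup automatically: the property names below (acceptsReservations, openingHours, priceRange, telephone, servesCuisine) come from the Schema.org vocabulary, but the restaurant itself and its values are invented for illustration.

```python
# Hypothetical restaurant record, as it might sit in a CMS database.
restaurant = {
    "name": "Sakura Garden",
    "servesCuisine": "Japanese",
    "acceptsReservations": "Yes",
    "openingHours": "Mo-Su 17:00-01:00",
    "priceRange": "$",
    "telephone": "+1-555-0100",
}

# Render each field as an itemprop span inside a Restaurant itemscope.
spans = "".join(
    f'  <span itemprop="{prop}">{value}</span>\n'
    for prop, value in restaurant.items()
)
html = (
    '<div itemscope itemtype="http://schema.org/Restaurant">\n'
    f"{spans}"
    "</div>"
)
print(html)
```

Generating the markup from the database, rather than hand-coding it, is what makes microdata practical at the scale of a large site.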


Examples of Structured Data Already in Use


Structured data in search results is not new. The significance of Schema.org is that it is now going to be available on a mass scale. In other words, semantic markup in HTML pages is going prime time.

Google has so far led the way with structured data presentation in the form of “rich snippets,” which certain sites have been using to enhance their search listings with things like ratings, reviews and pricing. Google began the program in May 2009 and added support for microdata in March 2010.

A well known example of a customized structured search presentation is Google Recipe View. Do you want to make your own mango ice cream, under 100 calories, in 15 minutes? Recipe View can tell you how.


The Scary Side of Schema.org


Google, Bing and Yahoo have reassured everyone that they will continue to support the other standards besides microdata, but Schema.org still feels like an imposed solution. Some semantic specialists are asking why the engines are telling websites to adapt to specific standards when perhaps it should be the other way around.

Another concern is that since Schema.org can be abused, it will be abused. That translates into some added work and expense as content management systems move to adapt.

Schema.org might also tempt search engines to directly answer questions on the results page. This will eliminate the need to actually visit the site that helped to provide the information. Publishing the local weather or currency conversion rate on a travel site won’t drive much traffic because search engines provide those answers directly. Schema.org means that this practice will only expand.

Not everyone is overly concerned about this change. “If websites feel ‘robbed’ of traffic because basic information is provided directly in the search results, one has to ask just how valuable those websites were to begin with,” notes Aaron Bradley who has blogged about Schema.org as the SEO Skeptic.

“The websites with the most to lose are those which capitalize on long-tail search traffic with very precise but very thin content,” Bradley says. “Websites with accessible, well-presented information and — critically — mechanisms that allow conversations between marketers and consumers to take place will continue to fare well in search.”


Three Things To Do Right Now


  • Audit the data that you store about the things that you sell. Do you have the main sales attributes readily available in machine-readable form? Make sure attributes such as size, color, price, customer feedback and awards are easy for machines to read.
  • Review the data type hierarchy currently supported by Schema.org to see where your business fits in and the types of data that you should be collecting.
  • Check your content management and web authoring systems to see if they support microdata or if they are at least planning for it. Microdata is not just a few lines of code that go into the heading of each page. It needs to be written into the HTML at a very detailed level. For some site administrators it will be a nightmare, but for others who have done proper planning and have selected the right tools, it could become an automatic path to greater search exposure.
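The first step can be as simple as a script that flags records missing the attributes you would want to mark up. The field names and products below are illustrative, not a Schema.org requirement list.

```python
# Attributes we expect every product record to carry in machine-readable form.
REQUIRED = {"name", "size", "color", "price"}

# Hypothetical product records pulled from a catalog database.
products = [
    {"name": "Trail Shoe", "size": "10", "color": "red", "price": "79.00"},
    {"name": "Rain Jacket", "price": "120.00"},
]

# Report every record with gaps to fill before markup can be generated.
for product in products:
    missing = REQUIRED - product.keys()
    if missing:
        print(f"{product['name']}: missing {sorted(missing)}")
        # prints: Rain Jacket: missing ['color', 'size']
```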


Facebook Comments and SEO

16 Mar

Facebook Comments could be the most disruptive feature released by Facebook. Why? Comments are one of the largest sources of meta content on the web. Our conversations provide a valuable feedback mechanism, giving greater context to both users and to search engines.

The Walled Garden

Using Firebug you can quickly locate Facebook Comments and determine how they’re being rendered. Facebook Comments are served in an iframe.

Facebook Comments Delivered in iFrame

This means that the comments are not going to be attributed to that page or site nor seen by search engines. In short, Facebook Comments reside in the walled garden. All your comments are belong to Facebook.

This differs from implementations like Disqus or IntenseDebate where the comments are ‘on the page’ or ‘in-line’. One of the easier ways to understand this is to grab comment text from each platform and search for it on Google. Remember to put the entire text in quotes so you’re searching for that exact comment phrase.
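A toy illustration of the difference: an iframe only references a URL, so comment text served inside it never appears in the host page's HTML that a crawler fetches. The pages and the comment below are made up.

```python
# Host page with comments rendered inline (Disqus-style).
inline_page = (
    "<article>Post body</article>"
    '<div class="comments">Great post, very helpful!</div>'
)

# Host page with comments behind a Facebook iframe: only the URL is present.
iframe_page = (
    "<article>Post body</article>"
    '<iframe src="https://www.facebook.com/plugins/comments.php?href=..."></iframe>'
)

comment = "Great post, very helpful!"
print(comment in inline_page)  # True: inline comments are crawlable text
print(comment in iframe_page)  # False: the text lives only on facebook.com
```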

Disqus Comments

Here’s a comment I made at Search Engine Roundtable via Disqus.

Comment on Disqus

Here’s a search for that comment on Google.

Disqus Comment SERP

Sure enough you can find my comment directly at Search Engine Roundtable or at FriendFeed, where I import my Disqus comments.

Facebook Comments

Here’s a comment made via Facebook Comments on TechCrunch.

Comment made via Facebook Comments

Here’s a search for this comment on Google.

Facebook Comments SERP

In this instance you can’t find this comment via search (even on Bing). The comment doesn’t exist outside of Facebook’s walled garden. It doesn’t resolve back to TechCrunch.

I thought of an edge case where Facebook Comments might show up on FriendFeed (via Facebook), but my test indicates they do not.

Comments and SEO

Search engines won’t see Facebook Comments. That is a big deal. Comments reflect the user’s syntax: they capture how people really talk about a topic or product. Comments help search engines create keyword clusters and serve long-tail searches. Comments may signal that the content is still fresh, important and popular. All of that goes by the wayside.

It’s no secret that search engines crave text. Depriving Google of this valuable source of text is an aggressive move by Facebook.

Is this on purpose? I have to believe it is. I can’t know for sure but it’s curious that my Quora question has gone unanswered by Facebook, even when I’ve asked a specific Facebook Engineer to answer.

Comment Spam


Comment spam is a huge problem. You know this if you’ve managed a blog for any amount of time. Google’s implementation of nofollow didn’t do much to stop this practice. So Facebook Comments is appealing to many since the forced identity will curtail most, if not all, of the comment spam.

This also means that the meta content for sites using Facebook Comments may be more pristine. This should be an advantage when Facebook does any type of Natural Language Processing on this data. A cleaner data set can’t hurt.

Article Sentiment

Extending this idea, you begin to realize that Facebook could have a real leg up on determining the sentiment of an article or blog post. Others might be able to parse Tweets or other indicators, but Facebook would have access to a large amount of proprietary content to mine page level and domain level sentiment.

Comment Reputation

Facebook can improve on sentiment by looking at comment reputation. Here’s where it gets exciting and scary all at the same time. Facebook can map people and their comments to Open Graph objects. It sounds a bit mundane but I think it’s a huge playground.

Suddenly, Facebook could know who carries a high reputation on certain types of content. Where did you comment? How many replies did you receive? What was the sentiment of those replies? What was the reputation for those who replied to you? How many Likes did you receive? How many times have you commented on the same Open Graph object as someone else?

You might be highly influential when commenting on technology but not at all when commenting on sports.

The amount of analysis that could be performed at the intersection of people, comments and objects is … amazing. Facebook knows who is saying what as well as when and where they’re saying it.
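A hypothetical sketch of that per-topic reputation idea: score a commenter on one topic from replies and Likes. The data model and the weights are invented for illustration; this is not a Facebook API or algorithm.

```python
# Invented comment records mapped to Open Graph topics.
comments = [
    {"topic": "technology", "likes": 12, "replies": 4},
    {"topic": "technology", "likes": 7, "replies": 1},
    {"topic": "sports", "likes": 0, "replies": 0},
]

def topic_reputation(comments, topic):
    """Average engagement score for one commenter on one topic."""
    relevant = [c for c in comments if c["topic"] == topic]
    if not relevant:
        return 0.0
    # Arbitrary weighting: a Like counts twice as much as a reply.
    return sum(2 * c["likes"] + c["replies"] for c in relevant) / len(relevant)

print(topic_reputation(comments, "technology"))  # 21.5
print(topic_reputation(comments, "sports"))      # 0.0
```

Even this crude scoring separates the technology commenter from the sports commenter, which is exactly the "influential here, not there" distinction described above.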

PeopleRank


Facebook Comments could go a long way in helping Facebook create a PeopleRank algorithm that would help them better rank pages for their users. If I haven’t said it recently, Facebook’s Open Graph is just another version of Google’s Search Index.

In this instance, Facebook seems to be doing everything it can to develop an alternate way of ranking the web’s content while preventing Google from doing so. (Or am I projecting my own paranoia on the situation?)

PeopleRank could replace PageRank as the dominant way to organize content.

Traffic Channel Disruption

The traffic implications of Facebook Comments are substantial. By removing this content from the web, Facebook could reduce the ability of Google and Bing to send traffic to these sites. The long tail would get a lot shorter if Facebook Comments were widely adopted as is.

We’ve seen some anecdotal evidence that referring traffic from Facebook has increased after implementing Facebook Comments. That makes sense, particularly in the short-term.

The question is whether this is additive or a zero-sum game. In the long-run, would implementing Facebook Comments provide more traffic despite the potential loss in search engine traffic via fewer long-tail visits?

For publishers, the answer might be yes. For retailers, the answer might be no. That has a lot to do with the difference between informational and transactional search.

Even posing the question shows how disruptive Facebook Comments could be if it is widely adopted. It could be the true start of a major shift in website traffic channel mix.


The New SEO is About Relationships and Relevance

21 Feb

The New SEO is About Relationships and Relevance

This content from: Duct Tape Marketing

Last week Google announced an official update to Social Search – something they’ve been playing around with for some time now. The idea behind social search is that if a Google account user searches for something, they will get the results Google considers most relevant, now mixed in with results that Google determines are important from people in their social networks.

The news for anyone thinking about SEO is summed up by this statement from Google: “. . . relevance isn’t just about pages—it’s also about relationships.” Google has officially moved from playing with social search to altering the SEO landscape with it.

While the newly socialized results are dependent upon the surfer being logged in to their Google account, the significance from an SEO standpoint is potentially game changing. As Google continues to advance this type of thinking when it comes to placement of search results it will bring the online acts of content creation, network building and social participation to new heights.

Consider the images below – the first is a search for the term “social media system” while logged out of Google, and the second while logged in. The results are dramatically different. (Click to enlarge)

Results while logged in

Results while logged out

I’ve been begging and pleading with small business owners for the last five years to create and use blogs, claim all the digital real estate and profiles they can, and get active building networks on LinkedIn, Facebook and Twitter. While this behavior has long influenced organic search results in a more mathematical way, social search highlights its direct impact in ways that should open some eyes.

The good news is that people that have participated fully in social media, network building and content creation may have just received a very positive jolt in the search game. There is still time to adopt this behavior because it may be years before this social search function becomes fully realized, but there’s no way to deny the need to make online network building and participation a primary business practice.

Now, this doesn’t mean that good SEO practices of link building and content creation around keywords goes out the window – those factors will remain extremely important, but social network participation just got a raise in terms of becoming a ranking factor that isn’t controlled by traditional SEO practices.

Here are a few unscientific initial thoughts:

  • You can’t play without a Google Profile – if you have one go update it now and add more connections
  • Sharing content from your Google Reader account seems to get high marks right now
  • Twitter results are being added pretty quickly
  • With Google and Facebook locked in war for social, don’t expect Facebook results to matter as much

You connect accounts that you want to be part of your public profile using the Google Profile tool, but you can also connect accounts privately through your Google Account. (Google is choosing your networks through the Social Graph tool.)

Results will be spotty and odd for some time, but it’s not too early to rethink your entire approach to SEO.

 
 

Google already knows its search sucks (and is working to fix it)

12 Jan

It’s a popular notion these days that Google has lost its “mojo” due to failed products like Google Wave, Google Buzz, and Google TV. But Google’s core business — Web search — has come under fire recently for being the ultimate in failed tech products.

I can only ask: What took so long? I first blogged about Google’s increasingly terrible search results in October 2007. If you search for any topic that is monetizable, such as “iPod Connectivity” or “Futon Filling”, you will see pages and pages of search results selling products and very few that actually answer your query. In contrast, if you search for something that isn’t monetizable, say “bridge construction,” it is like going 10 years back into a search time machine.

Search has been increasingly gamed by link and content farms year by year, and users have been frogs slowly boiling in water without realizing it. (Bing has similarly bad results, a testament to Microsoft’s quest to copy everything Google does.)

But here’s what these late-blooming critics miss: Yes, Google’s search results do indeed suck. But Google’s fixing it.

The much acclaimed PageRank algorithm, which ranks search results based on the highest number of inbound links, has failed since it’s easy for marketers to overwhelm the number of organic links with a bunch of astroturfed links. Case in point: The Google.com page that describes PageRank is #4 in the Google search results for the term PageRank, below two vendors that are selling search engine marketing.
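To make the "ranks by inbound links" idea concrete, here is a minimal PageRank power iteration on a toy three-page link graph. The graph and damping factor are illustrative; real PageRank also handles dangling pages and far larger scale.

```python
links = {  # page -> pages it links out to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}
pages = list(links)
damping = 0.85
rank = {p: 1.0 / len(pages) for p in pages}  # start with a uniform rank

for _ in range(50):  # power iteration until (practically) converged
    rank = {
        p: (1 - damping) / len(pages)
        + damping * sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        for p in pages
    }

# "c" ends up on top: it collects links from both "a" and "b".
print({p: round(r, 3) for p, r in rank.items()})
```

Astroturfing attacks exactly this mechanism: manufacture enough inbound links to a page and its computed rank rises, whether or not any human endorsed it.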

Facebook, which can rank content based on the number of Likes from actual people rather than the number of inbound links from various websites, can now provide more relevant hits, and in realtime since it does not have to crawl the web. A Like is registered immediately. No wonder Facebook scares Google.

But the secret to Google’s success was actually not PageRank, although it makes for a good foundation myth. The now-forgotten AltaVista, buried within Yahoo and due to be shut down, actually returned great results by employing the exact opposite of PageRank: it favored pages that were hubs, with links to related content.

Google’s secret was that it could scale infinitely on low-cost hardware and was able to keep up with the Internet’s exponential growth, while its competitors such as AltaVista were running on expensive, big machines running processors like the DEC Alpha. When the size of the Web doubled, Google could cheaply keep up on commodity PC hardware, and AltaVista was left behind. Cheap and expandable computing, not ranking Web pages, is what Google does best. Combine that with an ever-expanding data set, based on people’s clicks, and you have a virtuous circle that keeps on spinning.

The folks at Google have not been asleep at the wheel. They are well aware that their search results were being increasingly gamed by search marketers and that this was not a battle they were going to win. The answer has been to dump the famous blue links on which Google built its business.

Over the past couple of years, Google has progressively added vertical search results above its regular results. When you search for the weather, businesses, stock quotes, popular videos, music, addresses, airplane flight status, and more, the search results of what you are looking for are presented immediately. The vast majority of users are no longer clicking through pages of Google results: They are instantly getting an answer to their question:

Google weather search results

Google is in the unique position of being able to learn from billions and billions of queries what is relevant and what can be verticalized into immediate results. Google’s search value proposition has now transitioned to immediately answering your question, with the option of sifting through additional results. And that’s through a combination of computing power and accumulated data that competitors just can’t match.

For those of us who have watched this transition closely and attentively over the past few years, it has been an amazing feat that should be commended. So while I am the first to make fun of Google’s various product failures, Google search is no longer one of them.
