
Archive for the 'Search Industry News' Category

Google's Update – Future Website Ranking
The recent shakeup in Google’s search results, which set the SEO (search engine optimization) community buzzing and saw tens of thousands of webmasters watch their site ranking plummet, was in many ways inevitable. Almost all SEO companies and most savvy webmasters had a fairly good handle on what Google considered important. And since SEO, by definition, is the art of manipulating website ranking (not always with the best interests of searchers in mind), it was only a matter of time until Google decided to make some changes.

If you’ve been asleep at the SEO switch, here are a few links to articles and forums that have focused on the recent changes at Google:

Articles:
Site Pro News
Search Engine Guide
Search Engine Journal

Forums:
Webmaster World
JimWorld
SearchGuild

To date, most of the commentary has been predictable, ranging from the critical and analytical to the speculative.

Here’s a typical example from one of our SiteProNews readers:

“I’m not sure what has happened to Google’s vaunted algorithm, but searches are now returning unrelated junk results as early as the second page and even first page listings are a random collection of internal pages (not index pages) from minor players in my industry (mostly re-sellers) vaguely related to my highly-focused keyword search queries.”

So, what is Google trying to accomplish? As one author put it, Google has a “democratic” vision of the Web. Unfortunately for Google and the other major search engines, those with a grasp of SEO techniques were beginning to tarnish that vision by stacking the search result deck in favor of their websites.

Search Engine Optimization Or Ranking Manipulation?
Author and search engine expert Barry Lloyd commented as follows: “Google has seen their search engine results manipulated by SEOs to a significant extent over the past few years. Their reliance on PageRank™ to grade the authority of pages has led to the wholesale trading and buying of links with the primary purpose of influencing rankings on Google rather than for natural linking reasons.”

Given Google’s dominance of search and how important ranking well in Google is to millions of websites, attempts at rank manipulation shouldn’t come as a surprise to anyone. For many, achieving a high site ranking is more important than the hard work it takes to legitimately earn a good ranking.

The Problem with Current Site Ranking Methods
There will always be those who are more interested in the end result than in how they get there, and site ranking that is based on site content (links, keywords, etc.) and interpreted by ranking algorithms will always be subject to manipulation. Why? Because, for now, crawlers and algorithms lack the intelligence to make informed judgements on site quality.

A short while ago, author Mike Banks Valentine published an article entitled “SEO Mercilessly Murdered by Copywriters!”. The article rightly pointed out SEO’s focus on making text and page structure “crawler friendly”. Other SEO authors have written at great length about the need for “text, text, text” in page body content as well as in Meta, Heading, ALT, and Link tags. They are all correct, and yet they are all missing (or ignoring) the point, which is that the “tail is wagging the dog”. Search engines are determining what is relevant, not the people using those engines. Searchers are relegated to the role of engine critics and webmasters to being students of SEO.

SEO manipulation will continue and thrive as long as search engines base their algorithms on page and link analysis. The rules may change, but the game will remain the same.

Therein lies the problem with all current search engine ranking algorithms. SEOs will always attempt to position their sites at the top of search engine results whether their sites deserve to be there or not, and search engines will continue to tweak their algorithms in an attempt to eliminate SEO loopholes. If there is a solution to this ongoing battle of vested interests, it won’t come from improving page content analysis.

Incorporating User Popularity Into Ranking Algorithms
The future of quality search results lies in harnessing the opinions of the Internet masses – in other words, by tying search results and site ranking to User Popularity. Google’s “democratic” vision of the Web will never be achieved by manipulating algorithm criteria based on content. It will only be achieved by factoring in what is important to people, and people will always remain the best judge of what that is. The true challenge for search engines in the future is how to incorporate web searcher input and preferences into their ranking algorithms.

Website ranking based on user popularity – the measurement of searcher visits to a site, pages viewed, time viewed, etc. – will be far less subject to manipulation and will ensure a more satisfying search experience. Why? Because web sites that receive the kiss of approval from 10,000, 100,000 or a million plus surfers a month are unlikely to disappoint new visitors. Although some websites might achieve temporary spikes in popularity through link exchanges, inflated or false claims, email marketing, pyramid schemes, etc., these spikes would be almost impossible to sustain over the long-term. As Lincoln said “You can fool some of the people all the time. You can fool all the people some of the time. But you can’t fool all the people all the time.” Any effective ranking system based on surfer input will inevitably be superior to current systems.
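To make the idea concrete, here is a minimal sketch, in Python, of how the user signals mentioned above (visits, pages viewed, time viewed) might be folded into a single popularity score. The metric names and weights are invented for the example; no engine's actual formula is implied.

```python
# Toy popularity score (illustrative only, not any engine's real formula).
from dataclasses import dataclass
import math

@dataclass
class SiteStats:
    monthly_visits: int        # unique searcher visits per month
    pages_per_visit: float     # average pages viewed per visit
    seconds_per_visit: float   # average time spent per visit

def popularity_score(s: SiteStats) -> float:
    # Log-scale the visit count so a million-visit site doesn't drown out
    # everything else, then reward engagement (depth and time on site).
    reach = math.log10(max(s.monthly_visits, 1))
    engagement = 0.5 * s.pages_per_visit + 0.5 * (s.seconds_per_visit / 60)
    return reach * (1 + engagement)

print(popularity_score(SiteStats(1_000_000, 3.2, 90)))  # popular, sticky site
print(popularity_score(SiteStats(1_000_000, 1.1, 10)))  # popular but shallow
```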

To date, none of the major search engines have shown a serious interest in incorporating user popularity into their ranking algorithms. As of this writing, ExactSeek is the only search engine that has implemented a site ranking algorithm based on user popularity.

Resistance to change, however, is not the only reason user data hasn’t made its way into ranking algorithms. ExactSeek’s new ranking algorithm was made possible only as a result of its partner arrangement with Alexa Internet, one of the oldest and largest aggregators of user data on the Web. Alexa has been collecting user data through its toolbar (downloaded over 10 million times) since 1997 and is currently the only web entity with a large enough user base to measure site popularity and evaluate user preferences in a meaningful way.

The Challenges Facing User Popularity Based Ranking
1. The Collection Of User Data: In order for web user data to play a significant role in search results and site ranking, it would need to be gathered in sufficient volume and detail to accurately reflect web user interests and choices. The surfing preferences of a few million toolbar users would be meaningless when applied to a search engine database of billions of web pages. Even Alexa, with its huge store of user data, is only able to rank 3 to 4 million websites with any degree of accuracy.

2. Privacy: The collection of user data obviously has privacy implications. Privacy concerns have become more of an issue in recent years and could hinder any attempt to collect user data on a large scale. The surfing public would need to cooperate in such an endeavor and be persuaded of the benefits.

3. Interest: Web search continues to grow in popularity with more than 80% of Internet users relying on search engines to find what they need. However, with the exception of site owners who have a vested interest in site ranking, most web searchers have not expressed any serious dissatisfaction with the overall quality of search results delivered by the major engines. Harnessing the cooperation and active participation of this latter and much larger group would be difficult, if not impossible.

The future of web search and website ranking belongs in the hands of all Internet users, but whether it ends up there depends on how willing they are to participate in that future.

Author Bio:
Mel Strocen is CEO of the Jayde Online Network of websites. The Jayde network currently consists of 12 websites, including ExactSeek.com (http://www.exactseek.com) and SiteProNews.com (http://www.sitepronews.com). Mel Strocen (c) 2003

Google’s “Florida Dance”
As most people have noticed by now, the last Google update, dubbed “Florida” has generated a lot of uproar amongst the business community and the Internet economy. This latest update is a clear example of the power of Google and what it can mean to a large number of commercial websites that depend on their rankings for the success of their business model and their viability.

There are many old and well-established commercial websites that have lost considerable rankings and that suffered a staggering loss of traffic in the process.

To the best of my knowledge, this is the largest and most important algorithm change in the 5-year history of Google. Since November 15, Google has started to impose what some call the Over Optimization Penalty, or OOP. It’s still a bit too early to say what effect the OOP will have on some sites, but it appears to me that it may be a form of static penalty, meaning that whatever modifications you make now will probably not get your site back into the top results, at least not until the next Samba!

Therefore, you aren’t likely to see affected sites return to the top for their target phrases until Google releases the OOP from the sites it was applied to. My feeling is that nothing will change until the next Google dance, which probably won’t happen until December 15 or a bit later.

On December 2nd, a site owner came to me with a website that a previous SEO, a competitor of mine, seemed to have “overly optimized” for certain important keywords. Although this article is certainly NOT meant to help anyone spam the search engines in any way, the following are techniques that can be used as a last recourse if you feel some of the pages in your site have been affected in such a way.

Fine-Tuning A Website To The OOP
As is most apparent to many, Google is still experimenting with its new algorithm and continues to adjust as we are heading into the busy Christmas and Holiday season. One thing is certain: there appears to be no stability in the new algorithm.

For that reason, I would just sit and wait to see what the December update brings before committing to any changes whatsoever.

Detecting If The OOP Has Been Applied To A Particular Page
You can determine whether your site has had an OOP penalty applied by simply entering your keywords in a Google search, adding an exclusion (“-”) for a unique string of nonsense characters. Example:

your main keyword or key phrase -blahblahblah

The -blahblahblah string of text doesn’t matter one bit; it’s simply a string of nonsense characters. You can type in anything you wish and it will work.

What is important is the “-” character: excluding a term in this way appears to bypass the new spam filter that causes the OOP penalty.

After running this simple test, if you see your page reappearing under these conditions, it’s highly likely the OOP is in effect for that particular keyword or key phrase on that page.

Note that Google might modify how this type of search works in an effort to prevent people from viewing results without the penalty filter applied. However, at the time of writing, this technique enabled me to detect which penalized web pages had been affected by the OOP, so it is a fairly accurate test.
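As a convenience, here is a small Python sketch that builds the two query URLs described above, the normal search and the same search with a nonsense exclusion term, so the two result pages can be compared by hand. The URL format is simply the standard public Google search URL, and the nonsense string is arbitrary.

```python
# Build the two queries described above: a normal search and the same
# search with a nonsense term excluded (e.g. -blahblahblah). Comparing
# the two result pages by eye suggests whether the filter is in effect.
from urllib.parse import urlencode

def oop_test_urls(phrase: str, nonsense: str = "blahblahblah"):
    base = "http://www.google.com/search?"
    normal = base + urlencode({"q": phrase})
    excluded = base + urlencode({"q": f"{phrase} -{nonsense}"})
    return normal, excluded

normal, excluded = oop_test_urls("construction building materials")
print("Filtered (normal) search:    ", normal)
print("Unfiltered (exclusion) search:", excluded)
```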

More On The OOP Penalty
The OOP penalty mostly relates to certain incoming links to the page that is penalized. On-site absolute links might also trigger the OOP. In a particular case where I saw a site receive an OOP ranking penalty, there were incoming text links that exactly matched the keywords for which I made the search. Thus, if the incoming links exactly match the text in the title of your page, your headline tags or your body text, it is my observation that the OOP penalty could be triggered by Google’s new algorithm.

Of course, it is extremely premature at this point to speculate on the exact formula Google is using to achieve this, but if a rather large number of your incoming links match exactly with the keyword or the key phrase used in the search box, it is in fact possible to trigger the over-optimization penalty in my view.
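As a concrete illustration of that observation, the following Python sketch estimates what fraction of a page's inbound links use anchor text that exactly matches a target phrase. The link list here is hypothetical; in practice it would come from a backlink report.

```python
# What share of inbound anchor texts exactly match the target phrase?
def exact_match_ratio(anchor_texts: list[str], phrase: str) -> float:
    phrase = phrase.strip().lower()
    matches = sum(1 for a in anchor_texts if a.strip().lower() == phrase)
    return matches / len(anchor_texts) if anchor_texts else 0.0

inbound = [
    "Construction & Building Materials",
    "Construction & Building Materials",
    "our supplier's site",
    "Construction & Building Materials",
]
ratio = exact_match_ratio(inbound, "construction & building materials")
print(f"{ratio:.0%} of inbound anchors are exact matches")  # 75%
```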

The Ad Word Conspiracy Debate
Since all of this can seem a bit strange, since many opinions and theories have circulated in forums and articles since this started in mid-November, and especially since many (including me) have discovered that the OOP penalty doesn’t seem to be applied consistently across the Web, some people have suggested that Google is at the center of a conspiracy to force companies and site owners to spend a lot more money on its AdWords PPC campaigns. Personally, I do NOT subscribe to that theory, and I don’t think it would be in Google’s best long-term interest to do that.

However, the OOP penalty does in fact appear to be applied mostly to higher-traffic keywords and key phrases, the sort that commercial websites use the most. In retrospect, this does lend some credibility, for what it’s worth, to those who see a conspiracy in all of this.

Make no mistake about this. It is in fact the biggest and most drastic algorithm change I have ever seen in the short 5-year history of Google. If a few pages in your site seem to suffer from the OOP penalty, let’s look at some ways to get out of this.

Some Solutions To The OOP Problem
For the most part, the OOP penalty comes from incoming links that perfectly match the keywords or key phrases in the title tag or headline of the affected page. There are a couple of ways to repair this. The first would be to reduce the number of incoming links to your affected pages.

Another good alternative would be to modify the title or headline of your affected page (or pages), so that you have fewer occurrences of the exact keyword or key phrase that originally triggered the OOP penalty. For example let’s say the key phrase in your incoming link that carries the penalty is Construction & Building Materials and you have Construction & Building Materials in your title tag, as well as in your description tag and in your H1’s or H2’s. I would change the title tag to be ‘Special Construction and Building Supplies’ and see what that does.

You should also do the same for the body text of that page. Ideally, you want to achieve a ‘good balance’ that Google would consider spam-free and not artificial. It is my thinking that too many incoming links containing your exact key phrases, combined with too much on-page optimization and exact-matching text in the target areas (the title tag and the headline tags such as H1 and H2), will likely trigger the OOP filter.
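One quick way to gauge the kind of on-page repetition described above is to count exact-phrase occurrences in the 'target areas' of a page. The Python sketch below does this with naive regular expressions over a hypothetical HTML snippet; it is illustrative only and says nothing about how Google actually evaluates pages.

```python
# Count exact-phrase hits in the title, the headings, and the whole page.
import re

def phrase_hits(html: str, phrase: str) -> dict:
    p = re.escape(phrase.lower())
    text = html.lower()
    titles = " ".join(re.findall(r"<title>(.*?)</title>", text, re.S))
    headings = " ".join(re.findall(r"<h[1-6][^>]*>(.*?)</h[1-6]>", text, re.S))
    return {
        "title": len(re.findall(p, titles)),
        "headings": len(re.findall(p, headings)),
        "whole page": len(re.findall(p, text)),
    }

page = """<html><head><title>Construction & Building Materials</title></head>
<body><h1>Construction & Building Materials</h1>
<p>We sell construction & building materials of every kind.</p></body></html>"""
print(phrase_hits(page, "construction & building materials"))
```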

Conclusion
In light of all that has happened with update Florida, and with the drastic changes that Google has implemented to its PageRank™ algorithm, you should expect more changes to this algorithm, perhaps even before December 15, at which time some observers think Google might start dancing again.

As I have written in previous articles, and as I said when interviewed and quoted by the New York Post on November 27, I still believe that if your site has had few or no OOP penalties, you should stay put and not do anything for the time being.

This article is simply meant to show a few techniques I have used to limit damage already done to a site by an SEO firm that over-optimized it prior to the critical November 15 date. If your site was optimized using only Google-accepted techniques, according to their Terms of Use agreement, it should not have suffered any OOP penalties whatsoever.

Author:
Serge Thibodeau of Rank For Sales

The Google Update Uproar
‘Be careful what you wish for, it may come true’

While at Ad-Tech, I lamented the clogging of Google’s results with spam filled sites. I asked Google to clean up its index. Although I’m sure there’s no connection between the two (unless Sergei and Larry are paying a lot more attention to me than I thought) Google responded just a few weeks later with the Florida update. And boy, have they responded big time!

If you haven’t ventured into an SEO forum for a while, you might not have heard of the Florida update. It’s Google’s latest dance, and it’s a doozy. It appears that Google is trying to single-handedly shut down the entire affiliate industry.

The scene is awash with guessing and speculation. Was it a Google mistake? A plot to force advertisers to move to AdWords for placement? Barry Lloyd did a good job of trying to bring sense to the mayhem. I’d like to jump in with some further research we’ve done and my own thoughts of what’s happening with the Google index.

A Florida Guide
First of all, the Florida update was rolled out November 16th. It appears to be a new filter that is applied to commercially based searches, triggered by certain words in the query. The filter clears out many of the sites that previously populated the top 100. In several tests, we found the filter generally removes 50 to 98% of the previously listed sites, with the average seeming to be 72%. Yes, that’s right: 72% of the sites that used to be in Google are nowhere to be seen!
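For anyone who wants to repeat this kind of measurement, here is a small Python sketch that computes what share of a 'before' result set has vanished from an 'after' result set. The site lists are placeholders; they could come from pre- and post-Florida snapshots of the top results, or from a filtered versus exclusion-term comparison.

```python
# Percentage of previously listed sites that no longer appear.
def percent_removed(before: list[str], after: list[str]) -> float:
    before_set, after_set = set(before), set(after)
    removed = before_set - after_set
    return 100.0 * len(removed) / len(before_set) if before_set else 0.0

before = ["site-a.com", "site-b.com", "site-c.com", "site-d.com", "site-e.com"]
after = ["site-c.com", "directory-x.com", "directory-y.com"]
print(f"{percent_removed(before, after):.0f}% of the original sites are gone")  # 80%
```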

Who’s Missing?
The target is pretty clear. It’s affiliate sites, with domains that contain the keywords, and with a network of keyword links pointing back to the home page of the site. The filter is remarkably effective in removing the affiliate clutter. Unfortunately, legitimate commercial sites with lower PageRank are being removed as well. There seems to be a PageRank threshold above which sites are no longer affected by the filter. We’ve seen most sites with PageRank 6 or above go through unscathed.

And the Secret Word Is…
The filter also appears to be activated only when search queries contain certain words. For example, a search for ‘Calgary Web Design Firms’ activated the filter and cleared out 84% of the sites, while a search for ‘Calgary Database Development’ didn’t activate it. Search volumes are roughly equivalent for both phrases. The filter seems to be activated by a database of phrase matches, and doesn’t appear to be affected by stemming. For example, ‘Panasonic fax machines’ activates the filter, but none of these words as a single search phrase does. ‘Fax machines’ activates the filter, but ‘Panasonic machines’ doesn’t.

Also, it seems that only a few single word searches activate the filter. We found that jewelry, watches, clothing, swimwear, shelving, loans and apartments all activated the filter. Other terms that you would think would be bigger targets for spam, including sex, cash, porn, genealogy, MP3, gambling and casino don’t activate the filter. Obviously, when you look at these words, Google is more concerned with commercialization than spam.

Volume, Volume, Volume
Another factor in whether the filter is tripped seems to be search volume. Commercial searches with volumes over 200 per month (as determined by Overture’s search term suggestion tool) seemed to trip the filter. Searches under that threshold seemed to remain unfiltered. For example, a search for ‘Oregon whitewater rafting’ (about 215 searches last month) activated the filter, while a search for ‘Washington whitewater rafting’ (about 37 searches last month) didn’t.

What is Google Thinking?
Obviously, given the deliberate nature of the implementation, this isn’t a hiccup or a mistake by Google. This was a well thought out addition to the algorithm. And in the most competitive searches, it produces much better results than did the ‘pre-Florida’ index. If you search for ‘New York Hotels’ today, you’ll find almost all of the affiliate clutter gone.

Where the problem occurs is in the less competitive searches, where there’s not a sufficient number of PageRank 6 or higher sites to fill the vacuum caused by the filter. If you do a search now for most phrases you’ll find the results are made up of mainly directory and irrelevant information sites. In cleaning house, Google has swept away many sites that should have stuck. As an example, visit Scroogle.org and search for ‘Calgary web design firms’. Scroogle is from the deliciously twisted minds of Google Watch, and gives graphic representation of the bloodshed resulting from Florida. In the pre-Florida results, the top 10 (all of which were wiped out by the filter) included 6 Calgary based web designers and 1 in Vancouver (two of the remaining results were additional pages from these firms). The other result was a directory called postcards-usa.com with a page of design firms from around North America. Eight of the 10 results were directly relevant, one was somewhat relevant and one was of questionable relevancy for the geographically specific search.

In the filtered results, there is not one web design firm from Calgary. The top 4 listings are directory site pages, two of which are not even specific to Calgary. Ranking 5 and 6 belong to Amazon.com pages selling a book on web design (nothing to do with Calgary other than a reader review from someone who lives there). Rankings 7 and 8 go to pages about evolt.org, a non profit organization of web designers, and a profile on a Calgary based member. Listing 9 goes to the web design page of an abysmal web directory, again not specific to any region. And listing 10 goes to an obvious link farm. Of the 10 results, none of them were relevant.

Google’s Next Move?
Pulling out the crystal ball, which in hindsight was amazingly accurate 2 weeks ago, here’s what I think will happen. The Florida filter will not be revoked, but it will be tweaked. It’s doing an amazing job on the ultra competitive searches, but the algorithm will be loosened to allow inflow of previously filtered sites to bring relevancy back to the less competitive searches. Hopefully, the sites finding their way back into the index will be better quality legitimate commercial sites and not affiliate knock offs. Google has to move quickly to fix the relevancy for these searches, because they can’t afford another blow to the quality of their search results.

I really don’t believe that Google purposely implemented the filter to drive advertisers to AdWords, but that is certainly a likely side effect. The most dramatic impact will be the devastation of the affiliate industry. Just 3 short weeks ago I listened to 4 major internet marketers say they didn’t bother with organic SEO because their affiliate partners did it for them. Those days are over. If Google was targeting anyone with Florida, it was affiliate sites. A number of forum posts indicated that Google was taking aim at SEO. I don’t believe so. I think Google is trying to wipe out bad SEO and affiliate programs and unfortunately there are a number of innocent bystanders who got hit in the crossfire. But every indication from Google itself (both from posts to forums and in replies to help requests) seems to indicate that Florida is a work in progress.

Author:
Gord Hotchkiss

Bayesian Spam Filter & Google
On November 26, Stephen Lynch, a journalist at the New York Post, phoned me for an interview about an article I had written the previous day. The article concerned the current November Google update “dance”, dubbed “Florida”.

The following day, Mr. Lynch’s article was published in the New York Post, offering his comments and, without being technical, explaining some of the negative effects such an update can have on the average site owner or Webmaster.

As the latest “Florida” monthly Google update ‘dance’ has shown us, a great deal rides on having a website highly ranked on the Internet’s number one search engine, Google. If your search rankings drop as precipitously as some did, and without warning, it can spell a devastating blow to an online store or commercial website.

In the last 10 days, a lot of articles have also been written by my colleagues, some in the SEO field and some, like Seth Finkelstein, who are more concerned with the free flow of information that the Internet can provide.

In this article, I will attempt to describe some of the spam-filtering techniques that Google is reported to be using during this Florida “dance”. This spam-filtering technology is based on the Bayesian algorithm.

The Inner-Workings of a Spam Filter for a Search Engine
For quite a long time now, Google’s search results have been under attack by search-engine spammers who continuously attempt to skew search results, in the end cluttering the engine’s database with irrelevant information.

With the ever-growing popularity of Google, and as it handles more and more searching all over the Web, the temptation to foul the search results has become attractive to certain spammers, leading to substantial degradation in the quality and relevance of Google’s search results. Since Google is mostly concerned with delivering quality, relevant search results, it is now cracking down on these unscrupulous spammers with new spam-filtering algorithms that use Bayesian filtering technology.

At the end of October 2003, Google deployed its new Bayesian anti-spam algorithm, which appeared to make its search results crash when a previously identified spam site would normally have been displayed. In fact, the search results were completely aborted when such a spam site was encountered. See “Google Spam Filtering Gone Bad” by Seth Finkelstein for more technical information on how this spam elimination algorithm works at Google.

The First Shoe That Fell
On or around November 5th, this spam problem was in fact reduced significantly as the new Bayesian anti-spam filters kicked in. Although not perfect, the new Bayesian spam-filtering technology seemed to have worked, albeit with crashes in some cases.

On or about November 15th 2003, Google, as it does every month, started “dancing”, performing its extensive monthly deep crawl of the Web and indexing more than 3.5 billion pages. This update had some rather strange results, in a way reminding some observers of a previous major algorithm change in April of 2003, dubbed update Dominick, where similarly unpredictable results could be noted across the Web.

It was generally observed that many ‘old’ and high-ranking sites, some highly regarded as ‘authoritative’ and certainly not spammers in any way, fell sharply in the rankings or disappeared entirely from Google’s search results.

Since then, there have been many explanations, some not too scientific, that attempted to account for an event some have categorized as “serious”. For one of the best of these explanations, read the article Barry Lloyd wrote: “Been Gazumped by Google? Trying to Make Sense of the ‘Florida’ Update!”.

More on the Bayesian Spam Filter
Part of my research and observation in this matter points to the Bayesian spam filter that Google started to implement in late October. A “Bayesian spam filter” is a complex algorithm used to estimate the probability, or likelihood, that certain content detected by Google is in fact spam. In its most basic form, the Bayesian spam filter determines whether something “looks spammy” or whether, on the other hand, it is relevant content that will truly help the user.
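To give a feel for the Bayesian idea, here is a bare-bones Python sketch: each word carries a probability of appearing in spam, and the per-word probabilities are combined into a single "looks spammy" score. The word probabilities are invented for the example, and the whole thing is a drastic simplification of what any production filter, Google's included, would actually do.

```python
# Minimal Bayesian-style spam score (illustrative sketch only).
import math

SPAM_PROB = {          # P(spam | word), hypothetical trained values
    "viagra": 0.99, "casino": 0.95, "free": 0.80,
    "algorithm": 0.10, "research": 0.05,
}

def spam_score(words: list[str], default: float = 0.4) -> float:
    # Combine per-word probabilities in log space (the usual trick to
    # avoid floating-point underflow on long documents).
    log_spam = log_ham = 0.0
    for w in words:
        p = SPAM_PROB.get(w.lower(), default)
        log_spam += math.log(p)
        log_ham += math.log(1.0 - p)
    return 1.0 / (1.0 + math.exp(log_ham - log_spam))

print(spam_score("free casino viagra free".split()))     # close to 1.0
print(spam_score("research on this algorithm".split()))  # close to 0.0
```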

To a certain degree, the Bayesian algorithm has proven efficient in the war against spam in the search engines. Being ‘bombarded’ by spam as much as Google has been for the past couple of years, it has no choice but to implement such anti-spam safeguards to protect the quality and relevancy of its search results.

However, it is the general feeling in the SEO community that, unfortunately, the current Bayesian implementation has extreme and unpredictable consequences that were practically impossible to foresee.

At the outset, one of the problems with estimating the probability that certain content is spam is that, on very large datasets such as the entire Web, many false positives (legitimate pages wrongly flagged as spam) can and will occur. It is exactly these false positives that are at the centre of the current problem.

Since this whole event began to unfold, many people have noted in tests and evaluations that making the search more selective, for example by excluding an irrelevant string, tends to deactivate the new results algorithm, which in turn effectively shuts down Google’s newly implemented Bayesian anti-spam solution.

One More Observation
While we are still on the subject of the new filter, but moving away from spam-related issues, a side note: while doing some testing with the new Florida update, I noticed that Google is now ‘stemming’. To my knowledge, it’s the first time Google has offered such an important search feature. How does stemming work? Well, for example, if you search for ‘reliability testing in appliances’, Google would also suggest ‘reliable testing in appliances’.

To a certain degree, variants of your search terms will be highlighted in the snippet of text that Google provides with each result. The new stemming feature is something that will certainly help a lot of people with their searches for information. Again, Google tries to make its searches as relevant as they can be, and this new stemming feature seems like a continuation of those efforts.
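For illustration, here is a toy Python stemmer that strips a few common suffixes so that related word forms reduce toward the same root. Real engines use far more sophisticated stemmers (the Porter algorithm is the classic example); this sketch only conveys the idea.

```python
# Crude suffix-stripping stemmer, for illustration only.
SUFFIXES = ("ability", "ibility", "ation", "ness", "ing", "able", "ible", "ly", "s")

def crude_stem(word: str) -> str:
    w = word.lower()
    for suf in SUFFIXES:
        # Only strip if a reasonable stem remains.
        if w.endswith(suf) and len(w) - len(suf) >= 3:
            return w[: -len(suf)]
    return w

for w in ("reliability", "reliable", "testing", "appliances"):
    print(w, "->", crude_stem(w))   # "reliability" and "reliable" share a stem
```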

Conclusion
In retrospect, and in re-evaluating all the events of this major dance, it is clear that Google is still experimenting with its newly implemented algorithm and that many important adjustments will need to be made to it to make it more efficient.

With spam a growing problem day by day, today’s search engines have no choice but to implement better and more ‘intelligent’ spam-filtering algorithms that can tell the difference between what is spam and what isn’t.

The next 30 days can be viewed by some as being critical in the proper ‘fine-tuning’ and deployment of this new breed of application in the war against spam. How the major search engines do it will be crucial for some commercial websites or online storefronts that rely solely on their Google rankings for the bulk of their sales.

In light of all this, perhaps some companies in this position would be well advised to evaluate other alternatives, such as PPC and paid-inclusion marketing programs, as complements. At any rate, it is my guess that search will continue to be an important and growing part of online marketing, locally, nationally and globally.

______________
References:

1) An anticensorware investigation by Seth Finkelstein
http://sethf.com/anticensorware/general/google-spam.php

2) Better Bayesian filtering by Paul Graham
http://www.paulgraham.com/better.html

Author:
Serge Thibodeau of Rank For Sales

Google’s Next Big Move
(Will your website be ready, or will you be playing catch-up six months too late?)

November 2003 might go down in history as the month that Google shook a lot of smug webmasters and search engine optimization (SEO) specialists from the apple tree. But more than likely, it was just a precursor of the BIG shakeup to come.

Google touts highly its secret PageRank algorithm. Although PageRank is just one factor in choosing what sites appear on a specific search, it is the main way that Google determines the “importance” of a website.

In recent months, SEO specialists have become expert at manipulating PageRank, particularly through link exchanges.

There is nothing wrong with links. They make the Web a web rather than a series of isolated islands. However, PageRank relies on the naturally “democratic” nature of the web, whereby webmasters link to sites they feel are important for their visitors. Google rightly sees link exchanges designed to boost PageRank as stuffing the ballot box.

I was not surprised to see Google try to counter all the SEO efforts. In fact, I have been arguing the case with many non-believing SEO specialists over the past couple of months. But I was surprised to see the clumsy way in which Google chose to do it.

Google targeted specific search terms, including many of the most competitive and commercial terms. Many websites lost top positions on five or six terms, but maintained their positions on several others. This had never happened before. Give credit to Barry Lloyd of www.SearchEngineGuide.com for cleverly uncovering the process.

For Google, this shakeup is just a temporary fix. It will have to make much bigger changes if it is serious about harnessing the “democratic” nature of the Web and neutralizing the artificial results of so many link exchanges.

Here are a few techniques Google might use (remember to think like a search engine):

1. Google might start valuing inbound links within paragraphs much higher than links that stand on their own. (For all we know, Google is already doing this.) Such links are much less likely to be the product of a link exchange, and therefore more likely to be genuine “democratic” votes.

2. Google might look at the concentration of inbound links across a website. If most inbound links point to the home page, that is another possible indicator of a link exchange, or at least that the site’s content is not important enough to draw inbound links (and it is content that Google wants to deliver to its searchers).

3. Google might take a sample of inbound links to a domain, and check to see how many are reciprocated back to the linking domains. If a high percentage are reciprocated, Google might reduce the site’s PageRank accordingly. Or it might set a cut-point, dropping from its index any website with too many of its inbound links reciprocated. (A rough sketch of this check appears after this list.)

4. Google might start valuing outbound links more highly. Two pages with 100 inbound links are, in theory, valued equally, even if one has 20 outbound links and the other has none. But why should Google send its searchers down a dead-end street, when the information highway is paved just as smoothly on a major thoroughfare?

5. Google might weigh a website’s outbound link concentration. A website with most outbound links concentrated on just a few pages is more likely to be a “link-exchanger” than a site with links spread out across its pages.

Google might use a combination of these techniques and ones not mentioned here. We cannot predict the exact algorithm, nor can we assume that it will remain constant. What we can do is prepare our websites to look and act as a website would on a “democratic” Web, as Google would see it.

For Google to hold its own against upstart search engines, it must deliver on its PageRank promise: results that reflect the “democratic” nature of the Web. Its algorithm must prod webmasters to give links on their own merit. That won’t be easy, or even completely possible. And people will always find ways to turn Google’s algorithm to their advantage. But the techniques above could take the Internet a long way back toward where Google promises it will be.

The time is now to start preparing your website for the changes to come.

Author Bio:
David Leonhardt is an online and offline publicity specialist who believes in getting in front of the ball, rather than chasing it downhill. To get your website optimized, email him at info@thehappyguy.com. Pick up a copy of Don’t Get Banned By The Search Engines or of Get In The News.

Where Is The Search Industry Headed?
With all that has happened in the search industry in the last 9 to 10 months, one cannot ignore the fact that this industry is in for a lot of changes. Whether or not Google becomes a public company is almost beside the point; the industry is facing major changes on its own.

For example, Yahoo is now the world’s second largest search property. Having acquired Overture a few months ago, it is now trying to battle Google on level ground. Competition will be fierce. Expect more mergers, buyouts and acquisitions in the coming months.

For example, late on November 20, Yahoo made a firm proposal to acquire a Beijing-based Web company for about $120 million in cash and stock. The company, with the unusual name of 3721, would, if acquired, in effect give Yahoo a new business selling domain names in China. Yahoo would still maintain its strong search position in that country, which many consider a rapidly growing market for Internet companies. Domain names? Well, let’s call this a diversification away from Yahoo’s normal search operations. Still, such moves will become less uncommon in the near future.

Will the search market continue to grow in 2004? Yes, very much so. Depending on whom you talk to, estimates range anywhere from 10 to 30% growth; some even expect higher numbers than that. What’s important is the trend. I am among those who think this growth trend is definitely on the increase, and I expect it to continue well into 2004.

The Google ‘Wildcard’
A few weeks ago, Google stirred up a lot of news when it was widely reported in the press that it would probably come out with an IPO (Initial Public Offering) in February or March of 2004, effectively becoming a public company and joining its Wall Street rivals such as Yahoo, Overture and even LookSmart. On top of all that, Google hinted that, if there is an IPO, it would probably be of the auction type; in other words, it would probably bypass the large investment bankers, a move some observers have called “uncommon”.

There were even some reports and articles in the press hinting that Bill Gates and Microsoft were in talks with Google, discussing a possible merger or acquisition of the Number One search engine. Bill Gates then categorically denied those allegations on November 17. The situation is in fact getting a bit cloudy.

Speaking of Microsoft, it’s no secret that the company has been very busy lately, quietly developing its own search engine in the background. It even has a beta version already online in the United Kingdom, France, Italy and Spain. It’s only a question of time before Microsoft comes out with a full-fledged search engine that will probably make Google even more nervous than it already is.

Then again, will Microsoft simply integrate its search engine as a component of its long-talked-about new operating system, Longhorn, slated for late 2004 or early 2005? As we witnessed after Netscape became a public company in 1995, its Navigator browser was overtaken by Microsoft’s Internet Explorer, which Microsoft had conveniently integrated into its Windows operating system. Could something similar happen again with Longhorn, this time with Google as the victim? Again, only time will tell.

2004 And Beyond
Developments like those we are currently seeing will go a long way toward determining the course the search industry takes next. It promises to be very exciting. One thing is certain: the stakes are getting much higher as time goes by. Expect to see more PPC and PFP (Pay-for-Performance) advertising models. The search engine field is certainly transforming parts of the advertising industry as we knew it less than ten years ago.

It is my prediction that the next twelve months will be extremely critical for many of the largest, and even the smallest, players in the search industry. Coming back to Google, some think it could raise between $10 and $15 billion with an IPO. It is estimated that the privately owned Google’s revenues will come in between $700 million and $950 million for fiscal 2003. Others wonder what Google would do with all that new money in its coffers. If it were used to continue developing its current search engine and to fund new R&D into other search-related technology, then many think it would be a good thing indeed.

Conclusion
There is no conclusion to any of this, as nobody can safely predict the outcome. However, buckle up your safety belts, since I think we could be into a few air pockets in the coming months.

Expect Google to continue to ‘fine-tune’ its PageRank algorithm in the coming weeks and months. Additionally, expect some of the other major search engines to do the same, in the never-ending battle for quality and relevancy in the search results.

What IS important, and what I can safely predict here, is that, more than ever, companies, businesses, website owners and Webmasters alike will continue to optimize their sites as much as they can. The ones that do will continue to reap the benefits of their efforts and should be amply rewarded in the search results and, hopefully, in the conversion rates of their websites.

Author:
Serge Thibodeau of Rank For Sales

New Legal Guidelines
The marketing environment online has been changing over time to reflect new needs and to remove new problems. E-mail may no longer be the “killer app” it was, what with the evolving changes taking place.

With ISPs filtering email at ever-increasing rates as consumers complain about the volumes of junk e-mail (SPAM or UCE) they’re receiving; with spammers getting more and more aggressive (and ingenious) with their tactics; and with consumers continually complaining to politicians to “do something about it;” life for the newsletter publisher is no longer simple.

It used to be that accepting signups and sending your newsletter was the EASY part; putting it together and getting subscribers to find you was the hard part. Not any more.

Now you have to deal with a myriad of laws: laws which may or may not apply to you, which vary by location, and which you may be completely unaware of.

Many states in the United States have laws which prohibit certain types of email marketing. These are usually based on how the email address is acquired and what the contents of the email actually are. Of all the states in the Union, California has the most stringent laws.

In addition to this, many member countries of the European Union have passed or are in the process of passing similar legislation against unsolicited commercial email.

“How does this affect me?” The newsletter publisher asks. Well, the way these laws are written, you could be in violation even if your entire list of subscribers are opt-ins. How?

Since California’s law is the most stringent and since estimates say that 20% of Internet traffic around the world originates, passes through, or is served from that state, we’ll look at the law there. Most laws in other places are not as strict, but many countries in the EU are working on laws that will be similar. Plus California is commonly known as a “test zone” for laws in the United States.

The law defines an “unsolicited commercial email advertisement” as any email sent without the “direct consent” of, or without a “preexisting or current business relationship” with, the receiver. Interestingly, the receiver doesn’t have to be a California resident, because the law states that if the email originates or has been sent “within, from and to” the state of California, it is covered. So if your server is located in California and you send email through it that someone doesn’t like, you could be subject to the law, even if you’re a resident of New York and the receiver is in Washington!

The other crux of this law is the definition of “direct consent.” It is defined as “…the recipient has expressly consented to receive e-mail advertisements from the advertiser, either in response to a clear and conspicuous request for the consent or at the recipient’s own initiative.”

The penalties for violating this law are immense: $1,000 for each offending e-mail and up to $1,000,000 per incident, plus actual damages and attorney’s fees.

The up-side to all of this is that the law, as written, is full of holes. There are a myriad of ways to get around trouble with it, but there is definitely an increased risk to email marketers. After all, a new law means that it’s easier for those with complaints to force legal action, which means your chance of ending up in court is higher than it was before.

My personal opinion is that as soon as this law is used (it goes into effect on January 1, 2004 in California), it will be challenged Constitutionally and probably fail to hold up because of the ambiguous wording of much of the legislation.

It is still a good idea, however, for the e-zine publisher to make sure their email collection techniques are on the up-and-up: double opt-in if possible, collect names as well as emails, provide VERY easy unsubscribe options (links in every email are the best), and don’t abuse your list.

Most of us are following good guidelines and have nothing much to worry about. Just make sure you aren’t setting yourself up for anything.

Author Bio:
Aaron Turpen is the proprietor of Aaronz WebWorkz, a full-service online company catering to small and home-based businesses. Aaronz WebWorkz offers a wide variety of services including Web development, newsletter publishing, consultation, and more.

Keyword Ownership
Have you ever got one of those silly emails that offers to let you own a keyword? Silly question. How many such emails do you get every day?

A number of such services regularly email me offering keyword ownership of premium keywords for $300/year. They say that anyone can type the keyword I bought into the address bar of Internet Explorer, instead of typing in a URL, and they will be sent directly to my site. In total, it seems that about 2% of Internet users worldwide have enabled one type or another of this system, spread out among a few competing services.

Data shows that between 4% and 7% of search queries are performed by entering something in the address bar. By default for IE users, these searches are automatically routed to MSN search. Many of us, however, have installed so much software over time that, unknowingly, some of it has re-routed these search queries to other search portals, such as iGetNet. This often happens if you’ve installed any file-sharing software; we have all heard and read about how many extra ‘features’ come with programs like Kazaa. This means that your default search from the address bar may no longer be MSN and may have been rerouted elsewhere, but the basic principle still applies. Of the queries that are actually run from an address bar, at least half are unintentionally instigated by people mistyping the desired URL. This means that between 2% and 4% of Internet users actually search via their address bar.

So how exactly do these address bars work? There are many of these companies offering this kind of service, with each one of them selling the very same keywords to different and sometimes competing companies. To make things worse, the keywords you might buy will only work with the issuing company’s proprietary address bar plug-in. Then, to actually offer search capabilities from the address bar, each of these service providers needs to get individual Internet users to download and install their plug-in, and remember to run searches from the address bar.

How effective can a marketing strategy of this nature be when the various tools are not interchangeable, there are numerous competitors selling the same key words to different companies, and you are targeting only a small fraction of Internet users? If your ad is being displayed because it’s similar to the search query, are you paying for irrelevant results? This can happen. If there is not a perfect match to a search query, the next closest match may be displayed.

Competing with these companies is any search engine that offers its own toolbar. You can download a toolbar from any number of engines and run searches on any keyword or phrase quickly and easily. You then get the search engine’s selection of closest matches from all the web sites it has indexed. They offer more than just one choice, and don’t cost anything.

Who Started This?
Started in 1998, Realnames was the first company that tied searching via the address bar to a web browser. At the time, it was touted as a value added solution for businesses around the world who were attempting to get their products found quickly, but didn’t want customers to have to wade through a sea of Web addresses to reach their destination.

In part, it was deemed necessary because so few web site operators were search engine savvy, and fewer still knew anything about search engine optimization and promotion. What the Realnames solution did was allow a web site operator to buy a keyword, and then when any user of Internet Explorer would type that keyword into the IE address toolbar, they would get directed to the web site that owned the keyword.

The company hoped to profit from businesses which wanted to reach Internet users who would type keywords into their browsers address bar instead of remembering the URL, or going through a standard search interface.

Unfortunately for the company, the service was entirely dependent on Microsoft; and when Microsoft stopped supporting the technology in May 2002, the company was forced to close. The reason it was so totally dependent was simple. Unlike the new companies on the market today, Realnames did not depend on an end user downloading and installing a plugin, instead it was essentially integrated into Internet Explorer by Microsoft. Therefore everyone who used IE automatically had the plugin.

The Legal Question
Each of the companies offering these services has a policy designed to ensure that a web site only buys keywords related to their content, and their review process is designed to keep cybersquatters from hijacking popular names and products. Unfortunately, there is no way to guarantee that any one of these keyword ownership services adheres to any naming standard, or even ensures that any purchaser has the legal right to any of the terms they are buying. This means that the rights to copyrighted material like “Pepsi” or generic words like “business” could end up in the hands of the first buyer. While Pepsi is a well known brand name, there are millions of copyrighted and trademark protected terms, covered in multiple jurisdictions. It would not be cost effective or practical for these services to police copyright and trademark infringement.

In the summer of 1999, the U.S. Court of Appeals for the Ninth Circuit denied Playboy’s request for an injunction barring a search engine from selling advertising based on the terms playboy and playmate. In the precedent-setting ruling regarding keyword advertising, Judge Stotler of the United States District Court in Santa Ana, California, dismissed a lawsuit brought by Playboy Enterprises against the search engine Excite, Inc. and Netscape. The ruling limited the online rights of trademark holders, as it recognized that a trademark may be used without authorization by search engines in advertising sales practices.

Playboy claimed that the search engines were displaying paid banner ads from pornographic web sites whenever “playboy” or “playmate” were used as a search term. As the owner of the trademarks for both terms, Playboy argued that the use of its trademarks for a third party sales scheme was trademark infringement and branding dilution.

In the ruling dismissing Playboy’s case, the Judge found that Excite had not used the trademarks “playboy” and “playmate” in an unlawful manner. This was because Excite had not used the trademarked words to identify Excite’s own goods or services, and therefore trademark infringement laws did not apply. It was further determined that even if there was trademark usage, there was no infringement because there was no evidence that consumers confused Playboy products with the services of Excite or Netscape.

What About Within Meta Tags?
Is it illegal to use trademarked terms in your Meta tags? Sometimes. The problem occurs with how and why you are using the terms. Web sites that use the tags in a deceptive manner have lost legal battles. However, legitimate reasons to use the terms have resulted in successful defenses.

In a case involving Playboy, the firm was able to prove trademark infringement, based on use of their trademark in Meta tags, URL and content on the web site. The case was filed by the firm against web site operators for stuffing their web pages with the words Playboy and Playmate hundreds of times. Furthermore, the defendants were also using the terms Playboy and Playmate in the site names, URLs, and slogans. In this case the Judge ruled for Playboy, as there was a clear case of trademark infringement.

In a separate case, Playboy vs. Terri Welles, the court refused Playboy’s request. The reason was simple. Terri Welles was Playboy’s 1981 Playmate of the Year. She had used the terms “Playmate” and “Playboy” on her web pages and within her Meta tags, and the Court felt she had a legitimate right to use them to accurately describe herself, and to ensure that the search engines could catalog her web site properly within their databases. Playboy’s appeal was dismissed on Feb. 1, 2002.

In Summary
It is clear that if you have a legitimate reason to use a trademarked word or phrase in your web site, you can. You may also rent ownership of a keyword from one of the keyword ownership companies. Be careful, though; it is possible that you may get sued.

Does the technology work? Yes, but only for some of the approximately 3% of Internet users worldwide who have installed one of a variety of competing plugins that enable this type of searching. I stress a fraction of that 3%, as you would need to buy the keywords from each individual vendor to reach all of them.

Author Bio:
Richard Zwicky is a founder and the CEO of Metamend Software, a Victoria, B.C. based firm whose cutting edge Search Engine Optimization software has been recognized around the world as a leader in its field. Employing a staff of 10, the firm’s business comes from around the world, with clients from every continent. Most recently the company was recognized for their geo-locational, or GIS technology, which correlates online businesses with their physical locations, as well as their cutting edge advances in contextual search algorithms.

Google APIs
Lately, I’ve been getting a few questions about the Google APIs. Although mostly intended for Web developers, search engine implementations or Internet applications that query the Web using the Google database, Google’s APIs could be of some help to you, depending on your level of knowledge of Web technology and your programming skills.

This article is meant only as an introduction to Google’s APIs. If the need warrants it, I would be pleased to write a more detailed and in-depth article on them at a later date.

What Is A Google API?
API stands for ‘Application Programming Interface’. As its name implies, the Google API is an interface for querying the Google database that helps programmers in the development of their applications.

At this point, it is important to remember that all of Google’s APIs are only available in beta version, which means they are still in their initial trial release and there could still be a few adjustments required to some of them, although I must honestly say that I am quite pleased with what I have seen so far.

The Google APIs consist basically of specialized Web services and scripts that enable Internet application developers to better find and process information on the Web. In essence, the Google APIs can be used as an added resource in their applications.

How Can Google APIs Be Used Effectively?
In the real world, application programmers, developers and integrators write software programs that connect remotely to the Google APIs. All data communication is carried out via the ‘Simple Object Access Protocol’ (SOAP), an industry-defined Web services standard. SOAP is an XML-based technology for exchanging information with a Web application.

Google’s APIs assist developers in easily accessing Google’s web search database, enabling them to develop software that can query billions of Web documents, constantly refreshed by Google’s automated crawlers. Programmers can issue search queries against Google’s index of more than three billion pages and have results delivered as structured data that is simple to analyse and work with.

Additionally, the Google APIs can access data in the Google cache and can check the spelling of query words. The APIs also implement the standardized search syntax used on many of Google’s search properties.
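The spelling check follows the same pattern. The sketch below assumes a doSpellingSuggestion method in the same urn:GoogleSearch namespace and reuses the imports and the GOOGLE_SOAP_URL constant from the previous sketch; again, confirm the names against the WSDL.

    # Sketch of a spelling check; reuses urllib.request, ElementTree and
    # GOOGLE_SOAP_URL from the search sketch above.
    SPELLING_ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
    <SOAP-ENV:Envelope
        xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
      <SOAP-ENV:Body>
        <ns1:doSpellingSuggestion xmlns:ns1="urn:GoogleSearch">
          <key>{key}</key>
          <phrase>{phrase}</phrase>
        </ns1:doSpellingSuggestion>
      </SOAP-ENV:Body>
    </SOAP-ENV:Envelope>"""

    def do_spelling_suggestion(key, phrase):
        """Return Google's suggested spelling for a phrase, or None."""
        body = SPELLING_ENVELOPE.format(key=key, phrase=phrase).encode("utf-8")
        request = urllib.request.Request(
            GOOGLE_SOAP_URL, data=body,
            headers={"Content-Type": "text/xml; charset=utf-8",
                     "SOAPAction": "urn:GoogleSearchAction"})
        with urllib.request.urlopen(request) as response:
            # The suggestion is expected in a single <return> element.
            return ET.parse(response).findtext(".//return")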

Use Of Google’s APIs In The Real World
Here are a few examples of real-life Web applications that could make effective use of the Google APIs. I will give only a few, but obviously there can be many more.

- Querying the Web from non-HTML interfaces, such as complex industrial software or the command-line interfaces used in certain Unix applications

- Processing specialized market information, and researching and analysing discrepancies in certain data types used in various industries

- Initiating regularly scheduled search requests to track the Internet for new and updated information on a specific subject (see the sketch below)
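As a sketch of that last idea, the snippet below runs the same query once a day and reports URLs it has not seen before. It reuses the do_google_search() function from the earlier sketch; the file name and the example query are simply placeholders.

    # Sketch: scheduled tracking of a query; reuses do_google_search() above.
    import json
    import os
    import time

    SEEN_FILE = "seen_urls.json"  # placeholder file for previously seen URLs

    def track_query(key, query):
        """Return the URLs in today's results that were not seen before."""
        seen = set()
        if os.path.exists(SEEN_FILE):
            with open(SEEN_FILE) as f:
                seen = set(json.load(f))
        current = {url for _, url in do_google_search(key, query)}
        with open(SEEN_FILE, "w") as f:
            json.dump(sorted(seen | current), f)
        return current - seen

    while True:
        for url in track_query("YOUR-LICENSE-KEY", "contextual search algorithms"):
            print("new result:", url)
        time.sleep(24 * 60 * 60)  # check again in one day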

Currently, Google gives each programmer or developer who registers to use the APIs a set limit of one thousand queries per day. Some think that number could increase in the future, but for now, that is the limit imposed at the ‘Googleplex’.

Google’s API program is experimental and is provided free to anybody who agrees to its terms. As of now, the resources available to fully implement and support the program are rather limited, which explains why there is a 1,000-queries-a-day limit on all accounts.
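Rather than discover that limit the hard way, you can build a small client-side counter into your application. The class below is only a hedged illustration; the names are my own.

    # Sketch: a client-side guard for the documented 1,000-query daily limit.
    import datetime

    class QuotaGuard:
        """Counts queries per calendar day and refuses to exceed the limit."""
        def __init__(self, limit=1000):
            self.limit = limit
            self.day = datetime.date.today()
            self.used = 0

        def check(self):
            today = datetime.date.today()
            if today != self.day:      # a new day resets the counter
                self.day, self.used = today, 0
            if self.used >= self.limit:
                raise RuntimeError("daily Google API quota exhausted")
            self.used += 1

    quota = QuotaGuard()
    # quota.check()  # call this before each do_google_search() request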

Registering For Your Google API
In order to use and implement any of the Google APIs, you must first agree to the terms and promise to abide by Google’s rules concerning its APIs. You will also have to create a Google ‘API account’. Once you have done all that, Google will email you a personal license key to use with the APIs.

Remember that when you build an application, you must integrate your license key into your code. That way, every time your Web application makes a request or queries Google’s database, your license key is sent as part of the query, as in the short sketch below.
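One simple way to handle this (a suggestion of mine, not a Google requirement) is to keep the key out of your source code, read it from the environment, and pass it along with every call to the do_google_search() function sketched earlier. The variable name GOOGLE_API_KEY is my own choice.

    # Sketch: keep the license key outside the source and pass it on every call.
    import os

    GOOGLE_API_KEY = os.environ["GOOGLE_API_KEY"]  # set this in your environment

    results = do_google_search(GOOGLE_API_KEY, "search engine optimization")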

More On The Google API Account
Creating an API account with Google is simple. All you need to do is go to http://www.google.com/apis/ and follow the instructions on your screen. You will be required to provide your email address and a password. Currently, Google’s rules forbid you from having more than one account.

One word of caution about the Google APIs: remember that you cannot create or develop any industrial application or commercial querying program or software using Google’s API technology without first obtaining valid written consent from Google.

Reference: Google Inc., http://www.google.com/apis/

Author:
Serge Thibodeau of Rank For Sales

Search Engine To Improve
Wouldn’t it be nice if the search engines could comprehend our impressions of search results and adjust their databases accordingly? Properly optimized web pages would show up well in contextual searches and be rewarded with favorable reviews and listings. Pages that were spam, or whose content did not properly match the query, would receive negative responses and be pushed down in the search results.

Well, this reality is much closer than you might think.

To date, most webmasters and search engine marketers have ignored or overlooked the importance of traffic as part of a search engine algorithm, and thus have not taken it into consideration as part of their search engine optimization strategy. However, that might soon change as search engines explore new methods to improve their search result offerings. Teoma and Alexa already employ traffic as a factor in the presentation of their search results. Teoma incorporated the technology used by Direct Hit, the first engine to use click-through tracking and stickiness measurement as part of its ranking algorithm. More about Alexa below.

How Can Traffic Be A Factor?
Click popularity sorting algorithms track how many users click on a link, and stickiness measurement calculates how long they stay at a website. Properly used and combined, this data makes it possible for users, via passive feedback, to help search engines organize and present relevant search results.

Click popularity is calculated by measuring the number of clicks each web site receives from a search engine’s results page. The theory is that the more often a search result is clicked, the more popular the web site must be. For many engines the click-through calculation ends there, but for the search engines that have enabled toolbars, the possibilities are enormous.

Stickiness measurement is a great idea in theory. The premise is that a user will click the first result and either spend time reading a relevant web page or click the back button and look at the next result. The longer a user spends on each page, the more relevant it must be. This measurement goes a long way toward fixing the problem of “spoofing” click popularity results. A good example of a search engine that uses this type of data in its algorithms is Alexa.
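No engine publishes its formula, so the following is only an illustration of the general idea: from a hypothetical click log, compute a click-through rate and an average dwell time for each result, then blend the two into a single score. The log format and the 0.4/0.6 weights are invented for the example.

    # Illustration only: blending click popularity with stickiness.
    from collections import defaultdict

    # Hypothetical (query, url, clicked, seconds_on_page) records.
    click_log = [
        ("financial planner", "http://example-planner.com", True, 240),
        ("financial planner", "http://example-casino.com",  True,   5),
        ("financial planner", "http://example-planner.com", True, 180),
        ("financial planner", "http://example-casino.com",  False,  0),
    ]

    def score_results(log, max_dwell=300.0):
        impressions = defaultdict(int)
        clicks = defaultdict(int)
        dwell = defaultdict(float)
        for query, url, clicked, seconds in log:
            impressions[url] += 1
            if clicked:
                clicks[url] += 1
                dwell[url] += seconds
        scores = {}
        for url in impressions:
            ctr = clicks[url] / impressions[url]              # click popularity
            avg_dwell = dwell[url] / clicks[url] if clicks[url] else 0.0
            stickiness = min(avg_dwell / max_dwell, 1.0)      # capped dwell time
            scores[url] = 0.4 * ctr + 0.6 * stickiness        # invented weights
        return scores

    for url, score in sorted(score_results(click_log).items(), key=lambda x: -x[1]):
        print(round(score, 2), url)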

Alexa’s algorithm is different from those of the other search engines. Its click popularity algorithm collects traffic pattern data from its own site, from partner sites, and from its own toolbar. Alexa combines three distinct concepts: link popularity, click popularity and click depth. Its directory ranks related links based on popularity, so if your web site is popular, it will be well placed in Alexa.
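Alexa does not disclose how it weights those three concepts, so the tiny function below is nothing more than a guess at what such a combination might look like, with invented weights and inputs assumed to be pre-normalized to the 0 to 1 range.

    # Illustration only: combining the three factors named above.
    def combined_popularity(link_pop, click_pop, click_depth,
                            w_link=0.4, w_click=0.4, w_depth=0.2):
        """Each input is assumed to be normalized to the 0..1 range."""
        return w_link * link_pop + w_click * click_pop + w_depth * click_depth

    # e.g. strong inbound links, decent click popularity, shallow visits
    print(combined_popularity(link_pop=0.8, click_pop=0.6, click_depth=0.3))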

The Alexa toolbar doesn’t just allow searches; it also reports on its users’ Internet navigation patterns. It records where people who use the Alexa toolbar go. For example, the technology is able to build a profile of which web sites are popular in the context of which search topic, and to display the results sorted according to overall popularity on the Internet.

For example, a user clicks a link for a “financial planner”, but the web site’s content is an “online casino”. They curse for a moment, sigh, click back to the search results, and look at the next result; that web site gets a low score. The next result is on topic, and they read four or five pages of content. This pattern is clearly identifiable and is used by Alexa to help sort results by popularity. The theory is that the more page views a web page has, the more useful a resource it must be. To see this in action, follow this link today –

http://www.alexa.com/data/details/traffic_details?q=&url=http://www.metamend.com/

– look at the traffic details chart, and then click the “Go to site now” button. Repeat the procedure tomorrow and you should see a spike in user traffic. This shows how Alexa ranks a web site based on a single day’s traffic.

What Can I Do To Score Higher With Click Popularity Algorithms?
Since the scores that generate search engine rankings are based on numerous factors, there’s no magic formula to improve your site’s placement; it’s a combination of things. Optimizing your content, structure and meta tags and increasing keyword density won’t directly change how your site performs in click-tracking systems, but it will help your web site’s stickiness measurement by ensuring that the content is relevant to the search query. That relevance will help the site move up the rankings and thus improve its click popularity score.

Search Engines Can Use The Click-Through Strategy To Improve Results
Search engines need to keep an eye on new technologies and innovative techniques to improve the quality of their search results. Their business model is based on providing highly relevant results to a query quickly and efficiently. If they deliver inaccurate results too often, searchers will go elsewhere to find a more reliable information resource. The proper and carefully balanced application of usage data, such as that collected by Alexa, combined with a comprehensive ranking algorithm, could be employed to improve the quality of search results for web searchers.

Such a ranking formula would certainly cause some waves within the search engine community, and with good reason. It would turn existing search engine results on their head by demonstrating that search results need not be passive. Public feedback on previous search results could be factored into improving future search results.

Is any search engine employing such a ranking formula? The answer is yes. ExactSeek recently announced it had implemented such a system, making it the first search engine to integrate direct customer feedback into its results. ExactSeek still places an emphasis on content and quality of optimization, so a well-optimized web site that meets its guidelines will perform well. What this customer feedback system will do is validate the entire process, automatically letting the search engine know how well received a search result is. Popular results will gain extended exposure, whereas unpopular results will be pushed down in the rankings.

ExactSeek has recently entered into a variety of technology alliances, including the creation of an ExactSeek meta tag awarded solely to web sites that meet its optimization quality standards. Cumulatively, these alliances dramatically improve its search results.

ExactSeek’s innovative approach to ranking search results could be the beginning of a trend among search engines to incorporate traffic data into their ranking algorithms. The searching public will likely have the last word, but webmasters and search engine marketers should take notice that the winds of change are once again blowing on the search engine playing field.

Author Bio:
Richard Zwicky is a founder and the CEO of Metamend Software, a Victoria, B.C. based firm whose cutting-edge Search Engine Optimization software has been recognized around the world as a leader in its field. Employing a staff of 10, the firm draws business from around the world, with clients on every continent. Most recently the company was recognized for its geo-locational (GIS) technology, which correlates online businesses with their physical locations, as well as for its advances in contextual search algorithms.