Archive for the 'Build Your Website' Category

01 Mar 2004

Hyphens Made Easy

Introduction
Your readers judge you on the way you write.

This applies whether you’re writing advertising copy, a college or business report, a web site, or the next great novel; and it’s these judgements that will determine the success or failure of your venture.

For example, would you buy a book if you flipped through the pages and saw spelling errors? Probably not. Such errors would detract from the credibility of what was written. Similarly, the Internet is full of web sites offering to tell you how to write fantastic advertising copy that will triple your sales.

The irony is that most of these sites look like they’re written by an illiterate. You know the ones: spelling errors, poor grammar, ridiculous punctuation, and way too many exclamation marks.

Good, solid writing skills are necessary whether you’re writing for business, college or fiction. In this article, I’m going to look at a frequently misunderstood area: hyphens.

Yes, it sounds dull; I admit it. Wait, though, before you're tempted to put this article to one side, and test yourself with these real-world questions.

Q1. Why do many dictionaries list “infra-red” with a hyphen, but “ultraviolet” without?

Q2. Why does only the first of the following sentences need a hyphen?
We will discuss public-safety issues.
We will discuss issues of public safety.

Q3. Which of these is the preferred spelling:
co-ordinator or coordinator?
mid 1990s or mid-1990s?
selfesteem, self-esteem or self esteem?

Are you certain of all your answers? If not, read on, and we’ll cover some simple guidelines for using hyphens. (You’ll also find the answers to the questions above.)

Seven simple steps for using hyphens
1. The prefix “self” is nearly always hyphenated; e.g. self-esteem, self-image, self-conscious.

2. When the prefix “ex” is used to mean former, it is always hyphenated; e.g. ex-wife, ex-premier, ex-treasurer.

3. Most of the time, other prefixes don’t need a hyphen; i.e. most dictionaries list “coexist” not “co-exist.”

4. We do sometimes use a hyphen after a prefix, though, if the main word is only one syllable; e.g. infra-red. By comparison, ultraviolet doesn’t need a hyphen (according to most dictionaries) because the main word is not one syllable.

5. Use a hyphen after a prefix in order to separate a doubled vowel; e.g. pre-empt, de-emphasise. There are some exceptions, though. Many modern dictionaries spell “cooperate” and “coordinate” without hyphens.

6. We tend to hyphenate compound words only if they come before a noun, not after. For example, we write a “public-safety issue” with a hyphen, but “an issue of public safety” is written without one.

7. Use a hyphen after the prefix if the main word has a hyphen of its own; e.g. non-customer-focussed approach.

Armed with these simple guidelines, you’ll soon be using hyphens like an expert. Good luck! 🙂

Author Bio:
You’ll find many more helpful tips like these in Tim North’s much applauded range of e-books. Free sample chapters are available, and all books come with a money-back guarantee. http://www.BetterWritingSkills.com

Introduction
Who is more important to you as a webmaster: the visitor or the search engine?

Indiscriminate traffic may be important to websites that seek advertising revenue (though they too need targeted traffic), but what you need is traffic that caters to your requirements. So for a successful website, both the visitor and the search engine are important: search engines send relevant visitors to your website, and those visitors do business with you. Both should be important factors when you sit down and plan the architecture of your website. The misconception that lots of senseless traffic is good for business has been shattered.

Your website should read convincingly to both your target visitors and the search engines. In fact, you should treat search engines as visitors too, because if you optimize for your visitors, you automatically optimize for the search engines. Once you follow the steps listed below, there is a good chance you’ll create a website optimized for both the search engines and your visitors.

Well-written content
Both visitors and search engines like well-written content; in fact, every one of us appreciates an interesting read. There used to be a time when all sorts of nonsense went on just to please the search engines: lots of keywords and phrases were stuffed needlessly into web pages to make them rank higher, and scores of “doorway” pages were created to lead visitors to websites. These tricks made the search engines happy but confused the visitors, nullifying the advantage and ultimately forcing the search engine companies to alter their search algorithms.

But what really matters is quality content. If you have no content, or irrelevant content, what’s the use of getting hundreds of visitors daily? You need website content that is user-focussed; you need web copy that talks to the visitors. The copy on your website needs to supply the information your visitors need to arrive at a decision, and it should be written in an interesting, absorbing manner. All the information your visitors need should be there on your website, in straight, non-cryptic language.

This reaffirms that professional content developers are as important as professional web developers, if not more so. Badly written content can prove costlier than you can imagine.

Well-connected pages
All your pages should be accessible to both people and search engines. When the search engines visit your website, they should be able to jump from link to link. It should be like an inter-connected network where one can go anywhere from anywhere. Many web developers create a sitemap that contains links to all the pages on the website so that once the search engine finds that page it can go to all the links on the page.

Anyway, irrelevant pages have no business being on your website, and relevant pages should be within one or two clicks of your visitors (unless they are password-protected).

Less use of frills
Frills like Flash and DHTML look cool, but if they don’t serve any purpose other than letting you show off how you can make geometric figures dance around the screen, you should avoid using them. Search engine crawlers like plain-vanilla text. Showcase frills only if you’re selling them (if you’re a Flash designer or a graphic artist). A company selling organic manure doesn’t benefit much from a Flash website that shows bags of manure appearing here and there like apparitions.

Use keywords sparingly
The search engine companies have finally realized that actual content is better than nonsensical repetition of keywords. Of course keywords are important, but not because they are “keywords”, but because they are needed there. For instance, if you sell organic manure, you have this phrase on your website because you need to specify what you sell (unless you belong to some underground organization that uses coded language to communicate).

There is no need for a keyword or key phrase to appear more than three or four times on your page; in fact, on Google you spoil your ranking if you use keywords excessively. Let them appear at the top, somewhere in the middle, and then at the end. That does the trick. Weave a nice context around them.

There are people who do this as a profession and it really pays in the long run to hire a content writer who can write optimized content for you.

Update frequently
Both search engines and people like updated content.

If your visitors expect to see new content on your website whenever they come, they’ll come again and again, and with greater frequency. Search engines too want to show content that is updated often so that they can display the latest information. Make it a routine to add something new every second or third day, even if it is only one paragraph.

Use clean HTML
Clean code loads quicker and gets crawled (this sounds creepy!) by the search engines faster. If the success of your website really matters to you and you want to create your own web pages instead of hiring a professional web developer, you should spend at least a few days learning HTML. A search-engine-friendly website doesn’t need much HTML, and hand-coded pages show clean content to your visitors without unnecessarily increasing the load time. Avoid graphical tools and use a text editor instead. It sounds daunting in the beginning, but once you realize its benefits you’ll be more than eager to write HTML rather than use a tool that produces lots of unnecessary junk code.
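As a rough sketch (the business name, title and wording here are invented purely for illustration), a clean hand-coded page needs little more than this:

<html>
<head>
<title>Organic Manure - Acme Garden Supplies</title>
</head>
<body>
<h1>Organic Manure Delivered to Your Door</h1>
<p>A short, keyword-rich paragraph describing what you sell and why visitors should buy it from you.</p>
</body>
</html>

Everything the search engine needs is visible in the first few lines, and there is no junk code to slow the page down.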

The efforts mentioned above take time to show results, but they are long-lasting and fetch you the desired results.

Author Bio:
Amrit Hallan is a freelance copywriter, copy editor and a writer. He also optimizes web page content for higher Search Engine ranking. Read his weekly essays and articles by subscribing to amritscolumn-subscribe@topica.com For Copywriting and Copy Editing Services, visit: http://www.amrithallan.com.

Search Engine Friendliness
A good-looking and user-friendly website is an extremely important asset to your success on the Internet. However, without traffic, even a well-designed site will not produce results for you. The best websites are those that are both attractive and easy to use for your human visitors and, at the same time, convenient for the search engine robots that are trying to find and collect data from your site.

Oftentimes a site that may look good to your eye has some design flaws that impair its search engine friendliness. Here are a few things to look for when designing new sites or optimizing an existing site.

Your first line of text
1. Where does your first line of text begin? You may think, “That’s easy; the first line of text is right at the top.” If you view your web page in Notepad or the HTML view of popular editors, you may be surprised to find that the first line of your actual searchable text is pushed down, 100 lines or more, by long strings of JavaScript and by the HTML code that defines your tables.

The higher your text appears in this HTML view of the site, the easier it is for the robot to find it and put it in the search engine’s database. You can save space in your HTML code by copying your JavaScript and placing it in an external file uploaded to your server. Instead of having 50 lines of JavaScript commands in your HTML code, there will be only one line pointing to the separate file with the JavaScript.
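For example (the file name scripts.js is just an illustration), instead of keeping dozens of lines of script at the top of every page:

<script type="text/javascript">
// ...fifty lines of menu and rollover code...
</script>

you can move that code into an external file on your server and reference it with a single line:

<script type="text/javascript" src="scripts.js"></script>

The browser still runs the same JavaScript, but your searchable text now starts much closer to the top of the HTML.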

Similarly, if you simplify your table structure, your searchable text will become more prominent. The left-hand navigation bar, for example, with its separate graphic elements each in its own row, may be a place where you can economize on your code by merging the rows into one cell.
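As a simplified sketch (the image names are invented), a navigation bar built like this:

<table>
<tr><td><img src="nav-home.gif"></td></tr>
<tr><td><img src="nav-products.gif"></td></tr>
<tr><td><img src="nav-contact.gif"></td></tr>
</table>

can often be collapsed into a single row and cell:

<table>
<tr><td>
<img src="nav-home.gif"><br>
<img src="nav-products.gif"><br>
<img src="nav-contact.gif">
</td></tr>
</table>

The page looks much the same to visitors, but there is far less table markup sitting above your real text.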

Graphic-Predominance
2. Is your website graphics-predominant, at the expense of searchable text? If your site begins with a splash page, such as a lovely page-filling picture of the ocean and no text except “enter here”, then you are wasting a big opportunity. Search engines consider your main page, the one you reach when you land at www.yourcompany.com, to be the most important page. Your main text, with its important keywords, should be on your first page. If you already have a splash page, you should consider scrapping it altogether, or at least adding a paragraph with a powerful capsule description of your activity.

If your site has a Flash-only first page, then the text message on that page is not visible, except for what you are able to put in your title and description tags. Search engine robots cannot read a text message that has been put in the form of a Flash movie.

If you want to use Flash and also do well in search engine rankings, it is better to make a hybrid page where the Flash is surrounded by a normal HTML page with text. The text around the Flash movie should be optimized so that the page ranks well in search engine queries for your important keywords.
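A hybrid page might be laid out roughly like this (the headline, the file name intro.swf and the dimensions are placeholders, and the exact embedding markup will depend on your Flash publishing tool):

<body>
<h1>Wireless Widgets Made in California</h1>
<p>A keyword-rich introduction that the search engines can read.</p>
<object data="intro.swf" type="application/x-shockwave-flash" width="500" height="300">
<p>Alternative text describing the movie, for visitors and robots that cannot display Flash.</p>
</object>
<p>More optimized copy surrounding the movie, targeting your important keywords.</p>
</body>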

Important text as graphics?
3. Have you unknowingly rendered important text as a graphic? If your site is about “wireless widgets made in California”, then you would want some prominent text near the top of the page with these words. You may already have it, but the text has been turned into a beautiful GIF or JPG graphic, either by your designer or by your HTML editing program. Search engines will not give that nice-looking graphic the same importance as they would text written as an H1 or H2 header. Some popular HTML editors render entire paragraphs as GIF images, and all the text that appears in such an image becomes almost invisible to the search engines. I say almost invisible because you can always add alt text for any graphic; however, this alt text is not weighed as heavily as normal text set as bold or in headers. So check your pages and make sure that your text is normal text and not an image.
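To illustrate the difference (header.gif and the wording are just examples), this gives the engines only weakly weighted alt text:

<img src="header.gif" alt="Wireless widgets made in California">

while this gives them real, fully weighted header text:

<h1>Wireless widgets made in California</h1>

You can still style the header with a font and colour of your choice; what matters is that the words themselves remain text.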

Link structure
4. Can search engines follow your site’s link structure? If your site employs a drop-down menu that is run with JavaScript, then search engines may find your main page, but they won’t follow the links to your interior pages. Similarly, if your navigation area is an image map, a graphic with “hot spots” that link to your various internal pages, then the search engines will not find the other pages of your site. To get maximum traffic it is imperative to have as many of your pages as possible indexed in the big search engines. You can accomplish this by adding a text-based navigation area at the bottom of your pages or a site-map page with text links to all your interior pages.
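A simple text-link footer (the page names here are invented) is all it takes:

<p>
<a href="index.html">Home</a> |
<a href="products.html">Products</a> |
<a href="about.html">About Us</a> |
<a href="sitemap.html">Site Map</a>
</p>

Robots can follow every one of these links even if your main menu is JavaScript-driven or an image map.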

If you pay attention to these design considerations, you can greatly improve your site’s chances of getting a top ranking in search engine queries for your most important keywords.

Author Bio:
Donald Nelson is a web developer, editor and social worker. He has been promoting web sites since 1995 and now runs A1-Optimization a company that provides low-cost search engine optimization and submission services. He can be reached at support@a1-optimization.com

Introduction
Building a successful SEO (Search Engine Optimization) campaign requires a lot of time and hard work. Search engines are constantly changing their algorithms and it’s up to you to make the necessary adjustments to accommodate these changes. Keeping track of all of your optimized pages can be a daunting task. However, you can avoid unnecessary confusion by organizing your optimized pages in a streamlined fashion. Although not common practice, this is one of the most important steps in any successful SEO campaign.

What do I mean by “organized?” Simply, that you should develop a clear plan on how your pages will be named and where they will be situated on your web site. You need to be able to easily identify and track what pages have been indexed by what engine and what pages need to be updated. One way to achieve this is to adopt a “naming convention”.

Example 1.
Your company web site sells widgets. You have a list of 5 of your most important keywords and you’ve optimized these keywords for 4 search engines. That’s a total of 20 optimized pages. You have a robots.txt file set up to prevent search engine ‘A’ from indexing pages that are intended for search engine ‘B’ and so on.
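Such a robots.txt file might look something like the sketch below, where each engine’s crawler is told to stay away from the pages meant for the other engines (the crawler names Googlebot and msnbot, and the file names, are examples only; check each engine’s documented user-agent before relying on this, and repeat the pattern for the remaining pages and engines):

User-agent: Googlebot
Disallow: /widgets2.htm
Disallow: /widgets3.htm
Disallow: /widgets4.htm

User-agent: msnbot
Disallow: /widgets.htm
Disallow: /widgets3.htm
Disallow: /widgets4.htm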

Let’s examine the drawbacks to this naming convention:

Keyword Page Name Engine
widgets widgets.htm Google
blue widgets bluewidgets.htm Google
red widgets redwidgets.htm Google
black widgets blackwidgets.htm Google
purple widgets purplewidgets.htm Google
widgets widgets2.htm MSN
blue widgets bluewidgets2.htm MSN
red widgets redwidgets2.htm MSN
black widgets blackwidgets2.htm MSN
purple widgets purplewidgets2.htm MSN
widgets widgets3.htm AltaVista
blue widgets bluewidgets3.htm AltaVista
red widgets redwidgets3.htm AltaVista
black widgets blackwidgets3.htm AltaVista
purple widgets purplewidgets3.htm AltaVista
widgets widgets4.htm Hotbot
blue widgets bluewidgets4.htm Hotbot
red widgets redwidgets4.htm Hotbot
black widgets blackwidgets4.htm Hotbot
purple widgets purplewidgets4.htm Hotbot

1. The words in your page names are not very distinct. This is important because a search engine cannot determine if bluewidgets.htm is made up of two distinct words “blue” and “widgets.” You need to find a way to separate these keywords in the page name or you will not get credit for the keyword in the file name.

2. Your page names are not easily identifiable. When you run a ranking report (a Reporter mission in WebPosition), you will see your pages indexed with a number appended to the keyword phrase in the file name. At first glance, this doesn’t tell you which engine the page is optimized for. You need to be as descriptive as possible.

3. Using a robots.txt file can diminish your exposure across all of the search engines. I explain this in the next section.

Now, let’s take a look at how we can modify our page names in order to get credit for the keywords and make it easy to identify which search engine each page belongs to, while gaining maximum exposure.

Example 2.
Below, you’ll see an example of how I have added hyphens to separate keywords in the page name. Also, I’ve appended an engine indicator to the file name, so it will be easy to distinguish what page is optimized for which engine.

Keyword Page Name Engine

widgets widgets.htm Google
blue widgets blue-widgets-gg.htm Google
red widgets red-widgets-gg.htm Google
black widgets black-widgets-gg.htm Google
purple widgets purple-widgets-gg.htm Google
widgets widgets-ms.htm MSN
blue widgets blue-widgets-ms.htm MSN
red widgets red-widgets-ms.htm MSN
black widgets black-widgets-ms.htm MSN
purple widgets purple-widgets-ms.htm MSN
widgets widgets-av.htm AltaVista
blue widgets blue-widgets-av.htm AltaVista
red widgets red-widgets-av.htm AltaVista
black widgets black-widgets-av.htm AltaVista
purple widgets purple-widgets-av.htm AltaVista
widgets widgets-hb.htm Hotbot
blue widgets blue-widgets-hb.htm Hotbot
red widgets red-widgets-hb.htm Hotbot
black widgets black-widgets-hb.htm Hotbot
purple widgets purple-widgets-hb.htm Hotbot

I use abbreviations such as “gg” for Google, “ms” for MSN, and so on. You don’t have to use my abbreviations; however, make sure the naming convention you implement is consistent. That’s the most important thing.

Tip: Please be careful when creating an “engine indicator.” Do not spell out the entire engine name in your filename. For instance, avoid naming your page like this:

blue-widgets-google.htm

Although it has not been proven, Google and other crawlers could potentially flag such a page as a doorway page, because it looks as if you created it specifically to rank highly on that engine.

You might be thinking, “I’ve created a robots.txt file, so I don’t have to worry about search engine ‘A’ indexing pages that are intended for search engine ‘B.’” Yes, that is correct. However, if you use a robots.txt file for this purpose, you could be cheating yourself out of maximum exposure across all of the search engines.

If you do not use a robots.txt file, you will notice that search engine ‘A’ will index pages optimized for search engine ‘B.’ This is exactly what you want. To do this safely, though, you must be very careful, because you do not want to have similar content that could be flagged as spam.

It is completely possible to optimize several different pages that target the same keyword, and create content so unique that you will not be flagged for spam. As I mentioned, this will maximize your exposure across all of the search engines, while allowing you to increase the overall unique content of your site.

I can’t tell you how many times engine ‘A’ has picked up pages that I’ve optimized for engine ‘B’ and ranked the ‘B’ pages higher than those I specifically optimized for ‘A.’ So, if at all possible, only use a robots.txt file to protect your confidential content from being indexed.

One final Tip
Try to avoid creating sub directories solely for the purpose of storing optimized pages for a specific search engine. Storing all of your optimized pages in your root directory gives you a better chance at higher rankings because most crawlers give more weight to pages found in the root directory. In this case, it is better to sacrifice the organization and shoot for the higher rankings.

Author Bio:
This article is copyrighted and has been reprinted with permission from Matt Paolini. Matt Paolini is a Webmaster/Tech Support Specialist for FirstPlace Software, the makers of WebPosition Gold. He’s also an experienced freelance Search Engine Optimization Specialist and Cold Fusion/ASP.NET/SQL Server developer/designer. For more information on his services, please visit http://www.webtemplatestore.net/ or send him an email at webmaster@webtemplatestore.net

Robots Exclusion Protocol
Back in the spring of 2003, I wrote an article on the Robots exclusion protocol that generated a lot of emails and many questions. I am still getting them, so it seems that a more extended article is warranted. Often referred to as the ‘Robots.txt file’, the Robots exclusion protocol can be a very important part of your search engine optimization program, but it needs to be carefully implemented to be successful.

If used incorrectly, this small and ‘innocent looking’ text file can cause a lot of problems. It can even cause your site to be excluded from the search engines’ databases if it’s not written correctly. In this extended article, I will show you how to write it correctly and make the Robots exclusion protocol an important part of your SEO efforts in attaining good visibility in the major search engines.

How The Robots Exclusion Protocol Works
Some of you may ask what it is and why we need it. In a nutshell, as its name implies, the Robots exclusion protocol is used by Webmasters and site owners to prevent search engine crawlers (or spiders) from indexing certain parts of their Web sites. This could be for a number of reasons: sensitive corporate information, semi-confidential data, information that needs to stay private, or certain programs and scripts that should not be indexed.

A search engine crawler or spider is a Web ‘robot’ and will normally follow the robots.txt file (Robots exclusion protocol) if it is present in the root directory of a Website. The robots.txt exclusion protocol was developed at the end of 1993 and still today remains the Internet’s standard for controlling how search engine spiders access a particular website.

While the robots.txt file can be used to prevent access to certain parts of a web site, if it is not correctly implemented it can also prevent access to the whole site! On more than one occasion, I have found the robots exclusion protocol (Robots.txt file) to be the main culprit in why a site wasn’t listed in certain search engines. If it isn’t written correctly, it can cause all kinds of problems and, the worst part is, you will probably never find out about it just by looking at your actual HTML code.

When a client asks me to analyse a website that has been online for about a year and still isn’t listed in certain engines, the first place I look is the robots.txt file. Once I have corrected and rewritten it for the website, and once I have optimized the most important keywords, the rankings will usually go up within the next thirty days or so.

How To Correctly Write The Robots.txt File
As the name implies, the ‘Disallow’ command in a robots.txt file instructs the search engine’s robots to “disallow reading”, but that certainly does not mean “disallow indexing”. In other words, a disallowed resource may be listed in a search engine’s index, even if the search engine follows the protocol. On the other hand, an allowed resource, such as many of the public (HTML) files of a website can be prevented from being indexed if the Robots.txt file isn’t carefully written for the search engines to understand.

The most obvious demonstration of this is the Google search engine. Google can add files to its index without reading them, merely by considering links to those files. In theory, Google can build an index of an entire Web site without ever visiting that site or ever retrieving its robots.txt file.

In so doing, it is not violating the robots.txt protocol, because it is not reading any disallowed resources; it is simply reading other web sites’ links to those resources, which Google constantly uses for its PageRank algorithm, among other things.

Contrary to popular belief, a website does not necessarily need to be ‘read’ by a robot in order to be indexed. As for how the robots.txt file can be used to prevent a search engine from listing a particular resource in its index: in practice, most search engines have placed their own interpretation on the robots.txt file, one that allows it to be used to prevent them from adding disallowed resources to their index.

Most modern search engines today interpret a resource being disallowed by the robots.txt file as meaning they should not add it to their index. Conversely, if it’s already in their index, placed there by previous crawling activity, they would normally remove it. This last point is important, and an example will illustrate that critical subject.

The inadequacies and limitations of the robots exclusion protocol are indicative of what can sometimes be a bigger problem: it is impossible to prevent any directly accessible resource on a site from being linked to by external sites, be they partner sites, affiliates, websites linked to competitors, or search engines.

Even with the robots.txt file in place, there is no legal or technical reason why it must be honoured, least of all by humans creating links, for whom the standard was never written. In itself, this may not seem like a problem, but there are many instances when a site owner would rather keep a particular page off the Web. If such is the case, the robots.txt file will, to a certain degree, help the site owner achieve his or her goals.

What Is Recommended
Since most websites normally change often and new content is constantly created or updated, it is strongly recommended that the Robots.txt file in your website be re-evaluated at least once a month. If necessary, it only takes a minute or two to edit this small file in order to make the changes required. Never assume that ‘it must be OK, so I don’t need to bother with it’. Take a few minutes and look at the way it’s written. Ask yourself these questions:

1. Did I add some sensitive files recently?
2. Are there new sections I don’t want indexed?
3. Is there a section I want indexed but isn’t?

As a rule of thumb, even before adding a file or a group of files that contain sensitive information that you don’t want to be indexed by the search engines, you should edit your Robots.txt file before uploading those files to your server. Make sure you place them in a separate directory. You could name it: private_files or private_content and add each of those directories to your Robots exclusion file to prevent the spiders from indexing any of those private directories.
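A sketch of the relevant lines, using the directory names suggested above, would be:

User-agent: *
Disallow: /private_files/
Disallow: /private_content/

The asterisk applies the rule to all robots, and the trailing slashes tell them to stay out of everything inside those two directories.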

Also, if you have public files in a separate directory that you do want indexed, and those files have been on your server for more than a month and are still not indexed, have a look at your Robots.txt file to make certain there are no errors in any of its commands.

Examples Of A Properly Written Robots.txt File
In the following example, I will show you how to properly write or edit a Robots.txt file. First, never use a word processor to write or edit these files. Today’s modern word processors use special formatting and characters that will not be understood by any of the search robots and could lead to problems, or worse, it could cause them to ignore the Robots file completely.

Use a simple ‘pure vanilla text’ editor of the ASCII type or any text editor of the Unix variety. Personally, I always use the Notepad editor that comes on any Windows operating system. Make certain you save it as ‘robots.txt’ (all in lower case). Remember that most Web servers today run Unix and Linux and are all case sensitive.

Here is a carefully written Robots.txt file:

User-agent: Titan
Disallow: /

User-agent: EmailCollector
Disallow: /

User-agent: EmailSiphon
Disallow: /

User-agent: EmailWolf
Disallow: /

User-agent: ExtractorPro
Disallow: /

The user-agent is the name of the robot you want to disallow. In this example, I have chosen to disallow Titan, EmailCollector, EmailSiphon, EmailWolf and ExtractorPro. Note that many of these robots come from spam organizations or companies that are attempting to collect email addresses from websites, addresses that will probably be used for spam. Those unwanted robots take up unnecessary Internet bandwidth and slow down your Web server in the process. (Now you know where and how they usually get your email address.) It is my experience that most of those email collectors usually obey the Robots.txt protocol.

Conclusion
Properly implementing the Robots exclusion protocol is a simple process and takes very little time. When used as intended, it can ensure that the files you want indexed in your website will be indexed. It will also tell the search robots where they are not welcome, so you can concentrate on managing your online business in the safest way possible, away from ‘inquisitive minds’.

Author:
Serge Thibodeau of Rank For Sales

Introduction
With the Robots.txt protocol, a webmaster or web site owner can really protect himself if it is done correctly. Today, web domain names are certainly plentiful on the Internet. There exists a multitude of sites on just about any subject anybody can think of. Most sites offer good content that is of value to most people and can certainly help with just about any query. However, like in the real world, what you see is not always what you get.

There are a lot of sites out there that are spamming the engines. Spam is best defined as search engine results that have nothing to do with the keywords or key phrases that were used in the search. Enter any good SEO forum today and most spam topics in the daily threads point to hidden text, keyword stuffing in the meta tags, doorway pages and cloaking issues. Thanks to newer and more powerful search engine algorithms, the domain networks that spam the engines are increasingly being penalized or banned altogether.

The inherent risks of getting a web site banned on the basis of spam increase proportionately if it appears to have duplicate listings or duplicate content. Rank For Sales does not recommend machine-generated pages, because such pages have a tendency to generate spam. Most of those so-called ‘page generators’ were not designed to be search-engine-friendly, and no attention was ever given to the engines when they were designed.

One major drawback of these ‘machines’ is that once a page is ‘optimized’ for a single keyword or key phrase, first-level and at times second-level keywords tend to flood the results with listings that will most assuredly look like 100% spam. Stay away from any of those so-called ‘automated page generators’. A good optimization process starts with content that is completely written by a human! That way, you can be certain that each page of your site will end up being absolutely unique.

How Do Search Engines Deal With Duplicate Content?
Modern crawler-based search engines now have sophisticated and powerful algorithms that were specifically designed to catch sites that are spamming the engines, especially the ones that make use of duplicate domains. To be sure, there are perfectly legitimate web sites in this situation, and their case can certainly be informative. However, as the following example will clearly demonstrate, that is not always the case.

Take this practical example, in which there are actually three identical web sites, all owned and operated by the same company, and where the use of duplicate content is evident. Google, AltaVista and most other crawler-based search engines have noticed and indexed all three domains. In this scenario, the right thing to do is to use individual IP addresses and implement a server redirect command (a 301 redirect). An alternative would be to at least provide unique folders or sub-directories and use the Robots.txt exclusion protocol to disallow two of the three affected domains.

That way the search engines wouldn’t index the two duplicate sites. In such cases, the Robots.txt exclusion protocol should always be used; it is in fact your best ‘insurance’ against getting your site penalized or banned. In the example above, since that was not done, we will look at the duplicate content and assess where the risk of a penalty is highest. We will refer to these three sites as site one (the main primary domain), site two and, finally, site three.

The four major crawler-based engines analyzed were Google, Teoma, Fast and AltaVista. All three domain names point to the same IP address, which actually made it simpler to use Fast’s Internet Protocol filter to confirm that there were no more than three affected domains in this example. Worse, all three web sites are directed to the same IP address AND content folder! Such a scenario makes them in fact exact duplicates, raising all the duplicate content flags in all four engines analyzed.

Even if all three sites share the same Robots.txt file, the hosting arrangement and the syntax in that Robots.txt file do nothing effective to help this duplicate content problem. Major spider-based search engines, which rely heavily on hypertext to compute relevancy and importance as most do today, are best at discovering and dealing with sites that delve into duplicate content. As a direct result, such a webmaster runs a large risk of being caught with duplicate content in these engines, because their algorithms make it a simple task to analyse, sort out and finally reject duplicate content web sites.

If a ‘spam technician’ discovers duplicate listings, chances are very good they will take action against the offending sites. The chances increase further when a person, often a competitor, files a complaint that a certain site is spamming or ‘spam-dexing’ the engines. To be sure, pages built from duplicate content can improperly “populate” a search query, with the end result that they unfairly dominate the search results.

Marketing Analysis And PPC “Landing” Pages
In order to better analyse specific online marketing campaigns or surveys, some companies at certain times do in fact run duplicate sites or operate PPC (Pay-per-Click) landing pages. In such cases it is important not to neglect the Robots.txt exclusion protocol when managing your duplicate sites. Disallow spiders from crawling duplicate sites by writing the right syntax into the Robots.txt file. Your index count will certainly decrease, but that is the right thing to do, and you are actually doing the search engines a service. In such a case, a webmaster need not worry about impending penalties from the engines.

If these businesses or their marketing departments are in fact running marketing tests or surveys, there is usually more than one domain that could potentially appear in the actual results pages of the engines. In such cases, I strongly recommend rewriting all the content from scratch and making certain that no real duplicate content gets indexed. One way to achieve that is to use some form of meta refresh tag or JavaScript solution to direct visitors to the most recent versions of pages while the webmasters get the Robots.txt exclusion protocol written correctly.

The JavaScript would effectively indicate where it is intended to redirect, ensuring the visitor ends up at the final document in its proper place. A ‘301 server redirect’ command is always the best thing to use in these cases and constitutes the best insurance against any penalties, as it informs the search engines that the affected document(s) have in fact moved permanently.
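As a sketch only, on an Apache server (other servers use a different syntax, and www.main-site.com is simply a placeholder) a permanent redirect for an entire duplicate domain can be as short as one line in that domain’s .htaccess file:

Redirect 301 / http://www.main-site.com/

Every request to the duplicate domain is then answered with a ‘moved permanently’ status pointing at the primary site, which is exactly the signal the search engines are looking for.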

Author:
Serge Thibodeau of Rank For Sales

Introduction
In case you don’t know it, the meta description tag, or ‘Meta Desc’ as we call it, is what people searching the web will see in the SERPs (Search Engine Results Pages), along with the important title tag information I explained in one of last week’s articles on this website. Simply put, the words placed within your meta description tag can help a page rank higher in the search results, if done correctly.

Definition of the “Meta Description Tag”
The meta description tag is simply a snippet of HTML code that belongs inside the HEAD section of a Web page. To be really effective, it has to be placed after the title tag and before the meta keyword tag. The proper syntax for the meta description tag is:

<META NAME="description" CONTENT="Your descriptive sentence about this page goes here.">

Functioning Of The Meta Description Tag
When applied correctly, the functioning of this important tag is twofold. The actual words placed within this tag are given some crucial weight with most major search engines today and can really help a page to rank higher in the search results for specific keywords and key phrases. Just as important, the words placed in the meta description tags appear under the title in a search engine’s list of results, giving searchers a much better idea of what that page is all about.

If no information is supplied for that tag, or if it is omitted from the HTML code of a web page, the search engines will often use the first few words that appear on the page as the description shown on the search results pages. We’ve all seen search results pages with entries that look like this:

“Maria’s Gourmet Restaurants” [homepage] [about us] [hours] [contact us]

If the search results look like this, it is simply because Maria or her site designer forgot or neglected to write a meta description tag in the HTML code. The search engine did in fact retrieve the first few words on the page, but they happened to be some navigational links. As anyone can readily see, not only does this look awkward, it completely fails to offer searchers any information of value about the page.

Surveys indicate that most people tend to skip over search results that look similar to what you just saw and they then click on the next link that will offer them more relevant information describing what is on that next page.

How To Really Design META Description Tags That Will Work
As we have just seen, because the meta description tag serves two functions, it must be thought about differently from the title tag and meta keyword tag. Personally, I use both of those tags mostly for high search engine rankings. The meta description tag, however, should be thought of as a marketing vehicle as well as a tool for high search rankings. I recommend that it use your most important keywords and key phrases for that particular page. Additionally, make certain it is written in a way that will entice searchers to click on your link instead of your competitors’.

If you read my previous articles on this website, and if you have access to a professional copywriter for the sales copy for your website, you could take an important descriptive sentence from the sales copy and place it in the meta description tag. Even if your page wasn’t professionally written, you should look to find a line that will work for this function. Some optimization experts recommend using the first line of text on your page if you don’t know which one to use. I think that is a good place to start, although there are certainly more you can look at.

If you have what you think is an appropriate first line, then you could try it and then test the results in the search engines. There are some that will tell you the search engines don’t give the description tag nearly as much importance as they give the title tag. Some of that could be true. However, and talking from experience, I do know that some major search engines do in fact index the words placed in the meta description tag right into their database, and therefore it is important for you to get some of your most important keywords into them.

There are also some people that will tell you that the first words in this tag are often given more weight than the latter words. Because of this, I always try to write the important keywords at the beginning. I also usually try to use the same first words that I’ve used in my title tag as the first words in my meta description tag whenever at all possible. I usually limit this tag to one good but short descriptive sentence. Generally speaking, some search engines will index approximately 150 characters of your meta description tags. The longest ones I have seen so far are in HotBot.
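Pulling these points together, a head section might look something like this sketch (the business, wording and keywords are invented for illustration):

<HEAD>
<TITLE>Wireless Widgets - Custom Widgets Made in California</TITLE>
<META NAME="description" CONTENT="Wireless widgets made in California: custom-built, tested and shipped worldwide by Acme Widgets.">
</HEAD>

The description opens with the same keywords as the title, reads as one short sentence, and stays comfortably under the 150 characters or so that most engines will display.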

Additionally, try to not repeat words in the meta description tag. However, one thing you could do is use various forms of words in the tag, example: plural/singular, present tense/past tense or “ing” forms of words or verbs and so on and so forth.

Finally, always make sure that all your meta description tags are actual sentences, not just simply a list of keywords or key phrases. If you create good meta description tags, you can often use them as the descriptions you would enter in search engine directories such as Yahoo, the Open Directory (DMOZ), Global Business Listing and LookSmart.

Author:
Serge Thibodeau of Rank For Sales

Don’t mess with those links! When you’re designing your site, you should leave your text links in their natural state–blue and underlined.

We all want to be creative and not do the bland, expected, normal thing. We want to change our links to red, green, yellow, even black–anything but blue. And we have the urge to take off those underlines.

Resist the temptation. It’s hard. But there’s a good reason to leave them alone. (more…)

A lot of times, browsing a website is sort of like playing hide and seek.

Visitors know the information they need is probably available somewhere on the site, but it’s lurking underneath several layers of pages. They are required to go find it.

These websites have the attitude that it is enough to merely provide good information or products. However, they don’t worry about compelling visitors to go in a specific direction or take a specific action.

There may be lots of good information or services available on the site, but visitors are left to take the initiative and pursue it themselves. (more…)

A question that I frequently hear is “Do I really need to have my own domain name?” The one-word answer is “YES”. If you put up your site with one of the free web hosting services, the only company that benefits is the web hosting company. The last person to benefit is you. There are a number of reasons why having your own domain name is a must:

1) When you have your own domain name, the address of your web site will be of the form http://www.yoursite.com. On the other hand, if you put up your site on one of the free servers, the address of your web site will be something like http://www.somefreewebsite.com/yoursite/. Which of these two sounds more professional? Which of these two is smaller and is hence easier to remember? I leave you to make the judgment. (more…)