
Web Moves Blog

Web Moves News and Information

Archive for the 'Search Engine Optimization' Category

For a while we have known that speed is an important factor in search rankings, but over the last week we have seen an interesting sign of this being tested.  As reported on SearchEngineLand, Google has been testing a bright red “Slow” warning in search engine results pages (SERPs) for sites that load slower than normal.  This way users are warned that clicking the link will result in a slow page load. (more…)

UPDATE 10/19: Google has confirmed the rollout of Penguin 3.0

I’ve only got one anecdote to support my suspicion, but a client who was a victim of Negative SEO lost all of their top 10 rankings, and as of today they are all back 100%!

Google Penguin Update

Photo Credit: omoscowonder.com

Backstory: This client hired us in December 2013 to figure out why their rankings and organic traffic had dropped overnight on Oct 5, 2013 (the date of the last Penguin update).  We analyzed their backlinks using a variety of tools, and it didn’t take us long to discover they had been a victim of negative SEO.  We found that over a period of 3 months, random spammy links had been pointed at their site to the tune of about 20,000 links a month!  They had never done their own link building nor hired an SEO. Other than these spam links they had a very small link profile, with just a few natural links.

Because the rankings and traffic did drop right in line with the Penguin update, and there were no warnings in Webmaster Tools of a manual penalty, we knew this was an algorithmic penalty associated with these links.

It took us about 3 months to discover and disavow these links, and the reason why is interesting. The links were not showing up on the spam pages every time – they would only occasionally appear upon a refresh.  Each refresh of these spam pages was filled with about 100 random links, and each refresh showed a new set of 100.  So any link checker would find the links sometimes, but not other times. We had no choice but to keep running link checkers – 3 different ones, as a matter of fact – about once a week.  Each time we would discover new links that we hadn’t disavowed yet.  It was a brilliantly sneaky black hat SEO attack.
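For context, disavowing is done by uploading a plain text file to Google’s Disavow Links tool. The domains below are placeholders, but the file format looks roughly like this – “#” lines are comments, “domain:” entries disavow an entire domain, and individual URLs can be listed one per line:

    # spam domains found in this week's link checker run (examples only)
    domain:spammy-link-network.example
    domain:autogenerated-links.example
    # a single URL can also be disavowed on its own
    http://spam-pages.example/random-links-1.html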

By around February, we were confident we had disavowed about 90% of the spam links. But no recovery.

As we have all learned since then, a site must wait on Google to update the Penguin algorithm before any changes in rankings or recovery can take place. Unfair? Absolutely. Especially in the case of Negative SEO, which is real and does work.  And in this case it took Google over a year to update Penguin.  Imagine us reassuring this client that as soon as Google runs their update, their rankings will be back. “When will that be?” “Uh, Google won’t tell anyone, should be within 3 more months”.

In the meantime, we worked hard on content marketing, blogging for the client, getting high quality content published on their site and blog, building out Google Local pages, hoping that when the update hit, their site would be stronger than ever.

SUCCESS!  ALL of their important keywords are back in the top 10 as of today, October 18, 2014. Some of them even higher than before!

We are torn between being thrilled and being sickened that Google can allow something like this to happen.

 

January 2013 – Last spring Google posted about Responsive Web Design on their official Webmaster Central blog, and though the tone of the article was fairly mild, they made it very clear that their “commitment to accessibility” includes a very important message for web designers: “Mark up one set of content, making it viewable on any device.”

(more…)

I’m sure by now everyone has seen that Google allows you to set up authorship credit for the content that you create.  Credit appears in the search results as a picture of the author, along with a link to the author’s Google Plus page and a link that allows you to read more posts by the author.

SEO Moves - Author Credit in SERP Example

Setting this up for a blog with only one author posting content is pretty straightforward, and there is a lot of good information available on how to do it.  The problem comes when you have a website or blog with multiple authors posting content.  Unfortunately, the steps that work in a single-author setup do not work when there are multiple authors, and you can end up with the wrong author receiving credit for the content.

After a fair amount of searching I was still not satisfied with any of the answers I had found on how to set up authorship credit when there is more than one author.  A lot of the posts I found contained outdated information or steps that are honestly not necessary.  Finally, after piecing together a bunch of information, I found a solution that was even easier than I had expected! In the next couple of sections I will cover how to configure author credit for both scenarios (single author or multiple authors) in WordPress.
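As a preview of the general approach, here is a rough sketch for a theme’s functions.php. It is only an illustration, not the exact steps covered in this post, and the ‘googleplus’ profile field name is one I made up – use whatever field your setup adds:

    <?php
    // Sketch: give each author their own Google+ profile field, then output
    // a per-author rel="author" link on single posts. The 'googleplus'
    // field name is hypothetical – use whatever key you add to user profiles.
    add_filter( 'user_contactmethods', function ( $fields ) {
        $fields['googleplus'] = 'Google+ Profile URL';
        return $fields;
    } );

    add_action( 'wp_head', function () {
        if ( ! is_single() ) {
            return;
        }
        $post = get_queried_object();
        $url  = get_the_author_meta( 'googleplus', $post->post_author );
        if ( $url ) {
            echo '<link rel="author" href="' . esc_url( $url ) . '" />' . "\n";
        }
    } );

Because the link is generated from each post’s author, every post credits the right person even on a multi-author blog.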

(more…)

SEO Moves - Waves - Photo Credit: flickr.com/photos/onigiri_chang/

Aaron Wall of SEOBook recently predicted that, in 2013, SEOs who “remain overly-public will continue to invent language to serve their own commercial purposes while chastising those who do not fall in line.” I appear to be living up to (the first part) of that promise because I’m calling it: the breakthrough ranking factor of 2013 will be “waves,” a term I just made up.

This will be a somewhat speculative post, so I feel compelled to say that these opinions are my own, and don’t necessarily reflect the opinions of Northcutt as a whole.

Where did this crazy idea come from? It started with the realization that, back in 2009, Google’s Chief Economist told McKinsey Quarterly, “I keep saying the sexy job in the next ten years will be statisticians. People think I’m joking, but who would’ve guessed that computer engineers would’ve been the sexy job of the 1990s?”


(more…)

Google Logo - SEO Moves

If you work in the field of SEO, you probably understand that Google controls everything. We constantly bend to its will and try to outthink it at every turn. Just as space travel is unpredictable because we haven’t yet experienced much of it, SEO is also a largely new frontier and we seldom know what to expect from our environment. Our environment, of course, is Google. But what if Google didn’t exist? Where would we look for sites? How would we get links? Your brain is probably boiling over with great ideas right now, and that’s the point of this whole thing—if we eliminate Google from the equation entirely, those paths that we come up with are almost completely organic.

Google is extremely popular with both the general public and with SEO professionals, but it often locks us inside of a box. At some point we’re not exploring the web on our own, and instead we are relying on an algorithm and some web spiders to explore for us. We can break out of this box and choose our own destination in a natural, organic way. Considering the question “what if Google didn’t exist?” is a great way to answer the question “where can I get more links?”

What If?

Playing “what if?” is a fun, but sometimes dangerous, game. It’s easy to get stuck down in the mire of negativity and use “what if?” to fuel your own pessimistic fire. If you use it correctly, however, “what if?” can be a great catalyst for ideas and innovation. For example, think about the popular post-apocalyptic genre of fiction, where a shovel might become the protagonist’s best weapon, best tool and best friend. Similarly, in a world without Google, a message board buried somewhere inside of a mediocre site with low domain authority might become an excellent research tool. After all, if all of these people are willing to brave an underwhelming site just to talk to each other and share their thoughts on a topic, that means they’re passionate about it. Passion leads to great info, great leads on new sites and useful links. Google does exist, of course, but thinking outside of that box produces some interesting results.

(more…)

Speedometer Page Load - SEO Moves

Page loading time is crucial to keeping visitors on your site and maximizing conversions. Studies have shown that the maximum time people are willing to wait for a page to load is less than 5 seconds. Make them wait longer than that, and it’s game over. They’ll hit their back button, never to return. It’s vitally important, then, to make sure your website is loading as fast as possible.

Sure, having a super-beefy server helps, but one important aspect of having a fast-loading website is reducing the size and number of your page assets as much as possible. This can be a real challenge in the current state of the Web. Web pages are becoming increasingly complex globs of code that require a huge number of assets to display and function properly. In addition to the plain old HTML, a ton of JavaScript, CSS files, and small images all have to be downloaded in the background for the page to fully render in a browser.

“What’s the big deal?” I hear you ask. “All my visitors are on broadband, and the JS/CSS/images are only a few KB extra – hardly a drop in the bucket!” Now, this may very well be true. However, the actual size of your files is only a small part of the overall cost incurred on a page load. There is a much more subtle bottleneck that has nothing to do with file size: the maximum concurrent connection limit. This is a limit the browser enforces that dictates how many connections can be open simultaneously to a single server. Even if you’re on a super-fast connection, your browser will still limit the maximum number of files you can download at one time. This number varies from browser to browser, and may change slightly depending on connection speed and web server configuration. Typical per-host values for Internet Explorer, Firefox, and Chrome are below:

  • IE 7 and below: 2
  • IE 8 and above: 6
  • Firefox 2 and below: 2
  • Firefox 3 and above: 6
  • Chrome: 6

Combine and Minify

Combine Assets - Funnel - SEO Moves

It can be helpful to think of the concurrent connection limit as the end of a funnel that your page assets pour through. Naturally, the more assets you have, the longer it takes for them to get through the funnel. Making your assets smaller helps them pour through faster, but still, only so many can go through at once no matter how small they are. The key is to combine them into as few files as possible, thereby reducing the connection limit bottleneck. Making your files smaller AND combining them is a win-win situation. Smaller files + fewer connections = faster loading site.
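As a rough illustration of the combining step, here is a simple PHP sketch that concatenates several JavaScript files into a single bundle so the browser only needs one connection to fetch them. The file names are made up, and actual minification is best left to a dedicated tool:

    <?php
    // Combine several script files into one bundle (example file names).
    $files  = array( 'jquery.plugin.js', 'menu.js', 'analytics.js' );
    $bundle = '';

    foreach ( $files as $file ) {
        // The trailing semicolon guards against files whose last statement
        // isn't terminated cleanly.
        $bundle .= file_get_contents( $file ) . ";\n";
    }

    // Write the combined file, then run it through a proper minifier.
    file_put_contents( 'combined.js', $bundle );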

Minification

To “minify” a file simply means to strip out all the “human-readable” parts of the file, such as indentation, line breaks, comments, extraneous whitespace, long variable names, etc. – in other words, all the stuff that makes it easy for a human to read, but which a computer couldn’t care less about.

Consider the following snippet of JavaScript before being minified:

(more…)

Dog Napping - Photo Credit: zentofitness.com

If your SEO rankings are plummeting, one of the most effective local internet marketing techniques to help bring them back around is to take a NAP:

  • Name
  • Address
  • Phone Number

Ultimately, we’re talking about local citation building.  Citation building can have a powerful effect on your SERP positioning for keywords and searches returned using local data and terms.  However, it’s not as easy as simply requesting inclusion in Google Places and getting a couple of links from sites like the online Yellow Pages.  The following are 4 simple steps for taking a NAP – AKA building effective and long-lasting local citations.

(more…)

Computer Programming Guest Blog Post

It’s pretty much common knowledge:  Web developers hate SEO experts.  In all fairness, however, the feeling is mutual.  But there are some good reasons for this culture clash.

“Same Thing” Sickness

One thing that SEOs hate about web developers has to do with the way they carry out – or fail to carry out – a very specific request.

A case in point:  An SEO asks a developer to create a 301 redirect between pages.  The developer does a meta refresh or a 302 redirect instead, claiming that it’s the “same thing”.

From the developer’s perspective, it’s the same thing for the user, but from an SEO standpoint the difference matters: a 301 tells search engines the move is permanent, so ranking signals should follow the new URL, while a 302 or meta refresh signals a temporary move and is not treated the same way.
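To make the difference concrete, here is a quick PHP sketch (the destination URL is just an example). A 301 is an explicit permanent redirect; a bare Location header defaults to a temporary 302, and a meta refresh isn’t a server-side redirect at all:

    <?php
    // Permanent (301) redirect – signals that the page has moved for good.
    header( 'HTTP/1.1 301 Moved Permanently' );
    header( 'Location: http://www.example.com/new-page/' );
    exit;

    // By contrast, header( 'Location: ...' ) on its own sends a 302
    // (temporary) redirect, which search engines treat differently.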

The Death of Optimization

Developer skills and SEO techniques go hand in hand, so if a developer fails to do their job, it doesn’t matter what the SEO team does.  Even with a copy of Google’s secret algorithm in hand, the site won’t rank if the site won’t work.

A case in point:  A client implements some redesign elements.  Suddenly, traffic drops by 30%.

The problem:  Many of the pages don’t load the way they should, and others return 500 server errors.  The developer failed to spot the errors during the development process.

The result:  3 weeks of seriously diminished traffic.

The “I Know SEO” Syndrome

This is a contagious disease that developers catch and can quickly spread to other developers.  If you have ever heard a developer say something like “I’m pretty good at SEO,” it can usually be translated as “I’ve read a little about SEO and therefore I pretty much know more than you do.”

But wait a minute, SEOs.  You aren’t immune, either.  There is a related syndrome called “I can code”.

A case in point:  An SEO expert successfully builds a WordPress site and suddenly deems themselves a web developer.

The Real Problem

At the root of the culture clash between coders and SEOs are their driving philosophies.  Business classes that teach search engine optimization focus on uniqueness.  After all, differentiating yourself from the competition is a good thing.  On the other hand, computer science classes center on standardization – making everything the same.  Each discipline takes a different approach to reaching the same result:  stability and efficiency.

(more…)

Like any other art form, web design is completely subjective. A website might look like a thing of beauty to one person, and a complete mess to another. There is, after all, no accounting for taste, and everyone’s tastes are different. However, there’s more to a website’s design than merely its appearance. A website’s design can have an enormous impact on conversions, and even the most subtle design decisions can have a big effect. For example, a user might be more inclined to click on a green “Buy Now!” button than a red one. Finding a good balance between a site that looks good and a site that performs well in terms of conversions can be a real challenge.

How, then, can something as subjective as web design be analyzed in an objective manner to find the most effective design? One widely used technique is A/B testing. In a nutshell, A/B testing sets up two or more groups of visitors: Group A will see one version of the site, while Group B will see another version. This way various design elements can be tested and compared.

A/B Testing Representation

But is A/B testing really the best way to determine the most effective web design? Perhaps not. This excellent blog post by Steve Hanov suggests another method for finding the best design. Best of all, it’s fully automated. Set it, forget it, and the page will “learn” which elements result in the most conversions.

In his post, Steve outlines the epsilon-greedy algorithm, a simple strategy for the multi-armed bandit problem. Given a set of variations for a particular page element, the algorithm can make an ‘educated’ decision about which variation to show based on its past performance. The best-performing page elements are displayed the most frequently.

The algorithm records the number of times a particular page element was displayed and the number of times the element resulted in a conversion. It also adapts to change: if a page element’s conversions begin to decrease, the algorithm will start to display different variations more often. The best part is that you can set up the different variations of page elements one time, and let the computer do the work of figuring out which variations are the most successful. Pretty neat stuff!
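To make the idea concrete, here is a rough PHP sketch of the epsilon-greedy strategy. This is only an illustration of the general approach, not code from Robo_AB_Tester; the variation names, the counts, and the 10% exploration rate are all arbitrary:

    <?php
    // Pick a variation: explore a random one 10% of the time, otherwise
    // exploit the one with the best conversion rate so far.
    function choose_variation( array $stats, $epsilon = 0.1 ) {
        if ( mt_rand() / mt_getrandmax() < $epsilon ) {
            return array_rand( $stats );          // explore
        }
        $best     = null;
        $bestRate = -1;
        foreach ( $stats as $name => $s ) {
            $rate = $s['shown'] > 0 ? $s['converted'] / $s['shown'] : 0;
            if ( $rate > $bestRate ) {
                $bestRate = $rate;
                $best     = $name;                // exploit
            }
        }
        return $best;
    }

    // Display/conversion counts would normally live in a database.
    $stats = array(
        'green-button' => array( 'shown' => 120, 'converted' => 18 ),
        'red-button'   => array( 'shown' => 115, 'converted' => 9 ),
    );

    $variation = choose_variation( $stats );
    $stats[ $variation ]['shown']++;   // record the impression
    // ...and if the visitor converts later:
    // $stats[ $variation ]['converted']++;

Because under-performing variations are still shown a small fraction of the time, the algorithm can notice when their performance changes, which is exactly the adaptive behavior described above.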

Armed with this knowledge, I set out to try a few experiments with it, the result of which is Robo_AB_Tester, a small PHP class library I created which implements the epsilon-greedy algorithm. You can give it a try here.

 

Robo_AB_Tester tries to abstract away as many implementation details as possible and provide a simple interface that is, hopefully, easy to integrate into a PHP-based website. Once it is set up, it will:

  • Allow you to test multiple elements per page
  • Allow you to specify any number of variations (A/B/C/D/E…) of each element
  • Detect on-page events for the tested elements (e.g. clicks, form submits, etc.)
  • Handle all AJAX communication between your web page and Robo_AB_Tester
  • Keep track of how many times the elements were displayed
  • Keep track of how many times a user interacted with the element
  • Autonomously determine the best performing elements

For more details, see the demo page.