
Web Moves Blog

Web Moves News and Information

Archive for the 'Tools' Category

Magento is an incredibly flexible ecommerce platform. However, even with all the flexibility it offers, it can still be a challenge to make it bend to your will.

Recently, I needed a few specific categories to show their products in grid mode while all the other categories show products in list mode. This should be straightforward, since the products on a Magento category page can appear either in a vertical list or in a grid and, depending on how Magento is configured, the user can choose which view to display the products in.


If there’s one often-overlooked aspect of deploying a website, it’s email delivery. Sure, you take into account your website’s bandwidth, DNS, server performance, and so on, but email always seems to come low on the totem pole. I suppose this is because it’s ubiquitous: you use it every single day and never really think about its messy underpinnings. And yes, email as it is today is pretty much a mess. I like to think of email as a throwback to a simpler, more wholesome time when people actually trusted each other, a time before messages from Nigerian princes and bogus pharmaceutical ads filled your inbox. Sadly, those days are gone, and the once simple and elegant SMTP protocol now carries a huge pile of kludges and baggage that must be dealt with, such as:

  • SPF records
  • DKIM
  • RBLs
  • IP address reputation
  • Mail server management
  • Reverse DNS pointers
  • Spam Filters
  • Sending Limits
  • Weak Passwords

When you think about it, it’s a miracle that it works at all. Or perhaps it’s a testament to the resiliency of the original email spec that it continues to chug along in spite of all the abuse that occurs. I figure it’s probably a bit of both.
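To make a few items on that list concrete, here is what a minimal set of DNS records for a sending domain might look like. The domain, selector, key, and IP address are all placeholders, and the SES include is just one example of an authorized third-party sender:

```dns
; SPF: declare which servers are allowed to send mail for the domain
example.com.                       IN TXT "v=spf1 include:amazonses.com ~all"

; DKIM: publish the public key under a selector (key truncated here)
selector1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=MIGfMA0..."

; Reverse DNS: map the sending IP back to a hostname
4.3.2.1.in-addr.arpa.              IN PTR mail.example.com.
```

Receiving servers check records like these before deciding whether to trust your mail, which is why skipping them usually lands you in the spam folder.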


The present trend is mobility, and it applies to computing as well: more and more people are opting for internet-enabled smartphones because they help them stay connected even while on the move. However, this poses a problem for websites and website developers, as many sites are not mobile friendly and hence not accessible from a phone. A website that cannot reach customers on their mobiles can mean lost customers, which no business can afford. Therefore, website owners and developers have to work toward creating websites that are available on mobile as well.

However, building a mobile version of a website is not very difficult, as there are several tools that ease the process. Some of these tools are discussed below:



Tracking Pins With the Pinterest Button

By: Bob Tantlinger

Recently I was tasked with logging social media interaction on a site utilizing the “buttons” (what do you call those, anyway?) of Twitter, Facebook, Google+, LinkedIn, and Pinterest.

We wanted to record not only when a social media button was clicked, but when an actual share, like, or whatever took place. In other words, we needed to know that the user actually completed the share. Nothing very difficult: most of the big players in social media have handy APIs that let you subscribe to the events they fire when a share takes place, which makes this fairly straightforward. In a perfect world it WOULD be easy, but there’s always a monkey wrench lurking around the corner ready to ruin your day. In this case the monkey wrench was a royal “Pin in the Ass.” I am referring, of course, to Pinterest.

Pinterest is the newest social media fad, so its button is popping up all over the place at an alarming rate. Everyone is rushing to get their images pinned to the world’s biggest pin board. But there’s a problem: while Pinterest’s “Pin It” button works fine, they offer no official API, so unlike the other social media services, there’s not much you can do with it. You can stick it on your site, and that’s it. You cannot track events, such as when a “pin” occurs, or even when someone simply clicks the darn thing.
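For the services that do expose events, the logging pattern looks roughly like this. This is a sketch, not the actual code from the project: the makeShareLogger helper and the /log endpoint are hypothetical names, while twttr.events.bind and FB.Event.subscribe are the real browser-side hooks Twitter and Facebook provide. Pinterest’s button offers no equivalent hook at all.

```javascript
// Sketch of share-event logging. makeShareLogger and the /log endpoint are
// hypothetical; the Twitter and Facebook subscription calls shown in the
// comments below are the services' real browser-side event APIs.
function makeShareLogger(send) {
  // send(record) delivers the record to your analytics endpoint; it is
  // injected so the logger can be exercised outside a browser.
  return function logShare(network, action, url) {
    const record = { network: network, action: action, url: url, ts: Date.now() };
    send(record);
    return record;
  };
}

// In the browser, the wiring would look roughly like this:
//   const logShare = makeShareLogger(function (r) {
//     new Image().src = '/log?data=' + encodeURIComponent(JSON.stringify(r));
//   });
//   twttr.events.bind('tweet', function () { logShare('twitter', 'tweet', location.href); });
//   FB.Event.subscribe('edge.create', function (href) { logShare('facebook', 'like', href); });
// There is no Pinterest line here, because there is nothing to subscribe to.
```

Injecting the transport function keeps the logger testable and makes it trivial to swap an image beacon for an XHR later.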


In addition to bringing you the most relevant results, search engines often compete over presenting the most up-to-date pages to the searcher. That’s why Google has those time-related filters on the left, just below the “type” filters. Although recent content might matter mostly to news-seekers, Google thinks otherwise. Long before the recent “Panda” update to its indexing algorithm, which has been talked about all over the world during the last week, Google made numerous adjustments to its ranking rationale, with frequently updated websites getting “bonuses” in search engine placements.

Yet another step in the same direction was taken several days ago, although no official announcement has been made. It seems Twitter is getting more credit within Google, which has decided to present recent tweets in the search results. In addition, the results show the user’s picture. More important is the fact that a link to the tweet is included in the SERPs, making it a valuable inbound link for the featured website.

It has to be noted that the above only applies to recently posted tweets (the exact window could not be determined, but from my testing it is probably several hours; after that the results return to the usual “join Twitter to follow”). If you want to see those results, by the way, it is advisable to include the word “Twitter” in your search query.

Google shows a tweet

One of the challenges in the recent Man vs. Machine Jeopardy game, which was easily won by the computer, Watson, was coping with contextual questions. It was not only about facts, but about understanding what the question is about.

Another fact is that people’s searches are becoming more and more specific. Once, people would just type “cheese”. Today, as the number of webpages is immense, the search has to be more specific, stating what exactly you would like to know: “cheese vs cholesterol”, “where to buy cheese”, “how was cheese invented”, etc.

William Tunstall-Pedoe, co-founder of Trueknowledge.com, claims that his website is the future of search, at least for those who want answers to specific questions. Instead of typing search terms, you can just enter a question (“what is cheese”) and get a brief paragraph with an answer. Additionally, you get some useful facts about the subject, and further down the page you can see related links (“external answers”). There is also the possibility to add your own answer, if you think you know better and want to contribute. To the right, you will see other sections, such as “other ways this question is asked” and related questions waiting to be answered (“Can you answer these questions?”).

My personal hope is that, since Trueknowledge.com was founded by scientists, it will present a viable alternative to the popular Wikipedia, which, unfortunately, has numerous incorrect facts in its articles…

With Google’s recent algorithm update (quickly dubbed the “Farmer” update, as it seriously affects the so-called “content farms”) and Blekko’s removal of twenty well-known websites from its results, it seems that fighting spam is the hottest issue in the search engine market.

Indeed, when we face an enemy, it is advisable to know as much about him as we can. So, what is this “spam”? The answer is clear: something annoying and useless. The first occurrence of spam is said to have happened in the 19th century, when many honorable English gentlemen received an urgent telegram with advertising content.

When we are talking about search results, however, spam is not easily defined. Usually it means irrelevant pages that happen to contain a keyword. But that problem was handled a while ago; search algorithms are far more advanced than they were 10 years ago, when one could fill a page with meaningless phrases and get a high ranking.

The problem has shifted to well-written content (grammatically, that is) that provides little useful information. It keeps repeating the same things again and again, so while it looks like a normal article to a bot or spider, for a human being it is simply a waste of time. That’s what “content farm” means: a website with constantly generated and frequently updated content that has little value in it. That’s what Blekko and Google are fighting. The problem is that it is technically very hard to distinguish between “useful” and “useless” content, even for a human, let alone an indexing bot…
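As a toy illustration of why this is hard, consider the crudest possible “repetitiveness” metric: the fraction of word trigrams that are repeats. This is purely a sketch of the idea (certainly not what Google or Blekko actually do), and a determined content farm would defeat it trivially by shuffling synonyms:

```javascript
// Toy repetitiveness score: the fraction of word trigrams in the text that
// have already appeared earlier. 0 means no repeated trigrams at all;
// values near 1 mean the text is mostly the same phrases over and over.
function repetitionScore(text) {
  const words = text.toLowerCase().split(/\W+/).filter(Boolean);
  if (words.length < 3) return 0; // too short to have any trigrams

  const seen = {};
  let dupes = 0;
  let total = 0;
  for (let i = 0; i + 2 < words.length; i++) {
    const gram = words.slice(i, i + 3).join(' ');
    if (seen[gram]) dupes++;
    else seen[gram] = true;
    total++;
  }
  return dupes / total;
}
```

A normal sentence scores 0, while a keyword-stuffed loop scores high, but a farm article that paraphrases the same point in fresh words each time sails right through, which is exactly the detection problem described above.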

Google Chrome is currently the world’s third most popular browser, after Internet Explorer and Mozilla Firefox. Although it is only two-and-something years old, Chrome has quickly gained popularity thanks to an extensive advertising campaign (Google can afford that, of course). However, for an advertising campaign to succeed, the featured product should be, at the very least, adequate. Moreover, newly introduced products (in almost any industry) should offer some innovation, or at least a slightly different approach.

When Google launched Chrome, its basic idea was “simplicity and minimization”: no menu, no additional toolbars, small symbols, a partially hidden options bar… Everything was designed to maximize the internal area, the one where the website loads. And the idea actually worked: many people find Chrome’s design to suit their needs exactly.

It seems that Google is taking these ideas one step further. Reportedly, when discussing several possible layouts for the next version of Chrome, Google’s designers even considered removing the URL bar (well, not exactly removing it, but hiding it when it is inactive). It would pop up whenever the user needs it, allowing a new URL to be entered.

As already said, this is only one of the four possible layouts, but the trend is clear: a bigger viewport and fewer menus. The idea is extremely appealing for those who like to surf the web from an iPad or smartphone.

Numerous websites have reported that Google is ignoring page title tags, replacing them with something “equivalent”. The discussion started on the WebmasterWorld forum, with several upset webmasters claiming that Google is showing page titles that are different from what is included in the page HTML.

Well, instead of being upset, I would rather try to understand the issue. Not a big fan of Google myself, I still recognize that whatever they do, they do for a reason. Matt Cutts says that “We (Google – J.S.) reserve the right to try to figure out what’s a better title.” Of course, one can shout “who are you to determine a better title for my page,” but the answer is pretty simple: they are GOOGLE, the world’s number one search engine. And with the spam issue so hot, their quest to fight crawler-fooling techniques is understandable.

So, when does Google try to find an alternative title for a page? According to Google’s John Mueller, this happens when “the titles are particularly short, shared across large parts of the site or appear to be mostly a collection of keywords.” Such titles are regarded as “inappropriate” by the search engine, which will try to replace them with “other text on the page”.

Some might see this as a violation of rights and yet another step toward global domination by the greedy Google. What I see is the basic principle of SEO: offer solid and interesting content on your website and you will rank high. A page title should definitely be remarkable and unique, giving the user the most important info about the page. So, instead of complaining about Google’s policy, go and check your page titles. If they are good, I am sure Google won’t touch them.
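A quick self-check along those lines can even be scripted. The sketch below flags titles matching the criteria Mueller mentions: very short titles, titles shared across many pages, and bare keyword lists. The auditTitles name and the exact thresholds are my own arbitrary assumptions, not anything Google has published:

```javascript
// Flag page titles likely to be rewritten: too short, duplicated across
// pages, or looking like a bare keyword list. Thresholds are guesses.
function auditTitles(titles) {
  const counts = {};
  titles.forEach(function (t) { counts[t] = (counts[t] || 0) + 1; });

  return titles.map(function (t) {
    const issues = [];
    if (t.trim().length < 15) issues.push('too short');
    if (counts[t] > 1) issues.push('duplicated across pages');
    // Crude "collection of keywords" heuristic: lots of comma-separated terms.
    if ((t.match(/,/g) || []).length >= 3) issues.push('looks like a keyword list');
    return { title: t, issues: issues };
  });
}
```

Feed it the titles from your sitemap; any entry with a non-empty issues list is worth rewriting before Google rewrites it for you.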

It is no secret that Google dominates the search engine market. However, it is also no news that there are people (about 35% of users, actually) who prefer other search engines, such as Yahoo!, Bing, and others. Blekko is gaining some ground, as is GoDaddy… In this situation, it was only a matter of time before another level of search was created, one level “above” the search engines. www.megasearches.com is exactly that: a search that combines the results of several search engines, presenting them to the user in a tabbed view.

The site, created by Arshat Ali Suzon, a California State University graduate student, integrates results from Google, Yahoo!, Bing, Ask, AOL, Wolfram Alpha, Baidu, Snap, and Wikipedia; you can browse through them by clicking the appropriate tab on the search page. You can also choose which search engines you are after by pressing the “+” sign; creating several “configurations” is also possible. In addition, there is a “More>>>” tab that presents a list of eleven more search engine sites (including Lycos, AltaVista, WebCrawler, and About.com). Clicking on any of those takes you to a new page with the results produced by the selected engine.

The site has been around for about eight months now and has already gathered several thousand users in 25 countries. Is this mega-search the next big thing in the search industry? Let’s wait and see.