Bob boasts 13+ years of programming experience. The list of software and web applications he’s developed from scratch, single-handedly, is enormous. He handles all back-end website development, database, PHP, and software development. If we can think of it, Bob can build it for our clients. He’s faster than any programmer we’ve encountered. His technical knowledge is vast, and he is an expert at Java programming. Plus he can write, plus he can play guitar, plus he is really, really nice and patient. We think he may be a robot.
Earlier this year Google began including AMP listings in its mobile search results. AMP, short for Accelerated Mobile Pages, is a specification for creating slimmed-down pages for mobile devices. It’s a subset of standard HTML with added restrictions. In addition, Google caches AMP pages on its own CDN to provide the fastest retrieval possible. Any AMP pages appearing in Google Search results are linked to these cached copies.
You may be asking yourself why we need such a thing when mobile phones are already capable of displaying “ordinary” websites. It’s a good question, and admittedly it was my first question when I learned of AMP. The AMP project says its purpose is to give mobile users a better, faster web experience. But what’s wrong with the current user experience on mobile phones? Is it really bad enough to warrant an entirely new web page specification when we already have HTML5?
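To give a sense of what those restrictions look like, here is a rough sketch of a minimal AMP page. This is abbreviated for illustration; the exact required boilerplate (including the full `amp-boilerplate` CSS) is defined by the AMP spec, so consult the AMP project documentation before using anything like this.

```html
<!doctype html>
<html ⚡>
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width,minimum-scale=1">
  <link rel="canonical" href="https://example.com/original-article.html">
  <!-- The AMP runtime, loaded asynchronously -->
  <script async src="https://cdn.ampproject.org/v0.js"></script>
  <style amp-boilerplate>/* required boilerplate CSS, omitted here */</style>
</head>
<body>
  <h1>Hello, AMP</h1>
  <!-- A plain <img> is disallowed; AMP supplies its own components -->
  <amp-img src="photo.jpg" width="600" height="400"></amp-img>
</body>
</html>
```

Note the ⚡ attribute on the `html` tag (plain `amp` also works) and the `amp-img` component standing in for a regular image tag. Those restrictions are what let Google validate and cache the page aggressively.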
phpMyAdmin is a handy tool for administering MySQL from an easy-to-use web interface. However, leaving such a powerful tool open to the entire world can be downright dangerous and should be avoided if possible. Ultimately it’s best to keep it off your production server (or any other server you care about). However, if you absolutely must use phpMyAdmin, you should restrict who can access it. Below is a quick and easy tweak that allows access only from a specific IP address. This tweak assumes an Ubuntu LAMP stack but should work fine on any Linux distribution, although paths may differ.
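Here is a sketch of what such a restriction looks like. On Ubuntu, phpMyAdmin’s Apache configuration is typically pulled in from /etc/phpmyadmin/apache.conf; the directory path and the IP address below are examples, so substitute your own:

```apache
<Directory /usr/share/phpmyadmin>
    # Apache 2.2 syntax: deny everyone except one trusted address
    Order Deny,Allow
    Deny from all
    Allow from 203.0.113.10

    # Apache 2.4 equivalent:
    # Require ip 203.0.113.10
</Directory>
```

Reload Apache afterwards (e.g. `sudo service apache2 reload`) for the change to take effect. Anyone connecting from a different address will get a 403 instead of the phpMyAdmin login page.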
Recently, I needed a few specific categories to show their products in grid mode while all the other categories show products in list mode. This should be straightforward, since the products on a Magento category page can appear in either a vertical list or a grid, and, depending on how Magento is configured, the user can choose which view to display the products in.
If there’s one often-overlooked aspect of deploying a website, it’s email delivery. Sure, you take into account your website’s bandwidth, DNS, server performance, etc. But email always seems to come low on the totem pole. I suppose this might be because it’s ubiquitous: you use it every single day and never really think about all the messy underpinnings. And, yes, email as it is today is pretty much a mess. I like to think of email as a throwback to a simpler, more wholesome time when people actually trusted each other. A time before messages from Nigerian princes and bogus pharmaceutical ads filled your inbox. Sadly, those days are gone, and the once simple and elegant SMTP protocol now includes a huge pile of kludges and baggage that must be dealt with, such as:
When you think about it, it’s a miracle that it works at all. Or perhaps it’s a testament to the resiliency of the original email spec that it continues to chug along in spite of all the abuse. I figure it’s probably a bit of both.
AWS makes it easy to take snapshots of your EBS volumes. However, if you have many volumes, a way to automate and rotate snapshots becomes essential. There are many solutions out there for automated snapshots; one excellent option is the ec2-automate-backup script. Run from a cron job, it can snapshot all your volumes in a specific region. Below is how I’ve set this up.
I have multiple EBS volumes attached to multiple EC2 instances. I needed a way to take a daily snapshot of all volumes. In addition, I needed the snapshots to rotate, such that only the last seven days’ worth of snapshots would be kept.
I chose to create a small EC2 instance dedicated to running ec2-automate-backup from a cron job that backs up the volumes of all my production instances.
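As a sketch, the crontab entry on that instance looks something like this. The paths are examples, and the flags (-s/-t to select volumes by a "Backup" tag, -k to purge snapshots older than seven days, -p to enable purging) follow the script’s README as I remember it; options vary between versions of the script, so double-check them against your copy:

```
# m h dom mon dow  command
# Snapshot tagged EBS volumes in us-east-1 daily at 2 AM,
# purging snapshots older than 7 days
0 2 * * * /home/ubuntu/aws-missing-tools/ec2-automate-backup/ec2-automate-backup.sh -r us-east-1 -s tag -t "Backup,Values=true" -k 7 -p >> /var/log/ec2-backup.log 2>&1
```

Redirecting output to a log file makes it easy to confirm the job actually ran and to troubleshoot failed snapshots.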
WordPress can make for a great CMS, but sometimes it’s not always feasible or practical to use it for your entire site. One common scenario is to use WordPress for a blog that runs alongside a site built with a completely different technology. However, a problem arises in this situation: What if you want to display blog posts on your non-WordPress site?
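One approach is to pull the posts over HTTP rather than loading WordPress itself. As a sketch, assuming your WordPress install is recent enough to expose the REST API under /wp-json/ (the blog URL below is a placeholder), the non-WordPress front end could fetch recent posts like this:

```javascript
// Turn raw WordPress REST API post objects into a simple list for display.
// (Shape per GET /wp-json/wp/v2/posts: title.rendered, link, ...)
function summarizePosts(posts) {
  return posts.map(post => ({
    title: post.title.rendered,
    url: post.link,
  }));
}

// Fetch the five most recent posts from the blog.
// 'blogUrl' is a placeholder for your real blog URL.
async function fetchRecentPosts(blogUrl) {
  const res = await fetch(blogUrl + '/wp-json/wp/v2/posts?per_page=5');
  if (!res.ok) throw new Error('WordPress API returned ' + res.status);
  return summarizePosts(await res.json());
}
```

If the REST API isn’t available, the blog’s RSS feed can serve the same purpose with a little more parsing work.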
Page loading time is crucial to keeping visitors on your site and
maximizing conversions. Studies show that the maximum time people
are willing to wait for a page to load is less than five seconds.
Make them wait longer than that, and it’s game over.
They’ll hit their back button, never to return. It’s vitally important,
then, to make sure your web site is loading as fast as possible.
Sure, having a super-beefy server helps, but one important aspect of
having a fast loading website is reducing the size and number of your
page assets as much as possible. This can be a real challenge within the
current state of the Web. Web pages are becoming increasingly
complex globs of code that require a huge number of assets to display
and function properly. In addition to the plain old HTML, a ton of
JavaScript, CSS, and images must load in the
background for the page to fully render in a browser.
“What’s the big deal?” I hear you ask. “All my visitors are on
broadband, and the js/css/images are only a few KB extra – hardly a drop
in the bucket!” Now, this may very well be true. However, the actual
size of your files is only a small part of the overall cost incurred
on a page load. There is a much more subtle bottleneck that has
nothing to do with file size: the maximum concurrent connection limit.
This is a limit the browser enforces which dictates how many
connections can be open simultaneously to a single server. Even if
you’re on a super-fast connection, your browser will still limit the
maximum number of files you can download at one time. This number varies
from browser to browser, and may change slightly depending on connection
speed and web server configuration. The actual values for Internet
Explorer, Firefox, and Chrome are below:
It can be helpful to think of the concurrent connection limit as the end
of a funnel that your page assets pour through. Naturally, the more
assets you have the longer it takes for them to get through the funnel.
Making your assets smaller helps them pour through faster, but still,
only so many can go through at once no matter how small they are. The
key is to combine them into as few files as possible, thereby reducing
the connection limit bottleneck. Making your files smaller AND combining
them is a win-win situation. Smaller files + fewer connections =
faster loading site.
To “minify” a file simply means to strip out all the “human readable”
parts of the file such as indentation, line breaks, comments,
extraneous whitespace, long variable names, etc. In other words, all
the stuff that makes it easy for a human to read, but which a computer
couldn’t care less about.
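As a tiny made-up illustration, here is a JavaScript function before and after minification. The behavior is identical; only the human-friendly parts are gone:

```javascript
// Original, human-readable source
function addSalesTax(price, rate) {
    // rate is a fraction, e.g. 0.08 for an 8% tax
    var total = price + price * rate;
    return total;
}

// The same function after minification: comments, line breaks,
// whitespace, and long identifiers are stripped, but it computes
// exactly the same result
function a(p,r){return p+p*r}
```

Multiply that kind of savings across every function, selector, and rule in your JS and CSS files and the byte counts add up quickly.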
Like any other art form, web design is completely subjective. A web site might look like a thing of beauty to one person, and a complete mess to another. There is, after all, no accounting for taste, and everyone’s tastes are different. However, there’s more to a web site’s design than merely its appearance. A web site design can have an enormous impact on conversions, and even the most subtle design decisions can have a big effect. For example, a user might be more inclined to click on a green “Buy Now!” button than a red one. Finding a good balance between a site that looks good and a site that performs well in terms of conversions can be a real challenge.
How then can something as subjective as web design be analyzed in an objective manner to find the most effective design? One widely used technique is A/B testing. In a nutshell, A/B testing sets up two or more control groups: Group A will see one version of the site, while Group B will see another version. This way various design elements can be tested and compared.
But is A/B testing really the best way to determine the most effective web design? Perhaps not. This excellent blog post by Steve Hanov suggests another method for finding the best design. Best of all, it’s fully automated. Set it, forget it, and the page will “learn” which elements result in the most conversions.
In his post, Steve outlines the epsilon-greedy algorithm, also known as the multi-armed bandit problem. Given a set of variations for a particular page element, the algorithm can make an ‘educated’ decision on which variation to show based on its past performance. The best performing page elements are displayed the most frequently.
The algorithm records the number of times a particular page element was displayed, and the number of times the element resulted in a conversion. However, the algorithm will also adapt to change. If a page element’s conversions begin to decrease, the algorithm will start to adapt and display different variations. The best part of this is that you can set up different variations of page elements one time, and let the computer do the work of figuring out which variations are the most successful. Pretty neat stuff!
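The core of the epsilon-greedy algorithm is only a few lines in any language. Here is a minimal sketch in JavaScript; the 10% exploration rate and the shape of the stats records are my own illustrative choices:

```javascript
// One { shows, conversions } record per variation of a page element.
function conversionRate(record) {
  return record.shows > 0 ? record.conversions / record.shows : 0;
}

// Epsilon-greedy choice: usually show the best performer ("exploit"),
// but with probability epsilon show a random variation ("explore")
// so that newly improving variations still get a chance.
function chooseVariation(stats, epsilon = 0.1) {
  if (Math.random() < epsilon) {
    return Math.floor(Math.random() * stats.length);
  }
  let best = 0;
  for (let i = 1; i < stats.length; i++) {
    if (conversionRate(stats[i]) > conversionRate(stats[best])) {
      best = i;
    }
  }
  return best;
}

// Call these as events happen:
function recordShow(stats, i) { stats[i].shows += 1; }
function recordConversion(stats, i) { stats[i].conversions += 1; }
```

Because every page view updates the counters, a variation whose conversion rate drops is gradually displaced by whichever one is currently performing best.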
Armed with this knowledge, I set out to try a few experiments with it, the result of which is Robo_AB_Tester, a small PHP class library I created which implements the epsilon-greedy algorithm. You can give it a try here.
Robo_AB_Tester tries to abstract away as many implementation details as possible and create a simple interface that is, hopefully, easy to integrate into a PHP based website. Once it is set up, it will:
For more details, see the demo page.
Rich snippets are all the rage these days. Ever since Google started enhancing its search results with these extra tidbits of information, everyone has been rushing to add the metadata that enables them. So what is the benefit of having a “rich” search result for your site? Good question. Other than giving the search engine user a little bit of extra detail, I suppose there’s also a subtle psychological factor that kicks in. Someone might be more inclined to click on a search result that has a 5-star rating and a friendly face than one that doesn’t. Plus, they’re just plain cool. Who doesn’t want to add bling to their search results? But this only scratches the surface. There’s much, much more to them than that.
Rich Snippets, as Google calls them, are actually semantic markup. Marking up a document with meta information for the benefit of machines is not a new idea; semantic markup is as old as information technology itself. For example, a Word document contains metadata about its author, and a digital photo contains metadata about the camera it was taken with. You might, for instance, store your digital snapshots in a photo archiving program which uses this semantic data to filter your photos by date taken, lens type, flash used, etc. So, in essence, metadata is data about data.
It should be clear, then, how this “data about data” can be extremely useful to search engines. It gives a search engine the ability to derive semantic meaning from a document’s meta information rather than having to rely purely on the abstract, human-understandable concepts within the text of the document. Searches can become less about keywords in text documents and more about relationships between semantic data types.
To illustrate this point further, consider the following search: Find all restaurants with a 3.5 star or better rating on the Las Vegas strip that specialize in Italian OR Mexican cuisine AND are open after 11 PM on Sunday nights AND do NOT require reservations. On the semantic web, rather than a list of links to restaurant web sites that may or may not match your given criteria, you might get a list of “restaurant result objects” that DO match exactly that criteria and never even have to visit the restaurant’s web site. This is where the real power of semantic data lies. Instant information aggregation.
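To make that concrete, here is roughly what the markup enabling such a query might look like for one restaurant, using schema.org microdata. The restaurant, its numbers, and the address are invented for illustration:

```html
<div itemscope itemtype="http://schema.org/Restaurant">
  <span itemprop="name">Luigi's Trattoria</span> serves
  <span itemprop="servesCuisine">Italian</span> cuisine.
  <div itemprop="aggregateRating" itemscope
       itemtype="http://schema.org/AggregateRating">
    Rated <span itemprop="ratingValue">4.2</span> stars
    by <span itemprop="reviewCount">87</span> diners.
  </div>
  <!-- Machine-readable facts that never appear as visible text -->
  <meta itemprop="openingHours" content="Su 17:00-24:00">
  <div itemprop="address" itemscope
       itemtype="http://schema.org/PostalAddress">
    <span itemprop="streetAddress">3600 Las Vegas Blvd S</span>,
    <span itemprop="addressLocality">Las Vegas</span>
  </div>
</div>
```

A crawler can extract the cuisine, rating, hours, and location as typed values and match them against a structured query, without ever “reading” the prose around them.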
This “semantic web” is not a new idea either. In fact, Tim Berners-Lee himself envisioned the World Wide Web as a kind of “Semantic Network Model,” and even the earliest HTML specifications included the concept of meta tags, which you are undoubtedly familiar with. Later iterations, such as XHTML, took this idea a step further. Most notable is the RDFa specification, which has been around for quite some time.
A while back I was working on a project that required the GUI to allow the user to dynamically add, remove and rearrange various form fields contained in table rows. The tricky part was that the UI needed to have this functionality for several different types of elements across several different forms. For instance, one set of fields was for adding and removing specifications to a product while another set of fields was for adding images to a product. Thus, I needed a solution that would be flexible enough to work across virtually any type of form elements.
Naturally, I turned to jQuery. I first looked around jQuery’s plugin ecosystem to see if there was already a plugin that might do the job. While I did find a few different plugins for adding and removing form elements, none of them did exactly what I needed, specifically rearranging items. So, I was left with either trying to hack the functionality into an existing plugin, or rolling up my sleeves and writing my own. I chose the latter option, since jQuery’s excellent extension mechanism makes writing plugins a fairly straightforward process. The result is the plugin below, which I call dynoTable.
DynoTable makes an HTML table editable. With it you can:
Getting started with dynoTable is a snap. First, make sure you have jQuery and the dynoTable plugin included in your page like so:
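That is, something like the following (the file name jquery.dynotable.js is a guess for illustration; use the actual file name from the plugin download, and adjust the paths and jQuery version to match your site):

```html
<script type="text/javascript" src="/js/jquery.min.js"></script>
<script type="text/javascript" src="/js/jquery.dynotable.js"></script>
```

Make sure jQuery is loaded first, since the plugin attaches itself to the jQuery object.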