
Category Archives: How it works

Faster hosting == more conversions and better SEO

Sophisticated clients love to chat with us about search engine optimization (SEO), keyword placement, guest blogging, pay-per-click (PPC) advertising and other techniques to increase the traffic to their websites and ultimately add to their bottom line. They spend hours each month tracking changes in traffic patterns and spend thousands of dollars on advertising. They optimize the smallest details and rework landing pages to improve traffic flow. But often, their results plateau and they don’t know where to look for the next step.

An important factor in search engine rank is site speed. Search engines pay close attention to the speed of your website, because a fast site is usually a sign of a well-run operation and gives visitors a better experience. In fact, the search engines consider it so important that they give you free tools to measure your site’s speed and help you make it faster:

https://developers.google.com/speed/pagespeed/
http://developer.yahoo.com/yslow/

More importantly, faster websites convert better. People don’t like to wait, even if the difference is barely perceptible. You’re competing against the likes of Amazon, which has an effectively unlimited budget and teams of experts watching for slowdowns. A 2-4 second lag on each page makes your business look like it’s being run by amateurs. Often, repeat business is based on a “feeling” rather than actual product quality, so you want customers who are just as impressed by the speed of your website as by the quality of your product and customer service.

Getting rid of perceptible slowdowns is so important that even your iPhone or Android has built-in transitions such as swiping, sliding, zooming effects, spinning circles and “loading” progress bars. This eye candy is not just pretty; it’s there to help pass the time while applications launch.

Smart websites are built the same way: everything is optimized to reduce perceptible wait time and remove any obstacle between the visitor and a completed purchase. By the way, perceptible and actual load times are not the same thing. It’s possible for your pages to start rendering in the browser before the entire content has been downloaded from the server.
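
To make the distinction concrete, here is a rough Python sketch (an illustration, not a recommended measurement tool) that uses the third-party requests library to time how quickly the first bytes arrive versus how long the full page takes to download. The URL is just a placeholder.

```python
# Rough illustration: time-to-first-byte (roughly when the browser can start
# rendering) versus the full download time. The URL is a placeholder.
import time

import requests

url = "https://www.example.com/"

start = time.monotonic()
with requests.get(url, stream=True, timeout=10) as response:
    chunks = response.iter_content(chunk_size=8192)

    # The first chunk arriving approximates the "perceptible" start of the page.
    next(chunks, b"")
    first_byte = time.monotonic() - start

    # Draining the rest of the body gives the "actual" load time.
    for _ in chunks:
        pass
    total = time.monotonic() - start

print(f"Time to first byte: {first_byte:.3f}s")
print(f"Full download:      {total:.3f}s")
```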

Website speed typically has more to do with the structure of the website than with the size of pages or images. For instance, a page built around large blocking files that must be downloaded before the browser starts to render is going to feel slower than one that places those blockers farther down the chain. The first fix is to rearrange these files so they load only when needed and to move all external includes as far down the chain as possible. Google Analytics, for instance, should be loaded at the very bottom of the page, below all static content such as images.

However, this is not always an option, because most websites today use a content management system (CMS) such as WordPress, Drupal, Joomla or Magento, which comes with a prebuilt structure that cannot easily be rearranged. Additionally, plugins and themes for these systems have dependencies that require a certain structure.

But there are other easy things that can be done to dramatically improve the speed of delivery.

[Image: load-time waterfall of the original, slow website. Caption: original, pre-optimization]

In the example above, each line item is an individual file, and the chart breaks down how long each file takes to download. We are mostly interested in the blue area of the top 10 entries. As you can see, the very first files are JavaScript (navigation) and CSS (design). These are large text files that block rendering and collectively soak up a full second of the customer’s time.

The lowest-hanging fruit in this case is to compress large text files in flight between the server and the visitor’s browser. Text files such as .js and .css compress easily and give you the biggest bang for your buck. Such a file may range between 50 and 500 KB but will compress by 50-80%. Taken in aggregate across all of your CSS, JS and HTML/PHP files, that is a very significant time savings. Today’s servers have such an overabundance of hardware resources that the additional strain on CPU and RAM is not noticeable. The trick is that it takes less time for the server to compress, transfer and have your browser decompress the file than to simply transfer it uncompressed. Besides, this functionality has been built into browsers for a long time and it’s silly not to take advantage of it.
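
As a rough illustration, here is a small Python sketch that gzips a synthetic, CSS-like payload and reports the savings. This repetitive stand-in compresses better than real CSS or JS would, but the principle is roughly what the server does in flight.

```python
# Rough illustration of in-flight text compression using Python's gzip module.
# The payload is a synthetic stand-in for a stylesheet; real CSS/JS is less
# repetitive and compresses somewhat less, but still dramatically.
import gzip

css = (".button { color: #333; padding: 10px 20px; border-radius: 4px; }\n" * 1500).encode()

compressed = gzip.compress(css, compresslevel=6)

print(f"Original:   {len(css) / 1024:.0f} KB")
print(f"Compressed: {len(compressed) / 1024:.0f} KB")
print(f"Savings:    {100 * (1 - len(compressed) / len(css)):.0f}%")
```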

While the server and browser already support this functionality, it typically needs to be enabled explicitly on the server. However, it needs to be done the correct way or it can backfire: trying to compress the wrong type of file, such as images and other already-compressed formats, carries a real penalty, so be careful.
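
To see why, here is a small hypothetical sketch: random bytes stand in for an already-compressed file such as a JPEG or ZIP archive, and running them through gzip produces no savings at all while still burning CPU time on both ends.

```python
# Hypothetical sketch of compressing the "wrong" file type: data that is
# already compressed (JPEG, PNG, ZIP, ...) does not shrink, so the CPU time
# spent compressing and decompressing it is pure overhead.
import gzip
import os
import time

already_compressed = os.urandom(200 * 1024)  # stand-in for a 200 KB JPEG

start = time.monotonic()
result = gzip.compress(already_compressed)
elapsed = time.monotonic() - start

print(f"Original size:   {len(already_compressed)} bytes")
print(f"'Compressed':    {len(result)} bytes (slightly larger, not smaller)")
print(f"CPU time burned: {elapsed * 1000:.1f} ms")
```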

[Image: load-time waterfall of the same website after optimization. Caption: optimized, much much faster]

This is the same website after server-side optimizations have been applied. Notice how the blue portions of the top 10 lines have either disappeared completely or become significantly smaller.

Furthermore, the rest of the website’s files are downloaded much sooner, and the page takes only 1.5 seconds to begin rendering instead of 2.6 seconds, which is nearly twice as fast.

The takeaway here is that this dramatic reduction in transfer time is easily accomplished even without structural changes to the website. It only has to be done once, and from then on we can sit back and enjoy happier visitors and some very impressed search engines.

Denial of Service: 07/31/2013

This afternoon we experienced a massive distributed denial of service attack against the eBoundHost network.

Although we have been working to mitigate a DDoS starting at Noon CST (GMT -6) without customer impact, the attack escalated and began to impact customers at 1 PM CST. The attack was fully resolved at 1:52 PM CST.

Typically these events are handled seamlessly in the background without impacting our end users. However, the scale of this event was unprecedented and the skill of the attackers was considerable. Our network engineers were able to resolve this problem and have the impacted network segments back online within 55 minutes.

As always, we will use this incident as a learning opportunity to see how to adapt to the evolving attack vectors and protect our customers from downtime in the future.

If you have questions about any aspects of this attack, please reach out to our support team. We are here for you.

Regards,

Artur
eBoundHost

Network monitoring

After a conversation with a concerned customer, we need to clear up the network graph situation. The graph located here shows uneven traffic, with gaps and dips that appear to drop down to near “0”.

We are absolutely NOT experiencing an outage or interruption. We are simply pushing so much traffic that we are hitting a limitation of our monitoring software. When traffic reaches a peak, the software becomes confused and “wraps” around to start at zero. The graphing software assumes that nobody could possibly push this much traffic, so it treats the data as invalid.
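
For the technically curious, here is a hypothetical Python sketch of the classic version of this problem (an illustration, not a description of our exact monitoring stack): a 32-bit byte counter wraps past its maximum value, the naive delta between polls goes negative, and the graphing software discards the sample or plots it as zero.

```python
# Hypothetical illustration of a 32-bit traffic counter wrapping around.
# Many monitoring tools poll a cumulative byte counter and plot the delta
# between polls; once the counter passes 2**32 it restarts at zero, and a
# naive delta suddenly looks like the traffic vanished.
POLL_INTERVAL = 300        # seconds between polls
RATE = 5_000_000           # bytes per second of steady traffic (made-up number)
COUNTER_MAX = 2 ** 32      # a 32-bit counter

counter = 0
previous = 0
for poll in range(5):
    counter = (counter + RATE * POLL_INTERVAL) % COUNTER_MAX  # hardware counter wraps
    naive_delta = counter - previous                          # what naive software computes
    print(f"poll {poll}: apparent rate = {naive_delta / POLL_INTERVAL / 1e6:6.1f} MB/s")
    previous = counter
```

In this made-up run, the third poll reports a negative rate; most graphing tools throw that sample away and plot zero, which produces exactly the kind of gaps and dips seen in the graph.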

We are looking for other graphing software options right now and should have a fix shortly.

Cloud Computing and Web Hosting

Having just come back from the HostingCon 2008 industry trade show, my head is still buzzing from all the presentations. Everyone from hardware vendors (ICC-USA) to the great Satan himself (Microsoft) was presenting their wares, and as much as I hate to admit it, even the Microsoft presentation was very impressive.

There was a lot to see this year from the vendors, but far more interesting were the presentations and group learning sessions. Far less time was spent on the technical aspect this year than in previous years. There were no “how to install Xen paravirtualization” classes; instead, many sessions focused on the business itself, such as graphically mapping a company’s track record, how to evaluate your hosting company’s net worth, and some very interesting Q&A sessions with industry leaders who spoke about managing growth and the valuation of assets. Very grown-up conversations indeed!

The thing I noticed more this time than in past years is how the keywords “cloud computing” and “grid hosting” were thrown around the room. These are the new Web 3.0 terms that had little meaning in the past but are now, all of a sudden, more tangible.

So what’s the deal? Grid/cloud computing means a process or computation moved off a single server onto a “cloud” of computers. A group of servers (potentially tens of thousands) is presented as a single unit that handles a task with the combined power of all of the processors, RAM and storage. In this model, all systems are identical in both hardware and software and are completely interchangeable, meaning you can take any two systems and swap their physical locations in the cloud and they will not need to be reconfigured. All systems perform the same task.

While this model works very well for parallelizable tasks such as graphics processing and mathematical computation, it simply does not work for hosting a website. A “grid” or “cloud” as they are being presented is far less useful than a “cluster” of servers. I may be too picky, but I feel terminology is very important. In our case, a cluster is a group of systems, each performing a specialized task (web, MySQL, DNS, email), presenting a unified interface to the Internet. Here, you cannot take a server from the MySQL group, swap it with a server from the email group and expect them to work. They are very distinct systems with differing hardware and software configurations.

This “cluster” model is a very old concept and is the only way to host the largest websites, such as Facebook, MySpace, YouTube, Google and Yahoo. When the existing systems approach their saturation point and load spirals higher and higher, you simply add another front-end machine to the affected segment of the cluster to offload some of the processing. When web traffic goes up, add another front-end Apache server. Too many SQL queries? Add another MySQL machine. In essence, a cluster is a collection of clouds/grids: each cloud handles its specialized task and contributes to the performance of the main cluster.
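
To illustrate the idea, here is a toy Python sketch (the names are hypothetical, not anyone’s real architecture) of a round-robin pool: each tier of the cluster is just a list of interchangeable machines, and relieving pressure means appending one more server to the affected tier.

```python
# Toy sketch of the cluster model: each tier (web, MySQL, email, ...) is a pool
# of interchangeable machines behind a round-robin balancer, and adding capacity
# means appending one more machine to that pool. Names are hypothetical.
from itertools import cycle


class Tier:
    def __init__(self, name, servers):
        self.name = name
        self.servers = list(servers)
        self._rotation = cycle(self.servers)

    def route(self, request):
        # Round-robin: each request goes to the next machine in the pool.
        return f"{self.name}: {request!r} -> {next(self._rotation)}"

    def add_server(self, server):
        # "Too much web traffic? Add another front-end Apache server."
        self.servers.append(server)
        self._rotation = cycle(self.servers)


web = Tier("web", ["apache-01", "apache-02"])
print(web.route("GET /index.html"))
print(web.route("GET /about.html"))

web.add_server("apache-03")  # capacity added without touching the other tiers
print(web.route("GET /index.html"))
```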

What I find infuriating is that some providers are talking about Cloud or Grid computing as though it’s the next step in VPS hosting.  This is so misleading that it makes my teeth grind.  There is no way to run a single VPS instance over a cluster/cloud/grid of computers.

When they market their VPS service this way, it makes a client believe that if the server hosting their VPS has a meltdown, their own system will continue to run on the rest of the cloud without interruption. In fact, what happens is that a crashed system is a crashed system, and your VPS instance will go down in smoke with the rest of the server. And while it may be restarted almost immediately, it will still have downtime.

Also, they claim that you can scale your VPS out to unlimited levels, implying that it’s a trivial task to add more processing power. The way they handle this is by adding more VPS instances of the same system and splitting the traffic with a load balancer. This has its own tremendous issues, because you cannot take a normal website, split it into two or more instances and expect it to function properly. Websites have to be designed specifically to handle this scenario. For instance, MySQL’s data files cannot be written to at the same time by two instances of MySQL without some very serious corruption.

More than that, this new “cloud” model is billed based on usage of CPU cycles, bandwidth and disk access. This means that you never really know how much it’s going to cost at the end of the month. This is especially wonderful in the case of a Denial of Service attack, which can burn through server resources like there is no tomorrow.

Today’s providers who claim to live in the cloud are using traditional hosting technology masked with a very fancy control panel. In my opinion, cloud computing still has a long way to go before it’s going to be useful for our industry. Today this technology is useful to a handful of customers. For the rest of us, we have shared hosting, VPS hosting and dedicated hosting. With the incredible pace of technological advancement in individual servers, I don’t see a reason to move to the cloud. Today’s web servers are more powerful than yesterday’s supercomputers, and this trend will continue for a long time yet.

One last thought: when the cloud becomes useful, we will add it to our product line. For now, our products are every bit as useful as anything available on the market today, and without any fancy buzzwords.

Denial of Service

It’s good to be popular, but it definitely comes with its own problems. For instance, today some clever folks decided to run a Distributed Denial of Service attack against the eBoundHost.com domain name. They knocked us off the web for a little while, but luckily our monitoring system sounded an alarm and a tech was dispatched to fix the problem.

What happened? A standard server simply cannot cope with several hundred machines trying to access a website at the same moment. At first things work fine, then they slow down, and finally the server runs out of allowed processes. The Apache web server is now effectively useless, hence the name: Denial of Service attack.
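
As a rough illustration, here is a toy Python calculation (the numbers are invented) of why the server becomes useless: Apache-style servers cap the number of concurrent worker processes, and once attackers occupy every slot, legitimate visitors are simply turned away.

```python
# Toy illustration of worker-slot exhaustion during a flood. The numbers are
# invented; the point is that a fixed worker cap fills up long before the
# attack stops, leaving nothing for legitimate visitors.
MAX_WORKERS = 256          # the server's cap on concurrent worker processes
ATTACK_CONNECTIONS = 400   # bots holding connections open at once
LEGIT_CONNECTIONS = 20     # real visitors arriving during the attack

busy = min(ATTACK_CONNECTIONS, MAX_WORKERS)
free = MAX_WORKERS - busy
served = min(LEGIT_CONNECTIONS, free)

print(f"Worker slots held by the attack: {busy}/{MAX_WORKERS}")
print(f"Legitimate visitors served: {served}, refused: {LEGIT_CONNECTIONS - served}")
```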

How does the attack happen? Someone’s grandmother receives an email on her AOL account that promises pictures of her favorite relatives. She opens the picture only to infect her computer with the nastiest Trojan known to mankind. This Trojan proceeds to let its friends know that there is a party happening at grandma’s house; they come to visit and also infect the computer. All sorts of fun things can be installed this way, for instance software that turns the computer into a node on a botnet. This botnet zombie is now fully under the control of some 16-year-old in Vietnam/Russia/Turkey/etc., and the computer can now participate in things like sending spam or a Denial of Service attack.

There are definitely ways to deal with this kind of situation. First off, there are devices you can buy that recognize known DDoS patterns, and there are lists of known zombie IP addresses that you can block at the router. Luckily this does not happen all that often, and it is usually enough to simply let the attack work itself out.

This time the attackers were nice enough to leave us a signature of their work, and for that we are very grateful; it really made the cleanup effort much easier. So I wanted to say the following: we know you are out there, we know what you can do, and we are very impressed 😉

Happy holiday weekend everyone!

How it works: server hardware

About servers. Everyone reading this post is making a connection to a server. In fact, you are making a connection to at least a couple. There is a server in your office or home that allows you to proxy onto the Internet, most likely a wireless router, which connects through another server, the DSL or cable modem. There is a caching DNS resolver at your ISP, and an entire army of routers between your home and our data center. And the last server in the chain is our web server, which actually hosts this content.

Let’s narrow down the definition of a server. We are not going to talk about IBM mainframes or Sun UltraSPARC-based blade systems. Today we care only about the servers that make up the majority of the infrastructure behind the websites you visit. These are normal computers just like the ones in your house or office, except confined to more efficient packaging. They use familiar Intel or AMD processors, normal DDR2 or faster RAM, and SATA hard drives. What really separates them from home PCs is the software. But software is not what this post is about.

Here is what one of our older servers looks like (below):

[Image: older 2U server]

To compare, here is one of eBoundHost’s newest servers.  This form factor is unofficially called the ‘pizza box’ due to its small dimensions.

[Image: newer 1U “pizza box” server]

The first thing you will notice is that the new server is not as tall. Our older hardware uses 2U (rack units of space) while the new servers use 1U. This allows for greater density. Some servers use as much as 7U, but these are specialty machines filled to the brim with hard drives in gigantic RAID arrays.

Side-by-side comparison:

[Image: old 2U and new 1U servers side by side]

These servers fit into specialty (read: expensive) racks that have 42U of space each. This means that a rack filled with 2U servers holds only 21 machines instead of 42 1U servers. It’s a dramatic difference when you talk about a server room full of racks, such as in our facility:

[Image: rows of server racks in our facility]
Of course, the entire 42 units are not available for servers; there are also switches, power distribution units, firewalls and intrusion detection equipment. All considered, we are happy to fit 30 servers in one rack.

There is also the consideration of electricity and heat. A rack full of servers eats electricity like a hungry SUV and produces just as much heat pollution. Thirty servers stacked on top of each other, blowing air in the same direction, require an amazing amount of cooling, which means big air conditioners moving many tonnes of air. That’s all I’m going to say about that; data center challenges will be saved for another blog entry.

To jump back to server hardware: here is the same 1U server without its cover.

[Image: 1U server internals]

Motherboard, CPU, heatsink, RAM, hard drive and a very powerful cooling fan. Seems simple enough. Another picture:

[Image: 1U server internals, another view]

Every server is custom built. When an older machine comes offline, we generally sell it through eBay and build a new server to take its place. The nature of hardware is such that components eventually wear out and fail. Our clients and our reputation are far too important, so we give old hardware the boot and use all-new equipment.

Here are some servers in action. The following pictures may not be completely safe for geeks: they may cause weakening of the knees and a desire to run out and fix something. Please refrain; it will pass:

[Image: rack of 1U dedicated servers]

These (above) are dedicated servers. Inventory tags have been obfuscated in order to protect the innocent.

(Below) are some specialty machines with 15k RPM SAS (fast/expensive) hot-swappable hard drives in RAID arrays. These are used for our shared servers, VPS machines and some powerful dedicated servers.

[Image: hot-swappable drive bays]

Each server is built by our staff. We love them so much that we have hundreds of them 😉

More to follow; there is so much to cover: the data center, operating systems, server software.

The fuss about FOSS

Recently I have been following some very interesting conversations in the Free Open Source Software (FOSS, or just open source) community. In case you have been on the moon for the last 10 years and have not had any news updates, I’ll fill you in. There are entire communities of people who are continually building all types of software that they distribute for free on the Internet. These people are computer programmers, graphic artists, copywriters and others. The software they build ranges from an operating system like GNU/Linux to the online encyclopedia Wikipedia, and my personal favorite, TiVo.

These projects are built mostly by unpaid programmers who contribute their free time and knowledge to build a better program, operating system or encyclopedia. They are driven by an altruistic desire to build a better “whatever.” They then release this “whatever” to the masses for free, and others take these programs and hopefully make money from their distribution or by providing support. Ideally, whenever someone makes money from these projects, they will turn around and support the project they are using, thereby supporting the programmers, who in turn will be able to make an even better program. It’s an ongoing cycle in which contributions help fund development.

Sometimes these free programs are backed by large companies such as Sun (creators of Java) or TiVo. They often distribute a core system free of charge and hope to get a user base hooked on their product so they can then sell the advanced software with more robust features. And sometimes companies find it easier to take an open source project and build their own system on top of it. For example, there are wonderfully powerful SQL systems available for free, MySQL and PostgreSQL. They have been in development for 7+ years and are such powerful systems that the vast majority of today’s Internet applications are based on one or the other. It would be foolish for a small (or large) company to start building such a product from the ground up.

The potential of such integration is HUGE. Company A only needs to make sure that its product integrates with FOSS project B. Company A does not need to worry about potential security threats or unexpected crashes in project B; instead, the programmers of project B take care of all such issues. This way company A is able to focus on improving the usability of its own product, which, coincidentally, does not even have to be FOSS.

So this gets us to the really interesting part: sometimes open source projects are not compatible with each other. You would think that these programmers would be smart enough to allow their programs to integrate, but the problem is not what you think. Sometimes their licenses are incompatible!

The two licenses that I’m most concerned about are the GPL and BSD licenses. The BSD license says “do whatever you want with this code,” which means that you are free to take the code and even distribute it in closed-source, proprietary programs. Coincidentally, Apple’s OS X operating system is based on FreeBSD with some (very major) changes. So according to this license, Apple is able to take the current FreeBSD code, change whatever it wants and distribute it as an independent project. Thankfully, Apple was kind enough to contribute significant improvements back to the project, but it did not have to. The BSD license allows complete independence; the company does not have to release its trade secrets, only what it chooses to. However, it is in Apple’s own best interest to contribute back and make sure that FreeBSD continues to be a vibrant operating system. This way, for the next release of its software, Apple can simply grab the FreeBSD code base, apply its own changes and have an up-to-date system with full security patches.

So the point of this post is the other, and probably more important, license: the GPL. This one says that you are free to distribute the program as long as you make all of your changes available to the public. So any company that modifies the source code of a project, thereby improving it, has to open up its changes (including trade secrets) to the greater community, and if those changes are of good quality, they will then be integrated into the main project. Linux is distributed under this license. The added complexity of the GPL is that there are two separate GPLs on the table today: GPLv2 and GPLv3. The v3 is newer, more complicated and puts restrictions on how the code can be used. We are slowly making the transition from v2 to v3, but a lot of companies are unable to make this transition because they would have to open up their closed-source add-ons to GPL’d software, which were allowed under v2. So companies have to make a choice: use old and buggy GPLv2 software or upgrade to GPLv3 and lose business. This is a big issue that is beyond the scope of this post, and we’ll see how things work out in the coming years.

So as far as we’re concerned, the major difference between the two licenses is:

BSD: you should do the Right Thing but we won’t force you
GPL: you have to do the Right Thing and contribute back to the project

As an active reader of the FreeBSD mailing list, Slashdot and various Linux lists, I have recently started to see a lot of chatter about how the GPLv3 license is superior to everything else. People froth at the mouth (or keyboard) and spew hatred towards the other camp. After reading through 10 pages of this drivel, I realized that these people are wrong. They are trying to make a better FREE license by putting more restrictions on how it can be used. This is ridiculous.

It seems to me that they have forgotten what all of this is about and are trying to bite the hand that feeds them. If you take corporate money out of open source software, a lot of important projects will collapse. The real support ($$$) comes from corporations that back projects which help them make money. No corporate profit means no support. Live and let live: if someone makes good money using FOSS, more power to them!

Last time I checked, making money is good both for the person or company making it and for the entire community, which benefits from the added support, either through sponsorship or through awareness that leads to more public support. Companies like Google have entire teams of programmers working on open source software and contribute millions to improving other ongoing projects.

Not to mention that you simply cannot outlaw capitalism, which is what they seem to be trying to do. If people are forced into a corner, they will find another exit. And guess what: the BSD license is not a bad alternative to GPL’d software. If the GPL people keep pushing, they will just drive developers away to the other camp.

FOSS is great for all of us. Everyone should step back, take a deep breath and refocus on building better software rather than bickering about nonsense.

Oh, and for the record, this entire document is not open for redistribution without my permission. How is that for a license? 🙂



