Category Archives: Hosting

A Great Cyber Monday Domain Deal from eBoundHost.com!


FREE DOMAIN

What makes this deal so great? It’s completely free. That’s right, not 50%, 60%, or even 90% off. The domain name registration is completely and totally free for one year. Just pick a .com, .net, .org (or almost any other TLD) domain, use this coupon: FDCMD2013, and register it at a 100% discount. Nothing to lose, a free domain to gain! If you choose not to keep it registered past the first year, we won’t hold it against you.

http://eboundhost.com/domains

This deal is available 12/2/2013 through 12/9/2013. Hurry!

COUPON CODE : FDCMD2013

*UPDATE (12/05/2013):  The coupon is no longer available as all the slots were filled faster than anticipated!

Faster hosting == more conversions and better SEO

Sophisticated clients love to chat with us about search engine optimization (SEO), keyword placement, guest blogging, pay per click (PPC) and other techniques to increase the traffic to their websites and ultimately add to their bottom line.  They spend hours each month tracking changes in traffic patterns and spend thousands of dollars on advertising. They optimize the smallest details and change landing pages to rework traffic flow. But often, their results plateau and they don’t know where to look for the next step.

An important factor in search engine rank is site speed. Search engines pay close attention to the speed of your website, because faster websites are typically run by more sophisticated webmasters and tend to be more important. In fact, the search engines consider it so important that they give you tools to measure your site’s speed and help make it faster:

https://developers.google.com/speed/pagespeed/
http://developer.yahoo.com/yslow/

More importantly, faster websites convert better. People don’t like to wait, even if the difference is barely perceptible. You’re competing against the likes of Amazon, which has an effectively unlimited budget and teams of experts watching for slowdowns. A 2-4 second lag on each page makes your business look like it’s being run by amateurs. Often, repeat business is based on a “feeling” rather than actual product quality. So you want customers who are just as impressed by the speed of your website as by the quality of your product and customer service.

Getting rid of perceptible slowdowns is so important that even your iPhone or Android has built-in transitions such as swiping, sliding, and zooming effects, spinning circles, and “loading” progress bars. This eye candy is not just pretty; it’s there to help pass the time while applications launch.

Smart websites are built the same way: everything is optimized to reduce perceptible wait time and to remove any obstacles between the visitor and a completed purchase. By the way, perceptible and actual load times are not the same thing. It’s possible for your pages to start rendering in the browser before the entire content has been downloaded from the server.
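
To put numbers on that difference, here is a minimal sketch using only Python’s standard library (the URL is a placeholder; substitute your own site) that compares time-to-first-byte, roughly when the browser can begin rendering, with the total download time:

    import time
    from urllib import request

    # Hypothetical example: compare when the first bytes arrive (rendering
    # can begin) with when the whole page has finished downloading.
    url = "http://www.example.com/"

    start = time.perf_counter()
    resp = request.urlopen(url)
    resp.read(1024)  # the first chunk of the page
    first_byte = time.perf_counter() - start
    resp.read()      # the rest of the page
    total = time.perf_counter() - start

    print(f"first bytes after {first_byte * 1000:.0f} ms")
    print(f"full page after  {total * 1000:.0f} ms")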

Website speed typically has more to do with the structure of the website than with the size of its pages or images. For instance, a page built around large blocking files, which must be downloaded before the browser can start to render, will feel slower than one that places those blockers farther down the chain. The first fix is to rearrange these files to load only when needed and to move all external includes as far down the chain as possible. Google Analytics, for instance, should be loaded at the very bottom of the page, below all static content such as images.
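
A quick way to audit a page for blockers is to list its external scripts and stylesheets in the order the browser encounters them; anything near the top of the document is a candidate to move down. A minimal sketch with Python’s standard library (the URL is a placeholder):

    from html.parser import HTMLParser
    from urllib import request

    # Hypothetical example: print external scripts and stylesheets in page
    # order; resources that appear early are likely render-blocking.
    class BlockerFinder(HTMLParser):
        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "script" and "src" in attrs:
                print(f"script:     {attrs['src']}")
            elif tag == "link" and attrs.get("rel") == "stylesheet":
                print(f"stylesheet: {attrs.get('href')}")

    with request.urlopen("http://www.example.com/") as resp:
        BlockerFinder().feed(resp.read().decode("utf-8", "replace"))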

However, this is not always an option, because most websites today use a content management system (CMS) such as WordPress, Drupal, Joomla, or Magento, which comes with a prebuilt structure that cannot be freely rearranged. Additionally, plugins and themes for these systems have dependencies that require a certain structure.

But there are other easy things that can be done to dramatically improve the speed of delivery.

[Image: load-time waterfall of the original, pre-optimization site]

In the above example, each line item is an individual file, and the chart breaks down how long each file takes to download. We are mostly interested in the blue area of the top 10 entries. As you can see, the very first files are JavaScript (navigation) and CSS (design). These are large text files that block rendering and collectively soak up a full second of the customer’s time.

The lowest-hanging fruit in this case is to compress large text files in flight between the server and the visitor’s browser. Text files such as .js and .css compress easily and give the biggest bang for your buck. Such a file may range between 50 and 500 KB but will compress by 50-80%. Taken in aggregate across all your CSS, JS, and HTML/PHP files, that is a very significant time savings. Today’s servers have such an overabundance of hardware resources that the additional strain on CPU and RAM is not noticeable. The trick is that it takes less time for the server to compress and transfer a file and for your browser to decompress it than to simply transfer the file uncompressed. Besides, this functionality has been built into browsers for a long time, and it’s silly not to take advantage of it.
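
As a rough illustration, here is a minimal sketch using only Python’s standard library (the file name is a hypothetical stand-in for any large .css or .js file) that measures how much a text file shrinks under gzip:

    import gzip

    # Hypothetical example: measure how well a typical CSS file compresses.
    # Substitute any large .css or .js file of your own.
    path = "styles.css"

    with open(path, "rb") as f:
        original = f.read()

    # Level 6 is a common default on web servers: a good speed/size balance.
    compressed = gzip.compress(original, compresslevel=6)

    saved = 100 * (1 - len(compressed) / len(original))
    print(f"original:   {len(original) / 1024:.1f} KB")
    print(f"compressed: {len(compressed) / 1024:.1f} KB ({saved:.0f}% smaller)")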

While the server and browser already support this functionality, it typically needs to be enabled explicitly on the server. However, this needs to be done the correct way or it can backfire: trying to compress the wrong type of file (already-compressed formats such as images and archives) carries serious penalties, so be careful.
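
One quick way to verify that compression is actually switched on is to request a page with an Accept-Encoding header and see whether the server answers in kind. A minimal sketch with Python’s standard library (the URL is a placeholder; substitute your own site):

    from urllib import request

    # Hypothetical example: ask the server for a gzip-compressed response
    # and check whether it honors the request.
    req = request.Request(
        "http://www.example.com/",
        headers={"Accept-Encoding": "gzip"},
    )
    with request.urlopen(req) as resp:
        encoding = resp.headers.get("Content-Encoding", "none")
        print(f"Content-Encoding: {encoding}")  # "gzip" means compression is on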

[Image: load-time waterfall of the optimized site, much, much faster]

This is the same website after server-side optimizations have been applied. Notice how the blue portion of the top 10 lines has disappeared completely or become significantly smaller.

Furthermore, the rest of the website’s files are downloaded much sooner, and the website takes only 1.5 seconds to begin rendering instead of 2.6 seconds, roughly 1.7 times as fast.

The takeaway here is that an extreme reduction in transfer time is easily accomplished even without structural changes to the website. It only has to be done once, and then we can sit back and enjoy happier visitors and some very impressed search engines.

Denial of Service: 07/31/2013

This afternoon we experienced a massive distributed denial of service attack against the eBoundHost network.

Although we have been working to mitigate a DDoS starting at Noon CST (GMT -6) without customer impact, the attack escalated and began to impact customers at 1 PM CST. The attack was fully resolved at 1:52 PM CST.

Typically these events are handled seamlessly in the background without impacting our end users. However, the scale of this event was unprecedented and the skill of the attackers was considerable. Our network engineers were able to resolve this problem and have the impacted network segments back online within 55 minutes.

As always, we will use this incident as a learning opportunity to see how to adapt to the evolving attack vectors and protect our customers from downtime in the future.

If you have questions about any aspects of this attack, please reach out to our support team. We are here for you.

Regards,

Artur
eBoundHost

Godaddy’s Outage

On 9/10/2012, GoDaddy, the popular web host and domain registrar, fell victim to what appeared to be a hack or denial of service (DoS) attack. The number of impacted websites was reported to be in the millions. It was without a doubt a difficult day for many of the small businesses hosted on GoDaddy.

The infamous hacker group “Anonymous” claimed responsibility, but at this time it is unclear whether it was a DoS, a hack, or a technical failure. The reasons for targeting GoDaddy are also unclear at the moment.

We would like to express our heartfelt sympathies to anyone who has fallen victim to these attacks. Few consider the impact on everyday people, even though GoDaddy was the one targeted. The real casualties of this attack were, as always, hardworking small-business owners who had no idea about Anonymous or its crusade.

Thunderbirds Last Flight

Earlier this week, the Mozilla Foundation put out an announcement that distills to this:

To be more specific, Mozilla will no longer focus on developing innovations for Thunderbird but will keep it safe and stable … Mozilla will also provide all the infrastructure required for new, community-developed features to be integrated in upcoming Thunderbird releases.

In a nutshell, they are announcing the retirement of Thunderbird, citing the fact that the project is no longer their “top priority,” which, in plain English, means they lost the war to Gmail & Co. on the mail side and are going to focus like a laser so the same thing won’t happen on the browser side. As of June 2012, Firefox holds 30% less market share than Google Chrome and is losing ground quickly.

Contrary to popular belief, open source software is just as expensive to develop as any commercial product; the costs simply arrive in different ways. The same professional, high-quality programmers spend weeks and months contributing their code to a versioning repository, where it’s reviewed, QA’d, and then released. This is the same intense and expensive process followed by Microsoft, Google, Apple, and any other large development house. Sometimes these coders work for a corporation, devoting their full time to a single project; sometimes they work out of their basements as unpaid volunteers. The Mozilla Foundation is no longer participating in this effort, which means the open source world just lost an important ally. Thunderbird has been under the Mozilla umbrella since its inception, and it’s not likely that the OSS community will keep it going. Of course, bug fixes and security patches will be available for some time, but the writing on the wall is clear.

This is pretty sad for users like us who don’t use Gmail or a webmail interface and instead rely on a fast, flexible IMAP application to plow through a 10 GB mailbox at local application speed, with keyboard shortcuts and unrivaled ease of use.

However, time moves on and we need to look ahead. The IT landscape is littered with the remains of once-indispensable applications like ACT!, Eudora, Google Wave, and others that had a sizable following at one time.

We’re currently considering other options like Mac Mail and Outlook Express to recommend to our customers, and we’d love to hear what you’re using.

Primary link soft failure

Thursday, September 16th 10:25 GMT -6

An ongoing network outage impacting a significant portion of the eBoundHost network has been traced to one of our main peering points. The peering provider had a “soft” failure and, as such, was not automatically demoted from “preferred” status.

The link was not completely down, since the failure sat behind the next peering point, so the router had to be failed over manually by a network technician rather than failing over on its own.

The regular secondary failover peer is supposed to use a diverse network, but was apparently using the same provider as our primary; it is now in the process of being switched off. We estimate this outage will last another 10-15 minutes at most.

This kind of network failure and recovery happens quite often, and our systems are designed to fail over automatically. Such outages have impacted customers only 3 times in the last 10 years. We do not expect this to happen again any time soon, but just in case, the primary network provider will be kept offline until we can verify that everything works.

The joy of (more) speed!

After blowing through deadline after deadline, our new bandwidth carrier has delivered our newest ultra-fast backbone connection. What they lack in intra-department coordination they more than make up for with the quality of their bandwidth. The new connection has great ping times from around the world:

  • Chicago:   12 ms
  • Stanford University: 32 ms
  • Czech Republic: 141 ms
  • Italy: 141 ms
  • Sweden:  128 ms

Our customers should see a decent response time improvement as well as raw download speed improvement.
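
If you’d like to check the improvement from your own location, here is a minimal sketch using Python’s standard library (the host name is a placeholder; substitute the server you want to test). It times a TCP connection, which is a rough stand-in for an ICMP ping that doesn’t require root privileges:

    import socket
    import time

    # Hypothetical example: time a TCP handshake to port 80 as a rough
    # approximation of ping.
    host = "www.example.com"

    start = time.perf_counter()
    sock = socket.create_connection((host, 80), timeout=5)
    elapsed_ms = (time.perf_counter() - start) * 1000
    sock.close()

    print(f"{host}: {elapsed_ms:.0f} ms")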

Happy browsing everyone!

Just to prove a point!

For a very good reason, this article is being written and published entirely on my BlackBerry.

If nothing else it proves the point that you never know how your website is going to be accessed.

Like it or not, there is now a whole new dimension of compatibility to test for when designing your new website.

Cloud Computing and Web Hosting

Having just come back from the HostingCon 2008 industry trade show, my head is still buzzing from all the presentations. Everyone from hardware vendors (ICC-USA) to the great Satan himself (Microsoft) was presenting their wares, and as much as I hate to admit it, even the Microsoft presentation was very impressive.

There was a lot to see this year from the vendors, but far more interesting were the presentations and group learning sessions. Far less time was spent on the technical aspects this year than in previous years. There were no “how to install Xen paravirtualization” classes; instead, many sessions focused on the business itself, such as graphically mapping a company’s track record, how to evaluate your hosting company’s net worth, and some very interesting Q&A sessions with industry leaders who spoke about managing growth and the valuation of assets. Very grown-up conversations indeed!

The thing I noticed more this time than in past years is how the keywords “cloud computing” and “grid hosting” were thrown around the room. They are the new Web 3.0 terms that had little meaning in the past but are now, all of a sudden, more tangible.

So what’s the deal? Grid/cloud computing means a process or computation is moved off a single server and into a “cloud” of computers. A group of servers (possibly tens of thousands) is presented as a single unit that handles a task with the combined power of all the processors, RAM, and storage. In this model, all systems are identical in both hardware and software, and completely interchangeable: you can take any two systems and swap their physical locations in the cloud, and neither will need to be reconfigured. All systems perform the same task.

While this model works very well for parallelizable tasks such as graphics processing and mathematical computation, it simply does not work for hosting a website. A “grid” or “cloud” as they are being presented is far less useful than a “cluster” of servers. I may be too picky, but I feel terminology is very important. In our case, a cluster is a group of systems, each performing a specialized task (web, MySQL, DNS, email), presenting a unified interface to the Internet. Here, you cannot take a server from the MySQL group, swap it with a server from the email group, and expect them to work. They are very distinct systems with differing hardware and software configurations.

This “cluster” model is a very old concept and is the only way to host the largest websites, such as Facebook, MySpace, YouTube, Google, and Yahoo. When the existing systems approach their saturation point and load spirals higher and higher, you simply add another front-end machine to the affected segment of the cluster to offload some of the processing. When web traffic goes up, add another front-end Apache server. Too many SQL queries? Add another MySQL machine. In essence, a cluster is a collection of clouds/grids: each cloud handles its specialized task and contributes to the performance of the main cluster.

What I find infuriating is that some providers are talking about Cloud or Grid computing as though it’s the next step in VPS hosting.  This is so misleading that it makes my teeth grind.  There is no way to run a single VPS instance over a cluster/cloud/grid of computers.

When they market their VPS service this way, it makes a client believe that if the server hosting their VPS has a meltdown, their own system will continue to run on the rest of the cloud without interruption. In fact, a crashed system is a crashed system, and your VPS instance will go down in smoke with the rest of the server. And while it may be restarted almost immediately, it will still have downtime.

They also claim that you can scale your VPS out to unlimited levels, implying that it’s a trivial task to add more processing power. The way they handle this is by adding more VPS instances of the same system and splitting the traffic with a load balancer. This has its own tremendous issues, because you cannot take a normal website, split it into two or more instances, and expect it to function properly. Websites have to be specifically designed to handle this scenario. For instance, MySQL data files cannot be written to at the same time by two instances of MySQL without some very serious corruption.

More than that, this new “cloud” model is billed based on usage: CPU cycles, bandwidth, disk access. This means that you never really know how much it’s going to cost at the end of the month. This is especially wonderful in the case of a denial of service attack, which can burn through server resources like there is no tomorrow.

Today’s providers who claim to live in the cloud are using traditional hosting technology masked with a very fancy control panel. In my opinion, cloud computing still has a long way to go before it will be useful for our industry. Today this technology is useful to a handful of customers; for the rest of us, there is shared hosting, VPS hosting, and dedicated hosting. With the incredible pace of technological advancement in individual servers, I don’t see a reason to move to the cloud. Today’s web servers are more powerful than yesterday’s supercomputers, and this trend will continue for a long time yet.

One last thought: when the cloud does become useful, we will add it to our product line. For now, our products are every bit as useful as anything available on the market today, and without any fancy buzzwords.

How it works: server hardware

About servers: everyone reading this post is making a connection to a server. In fact, you are making a connection to at least a couple. There is a server in your office or home that allows you to proxy onto the Internet, most likely a wireless router, which connects through another server, the DSL or cable modem. There is a caching DNS resolver server at your ISP, an entire army of routers between your home and our data center, and, last in the chain, our web server, which actually hosts this content.

Let’s narrow down the definition of a server. We are not going to talk about IBM mainframes or Sun UltraSPARC-based blade systems. Today, we care only about the servers that make up the majority of the infrastructure behind the websites you visit. These are normal computers, just like the ones in your house or office, except confined to more efficient packaging. They use familiar Intel or AMD processors, normal DDR2 or faster RAM, and SATA hard drives. What really separates them from home PCs is the software. But software is not what this post is about.

Here is what one of our older servers looks like (below):

[Photo: an older 2U server]

To compare, here is one of eBoundHost’s newest servers.  This form factor is unofficially called the ‘pizza box’ due to its small dimensions.

[Photo: a new 1U “pizza box” server]

The first thing you will notice is that the new server is not as tall. Our older hardware uses 2U (two rack units of vertical space) while the new servers use 1U. This allows for greater density. Some servers use as much as 7U, but these are specialty machines filled to the brim with hard drives in gigantic RAID arrays.

Side-by-side comparison:

[Photo: the old 2U and new 1U servers side by side]

These servers fit into specialty (read: expensive) racks that provide 42U of space each. This means that when filled with 2U servers, a rack holds only 21 machines instead of 42 1U servers. It’s a dramatic difference when you are talking about a server room full of racks, such as in our facility:

[Photo: rows of racks in our server room]
Of course, not all 42 units are available for servers; there are also switches, power distribution units, firewalls, and intrusion detection equipment. All considered, we are happy to fit 30 servers in one rack.

There is also the consideration of electricity and heat. A rack full of servers eats electricity like a hungry SUV and produces just as much heat. Thirty servers stacked on top of each other, blowing air in the same direction, require an amazing amount of cooling, which means big air conditioners that move many tons of air. That’s all I’m going to say about that; data center challenges will be saved for another blog entry.

To jump back to server hardware, here is the same 1U server without its cover.

[Photo: the 1U server with its cover removed]

Motherboard, CPU, heatsink, RAM, hard drive and a very powerful cooling fan. Seems simple enough. Another picture:

[Photo: another view of the 1U server internals]

Every server is custom built. When an older machine comes offline, we generally sell it through eBay and build a new server to take its place. The nature of hardware is such that components eventually wear out and fail. Our clients and our reputation are far too important, so we give old hardware the boot and use all-new equipment.

Here are some servers in action. The following pictures may not be completely safe for geeks, they may cause weakening of the knees and a desire to run out and fix something. Please refrain, it will pass:

[Photo: racked 1U dedicated servers]

These (above) are dedicated servers. Inventory tags have been obfuscated in order to protect the innocent.

Below are some specialty machines with 15k RPM SAS (fast/expensive) hot-swappable hard drives in RAID arrays. These are used for our shared servers, VPS machines, and some powerful dedicated servers.

[Photo: hot-swappable drive bays]

Each server is built by our staff. We love them so much that we have hundreds of them 😉

More to follow, there is so much to cover: data center, operating systems, server software.

Thanksgiving Holiday

The Thanksgiving holiday is almost upon us. From the eBoundHost crew, I would like to wish all our friends observing this wonderful day a good celebration; try not to have too much turkey (or whatever else).

This is the one holiday of the year that seems completely innocent and not commercialized. Its spirit has somehow been preserved over the years: it has not become a day of obligatory cards or gifts, just a day to get together with your family around a dinner table and enjoy each other’s company.

On a related note, some of our staff are traveling around the country in the next few days, so we are running on a skeleton crew. If support runs a bit slower than usual, we hope you understand! (non critical/outage tickets only, of course)

So without any further delay, happy Thanksgiving.

Acts of God and other fun things

On Thursday, August 23rd, 2007, at approximately 11 pm, our backup power generator was unable to cope with a lightning strike at our data center and shut itself down. A technician dispatched from the generator manufacturer, CAT, was on site within 15 minutes of the outage. Battery power allowed our servers to keep working; however, the air conditioning units were disabled until generator power could be restored. After approximately 20 minutes on battery power, a decision was made to bring down the equipment in order to avoid heat damage. At approximately the same time as the network was being brought down, two mobile power generators, each capable of producing 1 megawatt, arrived on site on flatbed trucks.

After the two mobile generators started to feed the air conditioners and our main building generator kicked in, all servers were brought back to full service and scanned for array problems. Altogether, most customers experienced about 30 minutes of outage.

In the last 24 hours, the Chicago area experienced floods, severe thunderstorms, and a tornado.  Trees are broken everywhere and one of our staff lost his car to a flood.  All things considered, we are very lucky that the damage was not any worse.

To head off any questions: yes, we do have backup batteries; yes, we do test the generator every month; and yes, we are prepared and have survived this type of situation many times. We apologize for the outage and will be happy to speak with you on an individual basis about how this affected your service and what we can do to help.

Never a dull moment

Not a week goes by without some kind of emergency: hackers, backup server woes, operating system issues, hardware trouble, software trouble, spammers, integration of new technologies. Round and round it goes.

So, to start with hackers. A long, long time ago, eBoundHost acquired a smaller hosting outfit to broaden its offerings with cPanel. Up to 2005, eBoundHost was a Plesk-only outfit. cPanel and Plesk are two competing hosting control panel systems that run on Unix-like servers. Both systems have their raison d’être: one is better suited to power users, the other to SOHO and non-professional website owners.

Unfortunately, one of the acquired cPanel servers had a serious vulnerability that was inherited with the machine. The rootkit survived our admins’ sweeps and lockdowns and lay dormant for at least a year. When the time was right, our friendly hacker, or should I say `cracker` (hackers generally don’t damage systems), sprang into action. By the time the situation was finally under control, several clients no longer had their databases, and files were missing. Unfortunately, the attack was timed to coincide with the server backup and corrupted the backup as well. This was a glaring oversight, and our team took ownership of the problem and helped our customers rebuild their websites from old backups, with the help of pieces recovered by the data recovery procedure.

After this event, a new backup strategy was deployed to production almost immediately. Client data is now archived snapshot-style for several weeks on our new backup server cluster. All of our shared servers and many dedicated/VPS hosting customers make daily backups to this system. Additionally, databases are archived in a separate structure. Whereas previously, to recover a single client, an entire server backup had to be unpacked onto a dedicated machine and then moved back into place, one of our techs can now mount the image and copy the files back into place within minutes. This is possible thanks to some very cool technologies that became available recently, but that is geek talk.
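
For the curious, the snapshot idea looks roughly like this minimal sketch (Python standard library; the paths are hypothetical, and our production system is considerably more involved). Files that haven’t changed are hard-linked against the previous snapshot, so every snapshot looks complete while only changed files consume new space:

    import filecmp
    import os
    import shutil
    from datetime import date

    # Hypothetical paths; the real system backs up entire servers.
    source = "/home/client/site"
    backup_root = "/backup/client"

    os.makedirs(backup_root, exist_ok=True)
    snapshots = sorted(os.listdir(backup_root))
    previous = os.path.join(backup_root, snapshots[-1]) if snapshots else None
    current = os.path.join(backup_root, date.today().isoformat())

    for dirpath, _, filenames in os.walk(source):
        rel = os.path.relpath(dirpath, source)
        os.makedirs(os.path.join(current, rel), exist_ok=True)
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(current, rel, name)
            old = os.path.join(previous, rel, name) if previous else None
            if old and os.path.exists(old) and filecmp.cmp(src, old, shallow=False):
                os.link(old, dst)       # unchanged: hard link, no extra space
            else:
                shutil.copy2(src, dst)  # new or changed: store a fresh copy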

In real-world terms, this requires a tremendous amount of storage: lots of spinning hard drives RAIDed together into mammoth terabytes of backup space. But it’s never a dull moment; just two months later, we’re almost out of room.

Not all emergencies are of the bad kind; some are just exciting opportunities. But one common thread emerges: in hindsight, they are all valuable learning experiences. Once you pass one hurdle, the next one seems more approachable.

 

Network Maintenance @ midnight

Last night, January 25th, 2007, there was a brief outage that affected our entire network from 2 am to 2:30 am.

Our network people had to take down both of the redundant gateway switches for maintenance (Big Iron 1 and Big Iron 2), which brought all external traffic to a complete stop. The switches were dropping packets and needed some hardware replaced.
Some of our customers did not know that this was a scheduled outage, and we were flooded with email as soon as the services were back up. So I want to take this time to point out the Network Status page, which carries a running schedule of all upcoming maintenance (network and server).

The vast majority of such repairs do not result in outages, since usually only one switch is worked on at a time, but this was a very special event. They explained why both switches had to be worked on at the same time, but not being a network person, I found it a bit over my head.

Shared hosting plans upgrade

Along with the very exciting news about the launch of the VPS product, something equally important received no attention. You may have noticed that the Home and Professional shared hosting packages were upgraded to:

HOME:
200 GB Storage
2,000 GB Transfer

PROFESSIONAL
300 GB Storage
3,000 GB Transfer

The prices did not change; the plans remain $6.55 and $9.99, and of course all of our existing clients are automatically upgraded to the new feature set.

New VPS product launch

As we enter the second week of 2007, I am very happy to present a revolutionary new concept in hosting: Virtual Private Server (VPS) technology, also known as Virtual Dedicated Server. It allows one server to host many virtual machines coexisting side by side, each with its own set of software, root user, services, virtual memory, and CPU process isolation. In plain terms, one physical server can host many independent servers.

Honestly, it’s not such a new concept; IBM has used it in its mainframes since before the dawn of time. But the technology was not available to end users in any reasonable way until very recently.

What it is good for:

1) Websites with sensitive data that cannot afford a security breach. Regular hosting accounts share space and daemons (server software: email, web, database, etc.) with each other. There is always a risk that one user’s poorly written script will open a back door for hackers to steal other users’ information. VPS technology eliminates this risk almost completely, because only your own software runs on your VPS.

2) People who don’t want to share a server with too many users. Since VPS users have guaranteed storage space, there are significantly fewer users per machine than on a shared system. For instance, while a shared system may run 500 websites, a VPS server may host only 10-30 users.

3) Users who need more resources than a shared hosting server can provide but can’t justify a full dedicated server. CPU-intensive websites are generally moved onto a VPS to free up resources on our shared hosting servers. This is common practice for customers with very intensive websites, such as forums and other database-heavy applications. Since a VPS can be allocated CPU “time,” it helps guarantee that your website runs quickly.

4) People who can’t afford a dedicated server but REALLY REALLY want one.

5) Developers, great for testing applications. VPS allows you to isolate your software until you are absolutely certain it is ready for prime time.

6) Users who don’t want the hassle of dealing with operating system issues, such as kernels, file systems, etc.

What it is NOT:

1) A VPS is NOT a dedicated server.
2) You cannot compile your own kernel.
3) RAM is limited. Although everyone has guaranteed RAM, instances may burst up to the full amount of RAM on the server; as more users are added, everyone scales back down to whatever is available at the time.
4) You don’t have 100% of the CPU time; the same situation as #3 applies.

We are very excited about this new VPS line, and time will show that it is the next step in hosting technology.

The big news, redesign

The newly redesigned eBoundHost.com website went live on August 1, 2006. This is a very proud moment for all of us in the EBH family, since it represents many months of behind-the-scenes effort. Award-winning web designers and our own back-end integrators worked closely to link the new graphics to our custom content management system.

Apart from the new look, there is a plethora of exciting offerings. The new Home and Professional packages are the very first multi-domain offerings from EBH: up to 12 domain names may now be hosted on one Professional account, and up to 5 on a Home account.

Hard drive storage is upgraded to 25 GB and 15 GB respectively for Professional and Home packages. Bandwidth is at an enormous 450 GB for both packages. This means a virtually unlimited hosting package that will serve your needs for the foreseeable future.

Prepaid accounts are now rewarded with our lowest prices ever. Customers prepaying for a 24-month term pay as little as $6.55 per month for a hosting account. Of course, we realize that this is not an option for everyone, which is why there are several billing cycles: Monthly, Quarterly, 6-Month, 12-Month, and 24-Month. The more you prepay, the deeper the discount.

Domain names are always free for new accounts. This is a tradition that we intend to keep!

By far the most exciting news is our new affiliate program. A more detailed blog entry will be posted about this system because there is so much to say, but to summarize: each referral earns the affiliate $80. This is an incredible opportunity for our clients; by simply placing a link on your website, you can earn hundreds of dollars each month.



