Like many other web developers and designers, I maintain a number of smaller websites that I have put together for various reasons – this site included. Most of them are small in scope and aimed at local groups or specific audiences, neither of which attracts much web traffic. For hosting these sites I’ve traditionally relied on the “shared hosting” model, starting with hosting directly through local dial-up internet providers and eventually moving to shared hosts like DreamHost, HostGator and, lately, a provider called GreenGeeks, which typically offer simple static web hosting and a suite of services managed through cPanel or a similar environment.

There is an obvious tradeoff in the shared hosting model:

Pro: Shared hosting is often very cheap and becoming cheaper, with some options approaching $5 USD per month.

Con: As with anything “cheap,” the tradeoff is often reliability. With this site, for example, I regularly get a stream of Twitter notifications like these:

Pingdom Up and Down

There is also the issue of shared hosting living in a single data center, which can mean extra latency for visitors reaching your site from across the globe. For my regional sites this typically isn’t an issue, but I have seen latency problems reported for this site when it is accessed from locations in Asia.

Shared hosting is not the only option, of course: you can also host your site with a cloud-based service provider. For the kind of small sites I’m describing here, cloud providers come with their own set of tradeoffs:

Pro: Service is typically much faster and more reliable.

Con: Cloud providers can also be far more expensive than low-cost shared hosting.

There is one approach offered by Amazon Web Services that can provide you with the reliability and performance of the Amazon cloud while offering a very cheap “pay as you go” price: hosting static websites with Amazon’s S3 (“Simple Storage Service”) and domain name services provided by Amazon’s “Route 53” DNS service. I’ll explain how to set up your simple static site in the Amazon cloud, but first there are a few items you’ll want to note about Amazon’s S3 service:

  1. Amazon S3 is a very robust system for storing files, including public web sites. When hosting with Amazon S3, you choose a region for website hosting (with regions available across the globe), and your files are replicated across different availability zones within that region. In practice, this means you might choose to host your site in a region like “North Virginia,” with copies of your site replicated across the availability zones within that region. You can also replicate files across different regions, but if your goal is simply to serve web files more quickly, Amazon offers a CDN (Content Delivery Network) called CloudFront that pushes web content out to Amazon’s network of global edge locations.
  2. Amazon S3 storage is also inexpensive for storing and transferring files, such as the files associated with hosting a website. An overview of S3 pricing is available so you can see the costs associated with S3 hosting. A couple of terms you’ll want to be familiar with:

Standard Storage: This is the standard tier of S3 service and pricing. It offers “99.999999999%” durability and is designed to sustain the loss of two facilities (often associated with “availability zones” in a region). This is the storage type you’ll access through the Amazon web console and most file transfer clients.

Reduced Redundancy Storage: RRS is a storage tier that is “99.99%” durable over a given year, which means the service may lose roughly 1 out of every 10,000 objects you store in a year. This usually is not a big problem for sites that you “publish” to S3, since you can always re-upload or re-create the objects you are hosting.

Glacier Storage: Amazon Glacier is a newer, queue-based method of storing and retrieving files. It is much cheaper, but storage and retrieval times may be measured in hours, and it is intended for “cold storage” of data or system backups. (Hence the name!)

For smaller sites like the ones that I will be hosting with S3, it’s likely that I won’t see significant cost differences between Standard and Reduced Redundancy Storage, and if I were to host my sites with a “new” account, it’s likely I would easily qualify for Amazon’s free usage tier for my first year of service.

With that, let’s get started moving a site to S3.

The Website

The website that I’m moving to S3 from my shared hosting account is for a small local event that I organize, the Bill Bell Tuba Day, held once a year in the small town of Perry, IA.

The purpose of the public website is simply to have a place to post information about the event – the when and where, plus places where people can sign up for a mailing list (thanks to MailChimp). Because there is little need for interactivity and only one author (me), I can manage site content with the web authoring tool Espresso and publish to a static web host. In this case, our static web host will be Amazon S3.

Setting Up S3 as a Static Web Host

Let’s get started setting up our static hosting on S3. The first step, naturally, is signing up for an Amazon Web Services account. Accounts are ‘free’ – Amazon only bills you for the services you actually use. During sign-up, take special note of the options for security credentials, and store your access keys in a safe place. When your account is ready, log in to the Amazon Web Services console to see a list of services available to your account.

Amazon WS Services
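
If you ever want to script these steps instead of clicking through the console, the AWS SDKs use those same access keys. Here is a minimal sketch assuming Python and the boto3 library; the key values are placeholders, and the later sketches in this article assume a client configured like this (or credentials picked up from your environment).

```python
# Minimal sketch: create an S3 client using the access keys from your
# AWS account. The key values below are placeholders, not real credentials.
import boto3

s3 = boto3.client(
    "s3",
    region_name="us-east-1",                        # the region you plan to host in
    aws_access_key_id="YOUR_ACCESS_KEY_ID",
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",
)

# List any buckets already in the account to confirm the credentials work.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```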

Click on “S3” in the Storage and Content Delivery section. You’ll be taken to a screen that allows you to create a bucket for storing your web files. In S3 parlance, a bucket is a related collection of files that can be further subdivided into folders and so on – but a bucket is replicated within its Amazon region, which makes S3 buckets distributed storage. If you will be making many writes to S3, you’ll want to become familiar with the concept of eventual consistency in Amazon S3 and how write operations are handled. For simple web hosting, we shouldn’t run into the real-time limitations of distributed systems.

Creating an Amazon Bucket

IMPORTANT: You’ll want to name your bucket with the same name as the domain you want to serve. In this case, my root domain is tubaday.org, so I created a bucket named tubaday.org. Bucket names across the Amazon cloud must also be unique, so a domain name (guaranteed to be unique) is also a good bucket name.
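
If you are scripting the setup instead, creating the bucket might look something like this (a boto3 sketch; the bucket name matches the domain, per the note above):

```python
# Sketch: create a bucket named after the domain it will serve.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# In us-east-1 no location constraint is needed; other regions require one,
# e.g. CreateBucketConfiguration={"LocationConstraint": "eu-west-1"}.
s3.create_bucket(Bucket="tubaday.org")
```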

Once you have created a bucket, right-click the bucket name to bring up the properties dialog. In the properties dialog (this shows up as a right-hand column on the page), you’ll see an option called “Static Website Hosting”. Click on the header to open details – select Enable website hosting:

Enable Static Website Hosting

Choose the name of your index document (defaulted to the usual value of index.html) and specify an error document to handle error conditions. (In S3 hosting, this will be mostly “404 Page Not Found” or “403 Access Denied” errors.)
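
The same website configuration can also be applied through the API; a rough boto3 sketch (error.html is just an assumed error document name):

```python
# Sketch: enable static website hosting on the bucket, with index.html as
# the index document and error.html as the error page (an assumed file name).
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

s3.put_bucket_website(
    Bucket="tubaday.org",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
```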

Setting S3 Bucket Permissions

While you’re in the properties dialog, you’ll also want to add a few permissions. The Permissions section offers several options. For basic operation, you can grant “Everyone” view permissions by clicking the “Add more permissions” button and choosing the appropriate options for all users:

Add More Permissions

While this will allow public access to the bucket, it will not automatically apply these permissions to uploaded files. Once we upload our web files, we can apply a simple “Make Public” permission set. If you wish to make this process more automated, a better approach (and one consistent with the rest of Amazon’s service security) is to define a policy and apply it to the bucket, as sketched below. In this example we’ll stick with basic operation of S3 and make our web files public once we upload them.
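
For reference, the policy-based approach might look something like the following sketch (again assuming boto3): it attaches the standard public-read policy so every object in the bucket is readable without a per-file “Make Public” step.

```python
# Sketch: attach a public-read bucket policy so all objects in the bucket
# can be fetched anonymously, instead of marking each file public by hand.
import json
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::tubaday.org/*",
        }
    ],
}

s3.put_bucket_policy(Bucket="tubaday.org", Policy=json.dumps(policy))
```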

Uploading Files to S3

Uploading files to S3 can be done one file at a time through the web console’s browser upload, and there are also API capabilities for uploading files. You could write a script against the API, or find a traditional “FTP client” that supports the Amazon S3 API. A while ago I purchased an OS X client called ForkLift as part of a third-party developer bundle; it supports S3 and works well for uploading files from a local folder.

ForkLift for OS X

After the files have been uploaded to S3, you’ll want to set permissions on the web files to make them “public”. In the AWS S3 web console, select the files you wish to serve, right-click and choose “Make Public”. This is a batch operation on the S3 bucket, so you’ll see a progress bar appear with the details of the operation.

Making Web Files Public
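
If you would rather script the upload and skip the manual “Make Public” step entirely, something along these lines would work (a boto3 sketch; the local folder path is a placeholder):

```python
# Sketch: upload every file from a local site folder, marking each object
# public-read and guessing a content type as it goes.
import mimetypes
import os
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

SITE_DIR = "/path/to/local/site"   # placeholder: your local web folder
BUCKET = "tubaday.org"

for root, _, files in os.walk(SITE_DIR):
    for name in files:
        local_path = os.path.join(root, name)
        key = os.path.relpath(local_path, SITE_DIR)
        content_type = mimetypes.guess_type(name)[0] or "binary/octet-stream"
        s3.upload_file(
            local_path,
            BUCKET,
            key,
            ExtraArgs={"ACL": "public-read", "ContentType": content_type},
        )
        print("uploaded", key)
```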

Once the files are public, you should be able to access them at the ‘endpoint URL’ specified in the bucket properties dialog (for a bucket hosted in US East, this looks something like http://tubaday.org.s3-website-us-east-1.amazonaws.com):

S3 Endpoint URL

With that in place, you’ve reached the goal of serving web files directly from S3 – in some cases S3 buckets are used simply to store web content such as images and CSS files, and the endpoint URL is perfectly acceptable for referencing in other web pages. In our case of hosting an entire site, however, we want the bucket served directly at the URL http://tubaday.org. For that, we’ll need a DNS record pointing to the bucket – Amazon’s “Route 53” service will allow us to do that.

Using Amazon Route 53 DNS

You’ll find Route 53 in the Amazon AWS Management Console under Compute and Networking. Clicking on the Route 53 link will take you to a console that will give you the option of creating a hosted zone for serving your website. You’ll want to click the “Create Hosted Zone” button, then create a hosted zone with the domain name you’d like to serve. (In this case, tubaday.org).

Create Hosted Zone

When you create this zone, you’ll be presented with some details – a hosted zone ID, comments and, most importantly, a Delegation Set with some rather cryptic-looking URLs:

Delegation Set URLs

These are the hostnames you will specify as your “DNS Name Servers” with your domain registrar. Write them down (or copy and paste them) for later reference.

Next, we’ll add a Record Set to our DNS entry. Highlight your Hosted Zone and click the Go To Record Sets button in the upper left-hand corner. You’ll see a dialog with a couple of entries already present (namely the NS record listing the name servers, and perhaps a few others). We’re going to create an Alias from our domain to another resource; Amazon Route 53 allows you to alias to a number of different resources you may have set up. Click the Create Record Set button – the first option that appears will be one of type A - IPv4 address, which is exactly what we want.

For serving our root domain (http://tubaday.org), leave the name empty – the default will be tubaday.org – and choose Yes for the Alias radio button. If you click on Alias Target, you’ll be presented with a list of Amazon resources in your account, including web hosting S3 buckets. Choose the web hosting bucket you created in a previous step:

Creating a DNS Alias for your bucket

Route 53 also allows you to do much more powerful things with Amazon resources: setting up CNAME aliases for additional subdomains, routing based on load, latency or failover, and even running health checks on your service endpoints. For hosting this website, however, we can simply use the default Simple routing policy for our alias.
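
If you prefer to script this part as well, the hosted zone and the alias record can be created through the API. A rough boto3 sketch follows; the alias HostedZoneId shown is the published constant for S3 website endpoints in us-east-1 and differs by region, so check Amazon’s endpoint documentation for your hosting region.

```python
# Sketch: create a hosted zone for the domain, print the name servers to
# give your registrar, then add an A-record ALIAS pointing the bare domain
# at the S3 website endpoint for the bucket.
import uuid
import boto3

route53 = boto3.client("route53")

zone = route53.create_hosted_zone(
    Name="tubaday.org",
    CallerReference=str(uuid.uuid4()),   # any unique string
)
zone_id = zone["HostedZone"]["Id"]
print(zone["DelegationSet"]["NameServers"])   # the delegation set for your registrar

route53.change_resource_record_sets(
    HostedZoneId=zone_id,
    ChangeBatch={
        "Changes": [
            {
                "Action": "CREATE",
                "ResourceRecordSet": {
                    "Name": "tubaday.org",
                    "Type": "A",
                    "AliasTarget": {
                        # Published zone ID for S3 website endpoints in us-east-1;
                        # use the constant for your own hosting region.
                        "HostedZoneId": "Z3AQBSTGFYJSTF",
                        "DNSName": "s3-website-us-east-1.amazonaws.com",
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ]
    },
)
```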

With that, our simple setup on the Amazon services is complete. There are a few more things we can do to make our hosting a little more ‘robust’, such as:

  • Adding subdomains to our account. For example, we may wish to add the subdomain www to our Amazon account so that www.tubaday.org successfully serves our parent domain’s content. (Note: if you want to forward domains in this way, please visit my article on domain forwarding with S3; a rough sketch of the idea also follows this list.)
  • Adding alerts to our buckets so that we know when certain traffic quotas have been reached, or certain events occur (traffic spikes?) that we would like to monitor.
  • Using Amazon’s CloudFront CDN to push our static content to edge locations, which will improve load times for site visitors.
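
As a rough sketch of the subdomain idea in the first bullet, a second bucket named for the subdomain can be configured to redirect every request to the parent domain (again assuming boto3; Route 53 would then need an alias record for www pointing at that bucket’s website endpoint):

```python
# Sketch: create a www bucket whose only job is to redirect to the bare domain.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

s3.create_bucket(Bucket="www.tubaday.org")
s3.put_bucket_website(
    Bucket="www.tubaday.org",
    WebsiteConfiguration={
        "RedirectAllRequestsTo": {
            "HostName": "tubaday.org",
            "Protocol": "http",
        }
    },
)
```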

We do have one last task to do…

Pointing Your Domain to Amazon’s DNS Servers

This step will be dependent on your domain name registrar – you’ll be editing your records there to point your domain name (tubaday.org) to the Amazon DNS servers. I’ve been using NameCheap as a domain registrar, so I’ll log in there to edit my DNS records with the name servers I wrote down earlier.

Domain Name Registrar

… and we’re done! It will take some time for these DNS changes to propagate to internet service providers, so you may not see your newly migrated / created website appear at your domain name immediately. This is a process that usually takes ‘hours’ depending on your internet provider and/or what name servers you use for your internet connection.
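
One quick way to check whether the change has reached your local resolver is to look the domain up yourself; a tiny Python sketch:

```python
# Sketch: resolve the domain from this machine; a failure just means the
# DNS change hasn't propagated to your resolver yet.
import socket

try:
    print(socket.gethostbyname("tubaday.org"))
except socket.gaierror:
    print("tubaday.org does not resolve from here yet")
```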

Finished!

Now that you’re serving a public website from the Amazon cloud, you should see better response times and possibly a reduction in cost, depending on how much traffic your static website draws.

A few things to keep in mind:

  • Amazon S3 hosting works only for static websites; S3 does not provide dynamic hosting capability. If you are accustomed to building sites with content management systems such as WordPress, you may want to take a look at tools that let you publish to static hosts, such as Jekyll or Octopress.
  • If you want to increase your website performance, Amazon CloudFront is a Content Delivery Network that ties directly into your S3 buckets and may be a better solution than attempting to deploy to multiple S3 regions.
  • If you do need dynamic content such as forms or comments, there are a number of third-party services that provide these functions, such as Wufoo for web forms or Disqus for blog commenting. These services can be integrated with JavaScript, allowing for a semblance of dynamic content in a static HTML file.

Happy migrating!

UPDATE: In response to comments, I also added some instructions for forwarding one URL to another using S3.