
Static Web Hosting With Amazon S3

Like many other web developers and designers, I maintain a collection of smaller websites that I've put together for various reasons – this website included. Most of these sites are small in volume and aimed at local groups or specific audiences, neither of which attracts a lot of web traffic. For hosting these sites, I've traditionally relied on the "shared hosting" model, dating back to hosting directly with local dial-up internet providers and eventually moving to shared hosts like DreamHost, HostGator, or lately a provider called GreenGeeks – companies that typically provide simple static web hosting and a suite of services managed with cPanel or a similar environment.

There is an obvious tradeoff in the shared hosting model:

Pro: Shared hosting is often very cheap and becoming cheaper, with some options approaching $5 USD per month.

Con: As with anything that is "cheap," the con is often reliability. With this site, for example, I often get a stream of Twitter notifications:

Pingdom Up and Down

There is also the issue that shared hosting lives in a single data center, which can cause latency for visitors reaching your site from across the globe. For my regional sites this typically isn't an issue, but I have seen latency reported for this site when it is accessed from locations in Asia.

While shared hosting involves one set of tradeoffs, there is also the option of hosting your site with cloud-based service providers, which bring tradeoffs of their own for the type of small sites I'm describing here:

Pro: Service is vastly faster and more reliable.

Con: Cloud providers can also be vastly more expensive than low-cost shared hosting.

There is one approach offered by Amazon Web Services that can provide you with the reliability and performance of the Amazon cloud at a very cheap "pay as you go" price: hosting static websites with Amazon's S3 ("Simple Storage Service"), with domain name service provided by Amazon's "Route 53" DNS service. I'll explain how to set up your simple static site in the Amazon cloud, but first there are a few items you'll want to note about Amazon's S3 service:

  1. Amazon S3 is a very robust system for storing files, including public web sites. When hosting with Amazon S3, you'll choose a region for website hosting (with many regions across the globe), with your files being replicated across different availability zones within the region. In practice, this means that you might choose to host your site in a region like "Northern Virginia," with copies of your site replicated across the availability zones within that region. You may also wish to replicate files across different regions, but if your goal is to serve web files more quickly, Amazon also offers a CDN (Content Delivery Network) called CloudFront that can push web content to Amazon's network of global edge locations.
  2. Amazon S3 storage is also inexpensive for storing and transferring files, such as the files associated with hosting a website. An overview of S3 pricing is available so you can see the costs associated with S3 hosting. A couple of terms you'll want to be familiar with:

Standard Storage: This is the standard tier of S3 service and pricing. This tier offers "99.999999999%" durability and continued service through the loss of two facilities (roughly, two "availability zones" in a region). This is the storage type you'll access through the Amazon web console and most file transfer clients.

Reduced Redundancy Storage: RRS is a storage tier that is "99.99%" durable over a given year, which means the service may lose 1 out of every 10,000 objects you upload. This usually is not a big problem for sites that you "publish" to S3, since you can always re-upload or re-create the objects you are hosting.

Glacier Storage: Amazon Glacier is a recent queue-based method of storing and retrieving files. It is much cheaper, but storage and retrieval times may be measured in hours; it is intended for "cold storage" of data or system backups. (Hence the name!)

For smaller sites like the ones I will be hosting with S3, it's likely that I won't see significant cost differences between Standard and Reduced Redundancy Storage – and if I were hosting my sites with a "new" account, I would easily qualify for Amazon's free usage tier for my first year of service.
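
To make "inexpensive" concrete, here is a rough back-of-the-envelope estimate using illustrative rates (these are assumptions – check the current S3 pricing page, as rates change): at roughly $0.10 per GB-month of standard storage and $0.12 per GB of outbound transfer, a 100 MB site serving 10 GB of traffic per month would cost about (0.1 × $0.10) + (10 × $0.12) ≈ $1.21 per month, plus a few cents for requests – well under a typical shared hosting plan.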

With that, let’s get started moving a site to S3.

The Website

The website that I'm moving to S3 from my shared hosting account is one for a small local event that I organize called the Bill Bell Tuba Day, held once a year in the small town of Perry, IA.

The purpose of the public website is simply to have a place to post information about the event – the when and where – plus a way for people to sign up for a mailing list. (Thanks to MailChimp.) Because there is little need for interactivity and only one author (me), I can manage site content with the web authoring tool Espresso and publish to a static web host. In this case, our static web host will be Amazon S3.

Setting Up S3 as a Static Web Host

Let's get started setting up our static hosting on S3. The first step, naturally, is signing up for an Amazon Web Services account. Accounts are 'free' – Amazon only bills users for the services they use. During sign-up, take special note of the options for security credentials, and store your access keys in a safe place. When your account is ready, log in to the Amazon Web Services console to see the list of services available to your account.
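
(A note for later: the scripted examples sprinkled through this post assume your access keys are available to the AWS tools – for example, in the standard credentials file that the AWS SDKs and command line tools read. The key values below are placeholders, of course:)

    # ~/.aws/credentials
    [default]
    aws_access_key_id = YOUR-ACCESS-KEY-ID
    aws_secret_access_key = YOUR-SECRET-ACCESS-KEY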

Amazon WS Services

Click on "S3" in the Storage and Content Delivery section. You'll be taken to a screen that allows you to create a bucket for storing your web files. In S3 parlance, a bucket is a related collection of files that can be further subdivided into folders – but a bucket is replicated within the Amazon region, making S3 buckets a form of distributed storage. If you are making many writes to S3, you'll want to become familiar with the concept of eventual consistency in the Amazon S3 system and how 'write' transactions are handled. For simple web hosting we shouldn't run into the real-time limitations of distributed systems.

Creating an Amazon Bucket

IMPORTANT: You'll want to name your bucket with the same name as the domain you want to serve. In this case, my root domain is tubaday.org, so I created a bucket named tubaday.org. Bucket names must be unique across the Amazon cloud, so a domain name (guaranteed to be unique) makes a good bucket name.
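
If you prefer scripting to the console, here's a minimal sketch of the same step using the boto3 Python library – a sketch under the assumption that your bucket lives in us-east-1 (other regions need an extra location constraint):

    import boto3

    # Credentials are read from the AWS credentials file noted earlier.
    s3 = boto3.client("s3", region_name="us-east-1")

    # The bucket name must match the domain you intend to serve.
    s3.create_bucket(Bucket="tubaday.org")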

Once you have created a bucket, right-click the bucket name to bring up the properties dialog. In the properties dialog (this shows up as a right-hand column on the page), you’ll see an option called “Static Website Hosting”. Click on the header to open details – select Enable website hosting:

Enable Static Website Hosting

Choose the name of your index document (which defaults to the usual index.html) and specify an error document to handle error conditions. (In S3 hosting, these will mostly be "404 Not Found" or "403 Access Denied" errors.)
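
The same configuration can be applied from code; a minimal boto3 sketch (the error page name 404.html is my hypothetical choice here, not an S3 requirement):

    import boto3

    s3 = boto3.client("s3")

    # Mirrors the console's "Enable website hosting" option.
    s3.put_bucket_website(
        Bucket="tubaday.org",
        WebsiteConfiguration={
            "IndexDocument": {"Suffix": "index.html"},
            "ErrorDocument": {"Key": "404.html"},  # hypothetical error page
        },
    )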

Setting S3 Bucket Permissions

While you're in the properties dialog, you'll also want to add a few permissions. The Permissions dialog offers a few options. In basic operation, you can grant "Everyone" view permissions by clicking the "Add more permissions" button and choosing options for all users:

Add More Permissions

While this will allow public access to the bucket, it will not automatically apply these permissions to uploaded files. Once we upload our web files, we can apply a simple "Make Public" permission set. If you wish to make this process more automated, a better approach (and one consistent with the rest of Amazon's service security) is to define a policy to apply to the bucket, as sketched below. In this walkthrough we'll stick with basic operation of S3 and make our web files public once we upload them.
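
For reference, here's what the policy route looks like – a minimal sketch that grants public read access to every object in the bucket, applied with boto3 (readers have contributed the same policy in raw JSON form in the comments below):

    import json
    import boto3

    s3 = boto3.client("s3")

    # Bucket policy allowing anyone to read (GET) objects in the bucket.
    public_read_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::tubaday.org/*",
        }],
    }

    s3.put_bucket_policy(Bucket="tubaday.org", Policy=json.dumps(public_read_policy))

With this policy in place, the per-file "Make Public" step described below becomes unnecessary.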

Uploading Files to S3

Uploading files to S3 can be done through the web console in a 'one at a time' fashion via browser upload, though there are also API capabilities for uploading files. You could write a script against the API, or find a traditional "FTP client" that supports the Amazon S3 API. A while ago I purchased a client for OS X called ForkLift as part of a third party developer bundle; it supports S3 and works well for uploading files from a local folder.

ForkLift for OS X
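
If you'd rather script the upload, here is a minimal boto3 sketch. The local folder name site/ is a hypothetical layout; the public-read ACL makes each object public at upload time, so the separate "Make Public" step below is only needed if you upload through a GUI client without setting an ACL:

    import mimetypes
    import os
    import boto3

    s3 = boto3.client("s3")
    local_root = "site"  # hypothetical folder holding the static site

    # Walk the local folder and upload every file to the bucket.
    for dirpath, _, filenames in os.walk(local_root):
        for name in filenames:
            local_path = os.path.join(dirpath, name)
            key = os.path.relpath(local_path, local_root).replace(os.sep, "/")
            content_type = mimetypes.guess_type(local_path)[0] or "application/octet-stream"
            s3.upload_file(
                local_path, "tubaday.org", key,
                ExtraArgs={"ACL": "public-read", "ContentType": content_type},
            )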

After files have been uploaded to S3, you'll want to set permissions on the web files, making them "public". In the AWS S3 web console, select the files you wish to serve, right-click, and choose "Make Public". This is a batch operation on the S3 bucket, so you'll see a progress bar with details about the operation.

Making Web Files Public

Once the files are public, you should be able to access them with the ‘endpoint URL’ specified in the bucket properties dialog:

S3 Endpoint URL

With that in place, you've reached the goal of being able to serve web files directly from S3 – in some cases S3 buckets are used to store web content such as images and CSS files, so the endpoint URL may be perfectly acceptable for including in other web pages. In our case of hosting an entire site, we want the bucket served directly at the URL http://tubaday.org. For that, we'll need to set up a DNS record pointing to the bucket – Amazon's "Route 53" service will allow us to do that.

Using Amazon Route 53 DNS

You’ll find Route 53 in the Amazon AWS Management Console under Compute and Networking. Clicking on the Route 53 link will take you to a console that will give you the option of creating a hosted zone for serving your website. You’ll want to click the “Create Hosted Zone” button, then create a hosted zone with the domain name you’d like to serve. (In this case, tubaday.org).

Create Hosted Zone

When you create this zone, you'll be presented with some details – a hosted zone ID, comments, and most importantly a Delegation Set with some rather cryptic-looking server names:

Delegation Set URLs

These entries are the name servers you will specify as your "DNS Name Servers" with your domain registrar. Write them down (or copy-paste them) for later reference.

Next, we'll add a Record Set to our DNS entry. Highlight your Hosted Zone and click the Go To Record Sets button in the upper left hand corner. You'll see a dialog with a couple of entries present (namely the NS record listing the name servers, and perhaps a few others). We're going to create an Alias from our domain to another resource; Amazon Route 53 allows you to alias to a number of different resources you may have set up. Click on the Create Record Set button – the first option that pops up will be one of type A - IPv4 address. This is exactly what we want.

For serving our root (http://tubaday.org), leave the name empty – the default will be tubaday.org – and choose Yes for the Alias radio button. If you click on Alias Target, you'll be presented with a list of Amazon resources present in your account, including web hosting S3 buckets. Choose the web hosting bucket you created in a previous step:

Creating a DNS Alias for your bucket

Route 53 also allows you to do much more powerful things with Amazon resources: setting up CNAME aliases for additional subdomains, routing based on load, latency, and failover, and even running health checks on your service endpoints. However, for hosting this website we can simply use the alias with the Simple routing policy.
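
The console steps above can also be scripted; here is a boto3 sketch of the same alias record. Note that the alias target's hosted zone ID is not your zone's own ID but the fixed ID Amazon publishes for S3 website endpoints (Z3AQBSTGFYJSTF for us-east-1 – verify against the AWS docs for your region):

    import boto3

    route53 = boto3.client("route53")

    route53.change_resource_record_sets(
        HostedZoneId="YOUR-HOSTED-ZONE-ID",  # your zone's ID, from the Route 53 console
        ChangeBatch={
            "Changes": [{
                "Action": "CREATE",
                "ResourceRecordSet": {
                    "Name": "tubaday.org.",
                    "Type": "A",
                    "AliasTarget": {
                        # Fixed zone ID for S3 website endpoints in us-east-1
                        # (assumption: the bucket is hosted there).
                        "HostedZoneId": "Z3AQBSTGFYJSTF",
                        "DNSName": "s3-website-us-east-1.amazonaws.com.",
                        "EvaluateTargetHealth": False,
                    },
                },
            }]
        },
    )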

With that, our simple setup on the Amazon services is complete. There are a few more items that we can do to make our hosting a little more ‘robust’, such as:

  • Adding subdomains to our account. For example, we may wish to add the subdomain www so that www.tubaday.org serves the same content as the root domain. (Note: if you want to forward domains in this way, please visit my article on domain forwarding with S3.)
  • Adding alerts to our buckets so that we know when certain traffic quotas have been reached, or certain events occur (traffic spikes?) that we would like to monitor.
  • Using Amazon's CloudFront to push our static content to edge locations, which will improve load times for site visitors.

We do have one last task to do…

Pointing Your Domain to Amazon’s DNS Servers

This step will be dependent on your domain name registrar – you’ll be editing your records there to point your domain name (tubaday.org) to the Amazon DNS servers. I’ve been using NameCheap as a domain registrar, so I’ll log in there to edit my DNS records with the name servers I wrote down earlier.

Domain Name Registrar

… and we're done! It will take some time for these DNS changes to propagate to internet service providers, so you may not see your newly migrated / created website at your domain name immediately. This process usually takes hours, depending on your internet provider and/or which name servers your connection uses.
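
If you're impatient, a quick way to check whether your own resolver has picked up the change is simply to resolve the name; a one-liner sketch in Python:

    import socket

    # Prints the IP your local resolver currently returns for the domain;
    # once the new records propagate, this resolves to an Amazon address.
    print(socket.gethostbyname("tubaday.org"))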

Finished!

Now that you’re serving a public website from the Amazon cloud, you should see both better response time and possibly a reduction in cost depending on how much traffic your static website draws.

A few things to keep in mind:

  • Amazon S3 hosting works only for static websites; S3 does not provide dynamic hosting capability. If you are accustomed to building sites with Content Management Systems such as WordPress, you may want to take a look at tools that publish to static hosts, such as Jekyll or Octopress.
  • If you want to increase your website performance, Amazon CloudFront is a Content Delivery Network that ties directly into your S3 buckets and may be a better solution than attempting to deploy to multiple S3 regions.
  • If you do need dynamic content such as forms or comments, there are a number of third party services that provide these functions, such as Wufoo for web forms or Disqus for blog commenting. These systems can be integrated with JavaScript, allowing for a semblance of dynamic content in a static HTML file.

Happy migrating!

UPDATE: In response to comments, I also added some instructions for forwarding one URL to another using S3.

  • Matthias Eisen

    Hi Chad, if you’re looking to add form mailing capabilities (e.g. contact forms) to your S3 sites, you may want to try Newman API (http://www.newmanapi.com).

  • Andreas

hi, thanks for the guide! is there a way to redirect www requests to non-www requests, or vice versa? i'm thinking of a duplicate content issue if both versions serve the same content (www.domain.com and domain.com).
any ideas?

    • chad_thompson

      You’re welcome – thanks for reading. As to your question, yes there is a way. (In fact, let me write that up…)

  • Juan

Hi Chad, very interesting post! In my case, I need dynamic capabilities, mainly for WordPress projects. I think it's possible with the Amazon EC2 service. Have you tested it?

Thanks,
    Juan

    • chad_thompson

      Thank you! I’m glad you find it useful. To your questions:

      Yes – Amazon EC2 is the way to set up dynamic websites in the Amazon cloud. (As part of my day job, I support a number of WP sites running on EC2 instances.) A few things to keep in mind:

      1) An EC2 instance is a virtual representation of a server – so you’ll need to manage all of the technical infrastructure yourself. (e.g. Apache Web Server, PHP, WordPress Install, etc.) There are also a few third party “AMIs” out there for WordPress installs.

      2) Depending on the amount of traffic (and/or your budget) you might find EC2 fairly expensive. For WordPress, you’ll want a machine with a decent amount of throughput and/or RAM to serve HTTP requests (be sure to use the SuperCache plugin!) – for small low traffic sites, you’ll probably find EC2 to be a rather expensive option. (i.e. on the order of $50 / month, though you might be able to serve multiple WP sites from a single EC2 instance.)

      On the other hand, if your WP site generates enough traffic and requires enough resources where you are considering WordPress VIP, then even the larger Amazon EC2 instances and/or infrastructures will seem like a pretty good deal.

      Good luck!

      • Juan

Well… I'll try to dig deeper into this kind of service (EC2)! Mainly, I'm looking for good performance and storage that make the most of my annual budget without sacrificing the quality of the service I offer to my customers.

Thanks a lot, Chad, for your advice!

      • Dean

        Hi Chad,
Thanks for your wonderful article. I have a team of programmers helping me create a property listing website from scratch (not using WordPress or a template of that kind). My question is: do I need EC2 or S3?
My site will have tons of images uploading to the server all the time. Thanks for your help.

        • chad_thompson

          Well, that’s something of an “it depends” type of question.

          * A site developed using S3 (like the type described here) is more akin to “publishing” – there is no capability for doing things like logging in to upload images, etc. A site that is changing frequently would have to involve a mechanism for creating static files to store on S3. (An example framework for publishing would be http://jekyllrb.com – a process that creates a static site for you to host as static pages.)

* If your site will involve some dynamic components (logging in to create listings / upload images, etc.), it is still advisable to store and serve images from an S3 bucket. The reasons: an image is nothing more than a static resource; S3 can serve static resources very efficiently; S3 is redundant (meaning that you won't lose images due to server crashes); and S3 is a very inexpensive solution for serving static resources. (If you offload image serving / hosting to S3, you might also be able to shrink your EC2 server footprint to make that a little less expensive as well.)

          I hope that helps – feel free to hit the ‘contact me’ page up there and I can try to answer any questions you might have.

  • http://www.mathewporter.co.uk/ Mathew Porter

Nice post Chad, I will have to have a look at setting this up for one of our smaller clients, as I'm sure it's a more cost effective and surely more robust solution for hosting.

    • chad_thompson

      Thank you – I’m glad you found the post useful!

  • jk

    Thanks for the help!

  • http://christian-fei.com/ Christian Fei

    SUPER!
    Thanks a lot for the detailed guide :D

  • zebedee

    Amazingly useful and clear. Massive respect. After much swearing and gnashing of teeth, success, thanks to this wonderfully patient guide.

Just one recommendation: for a newbie it is not clear that adding a www subdomain can be achieved by forwarding one URL to another, so it would be great to link to that other post from that part of this post. Thank you so much for putting this together!

  • jakem

Thank you, that has helped me.

  • Sadia Ahmed

Really informative sharing.

    Thanks,

    Sadia Ahmed

    SPV Host

  • Jerry

    This is so helpful – thank you. Will Amazon s3 hosting be suitable for a site that routinely gets 200 hits per day but has spikes up to about 1500 and will grow in the future? I’m on shared hosting at the moment and I’m starting to notice big performance problems.

    • Chad Thompson

      Amazon s3 hosting should be perfectly acceptable for that type of page load – S3 is a pretty robust system.

      Should you start to run into problems (keep in mind these are all static sites – not something like WordPress!) you can also move image content, etc. to “CloudFront” CDN – another Amazon service – to distribute your content to servers worldwide.

      (Keep in mind that these systems are engineered for systems that get hits in the “couple hundred thousand to millions per day”. With large scale systems like that you do start to incur bandwidth charges that will be far greater than storage.)

  • http://thebakery.io Philip

    Thank you for the excellent article, Chad. I use the following Bucket policy to make sure all the content stays public automagically:

{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::NAME-OF-YOUR-BUCKET-HERE/*"
    }
  ]
}

    • Chad Thompson

      Great tip – thanks!

      (I’ve also used IAM policies to the same effect – which you might want to do if you are using other services and want to control them with policies, etc.)

    • Rail

      http://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html

      Here is the latest bucket policy taken from Amazon’s own walkthrough instructions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::NAME-OF-YOUR-BUCKET-HERE/*"]
    }
  ]
}

      • Chad Thompson

        Awesome – thank you!

  • Greg

Hi Chad,
What if I have my domain registered with GoDaddy and have been using the DNS manager there?
Can I point the domain name directly to the S3 bucket at GoDaddy with a CNAME, without moving things to Route 53? I have MX records and all the other stuff there at GoDaddy and do not really want to move those around.
    Your advice?

    • Chad Thompson

      I would think that adding a CNAME record for your bucket name should work.

      I might do the CNAME for the domain itself + “www” and redirect one; it seems to help ‘SEO’ if you serve your page from a single URL.

  • Djoh

    Hello,
Is the address created in "Endpoint" the final one? For security reasons, I'll keep my own DNS server and won't be using Route 53. Can I just point the CNAME of my DNS records to this address?

    • Chad Thompson

      Short answer: yes.

  • http://www.isqsolutions.com Carl Davis

Thanks so much for the clarity of all that! I've made the changes, just waiting for it to propagate – fingers crossed everything will work.

  • http://drasaadi.ir Dr. Asa’adi

    Hi Mr. Chad
My site is on shared hosting, which is good and simple, but it offers just 512 MB of storage – enough only for a basic WP site.
Now I need a bit more space (e.g. 1-2 GB) just for uploading images (and other media for my WP).
Can I use the free tier of Amazon S3 with 5 GB of space? I'm not sure how the request and traffic limits work – it seems inexpensive to lift those limits!
Best regards,
Dr. Asa'adi

    • Chad Thompson

      A few things to keep in mind:

      1) Using S3 with WordPress is a viable option for more storage space in a WP blog, but the options for doing so (S3FS, etc.) can be very slow. (You can always upload to S3 and create links to files.)

      2) While S3 is inexpensive, it isn’t “free” – you will be charged for the amount of space and bandwidth that you consume. For the type of traffic that you likely get with shared hosting you’ll find that S3 will likely be pretty inexpensive.

  • http://about.me/itobisanya Tobi

    Amazing! Thanks so much – it really breaks it down so well!

  • letmehandlethis.net

    Very helpful, thank you! Between AWS and Namecheap, I got my single-serving web site up and running in about 90 minutes.