Automatically publishing Hugo blogs to S3

The traditional process goes: create a new blog article, edit it a bit, save; run hugo on the command line to generate the static site; then copy or sync the public directory onto your webserver. Needing a webserver, a build environment and a text editor is a bit of a hassle, and it’s all rather manual. I’ve previously blogged about hosting on S3, but I wanted to see if a static blog could be managed entirely on free/hosted services. [Read More]
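
For a rough idea of what that manual flow looks like, here’s a minimal sketch (the bucket name is hypothetical, and it assumes the hugo and aws CLIs are installed):

```python
# Minimal sketch of the manual build-and-publish flow.
# "example.com" is a hypothetical bucket name; assumes hugo and aws are on PATH.
import subprocess

# Generate the static site into ./public
subprocess.run(["hugo"], check=True)

# Mirror ./public to the bucket, removing remote files that no longer exist locally
subprocess.run(
    ["aws", "s3", "sync", "public/", "s3://example.com/", "--delete"],
    check=True,
)
```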
hugo  blog  s3  aws  codeship 

Hosting a jekyll blog on Amazon S3

**Note: I’ve replaced jekyll with the equally adept pelican now.** This article describes how to host your own static blog/site on S3, tracing the evolution this site has taken. First off I started using github’s public site feature: dead neat, a nice set of features, and so quick to get running. The problem is I’m impatient, and after pushing an update the “Page build successful” notification can take upwards of 30 minutes to arrive. [Read More]
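
As a hedged illustration of the S3 side of this (a sketch, not the article’s exact steps), enabling static-website hosting on a bucket with boto3 looks roughly like the following; the bucket and document names are hypothetical:

```python
# Sketch: enable S3 static-website hosting on a bucket.
# "example.com", "index.html" and "404.html" are hypothetical names.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_website(
    Bucket="example.com",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "404.html"},
    },
)
```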
jekyll  s3 

Introducing s3grep

This is the first in a series of posts introducing some of the tools I’ve developed. The first is s3grep - parallelized grep for Amazon S3. The need for this one arose because a recent project processes large (text) log files and stores them on S3. To diagnose problems it’s often really handy to search directly in those log files. Whilst some tricks with s3cmd and xargs can get you part of the way, parallelizing them is hard and the whole thing seems trickier than it should be. [Read More]
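
To give a flavour of the problem s3grep tackles, here is a simplified sketch of a parallelized grep over S3 objects (this is not s3grep’s actual implementation; the bucket, prefix and pattern are hypothetical):

```python
# Sketch of a parallelized grep over S3 objects; not s3grep's real code.
# BUCKET, PREFIX and PATTERN are hypothetical placeholders.
import re
from concurrent.futures import ThreadPoolExecutor

import boto3

BUCKET = "my-log-bucket"
PREFIX = "logs/2015/"
PATTERN = re.compile(rb"ERROR")

s3 = boto3.client("s3")

def grep_key(key):
    """Download one object and return (key, line) pairs matching PATTERN."""
    body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
    return [(key, line.decode("utf-8", "replace"))
            for line in body.splitlines() if PATTERN.search(line)]

# Gather every key under the prefix, then grep the objects in parallel.
paginator = s3.get_paginator("list_objects_v2")
keys = [obj["Key"]
        for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX)
        for obj in page.get("Contents", [])]

with ThreadPoolExecutor(max_workers=16) as pool:
    for matches in pool.map(grep_key, keys):
        for key, line in matches:
            print(f"{key}: {line}")
```

Note the sketch reads each object fully into memory for simplicity; log files of the size described would want streaming instead.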
s3  grep  aws