The full scoop on why and how I created my own GitHub Pages-hosted blog, and the technologies I used.
Why Go Self-Designed?
This started out as a learning exercise.
I have played with GitHub Pages before on my Gallifrey project, and also played with CloudFlare there, but it was all based on standard GitHub Pages templates with no blogging capability.
I have also used Bootstrap on a couple of work projects, so I wanted to experiment and get more familiar with the inner workings of GitHub Pages, or rather Jekyll.
I had also never linked or worked with things like comment sections, or social sharing. Both are key to a successful blog, so this would be some interesting learning along the way.
My idea was simple: keep it clean and minimalist, and on mobile really strip things down so only the content is on show.
I wanted a gradient background from my favourite HEX colour (#ABCDEF) to white with a white rounded content panel sitting dead centre with a little shadow.
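As a rough sketch of that styling (the class name and exact values here are illustrative, not necessarily what the site actually uses):

```css
/* Illustrative sketch only; the real site's class names and values may differ. */
body {
  /* #ABCDEF at the top, fading to white */
  background: linear-gradient(#ABCDEF, #FFFFFF);
}

.content-panel {
  background: #FFFFFF;
  border-radius: 8px;                      /* rounded corners */
  box-shadow: 0 2px 8px rgba(0, 0, 0, 0.3); /* a little shadow */
  margin: 2em auto;                         /* dead centre horizontally */
  max-width: 960px;
}
```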
The content panel is then broken down into 4 sections:
The header is a little under-utilised at present, I have a very simple title and navigation links.
The navigation links compress into a drop-down on mobile to save screen space, which is a nice trick I learnt using Bootstrap's styling to show or hide different content at different screen sizes.
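The show/hide trick works with Bootstrap's responsive utility classes. A minimal sketch, assuming Bootstrap 3 (the version current when this was written; the real markup will differ):

```html
<!-- Full navigation: hidden on extra-small (phone) screens -->
<ul class="nav hidden-xs">
  <li><a href="/">Home</a></li>
  <li><a href="/blog">Blog</a></li>
</ul>

<!-- Compact drop-down: only shown on extra-small screens -->
<div class="dropdown visible-xs">
  <button class="btn dropdown-toggle" data-toggle="dropdown">Menu</button>
  <ul class="dropdown-menu">
    <li><a href="/">Home</a></li>
    <li><a href="/blog">Blog</a></li>
  </ul>
</div>
```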
The content area is quite rightly the largest section; it's easy to read with black text on a white background, and it's nicely framed with borders on all sides joining the other content.
The sidebar is only visible on medium or large devices (i.e. phones or portrait tablets will not show it).
I’ve broken the sidebar into 3 vertical sections.
The first has my photo and links to my social networks. The icons are provided by Font Awesome, which provides a stylesheet you can link to and CSS classes that include little images. With a bit of styling, these become link buttons with a nice hover effect!
The second is the latest 3 blog posts I have made (plus an RSS link, another Font Awesome icon).
The third is a widget with my Twitter timeline (created by Twitter).
The sidebar helps keep the content from looking too wide on larger screens, whilst also providing links off to other useful items.
Like the header, this is very underused. I think in the future I may have some additional content to put here, but not sure what just yet!
I wanted to keep things how people expect to see blog posts, so up top we have the title, the publish date, some tags and a comment count.
This is followed by social sharing buttons provided by AddThis. AddThis can automatically recommend which sharing buttons to show based on usage, but I've gone with a static set I chose myself for now. They allow remote customisation of the buttons on their side, which updates in real-time, and they provide metrics on social shares, which is kind of cool!
The main content of the post is separated from the social sharing with a horizontal bar top and bottom.
As with most blogs, there is a standard page listing all the posts. I've gone for a paginated approach with a maximum of 5 posts per page. This is the bit that took me the longest to get working due to the complexity of Jekyll in this area. There are a lot of tricks you need to know to get it working, such as not using permalinks, and the page must be HTML and not Markdown!
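For reference, the classic jekyll-paginate (v1) setup looks roughly like this; the path is illustrative:

```yaml
# _config.yml (jekyll-paginate v1, the plugin GitHub Pages supports)
paginate: 5
paginate_path: "/blog/page:num/"
```

The listing page itself (which must be an index.html, not Markdown) then loops over the paginator:

```html
{% for post in paginator.posts %}
  <h2><a href="{{ post.url }}">{{ post.title }}</a></h2>
{% endfor %}
{% if paginator.next_page %}
  <a href="{{ paginator.next_page_path }}">Older posts</a>
{% endif %}
```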
The net result after that effort is rather nice. I've made sure to include the date, tags and comment count on these too, but there is no sharing.
This page also (like the sidebar) links off to my blog RSS feed.
I had a couple of pages on my old Wordpress blog talking about me, my open source software and links to other blogs and websites I read/follow.
I’ve migrated these pretty much as was, but plan on sprucing that content up as time goes on as it’s a little outdated looking.
The site is hosted purely on GitHub Pages, which is free to use (if the repository is public). It also means that I can get updates from readers should I make any silly spelling mistakes (which is almost certain to happen). This was a completely trivial exercise of creating a repository on GitHub.com and adding my pages to it!
It took me a while to decide exactly what domain I wanted, and I settled on blyth.me.uk. A CNAME file in the root of my repository, a few DNS records added with my DNS provider, and I was up and running!
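The CNAME file is just the bare domain on one line, and the DNS records look something like the sketch below. The A record IPs shown are GitHub's published Pages addresses, which change over time; check GitHub's current documentation rather than copying these, and the GitHub username is a placeholder:

```
# CNAME file in the repository root
blyth.me.uk

# DNS records at the provider (illustrative)
blyth.me.uk.      A      185.199.108.153
blyth.me.uk.      A      185.199.109.153
www.blyth.me.uk.  CNAME  (your-github-username).github.io.
```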
You could say that for a site like mine SSL is overkill. But when a service like CloudFlare is offering it as part of their free package, you would be a fool not to take it!
All I needed to do was update my DNS nameservers over to CloudFlare; they migrated the entries I had already created with my DNS provider. Having turned on "Always Use HTTPS", they even handle the automatic redirects for me!
Linking nicely from SSL, CloudFlare also provide free caching. This doesn’t support HTML files as standard, but you can create a page rule to state all content on the domain should be cached. I’ve set this up with a 2-day cache in CloudFlare for all content.
This has caught me out once when making changes but a quick cache purge on their website and I could see my changes instantly!
The WordPress site I came from had analytics all built in. I didn't want to lose this visibility, so I have added Google Analytics to my site. I already had a Google Analytics account set up from Gallifrey, so this is just a second "Property".
Jekyll has a neat plugin called jekyll-seo-tag. This takes some configuration information about the site and adds a whole load of meta content into the HEAD of all pages. This helps improve SEO and means links on social networks get nice-looking cards!
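The setup is roughly as follows (values are illustrative; on older Jekyll versions the `plugins:` key was called `gems:`):

```yaml
# _config.yml - site metadata that jekyll-seo-tag turns into meta tags
title: My Blog
description: Short site description for search results and social cards
url: "https://blyth.me.uk"
plugins:
  - jekyll-seo-tag
```

The head of the default layout then just needs the `{% seo %}` Liquid tag, which expands into the full set of meta elements.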
The cost was something I wanted to keep as low as possible.
I think I have achieved this, my only cost being my domain name.
I recently had a situation where I needed to move parts of a GIT repository over to a new repository but wanted to keep all the history from the original repository.
After a bit of searching on Google and finding many resources on "the best way to do this," I ended up hitting a wall. The overall thought patterns I found looked good, but there was no one-size-fits-all tutorial for how to do it.
The Eureka Moment
After muddling through (and deleting my local copies of the repository many times!) I finally managed to achieve what I wanted.
A pull request in each repository: one deleting the folders with my code, and the other adding them, complete with all their history (that second pull request looks very scary, with a lot of commits).
But how did I do it I hear you ask…well…this post aims to make this easier for everyone, so keep reading!
The process documented below states all the git bash commands you're going to need.
Start from any folder you have on disk; we're going to leave it clean at the end :)
At various parts in the process, you will need to modify the command to put your own variable content in. These are shown with text in brackets.
Get Your Source/From Repo
To prevent screwing up any local copies of the repo you already have, I suggest a clean pull and remove remote.
git clone (URL-to-from-repo) FromRepo
cd FromRepo
git remote rm origin
Clean Your Source/From Repo
Since we only want the history to contain certain folders, we can run a command to completely remove everything outside of these folders from history - essentially rewrite history with just the bits we want. - This may take some time if you have a lot of history.
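The history-rewriting command is the tricky part, so here is a self-contained sketch of the approach using git filter-branch on a throwaway repo. The folder names keep and drop are purely illustrative; substitute the grep pattern with the folders you actually want to keep:

```shell
# Demo: rewrite history so only the "keep" folder survives.
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email "demo@example.com"
git config user.name "Demo"
mkdir keep drop
echo "wanted" > keep/a.txt
echo "unwanted" > drop/b.txt
git add . && git commit -qm "initial"

# For every commit, remove from the index anything NOT under keep/.
# (FILTER_BRANCH_SQUELCH_WARNING skips the safety pause on newer git.)
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch --prune-empty --index-filter \
  'git ls-files | grep -v "^keep/" | xargs -r git rm -q --cached --ignore-unmatch' \
  -- --all

git ls-tree -r --name-only HEAD   # lists only keep/a.txt
```

Note that filter-branch can take a while on a repository with a lot of history, as the filter runs once per commit.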
After this, we are done with this repo, for now, so move back up to the root folder.
This blog post will try to explain the decision to use Click-Once and how I have gone about implementing it.
What is Click-Once?
Click-Once is an easy way to regularly push updates out to your Windows desktop apps.
Click-Once can install updates automatically on application start, or you can handle this manually within your app.
Why Is Gallifrey Using Click-Once?
We have decided to go with the manual check approach, the reason for this is that we can integrate this experience into the app, rather than having a launcher that downloads the updates.
Therefore we operate in a similar way to other apps like Spotify whereby the updates are downloaded and installed, all the user has to do is restart the app.
How Does It Work?
GitHub offers a “raw” version of all its files, and this can be used to serve the application.manifest from a click-once application over the internet.
Changes can be published locally to disk, and then when the changes are committed into GitHub they are ready for everyone's application to find and download the new versions.
From Visual Studio the publish of click once is done into 1 of 2 directories depending on the version of the app.
The “stable” version will publish to “....\deploy\stable” whereas the beta will publish to “....\deploy\beta”.
This is just so that someone with a Stable version doesn’t accidentally get beta installs.
The Gallifrey app, using Click-Once, will know to go to this URL when checking for updates. The great thing about this is that pushing updates is as simple as pushing a new version into GitHub.
The only pain point is having to manually perform the publish prior to the push into GitHub.
There are a few pain points to get around with using this approach.
GitAttributes - Since you're pushing XML and .deploy files, you don't want your Git client to change the line endings. You can add a .gitattributes file to your repository (check the one in this repo) that tells Git that all ".manifest" or ".deploy" files are binary and should not be compared or adjusted.
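The relevant lines, matching what's described above, look like this:

```
# .gitattributes - treat ClickOnce publish output as binary
*.manifest binary
*.deploy binary
```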
GitHub raw seems to have a cache of some description, so when requesting the latest and greatest version, sometimes this is out of date. From my experience, the updates are there within 5 minutes, so it’s fully workable.
You have LOTS of files in your deploy folders! Git stores everything in history, so the more versions you push, the bigger your repo gets. Even if you clean up the files, someone has to pull all the history in, which could over time become cumbersome. Though we are not talking GBs of download in Gallifrey (yet!)
I hope that this is useful to you and I’m happy to help anyone else who is trying to get a click-once app deployed using GitHub :)
At 15below we are using a tool called Octopus for our product deployment. The tool works well, but its integration for installing our web-based application products doesn't suit our needs.
However, with Octopus we can write custom PowerShell for deploying our applications. This got us into a situation where we have 3 different versions of IIS across our servers, each of which has a different method of installation, but we wanted a nice and easy way to trigger the create operations. Cue a "clever" script.
Firstly, what are the differences between the versions:
IIS6 (Server 2K3 & XP) - WMI needed to interact between IIS and PowerShell.
IIS7 (Server 2K8 R1 & Vista) - PowerShell snap-in available for download (I would recommend using the Web Platform Installer)
IIS7.5 (Server 2K8 R2 & 7) - PowerShell module, installed when selecting “Scripts” from IIS role feature installer.
The solution we came up with is hosted in the 15below public source code repository and sits inside the Ensconce application (more on that in a later blog post) on GitHub. To see more information, or to get the PowerShell scripts, click here.
The 3 PowerShell scripts we are talking about are the create-website wrapper script plus "createiis6app.ps1" and "createiis7app.ps1".
Both the create IIS app scripts have the same 3 callable functions, these are:
CreateAppPool (which takes a string for the name)
CreateWebSite (which takes name, local path, app pool name, application name, host header value & log location)
AddSSLCertification (which takes website name to add to & certificate name)
Breaking these down, how does it work…
This will try a WMI query against IIS6, hiding any errors; should it succeed, it will include the script "createiis6app.ps1". Should the operation be unsuccessful, "createiis7app.ps1" is included instead.
From this, you will be able to call any of the 3 functions outlined above.
Therefore, your PowerShell deployment only needs to include this PowerShell, and you can install into IIS 6,7 & 7.5. - helpful right!
Using only WMI controls, the functions are all callable once included (either directly or through the create website script)
So, as I’ve already mentioned, IIS7 and IIS7.5 operate in different ways and both require something extra to be added to your PowerShell session.
When this script is included, it will check whether the IIS module is available to import; if it is, it will import it. If the import fails, or the module isn't there, it will try to locate and install the Snap-In.
If neither of these is present, it will return an error.
This means that you don’t have IIS6, and you don’t have the required components for an IIS 7 install.
I hope that you may find this useful should you need to do any operations like this on your application deployment.
Feel free to head over to GitHub and check out the Ensconce application, and the IIS scripts. - You may find that the Ensconce application has other benefits to your deployment :)
Details of the Ensconce application functions can be found on the read-me within GitHub.