After releasing my guide on automating SSL certification with VestaCP, I figured it would be equally useful to explain how to force browsers to use HTTPS. I never liked the idea of using two different pieces of software to serve files, so I don’t use nginx, but this guide can be adapted to it with a bit of extra googling. In a nutshell, we will take advantage of Vesta’s simple templating system to modify the configuration files generated when a website is created from the control panel.
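For the Apache side, the template change boils down to a standard mod_rewrite block. A minimal sketch; the template directory and filename vary by Vesta version, so treat the path in the comment as an assumption:

```apache
# Hypothetical addition to a Vesta Apache template, e.g. somewhere under
# /usr/local/vesta/data/templates/web/ (exact path varies by version).
# Any request arriving over plain HTTP gets a permanent redirect to HTTPS.
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```

This requires mod_rewrite to be enabled (`a2enmod rewrite` on Debian-based systems).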
With the release of the open-source Let’s Encrypt project under the Linux Foundation, we took advantage of someone’s creation that automates the Let’s Encrypt client for the Vesta Control Panel. Considering that we add a new subdomain every other week, this was very useful and yet… not useful enough. Even though Vesta’s Let’s Encrypt client only takes a few commands to set up a new certificate, the real problem is that it takes any commands at all.
Perhaps it is me who lacks knowledge of basic networking, as I learned everything I know mostly by accident and never actually studied it. Maybe it is a gap in the NodeJS documentation. Maybe I was expected to know this… but I didn’t. I am currently working on a project called TChaP which is intended to be highly secure. I say intended because it has multiple minor security flaws that could be abused.
If there is one Apache module you must have on your production server, it is mod_pagespeed by Google. I don’t know everything that it does, but that’s the beauty of it; I don’t have to! Pagespeed applies a massive number of optimizations to speed up your website, such as caching, lossless compression for most types of media, and other things I don’t know about. In short, it automagically does everything that you should be doing.
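To give a flavor of the configuration, here is a minimal sketch of turning the module on and enabling filters. The directive names are real mod_pagespeed directives, but the file path and the particular filter list are illustrative assumptions, not recommendations:

```apache
# Typically lives in the module's conf file, e.g.
# /etc/apache2/mods-available/pagespeed.conf on Debian-based systems.
ModPagespeed on
# Enable extra filters on top of the default "core" set;
# the filters chosen here are just examples.
ModPagespeedEnableFilters rewrite_images,extend_cache
```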
Oftentimes while making ninja-updates, mod_pagespeed gets in the way with its caching, as it doesn’t seem to clear its cache automatically when a source file changes. The mod_pagespeed cache is stored in /var/cache/mod_pagespeed, so to clear it you can simply rm -rf everything in there. After that, the cache will rebuild itself as needed from the modified files. The end.
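Wrapped up as a small shell function, with the default cache path as an assumption you can override if your distro stores it elsewhere:

```shell
# Clear the on-disk mod_pagespeed cache so it rebuilds on demand.
# Pass a different path as $1 if your cache lives elsewhere.
clear_pagespeed_cache() {
  local dir="${1:-/var/cache/mod_pagespeed}"
  # ${dir:?} aborts if dir is empty, so we never run `rm -rf /*`
  rm -rf "${dir:?}"/*
}
```

If entries still look stale after clearing, restarting Apache (e.g. `sudo service apache2 restart`) drops any in-memory metadata as well.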
I thought I would write this up partly for the sake of bragging about the efficiency of my setup. Unless you host the Hugo server locally, you need to either SSH into the server to generate a new post, or put Hugo on watch and FTP into the server to edit the files locally, or some variation of that. Me, I wanted to be able to generate posts without any extra software, using Atom.io for seamless editing and publishing.
As noted in the footer, we have migrated from the Ghost CMS to the Hugo static site generator for blogging. There are a few reasons for this, and none of them mean that Ghost is somehow bad, because it’s not; in fact, it’s great! It’s just not for me, for a few reasons. 1. Speed This first point is negligible for me, as I never had the load required for it to actually matter.
Want to talk? Drop me an email: firstname.lastname@example.org I usually respond within 24 hours. Don’t spam me; I’ll just block your address.
I have done a bunch of stuff. Most of it is either ad hoc, useless, or cannot be published for one reason or another. Here are a few of my works that are public at this point in time: Pluralsight Scraper The Pluralsight scraper has its very own post here and a repository here. In short, it downloads Pluralsight videos. Sitemap Generator Crawler This was one of our earliest worthwhile works, as there were no decent crawlers available.
KNYZ.org is a project unifying and centralizing all my work. The front-end holds little to no importance, but I felt bad putting up a blank page. I do contract work (back-end, front-end, database stuff, whatever you need). I don’t charge much.