I'll take you on a guided tour of Oxbridge Notes, a profitable ten-year-old web app. See everything from how it's deployed to how it's architected to how it's maintained -- and learn how to do the same.
We take a look at the main flows through a web application I've been running for over ten years, Oxbridge Notes. You'll see each side of this marketplace, along with its admin area. You'll visit Google Analytics to see the traffic figures, and afterwards we'll return to the command line to analyze the codebase's size, showing off some handy tools in the process.
I'll explain how I came up with the idea for my online business, show just how minimal the first version was, and describe how I went about marketing it. Then I'll give two examples of where I failed to apply minimum-viable-product reasoning and how that caused untold waste.
The ease of running
This episode goes through some of the strategies I used to get north of 200k monthly organic page views to my website. I'll cover picking keywords with Google Keyword Planner (and why it's important to build your code naming conventions around these keywords), structured data (which increases CTRs on Google), and scalable mass-content creation - what I believe to be the best SEO strategy both when I began in 2010 and ten years later when I released this screencast in 2020.
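The idea of building naming conventions around researched keywords can be sketched in a few lines of Ruby. This is an illustrative helper, not the Oxbridge Notes code, and the keyword phrases are hypothetical:

```ruby
# Turn a target keyword phrase (as researched in Google Keyword Planner)
# into a URL-safe slug, so the site's paths mirror the phrases people
# actually search for.
def keyword_slug(phrase)
  phrase.downcase
        .gsub(/[^a-z0-9\s-]/, "") # strip punctuation
        .strip
        .gsub(/\s+/, "-")         # spaces -> hyphens
end

puts keyword_slug("Contract Law Notes") # => "contract-law-notes"
puts keyword_slug("LLB: Tort Law!")     # => "llb-tort-law"
```

The payoff of naming routes, models, and partials after these slugs is that the vocabulary your customers search for stays aligned with the vocabulary in your codebase.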
This video continues where Part I left off. Here I talk about managing the limited Google crawl budget (by instructing it to ignore boring pages), preserving accrued SEO juice when users edit their own content on your website (or when you fix typos), getting into the Google Images game, dealing with shadowed content when two users create pages with the same name, and automated tools to audit your on-page SEO.
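The "preserving accrued SEO juice" point boils down to never letting a renamed page 404: keep the old slug around and 301 it to the new one. Below is a minimal in-memory sketch with hypothetical slugs; a real app would persist the aliases in a database table:

```ruby
# old_slug => current_slug aliases, kept whenever a page is renamed.
OLD_SLUGS = {}

def rename_slug(old_slug, new_slug)
  OLD_SLUGS[old_slug] = new_slug
  # Re-point earlier aliases so chains of renames stay one redirect hop long.
  OLD_SLUGS.each_key { |k| OLD_SLUGS[k] = new_slug if OLD_SLUGS[k] == old_slug }
end

# Returns [http_status, slug_to_serve_or_redirect_to].
def resolve(slug)
  OLD_SLUGS.key?(slug) ? [301, OLD_SLUGS[slug]] : [200, slug]
end

rename_slug("contract-laww", "contract-law") # a typo fix
puts resolve("contract-laww").inspect # => [301, "contract-law"]
puts resolve("contract-law").inspect  # => [200, "contract-law"]
```

The 301 (permanent redirect) is what tells Google to transfer the old URL's accumulated ranking signals to the new URL rather than treating it as a fresh page.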
Data is more important than code; therefore the most important job you, as a programmer, have is to design a system that allows for a simple, constrained, and predictable set of data. In this episode, I'll discuss how null constraints can reduce the number of types your program has to deal with, thereby simplifying your code. Then I'll discuss how check constraints can force data to take a limited (and more useful) range of values. Lastly I'll explain why it's better to enforce all this at the database level rather than at the Ruby/Python/PHP/JS level.
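One way to see the "fewer types" argument is from the calling code's perspective. Assuming a hypothetical `users.email` column: while it is nullable, every caller must treat the value as string-or-nil; once the database enforces NOT NULL, the nil branch disappears from the entire codebase:

```ruby
# Without a NOT NULL constraint, email's effective type is String-or-nil,
# and every caller carries a defensive branch:
def email_domain_nullable(email)
  return nil if email.nil? # extra branch forced by the nullable column
  email.split("@").last
end

# With NOT NULL enforced at the database level, callers handle one type:
def email_domain(email)
  email.split("@").last
end

# The check-constraint analogue lives in the database too, e.g. (SQL):
#   ALTER TABLE products ADD CHECK (price_cents >= 0);
# after which no Ruby code ever needs to worry about negative prices.

puts email_domain("jo@example.com") # => "example.com"
```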
Here I continue on from the last episode (null constraints, etc.) in exploring ways to use an SQL database to ensure data integrity. I'll show ways to avoid shooting yourself in the foot by creating references to non-existent records or by deleting rows that are referenced elsewhere in the database and are therefore necessary. I'll show how to lean on foreign keys to build resource-allocation features with practically no backend code. Next I'll demonstrate the perils of relying on uniqueness validations at the Ruby/PHP/Python backend level. Finally I'll show how to avoid bloat in pivot tables for many-to-many relationships.
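The uniqueness-validation peril is at heart a check-then-insert race. This pure-Ruby simulation interleaves two hypothetical requests by hand to make the gap visible; a database unique index closes it because the check and the write happen as one atomic operation:

```ruby
# A stand-in for the users table; names are illustrative.
EMAILS = []

# An application-level uniqueness "validation": a read, nothing more.
def unique_in_app?(email)
  !EMAILS.include?(email)
end

# Two concurrent signup requests both validate BEFORE either inserts:
a_ok = unique_in_app?("jo@example.com") # request A: passes
b_ok = unique_in_app?("jo@example.com") # request B: also passes
EMAILS << "jo@example.com" if a_ok
EMAILS << "jo@example.com" if b_ok

puts EMAILS.count("jo@example.com") # => 2 -- the duplicate the validation "prevented"
```

A unique index makes the second INSERT fail at the database instead, regardless of how requests interleave.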
In this third episode about data integrity, I cover: storing your data in forms computers can easily work with (e.g. for filters); ensuring data stays valid by your own rules as you add increasing numbers of validations over the years; avoiding the mistake of duplicating data across tables and then needing to keep those records in sync; using database transactions to ensure that state changes to multiple rows either all occur or all fail as a unit (preventing invalid intermediate states from cropping up); and lastly, using cascades to ensure the deletion of associated records that should not exist once their 'parent' records are deleted.
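The all-or-nothing property can be illustrated without a database. In this toy sketch a snapshot-and-restore over an in-memory hash stands in for BEGIN/COMMIT/ROLLBACK; real code would use the database's own transaction mechanism, and the account names are hypothetical:

```ruby
def with_transaction(state)
  snapshot = Marshal.load(Marshal.dump(state)) # deep copy = implicit BEGIN
  yield state                                  # apply the changes
  state                                        # COMMIT: keep them
rescue
  state.replace(snapshot)                      # ROLLBACK: restore the snapshot
  state
end

accounts = { "alice" => 100, "bob" => 0 }

# A transfer that crashes halfway must leave both rows untouched:
with_transaction(accounts) do |s|
  s["alice"] -= 100
  raise "credit leg failed" # simulated crash before bob is credited
end

puts accounts.inspect # => {"alice"=>100, "bob"=>0} -- no invalid intermediate state
```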
A key factor in reducing my coding time for Oxbridge Notes to a few hours per month was adding comprehensive integration tests. Today I demonstrate how these tests work using the test browser's non-headless mode, which lets you actually see the browser executing your tests. Next I show how to write such tests using tools like factories (touching on how I test tracking code). Following that, I show how to set up a continuous integration server (using Docker containers), and how to run your CI tests locally to verify they work before pushing them to the cloud. Lastly, I finish with a discussion about what we should test, given a limited testing budget.
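The headless toggle might look something like the following Capybara driver registration - a hedged sketch assuming RSpec, Capybara, and the selenium-webdriver gem, not the episode's exact setup. Running with HEADLESS=0 lets you watch the browser drive your tests:

```ruby
# spec/support/capybara.rb (hypothetical path)
Capybara.register_driver :configurable_chrome do |app|
  options = Selenium::WebDriver::Chrome::Options.new
  # Headless by default (fast, CI-friendly); HEADLESS=0 shows the browser.
  options.add_argument("--headless") unless ENV["HEADLESS"] == "0"
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
end
Capybara.javascript_driver = :configurable_chrome
```

The same environment-variable switch works unchanged inside a Docker-based CI container, where only the headless path ever runs.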
In this episode I talk about my personal best practices for acceptance testing in web development. Firstly, how to reduce brittleness by giving your tests stable test IDs to hook into. Next I discuss why you should use non-JS test drivers where possible, for speed. Then I talk about the benefits of making your integration tests fail on ANY JS exception -- even one only tangentially related to the system under test. Lastly I give the reasoning behind why I like to automatically capture screenshots of any test failures.
Continuing on from the last episode, I discuss more best practices for acceptance tests. Firstly, I discuss how to give your assertions a looser touch so as to reduce coupling (and brittleness). Next I talk about abstracting away your config such that changes in specific config variables won't break your tests. Lastly, I discuss the necessity of clearing state between your tests for dependencies like email collectors, file systems, databases, etc.
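The state-clearing idea can be sketched with a hypothetical in-memory email collector: reset every stateful dependency before each test runs, so nothing leaks between examples. The `EmailCollector` and `run_test` names are illustrative stand-ins for your test framework's hooks:

```ruby
# A fake email service that accumulates deliveries during a test run.
class EmailCollector
  def self.deliveries
    @deliveries ||= []
  end

  def self.clear!
    deliveries.clear
  end
end

# Stand-in for a before-each hook: clear state, run the example,
# report how many emails this test (alone) produced.
def run_test(name)
  EmailCollector.clear! # fresh state before every test
  yield
  EmailCollector.deliveries.size
end

puts run_test("signup sends welcome email") {
  EmailCollector.deliveries << "welcome"
} # => 1

puts run_test("login sends nothing") { } # => 0
# Without clear!, the welcome email above would have leaked into this test.
```

The same pattern applies to temp directories, database rows, and any other collector your tests share.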
I believe a surer route to good programming is focusing on avoiding mistakes rather than focusing on doing things 'right'. In this episode, I give four tips for adding a modicum of rigor to your programming. First, proof-read all changes before committing. Second, execute every line of code. Third, double-check you're in the right file. Fourth, double-check your docs are for the right version.
Continuing on from part I, this week focuses on easily avoidable mistakes. It teaches techniques like being mindful of interfaces between programming languages, matching opening entities with their closing counterparts, and being aware of whether functionality is available in alternative contexts and platforms.
Continuing on from part II, this episode focuses on reducing silly mistakes when programming. Do you truth-table logical possibilities to ensure you don't miss a critical branch in your logic? When you discover a bug, do you search the rest of your code for parallels caused by similar mechanisms or the same misunderstanding? And, lastly, are you running all those tests and linters you wrote on a regular basis?
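Truth-tabling is mechanical enough to sketch in code. Here a hypothetical access rule is checked against every combination of the booleans it depends on, so each row of the table is a deliberate decision rather than an accident:

```ruby
# A hypothetical authorization rule with two boolean inputs.
def can_download?(paid:, admin:)
  paid || admin
end

# Enumerate all 2^2 input combinations -- the truth table.
rows = [true, false].product([true, false]).map do |paid, admin|
  [paid, admin, can_download?(paid: paid, admin: admin)]
end

rows.each do |paid, admin, result|
  puts "paid=#{paid} admin=#{admin} => #{result}"
end
# The (false, false) row is where untested critical branches tend to hide.
```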
These screencasts are aimed at ambitious programmers who need to take full responsibility for their codebases - especially as owners of small software companies.