My solar system is an Enphase system; it has a nice app for looking at my numbers, but happily it also has a pretty solid API that I can use to pull down my data. I wrote a little Python script that pulls down my system’s data in 15-minute increments (generation, usage, and net flow into or out of the grid), and ran it to collect data for the whole of 2023.
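A minimal sketch of that kind of script is below. The endpoint path, query parameters, and response shape here are assumptions for illustration only; check the Enphase developer documentation for the real API and authentication details.

```python
import json
import urllib.request

API_BASE = "https://api.enphaseenergy.com/api/v2"  # assumed base URL


def fetch_day(system_id, api_key, user_id, day):
    """Fetch one day of interval data (hypothetical endpoint and params)."""
    url = (f"{API_BASE}/systems/{system_id}/stats"
           f"?key={api_key}&user_id={user_id}"
           f"&start_at={int(day.timestamp())}")
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["intervals"]


def to_quarter_hours(intervals):
    """Sum raw (epoch_seconds, watt_hours) intervals into 15-minute buckets."""
    buckets = {}
    for end_at, wh in intervals:
        slot = end_at - (end_at % 900)  # 900 s = 15 minutes
        buckets[slot] = buckets.get(slot, 0) + wh
    return buckets
```

The bucketing helper is independent of the API, so it works the same however the raw intervals arrive.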
Each pixel here is an individual 15-minute increment. The color of the pixel represents the amount of electricity generated in that 15-minute period. The x-axis is the day of the year, and the y-axis is the time of day.
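The layout can be sketched as a small reshaping function, assuming the readings arrive as one flat, time-ordered list (this is illustrative, not my actual plotting code):

```python
def to_heatmap(readings):
    """Arrange a year of 15-minute readings into a time-of-day x day grid.

    readings: flat list of values, ordered by time within each day.
    Returns 96 rows (one per 15-minute slot) of `days` columns each.
    """
    slots_per_day = 96  # 24 hours / 15 minutes
    days = len(readings) // slots_per_day
    return [[readings[d * slots_per_day + s] for d in range(days)]
            for s in range(slots_per_day)]
```

Each row of the result is one time of day across the whole year, which is exactly the pixel layout described above.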
Rather obviously, the system generates most electricity in the middle of the day and in summer, and the dark vertical stripes correspond to very cloudy or snowy days. One thing I liked about this plot is that it’s very easy to see the days getting longer and shorter as the year progresses, and the little jumps in spring and fall when we change the clocks.
This heatmap is similar to the one above, but now shows when we use electricity. The color of the pixel represents the amount of electricity used in that 15-minute period.
The things that stand out to me on this chart are:
Same as above, but now showing when we sent electricity to the grid, or consumed it. Over the year we produced slightly less than we used, and our two largest consumers of electricity were the AC and the car.
This exercise taught me a lot about our electricity use, and informed what I plan to do to the house during 2024. Over time, we plan to fully electrify the house, which means replacing the remaining gas appliances (oven, water heater, dryer, furnace) with electric ones, so our electricity usage is only going to go up.
We already use slightly more than we generate, so I’m going to have to think about increasing our home’s efficiency, likely with a home energy audit. I don’t really have the appetite to install more solar panels right now!
Header photo courtesy of Anders J.
]]>Earlier this year I designed and built an aeroponic garden. Here is a week of the plants growing while I was away from home. It’s genuinely surprising to me how much they move around to search out light.
I bake all of my household’s bread, and have been honing my technique on rustic, country-style sourdough bread for a couple of years at this point. This time-lapse shows my sourdough starter for 12 hours after a feeding.
Header photo courtesy of David Becker.
]]>That’s a lot of things to keep track of. A year or so ago I made the decision to use some of the tools from my working career as a software engineer to automate this for myself. In particular, I wanted to do continuous integration on this website. Continuous integration is the practice of automatically testing things every time you make a change, ensuring that you get quick automated feedback on your changes. This should help ensure that when I’m dashing off some page about what I’m excited to cook today, there is a tool ensuring I didn’t make any errors such as:
Now, I’m not an expert on a lot of these things (accessibility in particular), but I would like to be able to guarantee a base level of quality and effort for everybody who visits the site. As I’m not an expert myself, I ended up using a couple of tools that verify different things:
I use CircleCI (although any number of other solutions would work out to be almost identical, e.g. GitHub Actions or TravisCI).
The process of implementing CI for any of these providers is pretty much the same, although the details will differ. The introductory tutorials for each tool pretty much do everything we want, but at a high level the steps are:
Here is my CI script. It runs markdownlint, then builds the site, installs htmltest, and runs it. Nothing fancy at all.
#!/bin/bash
# This script tests that the Jekyll site in the current directory passes
# Markdown linting and a run through htmltest.
#
# Expected state: mdl and jekyll are available in the path, curl is installed
#
# Result: Non-zero return code indicates failure.
set -ex
# Lint all of the markdown files in the source tree
mdl .
jekyll build
# Install htmltest
curl https://htmltest.wjdp.uk | bash
HTMLTEST_OPTIONS="-c .htmltest.yml"
if [ -z "$1" ]; then
echo "Scanning entire site"
bin/htmltest $HTMLTEST_OPTIONS ./_site
else
echo "Scanning page $1"
bin/htmltest $HTMLTEST_OPTIONS "$1"
fi
It’s worth noting that the CI process does find useful things. The thing that led to me writing this blog post was that I added a logo (the little Venn diagram) to the site. In the theme, the logo is configurable (you simply add the name of the file to a config file), but when I merged that one-line change, CI started failing.
It turns out that the theme neglected to add alt text to the header image, making the site more difficult to navigate for people using a screenreader. The header image is a link to the front of my website, so I added alt text appropriately labeling it as such.
I learned this particular technique from a software consultant I worked with who specialized in helping organizations to roll out new software systems. The advice he gave me was “Switch off almost everything in the new software and get one thing working. Once that works, switch on a second thing, then a third”.
The reasoning behind that advice was that the quickest way to learn to ignore something (or never internalize that it exists) is for it to be noisy on day one, before it has demonstrated its value.
I markdown linted this site following exactly this philosophy. The first time I ran mdl, there were probably 300 failures: an absolutely overwhelming number, and I had no idea where to start.
My approach ended up being the following:
After going through this process, I ended with a completely tractable list of ignored rules, which I understand well, and a site that obeys markdown best practices.
# MD002 First header should be a top level header
# Jekyll's title is the h1 on the page. The first heading in a post should be the h2
exclude_rule 'MD002'
# MD013 Line length
# There are just a lot of these to fix
exclude_rule 'MD013'
# MD026 Trailing punctuation in header
# I write the occasional header that ends in ?
exclude_rule 'MD026'
# MD033 Inline HTML
# Just use inline html sometimes
exclude_rule 'MD033'
# MD041 First line in file should be a top level header
# Excluded because Jekyll files have a preamble
exclude_rule 'MD041'
I have found this approach to be successful most of the time: get one thing working, then gradually switch on more.
]]>Around the start of 2021 (coming into the second year of COVID-related lockdown), I decided to spend some time actually getting good at puzzles. In particular, I realized that I had basically never solved a Sudoku before, and resolved to pick up a modicum of ability at solving these, with the assumption that Sudoku solving is very well documented and that what I learn from Sudoku will be partially transferable to other puzzles.
From January through April, I fired up the sudoku.com app and did the challenges every day. As is the case with brand-new skills, I initially improved rapidly: an easy puzzle took me around 15 minutes to begin with, and a month later that number was 4-6 minutes. I then continued to refine my abilities with a little reading on more advanced techniques than I would have figured out on my own (X-Wings, XY-Wings, Skyscrapers), and my times continued to improve. I took on medium, then hard, then expert puzzles in reasonable amounts of time (as of writing, my average solve times are 5:38 easy, 12:11 medium, 16:58 hard, and 19:19 expert).
At some point, I was watching my favorite puzzle YouTube channel Cracking the Cryptic, and they did a video where they solved a puzzle from the book “World of Sudoku vol. 4”. This book actually sounded like one that I wanted to get, for three main reasons:
I resolved to get a copy of the book, solve a bunch of puzzles, and see where I stack up against an actual competitive Sudoku solver. Between April 18 and May 27, 2021, I solved all 120 puzzles in this book.
I got a total of 42 hours and 11 minutes of enjoyment out of the book, which, for something that cost $8.99, is exceptionally efficient, clocking in at 21 cents per hour of puzzling.
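For the record, the cost-per-hour arithmetic:

```python
# Cents per hour of puzzling, from the numbers above.
cost_dollars = 8.99
hours = 42 + 11 / 60  # 42 hours 11 minutes
cents_per_hour = 100 * cost_dollars / hours
print(round(cents_per_hour, 1))  # prints 21.3
```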
I improved! Here is a graph showing the ratio of my time to Chiel Beenhakker’s as I worked my way through the book. I started around 6x slower than him, and ended around 2.5x slower.
The Pearson correlation coefficient between my times and his is very low (0.36). I was expecting something higher: since we were solving the same puzzles, I would expect us to take the same path, just with me going more slowly. The scatter is (I think) indicative that I’m frequently missing things.
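For reference, Pearson’s r between two lists of solve times can be computed from scratch like this (a generic sketch, not the script I actually used):

```python
import math


def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Covariance numerator and the two standard-deviation terms
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value near 1 would mean our times rise and fall together; 0.36 means they mostly don’t.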
How do I know it’s me who is missing things? Here are histograms of our times.
His are very consistent, with a strong peak around 5 minutes. I’m all over the place, implying that my performance is just much more variable.
I was definitely delighted by the human-built Sudoku. Something I never experienced before with computer-generated puzzles was neat solve paths: for example, seeing patterns form on the grid, or experiencing one logical jump clearly unlocking the next part of the board. That’s just not something that I ever experienced in grinding through machine-generated puzzles.
I also found that, as I picked up more experience, my technique changed from being very “rote” (check the numbers 1-9, in order, repeat once in case the first sweep filled anything in, then look for rows, columns and boxes that are close to full, then do another sweep of the numbers 1-9), to being a little more flexible where I would observe the grid and go where the puzzle looked “ready” to be filled in, look at numbers or boxes that looked busy, or where there were patterns I knew would be beneficial.
Overall, I learned that I will never be a competitive puzzle solver, but my delight in solving puzzles didn’t wane at any point, and now I’m spending some time with various other types of logic puzzle.
]]>So, when this link on challenging projects for programmers appeared on Hacker News the other day, it caught my attention something fierce, and of the projects on there, writing a ray tracer jumped out as very interesting.
Sure, I don’t need one of these, and reality is that I’ll make a few photo-realistic images of… well, probably spheres floating above checkerboards. Whatever. I’ll enjoy it and that’s what matters.
In my teens and 20s I was really into POVRay; I generated a good number of scenes from scratch (sadly lost to time) and competed in various rendering competitions. What if I could build something that gave me those capabilities, except I built it myself!
I quickly set myself a ground rule: I want to build a ray tracer that can make a photorealistic image using only the Python standard library. Meaning: no numpy for matrix operations, no scipy for anything. If I need to do a thing, I have to learn the fundamentals and do it myself. I figure this will be horrendously slow, but it might be a fun target for thinking about optimization.
I immediately started researching, finding a few really good resources:
The thing that caught me was that the basic algorithm of ray tracing is simple. You’re making a picture full of pixels. You fire a ray from your eye to each of the pixels (the opposite of how real sight works, naturally), and then:
That’s it! Go pixel-by-pixel in an image and calculate its color by checking what it hits (and if it hits a reflective, or transparent thing doing a bit more tracing).
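The core per-pixel check is a ray-sphere intersection test. Here’s a sketch in the same standard-library-only spirit (my own illustration, not lifted from any particular resource):

```python
import math


def intersect_sphere(origin, direction, center, radius):
    """Return the distances along a ray at which it hits a sphere, or [].

    Solves the quadratic |origin + t*direction - center|^2 = radius^2 for t.
    """
    # Vector from sphere center to ray origin
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius ** 2
    disc = b * b - 4 * a * c
    if disc < 0:
        return []  # the ray misses the sphere entirely
    root = math.sqrt(disc)
    return [(-b - root) / (2 * a), (-b + root) / (2 * a)]
```

Fire one of these per pixel, take the nearest positive hit, and you have the skeleton of a renderer.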
That said, the groundwork was painful.
I have always been embarrassed at how bad my linear algebra is. During college (and shamefully through much of my postgraduate work) I could always perform the calculations on the exam, but I never really got the /why/ of them. Actually working through, step-by-step, with a goal reminding me why I was doing these things was very powerful for me. Although this isn’t a math book, I think I picked up more linear algebra intuition from this exercise than from my actual education.
Anyway, it’s slow going to start with: to build a ray tracer, you implement the atoms of the system (vectors, matrices, colors, canvases, intersections, rays) with little to show for it. Then it all comes together at the same time. You want me to grab a shape, spin it 270 degrees, make it five times taller, then throw it into the far distance? Sure, I can do that in one line of code.
new = old.RotX(270).Scale(1, 5, 1).Translate(0, 0, 15)
Under the hood that’s a bunch of linear algebra that I coded from scratch, but at this level, I just don’t need to know. This moment, the one where you realize you are working at a good level of abstraction is a really good feeling in programming and working through building a ray tracer gave it to me in spades. I gained comfort with the underlying matrix algebra by implementing it from scratch, but at the same time could write these simple one-liners to do really tough things.
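That one-liner can be sketched with a small pure-Python transform class. The method names mirror the line above, but the internals here are my own guess at an implementation, not my actual ray tracer code:

```python
import math


def identity():
    return [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]


def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]


class Shape:
    """Chainable 4x4 homogeneous transforms (hypothetical sketch)."""

    def __init__(self, transform=None):
        self.transform = transform or identity()

    def _chain(self, m):
        # Each new transform applies after the ones already accumulated
        return Shape(matmul(m, self.transform))

    def RotX(self, degrees):
        r = math.radians(degrees)
        m = identity()
        m[1][1], m[1][2] = math.cos(r), -math.sin(r)
        m[2][1], m[2][2] = math.sin(r), math.cos(r)
        return self._chain(m)

    def Scale(self, x, y, z):
        m = identity()
        m[0][0], m[1][1], m[2][2] = x, y, z
        return self._chain(m)

    def Translate(self, x, y, z):
        m = identity()
        m[0][3], m[1][3], m[2][3] = x, y, z
        return self._chain(m)

    def apply(self, point):
        """Apply the accumulated transform to a 3D point."""
        v = [point[0], point[1], point[2], 1.0]
        out = [sum(self.transform[i][k] * v[k] for k in range(4))
               for i in range(4)]
        return tuple(out[:3])
```

For example, `Shape().Scale(1, 5, 1).Translate(0, 0, 15)` makes a point five times taller, then pushes it 15 units away.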
It’s around here in a build that progress begins to feel tangible. It’s no longer “I got my unit tests to pass” but “I managed to render this new thing”, and now we hit “first light”. That’s where I’ll leave this first iteration on the raytracer, with my first image. It’s a single sphere, in purple. It’s simple, but every pixel in this image was generated by tracing light rays around the scene and checking where they bounce. Very cool!
]]>When the pandemic lockdowns started, I, like many other people, found myself stuck at home for an extended period of time. During this period, I started baking sourdough (as did so many other people). I learned sourdough from the Tartine book, and I baked loaf after loaf, gradually tuning the recipes until I really had a feel for them.
After a few months of sourdough, I got the itch to try something new, and that was when I stumbled over the Bread Baker’s Apprentice (BBA) challenge. Turns out there are dozens of people over the past few years who have attempted to bake all 42 of the recipes from BBA, from Anadama to whole-wheat.
It has been pretty slow going, but as of today I hit 21 breads, with Pain à l’Ancienne. Halfway there, 21 more to go.
If you want to follow along, here is my page about it.
For this reason, I was super delighted when I saw 24a2 appear on Hacker News. In its own words:
24a2 is a simple game engine that lets you build a game in a few hours. It has a very limited set of features which makes it easy to learn, and encourages you to solve problems creatively.
In fact, 24a2 gives you access to a 24x24 grid of dots, and the ability to change their color to one of eight or so different colors in response to arrow-key presses or mouse clicks.
Pretty much immediately, a few things fell into place for me. First, I wanted to see if I could create a game in just a couple of hours, and second, I realized that I had already written most of the code I would need. Back in 2018 I was invited to display some of my pen plotter art at the Chicago art/open mic event, Makespace.
As part of this, I made some images of autogenerated mazes. I ran out of time before completing my final goal, which was to have the plotter draw some impossibly complex mazes, then flip over to drawing in red and solve those impossible mazes faster than the human eye could follow. That said, I did manage to build a decent maze generation algorithm in Python.

It took only a quarter hour or so to port the algorithm into TypeScript, then another couple of hours to set up 24a2 to render the maze and allow me to move a dot around it. I then added an “exit” to the maze, but the game just wasn’t in the slightest bit entertaining. It’s almost trivially easy to solve a 24x24 maze by eyeballing it, so I added some line-of-sight effects. Still too easy: you would always find your way to the end, even if there were a couple of false starts. So I finished up by adding some time pressure. Namely, I made the game color every square you trod on in red.

Initially I was thinking you would have to get to the end without stepping on any square more than once, but that was impossibly hard. Around then I realized that the red dots in your wake looked like a trail of blood, and “You Killed a Bear But Now You’re Bleeding To Death: The Game” took shape. Get out of the maze before your character loses too much blood and faints.
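The maze generation algorithm was along these lines: a depth-first “recursive backtracker”. This is a sketch of that family of algorithm, not my original code:

```python
import random


def generate_maze(width, height, seed=None):
    """Carve a perfect maze (no loops, fully connected) on a grid.

    Returns the set of knocked-down walls, each a frozenset of the two
    adjacent cells it joins.
    """
    rng = random.Random(seed)
    visited = {(0, 0)}
    stack = [(0, 0)]
    passages = set()
    while stack:
        x, y = stack[-1]
        # Unvisited orthogonal neighbors inside the grid
        neighbors = [(x + dx, y + dy)
                     for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= x + dx < width and 0 <= y + dy < height
                     and (x + dx, y + dy) not in visited]
        if neighbors:
            nxt = rng.choice(neighbors)
            passages.add(frozenset({(x, y), nxt}))
            visited.add(nxt)
            stack.append(nxt)
        else:
            stack.pop()  # dead end: backtrack
    return passages
```

Because the result is a spanning tree of the grid, a 24x24 maze always has exactly 575 passages and a unique path between any two cells.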
And here it is.
]]>Getting restless again, I realized that I wanted to learn something new. I have had my eye on Offensive Security’s Wireless Professional (OSWP) Certification for a while now. As they promise:
OSWPs are able to identify existing encryptions and vulnerabilities in 802.11 networks. They can circumvent network security restrictions and recover the encryption keys in use. The 4-hour exam also demonstrates that OSWPs are able to perform under imposed time constraints.
Now, I don’t work directly in security, and I already know that the material on this course is somewhat out-of-date. What I really want to get out of this is an understanding of how WiFi works, and maybe get a little kick out of managing to decrypt a WiFi password or two, and to do something more productive than Netflix with some of my quarantine hours.
With that in mind, a few weeks ago, I signed up for the course. Here are my thoughts on the course and the exam.
A few days after registering, you receive the welcome email and course materials. There are two parts to the course materials: A PDF and some videos. Given my previous experience of Offensive Security courses, I didn’t even look at the videos. They cover the same things as the PDF and I just work better that way.
The PDF itself is 386 pages of instruction. Approximately the first half of it is dedicated to the history of WiFi, the detailed structure of WiFi packets, and the algorithms used for communication. I spent a fair amount of time grinding through this dry material. I didn’t always retain 100% of it, but I came away feeling like my understanding of WiFi was measurably higher than it was at the start.
After this introductory material, you get on to the meat of the course: compromising wireless networks. To do this, you need to put together a home lab by purchasing a suitable wireless card (it must be able to do packet injection) and a router. Largely following the course recommendations, I got a D-Link DIR-615 Wireless-N Router and an Alfa AWUS036NHA Wireless B/G/N USB Adaptor.
The course is designed to be taught using BackTrack Linux. I decided not to go through the hassle of installing BackTrack anywhere, and instead just did the course on my Ubuntu 18 laptop. I had very little trouble with this; a basic apt-get install of aircrack-ng worked for everything in the course. The only thing I struggled with was getting the WiFi card into monitor mode, since so many pieces of software had their fingers in that part of the pie. I ended up having to kill enough daemons and remember (and sometimes repeat) enough commands that I just wrote functions to do it for me. Maybe these will be useful for somebody one day:
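A sketch of that kind of helper (the interface names and the exact daemons on your system are assumptions; adapt them to your own setup — and set DRY_RUN=1 to preview the commands instead of running them):

```shell
# Print the command in dry-run mode, otherwise execute it.
run() {
    if [ -n "$DRY_RUN" ]; then echo "$@"; else "$@"; fi
}

monitor_on() {
    # Kill the daemons (NetworkManager, wpa_supplicant, ...) that keep
    # grabbing the card, then flip it into monitor mode.
    iface="${1:-wlan0}"
    run sudo airmon-ng check kill
    run sudo airmon-ng start "$iface"
}

monitor_off() {
    # Take the card out of monitor mode and restore normal networking.
    iface="${1:-wlan0mon}"
    run sudo airmon-ng stop "$iface"
    run sudo systemctl start NetworkManager
}
```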
In the end, I got through the course in 3-4 weeks of just taking on a chapter when I felt like I had spare time in the evening. Each chapter has the same structure:
Which is a really fun way to learn, since you’re immediately applying every concept that you pick up. Overall, the learning process was very enjoyable for me, although I can’t speak to the quality of the videos since I didn’t watch any but the introductory one.
The exam was really enjoyable. Rather than being a multiple choice exercise, you’re given root SSH access to a host. There are three wireless networks in range of that host. Success on the exam means cracking the encryption keys to all three of those networks, and documenting how you did it.
As for the difficulty of the exam: if you have done the exercises in the PDF, you have all the tools you need to pass. The exam is four hours in length, which is an ample amount of time to get everything done.
That said, to prepare myself I did a bit of exam-specific work. It is public knowledge that the exam is “crack some WiFi networks”. With that in mind, I spent 20 minutes the night before the exam just writing out a flowchart of what I could try, given the skills I learned in the course:
When I started my exam, I first looked at traffic to all three networks, identified which paths I would take down the flowchart, and so had a pretty good plan of attack put together within the first few minutes.
At that point, it was just executing on things I knew. I tripped up a couple of times due to inaccurate typing, but got the last password within 50 minutes of starting. I then spent another hour writing the report, and re-cracking one of the networks just to make sure I got all the juiciest screenshots.
A few days later, I got back the “We are happy to inform you that you have successfully completed…” email, and here is a link to the certification.
Yeah, the OSWP is maybe out of date, and yeah, it’s not the most challenging exam in the world, but I had fun, I learned a lot about how WiFi works, and in a pinch I feel like I could apply what I learned. Given that, this course totally met my expectations, and now I have a nifty little Offensive Security Wireless Professional certification.
]]>The podcast’s “thing” is that in addition to discussing a creative topic, the hosts give you one thing to consume and one thing to do every week.
The topic of the first episode is just “doing things” and not worrying about them being good. One thing that really struck home for me is that the hosts discussed that many people have an idea floating around in their heads and they never act on it. The thinking is that by “doing” the idea, you have burned it forever and then will never get the chance to do it well. The hosts discuss this in terms of artists doing studies for paintings where they practice the same part of it over and over again, and they ask “why not do this with your own projects? Feel free to do it badly, just do it and get it out there. It’s a study”.
This post is my little study. The precise task they give is “set a timer for 60 minutes and work on one of your ‘one day’ ideas, then at the end of 60 minutes put the result out there”.
I have had this strange idea bouncing around in my head for the longest time: That it would be fun to try and write a story/poem/paragraph/thought by using only the names of Pantone colors, and trying to make a combination of the words, the colors and the layout tell a story.
Tonight I set a timer for 60 minutes and started copy-pasting Pantone colors into a document to try and tell a story.
]]>What is the most important thing you’ve learned from a colleague?
I struggled for a while to find an answer to this question, because I could point to any number of times that somebody I admire has taken the time to coach me through a difficult technical problem, or a manager or other mentor has given me career guidance that changed my direction. Ultimately, though, I told a story about my first ever shift of being on-call for our product and the engineer who helped me through it. Here is the relevant piece of the story:
Eric, the engineer who was available to help me through this on-call problem, wasn’t consciously coaching me on how to behave as an engineer during this event, but his attitude and behavior rubbed off on me to the point that I think it’s one of the things that shaped who I try to be as an engineer.
It was around this time that I first made the move from being a pure “software developer” to being more involved with keeping things running and with questions of infrastructure in general. That decision shaped the course of my future career, and indirectly led me into a more client- and service-focused role than I would originally have thought would appeal to me. Upon reflection, I realize that what fundamentally motivates me as an employee is to support a team, facilitate growth, and be challenged by big, scary problems. My current role certainly supports all of that.
If you had told me four years ago, as I was ending my career as an astrophysicist, that I’d be quite delighted to be on the phone with a software company’s clients, talking them through a tricky software install, I quite literally would not have believed you. But when I look at my career now, I can’t imagine it having gone any other way.
On top of that, Eric taught me how to increase the number of Celery workers on a busy machine.
]]>