After twelve months away from it, I've returned to Whoop. I got a lot of value out of it when I first used it, but when my son was born I knew training would have to take a back seat for a little while. We're now in a position where regular training is happening, goals are being reset and my craving for more data is increasing.

Of course, 24 hours after re-subscribing, Google announced the Fitbit Air. It looks great, if perhaps a little basic in terms of sensors and data points. I'm looking forward to seeing what the early reviews make of it, because the price point is otherwise excellent. A cheap, screen-free data-capture device that I can apply my own analysis to is exactly what I'd opt for.

Switching to Fastmail

I've used Google Workspace for about 15 years now, and it's been great. The Google ecosystem is comfortable and works well. But more recently — and AI may well be the trigger here — I've wanted to have more ownership of my own data. To know where it is and what it is being used for.

To this end, I've started to build a personal database. I'll share more on that another time, but I'm opting to capture books, films, wines and coffee beans in there rather than use and maintain four different social services that are all essentially just advertising platforms. AI can now answer the "based on my profile, what would I enjoy next?" question, so I can trim away the rest.
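To give a sense of the scale involved, a single table is enough for all of it. Here's a rough sketch of the shape, using SQLite; the table and column names are illustrative rather than my actual schema:

```python
import sqlite3

# Illustrative sketch only -- not the real schema. One table holds
# everything I'd otherwise scatter across four social services.
conn = sqlite3.connect("personal.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS items (
        id        INTEGER PRIMARY KEY,
        kind      TEXT NOT NULL,      -- 'book', 'film', 'wine' or 'coffee'
        title     TEXT NOT NULL,
        rating    INTEGER,            -- personal score out of 10
        notes     TEXT,
        logged_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.execute(
    "INSERT INTO items (kind, title, rating, notes) VALUES (?, ?, ?, ?)",
    ("coffee", "Yirgacheffe, natural process", 8, "bright; would reorder"),
)
conn.commit()
conn.close()
```

The exact shape matters far less than the fact that it's a single file I own, which I can hand to whatever model I like when I want a recommendation.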

It's a common saying that if you're not paying for the product, then you and your data are the product. So much of the Google ecosystem is shared, analysed, surfaced and optimised around keeping you inside it, rather than simply providing a requested service.

Email felt like the obvious place to start. Not because I think it's the most at risk, but because it sits underneath so much else. It’s a 15-year archive of messages, receipts, logins, family admin, travel plans, account recovery. A boring utility, until you stop and realise how much of your life passes through it.

I’ve had my own domain for years, so my email address itself isn’t changing. That makes this a much lower-risk move than it would otherwise be. I don't need to ask anyone to update contact details (my parents still try and email my university address!) and I’m not breaking old accounts. I’m just moving the plumbing from Google Workspace to Fastmail.

In theory, this is exactly the kind of internet I prefer. Open standards. A paid service with a clear business model. IMAP, SMTP, custom domains, boring reliability. Less “ecosystem”, more utility.
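That utility is easy to demonstrate. Here's a minimal sketch of talking to the mailbox over plain IMAP, assuming Fastmail's documented imap.fastmail.com host and an app-specific password (check their current docs before relying on either):

```python
import imaplib

# Minimal IMAP sketch -- the hostname and credentials here are
# assumptions, so verify them against Fastmail's own documentation.
with imaplib.IMAP4_SSL("imap.fastmail.com") as conn:
    conn.login("me@example.com", "app-specific-password")
    conn.select("INBOX", readonly=True)  # read-only: no flags get set
    status, data = conn.search(None, "UNSEEN")
    print(f"Unread messages: {len(data[0].split())}")
```

A few lines of standard library, no vendor SDK, and if I ever sour on Fastmail the same script points at any other IMAP host.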

There’s something quite appealing about that. So let’s see how long this phase lasts.

Having Kids

"And while having kids may be warping my present judgement, it hasn't overwritten my memory. I remember perfectly well what life was like before. Well enough to miss some things a lot, like the ability to take off for some other country at a moment's notice. That was so great. Why did I never do that?

"See what I did there? The fact is, most of the freedom I had before kids, I never used. I paid for it in loneliness, but I never used it."

With my son now approaching his first birthday, I can relate a lot to this piece from Paul Graham. Raising a child has been considerably harder than I anticipated. Don't get me wrong, it's also the most incredible and fulfilling experience, but it is relentless, and in the tougher moments it's easy to look back and think about the freedom you've lost.

Except I never used it.

It's just easier to blame that on another part of your life than to own up to it. There's also no reason why that freedom has to be lost, and that's something my wife and I are actively trying to fight. Sure, it's harder to travel with a one-year-old. Even more so with a dog. But it's far from impossible if that's what you really value.

Craig Mod: MacBook Neo and How the iPad Should Be

I agree with a lot of this. The iPad has occupied a strange middle ground for too long. The hardware has been extremely capable for years whilst the software has inexplicably lagged behind, and the gap is only more noticeable with AI.

I've been tempted over the years to consider the iPad Pro as my primary machine. After all, the vast majority of my work only requires a browser; everything of note is a web app or has an iOS app available. But now, a main device that cannot run Claude Code or Codex wouldn't really be an option. It would feel like having my hands tied behind my back.

The Neo looks to be a great machine. A desire for that kind of device is why I picked up a second-hand 12-inch MacBook last year. Small and capable, though without an M-series chip it was never going to be a long-term main machine.

I still wonder where the iPad fits into my routine. Not as capable for work as a MacBook. Not as good to read on as my Kindle. Not as immediately available as my iPhone.

As Craig finishes by saying, it'll be very interesting to see how John Ternus approaches this when he begins as Apple CEO in September. The iPad is clearly a very successful and popular device, but is an ever closer convergence between iOS and macOS the right approach?

It'll also be fascinating to see how rumoured devices like the OpenAI hardware Jony Ive is working on may disrupt this space. Does the future of computing look completely different in ten years' time?

Manipulation versus management, tools versus agents

I recently read an essay by Alan Kay from 1989 that originally featured in The Art of Human-Computer Interface Design, edited by Brenda Laurel. This was essentially before the web, before smartphones, and long before any of the AI assistants we now use daily. And yet it describes the exact problem we’re still trying to solve.

Kay makes a distinction I haven’t seen articulated as clearly anywhere else. Humans have extended themselves in two ways throughout history.

First, through tools. Physical things we manipulate directly. A hammer, a keyboard. The feedback is immediate. You hit a nail and it moves.

The second way is through management. Convincing other entities to work toward our goals. Other people, historically. But increasingly, software that acts on our behalf. What Kay calls agents.

The interface challenge for these two categories is completely different. With tools, the question is how efficiently can I manipulate this? With agents, the question is how do I know if I can trust this to complete the task I set?

This is something I’ve been thinking about as I’ve played with various AI products. The onboarding for poke.com felt immediately familiar. Within minutes it felt like it “knew” me. But after a week the novelty wore off. Sure, it drafted emails, but I always felt the need to adjust them before sending.

This is essentially the gap Kay identified. We’re trying to apply tool-based expectations to something that requires a completely different interaction pattern.

Kay wrote that the thing we most want to know about an agent is not how powerful it is, but how trustable it is. The agent must explain itself well enough so that we have confidence it’s working for us rather than as what he calls an escaped genie.

He predicted agent development would move in two directions. First, expanding into domains where mistakes don’t matter much. Where undo is easy. These would move fast. The second direction would move slowly. Domains where undo is hard or impossible. Where mistakes affect real relationships or irreversible decisions.

Looking at where AI has actually expanded, this prediction holds remarkably well. Code completion moved fast. Autonomous decisions in healthcare or finance remain constrained. The pattern isn’t about technical capability. It’s about reversibility and trust. Not trust in the model’s raw abilities, but in the agent itself.

What strikes me most is his claim about explanation. Kay argued that well-done explanation will be needed regardless of how the agent is instructed. The interface challenge isn’t about making AI more conversational. It’s about making the reasoning legible enough to calibrate trust. When I ask an AI to draft something and then need to adjust it before sending, that gap represents a trust calibration failure. The AI was confident. I wasn’t. And I couldn’t easily understand why our judgments differed.

The hardest part to accept is that this might not be primarily a technical problem. Tool-based interfaces can be evaluated through direct feedback. Agent-based interfaces require something closer to the trust calibration we use with human colleagues. But with humans, we have shared context. We have social structures that create accountability. We build trust through repeated interactions where we observe judgment against outcomes.

None of these mechanisms exist for AI agents. The conversational interface creates an illusion of familiarity, but the underlying trust architecture is still largely missing.

Kay saw this clearly in 1989. We’re still figuring it out. But we’ll get there.