Having Kids

And while having kids may be warping my present judgement, it hasn't overwritten my memory. I remember perfectly well what life was like before. Well enough to miss some things a lot, like the ability to take off for some other country at a moment's notice. That was so great. Why did I never do that?

See what I did there? The fact is, most of the freedom I had before kids, I never used. I paid for it in loneliness, but I never used it.

With my son now approaching his first birthday, I can relate a lot to this piece from Paul Graham. Raising a child has been considerably harder than I anticipated. Don't get me wrong, it's also the most incredible and fulfilling experience, but it is relentless, and in the tougher moments it's easy to look back and think about the freedom you've since lost.

Except I never used it.

It's just easier to blame that fact on another part of your life than to own up to it. There's also no reason why that freedom has to be lost, and that's something my wife and I are trying to push back against. Sure, it's harder to travel with a one-year-old. Even more so with a dog. But it's far from impossible if that's what you really value.


Craig Mod: MacBook Neo and How the iPad Should Be

I agree with a lot of this. The iPad has for too long occupied this strange middle ground. The hardware has been extremely capable for years whilst the software has inexplicably lagged behind. This is now more noticeable with AI.

I've been tempted through the years to consider the iPad Pro as my primary machine. After all, the vast majority of my work only requires a browser; everything of note is a web app or would have an iOS app available. But now, a main device that cannot run Claude Code or Codex wouldn't really be an option. It would feel like having my hands tied behind my back.

The Neo looks to be a great machine. A desire for that kind of device is why I picked up a second-hand 12-inch MacBook last year. Small and capable, though without an M-series chip it was never going to be a long-term main machine.

I still wonder where the iPad fits into my routine. Not as capable for work as a MacBook. Not as good to read on as my Kindle. Not as immediately available as my iPhone.

As Craig finishes by saying, it'll be very interesting to see how John Ternus approaches this when he begins as Apple CEO in September. The iPad is clearly very successful and a popular device, but is an ever closer convergence between iOS and macOS the right approach?

It'll also be fascinating to see how rumoured devices like the OpenAI hardware Jony Ive is working on may disrupt this space. Does the future of computing look completely different in ten years' time?

What an incredible view captured by the crew of Artemis II. I wonder if this view is something that my son will get to experience. How will space travel change in the next fifty years?

Manipulation versus management, tools versus agents

I recently read an essay by Alan Kay from 1989 that originally featured in The Art of Human-Computer Interface Design, edited by Brenda Laurel. This was essentially before the internet, before smartphones, and long before any of the AI assistants we now use daily. And yet it describes the exact problem we’re still trying to solve.

Kay makes a distinction I haven’t seen articulated as clearly anywhere else. Humans have extended themselves in two ways throughout history.

First, through tools. Physical things we manipulate directly. A hammer, a keyboard. The feedback is immediate. You hit a nail and it moves.

The second way is through management. Convincing other entities to work toward our goals. Other people, historically. But increasingly, software that acts on our behalf. What Kay calls agents.

The interface challenge for these two categories is completely different. With tools, the question is how efficiently can I manipulate this? With agents, the question is how do I know if I can trust this to complete the task I set?
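The contrast between those two questions can be made concrete in code. This is only an illustrative sketch, with every name invented for the example, not drawn from Kay's essay: a tool is a direct call whose effect comes back immediately with the return value, while an agent accepts a goal and has to surface its reasoning so the caller can decide whether to trust the output.

```python
from dataclasses import dataclass

# A tool: direct manipulation with immediate feedback.
# You call it, it acts, and the effect is visible in the return value.
def hammer(nail_depth_mm: float, strike_force: float) -> float:
    """Drive a nail; each strike's effect is observed at once."""
    return nail_depth_mm + strike_force * 0.5

@dataclass
class AgentResult:
    output: str       # what the agent produced
    explanation: str  # why it did what it did -- the basis for trust

# An agent: delegation toward a goal, judged on trust rather than manipulation.
def draft_email_agent(goal: str) -> AgentResult:
    """Accept a goal and return both a result and a legible account of the reasoning."""
    draft = f"Dear colleague, regarding: {goal} ..."
    reasoning = f"Chose a formal register because the goal '{goal}' gives no prior context."
    return AgentResult(output=draft, explanation=reasoning)

result = draft_email_agent("schedule a project review")
# The caller calibrates trust by reading the explanation before accepting the output.
print(result.explanation)
```

The asymmetry is the point: the tool needs no explanation because feedback is instant, whereas the agent's explanation is the only handle the caller has on whether to accept its work.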

This is something I’ve been thinking about as I’ve played with various AI products. The onboarding for poke.com felt immediately familiar. Within minutes it felt like it “knew” me. But after a week the novelty wore off. Sure, it drafted emails, but I always felt the need to adjust them before sending.

This is essentially the gap Kay identified. We’re trying to apply tool-based expectations to something that requires a completely different interaction pattern.

Kay wrote that the thing we most want to know about an agent is not how powerful it is, but how trustable it is. The agent must explain itself well enough so that we have confidence it’s working for us rather than as what he calls an escaped genie.

He predicted agent development would move in two directions. First, expanding into domains where mistakes don’t matter much. Where undo is easy. These would move fast. The second direction would move slowly. Domains where undo is hard or impossible. Where mistakes affect real relationships or irreversible decisions.

Looking at where AI has actually expanded, this prediction holds remarkably well. Code completion moved fast. Autonomous decisions in healthcare or finance remain constrained. The pattern isn’t about technical capability. It’s about reversibility, confidence and trust. Not in the technical abilities, but in the agent itself.

What strikes me most is his claim about explanation. Kay argued that well-done explanation will be needed regardless of how the agent is instructed. The interface challenge isn’t about making AI more conversational. It’s about making the reasoning legible enough to calibrate trust. When I ask an AI to draft something and then need to adjust it before sending, that gap represents a trust calibration failure. The AI was confident. I wasn’t. And I couldn’t easily understand why our judgments differed.

The hardest part to accept is that this might not be primarily a technical problem. Tool-based interfaces can be evaluated through direct feedback. Agent-based interfaces require something closer to the trust calibration we use with human colleagues. But with humans, we have shared context. We have social structures that create accountability. We build trust through repeated interactions where we observe judgment against outcomes.

None of these mechanisms exist for AI agents. The conversational interface creates an illusion of familiarity, but the underlying trust architecture is still largely missing.

Kay saw this clearly in 1989. We’re still figuring it out. But we’ll get there.

Wales stands at a critical moment as our economy continues to degrade the natural resources that underpin our health, security, and prosperity. Wales is one of the most nature depleted countries in the world, with almost 1 out of 5 species at risk of extinction.

Wales’ consumption levels far exceed sustainable limits. If replicated globally, our resource use would require more than two Earths, demonstrating the amount of natural resources we import, all accompanied by impacts on other countries.

— State of Natural Resources Report 2025

The State of Natural Resources Report 2025 paints a very bleak picture. All too often we look elsewhere, at what other countries are doing or at the United Kingdom as a whole. But here in Wales we’re at a critical point. Our history, particularly agriculture, which accounts for 90% of Welsh land, isn’t easily compatible with achieving climate goals. Recent changes to legislation, including the Sustainable Farming Scheme, have real potential, though the scheme is still not perfect and is still struggling to balance the agricultural industry and its people with the broader needs of our country.

Reading this report is making me really think about my own impact. We’ve all got to do more to protect nature. I need to find out what that is, though. There’s a lot to think about here.