Skepticism

One of the goals of my career sabbatical is to spend time with new technology tools. To that end, I decided to start working with an autonomous coding agent, both to get hands-on experience and to have an actual project that demonstrates what I am learning (that project here: https://stachemail.co). I am late to the party, but like everyone else who has spent any meaningful amount of time with these tools, I quickly came to the realization that taste and judgement will matter more in the future than they have in the past. (Note: I believe these are really one skill, with judgement being a component of taste.) The taste and judgement people are completely right. However, the point of this post is to add a new word to the conversation around skills for the future: skepticism.

In the AI conversation, skepticism is a loaded word, but I would like to frame it here as a skill that is as essential to working with AI as taste and judgement. Skepticism is about thinking independently. It is about being blown away by what these AI tools can do while keeping in mind that, like everything else, communication, whether with humans or robots, is hard. Skepticism is remembering that AI agents are a service you are hiring to do a job. Just as with lawyers, barbers, and consultants, you have to be very precise about what you are asking them to do, you have to check their work, and you have to make sure that work accurately reflects what you are trying to accomplish. So, what does this look like? It is asking: Do I understand what is going on with this code? It is challenging the AI agent to explain its choices and continually peeling back the layers of the onion until you understand the code well enough to guide the agent effectively.