I'm writing a book on the human side of being a developer. Sign up below for weekly reflections on working better and happier and what it means to be human in the age of AI, plus occasional glimpses of my art. Find out more at beyondwritingcode.com.
Be clear about your goal, issue 26 of Beyond Writing Code
Beyond Writing Code #26
November 13, 2025
Be clear about your goal. I keep hearing that, but the lesson doesn't seem to stick.
Sometimes, we have to hear a lesson multiple times in order to get it to stick. No big deal. Not everything gets through on the first try, or even the second or third.
But have you ever encountered a lesson that you know you've been taught multiple times before... and you're pretty sure it isn't going to stick this time either?
At least, not unless you do something different from what you've been doing.
Sometimes you know the sticky notes aren't going to stick unless you also use a tack. Photo by Patrick Perkins on Unsplash
The other day I heard the message again: "be clear about your goal, and the other decisions become much easier."
Let's do something different. Let's actually try to find clarity.
Here's how I defined my goal.
The thesis statement
Let's start with my existing thesis statement.
In one of the first classes of Luvvie Ajayi Jones's Book Academy, Luvvie insisted that we each have a thesis statement for our book. It's a short description of what the book is about, with two parts:
What's the problem to be solved?
What solution does our book provide our readers?
Presto. Combine the two, and we've got our thesis.
Here's one of my early thesis statements:
As a developer who wants to grow in your career, go beyond writing code to developing leadership, collaboration, and big picture thinking skills.
Problem: you're a developer who wants to grow in your career, but you're stuck. Solution: read my book (of course) and develop these skills.
Nice and all, except that the more I write, the more I realize that isn't what my book is about.
This isn't a career growth manual, nor is it a soft skills training guide. Despite Luvvie's strong advice to not change our minds about it once we've written a good thesis statement, that's just not the book I'm writing.
But what is my book, then?
AI takes a turn at the title
Another way of looking at "what my book is about" is the title. Beyond Writing Code works as a name for my website. But as a book title, it says more about what the book isn't than what it is.
So I decided to have claude.ai suggest some book titles for me based on my blog posts and newsletter posts:
From a claude.ai session in September
I wasn't especially fond of any of the suggested titles, but I was intrigued that it pulled out kindness, empathy, and especially the human side of development.
Now we're getting somewhere. "Human Advantage" and "Humanity Matters" have been hanging around as drafty working titles.
Human side
Great, we know that we're talking about the "human side of development" now.
But you may remember that, when it came up in my newsletter post a few weeks ago, I wound up with more questions than answers.
I started by trying to finish this sentence: "I'm writing my book because..." and I landed here:
I’m writing my book because we need better ways of working in tech, and being human is the way there.
Great, and accurate, but:
What are "better ways of working"?
What do I mean by "being human"? Isn't it a given that human beings will be human, by definition?
Why is that key to better ways of working?
That said, everyone I talk to seems to react positively to this as a topic, especially as AI becomes a bigger and bigger part of our world. "Humanity matters" is a message people really want.
Let's take those questions one at a time...
What are better ways of working?
For years, researchers have been pointing the way, with ample evidence, to what makes a team high-performing and happy.
Sooner Safer Happier, for example, reviews patterns and antipatterns for getting "better value sooner safer happier" outcomes. There's a great outline of the principles from that book here.
And the 2025 DORA State of AI-assisted Software Development Report has really made it clear: AI is an amplifier. If you're already a high-performing team, AI will help you do better. If you're already struggling, AI can certainly enable you to do worse.
And DORA continues to offer additional evidence for the capabilities of high-performing teams. Those capabilities focus on a climate of learning, fast flow, and fast feedback - the "three ways" outlined years ago in The Phoenix Project.
All of these resources are pointing at not just more effective ways of working, but also more humane and sustainable ways.
What makes us human?
The irony of asking AI to tell me what humans do better than AI was too good for me to pass up.
Here's another AI chat snippet:
AI ponders what's human
The AI's list of what's human:
Intuition/Gut Feelings
Collaboration & Relationship Building
Empathy & Emotional Intelligence
Creativity & Imagination
Systems Thinking—specifically, distinguishing signal from noise in complex contexts, holding paradoxes, understanding second and third-order effects ("If we do X, then Y, which will cause Z..."), and understanding how things fit together
Judgment Under Ambiguity
Navigating Conflict & Politics
Holding Values & Ethics
Maintaining Context Over Time
Presence & Attention
Sense-Making - synthesis that creates meaning ("This is different from analysis - it’s about interpretation")
AI's conclusion:
What AI does well: Pattern matching on existing data, optimization for defined goals, rapid information processing, tireless execution of defined tasks
What humans do well: Everything listed above - the messy, embodied, relational, values-driven work of navigating complex sociotechnical systems
What does being human have to do with better ways of working?
One of the questions AI raised: what is the cost to developers of not reading my book?
I decided to come back to that question. But instead of answering it directly, I asked AI to survey the problems developers are reporting that my book might be able to address.
It listed the following:
Burnout and meaninglessness - Developers feeling like cogs, disconnected from impact, going through motions without understanding why their work matters
Organizational dysfunction - Feeling powerless in dysfunctional systems, seeing the same problems repeat, not knowing how to influence change when technical solutions aren't enough
Career plateaus - Hitting a ceiling where more technical skill doesn't help, but not knowing what other capacities to develop
Isolation and disconnection - Despite being on teams, feeling isolated, struggling with cross-functional work, not knowing how to build effective working relationships
Impostor syndrome and self-doubt - Especially around the "non-technical" aspects—feeling like they should just know how to navigate people and politics
AI anxiety - Right now especially, questioning their value and what makes them irreplaceable
The promotion problem - Getting promoted into leadership or senior IC roles and suddenly needing skills they've never developed or been told matter
"Does any of this resonate with what you're trying to address?" it asked me. "And I'm still curious—what's the cost if they don't learn this?"
We've nailed the cost right there: all of the above.
All of these are human problems, not technical problems. And they're all addressed in various ways by the stories from my own experience, as well as the lessons I've learned from reading.
Be clear on your goal
So, what's the goal of the book?
This still makes sense to me as my "why":
I’m writing my book because we need better ways of working in tech, and being human is the way there.
We've examined why we need "better ways of working": burnout, organizational dysfunction, career plateaus, isolation, etc.
We know what the better ways of working are: the patterns from Sooner Safer Happier, the DORA capabilities, the "three ways" from The Phoenix Project.
We've clarified what "being human" means in this context: "the messy, embodied, relational, values-driven work of navigating complex sociotechnical systems" as claude.ai summarized.
And I've got plenty of stories about that messy human work, ways in which embracing our full humanity has helped us as developers to do better, more humane work.
The goal, then:
Sharing how our uniquely human skills and strengths can help us work better and happier.
Uniquely human skills and strengths - where we excel, even in the age of AI
Sharing how they can help us - stories from my experience, learnings from my reading
Work better - really "better value sooner safer happier," but I think "better" says it all.
Work happier - even though "better" encompasses it, I repeat it here to emphasize that the top concerns I'm addressing are about alleviating human suffering: burnout, isolation, overwhelm, etc.
That's what I want to get into the hands of as many developers as I can.
Many thanks to Melissa M. Reeve, author of the forthcoming book Hyperadaptive, for asking me: "What's your goal in writing the book?" She knew all the other questions I had would be easier to answer once I was clear about my goal.
Drop me a note
I would love to hear from you. Hit reply and let me know what's on your mind. What's your goal? Have you used AI to refine it?
I was initially inspired to start consulting claude.ai to hammer out unfinished thoughts like this by leadership coach Mathew Sanders, in this post about using AI to discover your core values. From Mathew:
[AI] can help you articulate ideas you're struggling to express, offer perspectives you hadn't considered, and push your thinking in new directions.
This newsletter is approximately weekly. In addition, I post to my blog on my website, which also appears on Medium and Substack.