Prologue

When I was in college, one of my poetry professors gave our class an assignment.
"Write," he said, "a bad poem."

You'd think this was a simple enough assignment, but we were college students, and English majors at that. Our egos were strong. Each attempt at writing a bad poem failed. We'd dash off deliberately clumsy lines, force awkward rhymes, pile on clichés. But then we'd read them back in class, and someone would find something in them–an accidentally evocative image, an unintended rhythm that worked, a throwaway line that landed with unexpected weight. Even our most careless words seemed to contain, on later readings, some glimmer of insight or meaning. Not because they were brilliantly written, but because they had the potential to mean something–if not to the writer, then to someone, somewhere, in some context we hadn't imagined.

And that, of course, was our professor's point. As aspiring poets, we constantly gnashed our teeth and wiped our brows, worried that what we had written wasn't good enough. And yet, as our professor's assignment demonstrated, the very fact that we were trying to write well–that we cared–meant we were heading in the right direction. We might never write something that we ourselves thought was "good enough," but maybe that's okay. It is the pursuit of that goal that matters.

This idea–that quality emerges from care–is one of the core themes of Robert Pirsig's 1974 book, Zen and the Art of Motorcycle Maintenance.

What few people realize is that Pirsig wasn't just a philosopher contemplating quality in the abstract. While he was writing Zen and the Art of Motorcycle Maintenance, he was working as a technical writer at Honeywell, writing computer manuals. He would wake up early in the morning in his apartment above a shoe store in south Minneapolis and work on his philosophical exploration of Quality, then go to his day job and write documentation. For four years, he lived in both worlds simultaneously.

I've spent over 25 years as a technical writer in various forms, and I've thought a lot about Pirsig's dual life. Because here's the thing: poetry and philosophy can rest on care alone. You can write a poem, put your whole heart into it, and whether or not anyone calls it "good," you've created something meaningful. The care itself is enough.

Technical documentation doesn't have that luxury.

As technical writers, it is our job to care. We care about our customers, who depend on our words to build useful things. We care about the engineers we work with, who toil endlessly to build systems that are scalable, performant, and useful. But caring isn't sufficient. In engineering, anything that can't be measured is suspected of being unnecessary. You can't simply look at documentation and say, "These are good docs." You need data. You need metrics. You need a framework.

And yet, in my many years of technical writing, I've never found a satisfactory way of defining what quality means for documentation. We know it when we see it. We recognize bad docs immediately and good docs eventually. But we struggle to articulate what makes the difference.

I'm convinced that quality absolutely applies to technical writing–we've just lacked a working framework to define it. I wrote this book not as an attempt to create an authoritative standard for quality in technical writing. Instead, I hope to start a conversation among technical writers, engineers, users–anyone who engages with technical documentation. My hope is that the ideas here either support your own thinking or give you something to push against.

Because the pursuit of quality in technical writing matters. And it starts, as Pirsig understood, by caring enough to try.

How to Read This Book

This section is being written for v0.6.

In the meantime: this book is currently at version 0.5, which means the manuscript is complete but not yet polished. The bulk of the content is here, but I'm still making changes to the structure. There's still some mechanical cleanup to do; it doesn't make sense to edit a section heavily if I might end up deleting it.

Still, if you'd like to read it, please do! I welcome your thoughts and insights. Enjoy the bit about my parents' "soirees", or jump to Chapter 5 if you want to get into my six characteristics of quality.

Found a typo (I'm not surprised) or have a suggestion (I'm grateful)? The book is developed in the open. File an issue at github.com/aikithoughts/zen-technical-writing/issues.

Chapter 1: The dinner party

In a sleepy gated community on the outskirts of Augusta, Georgia, the event of the season is about to take place.

It's a semi-occasional party that my parents throw. They call these parties soirees, and they hold them whenever a particular mood strikes. Both of my parents are talented musicians. My father, a renowned psychiatrist, has played piano for decades, while my stepmother is a vocalist and skilled pianist in her own right. These soirees are often opportunities for my parents to share their considerable skills with select friends and family. The backyard is converted into an outdoor amphitheater. Special guests are invited to perform. The evening usually involves some very talented performances, accompanied by a reasonably polite amount of liquor.

They're a good time.

I've only attended one of these soirees. The fact that they occur in Augusta while my family and I live in Seattle means that a weekend jaunt is all but out of the question. However, every now and again the stars align and fortune smiles. I found a good deal on airfare for that particular weekend, so I was able to attend. Even better: my brother, a talented jazz saxophonist, was going to perform at the soiree as well. And so the weekend was shaping up to be a fun, informal family gathering.

The night of the soiree arrived and the house was buzzing with activity. The backyard was set up with at least a hundred chairs—no easy feat considering that most of the yard is taken up by a swimming pool. (More than once I wondered if one of the attendees would fall in at some point during the evening. No one did.) Sound checks were done, programs were printed. And, as the one member of the family who does not play an instrument, I found myself with the all-important job of vehicle logistics. Okay, okay—I stood out front and told people where to park.

It didn't take long until all the guests had arrived and the first round of libations had been consumed. I sat and enjoyed listening to the first set of performances—but still kept an eye on folks as they walked perilously close to the edge of the pool.

During an intermission, I wandered from the backyard into the house, where a number of guests were getting drinks and having conversations. I smiled and nodded as I walked by–my mind admittedly on some lemon tarts I had seen on a tray somewhere in the kitchen. As I passed, I did what I usually did–I tried to guess how these people knew my parents. Some were no doubt parents of children to whom my stepmom taught piano. Others were likely doctors and professors who knew my dad professionally. Still others were acquaintances from their country club. I was wrong in most of my guesses, but it was fun to try.

As I maneuvered myself closer and closer to the kitchen, I soon spotted the lemon tarts I was looking for. But before I could reach them, a woman's voice stopped me.

"Excuse me," the voice said. "But are you related to Dr. Shevitz?"

I realize that you're reading these words rather than standing in that kitchen, so you can be forgiven for not seeing how ridiculous the question was. You see, I look nearly identical to a younger version of my dad. We have the same nose, the same smile, the same relative build. If you were to see a picture of my dad and me, side by side, you might wonder if someone had built a time machine. So when someone asks me if I'm related to my dad, it's a little like asking the sun if it belongs in the sky.

Nonetheless, I suppose it would be rude to just assume that I was related in some way. There's a chance, after all, that some quirk of universal genetics is at work. So I smiled and turned to put a face to the voice. It belonged to a woman just a little older than me. She was standing with a small group of men and women, who also turned to look at me.

"Why yes," I responded. "I'm Dave. I'm Dr. Shevitz's son."

"Oh!" The woman exclaimed. "You must be his OTHER son. Your brother is the one who plays the saxophone."

This was not the first time I'd participated in a conversation like this. So I kept my smile in place. "That's right," I replied. There wasn't a need to add any more than that.

One of the men spoke up. "You have a talented family!" he said. "Your father is great at the piano. And Lynda"—my stepmom—"has such a powerful voice! And your brother is just amazing as well."

I'm happy to accept compliments about the talents of my family–I really am. So I was genuine when I said: "I couldn't agree more."

"Tell me," the man continued, "do you play an instrument?"

I've gotten this question quite often over the years as well. I had my answer ready. "Not well enough for an event like this, I'm afraid." Unless a soiree was going to focus on the rhythm chords of AC/DC's "You Shook Me All Night Long," I wasn't going to be performing for this crowd any time soon.

The faces of this little group shifted into one of those expressions intended to convey sympathetic pity. "Oh," one of them said. There was a lengthy pause.

"So, what do YOU do?"

I have always found this question an interesting one to answer, especially because I've spent decades working for technology companies. Companies like Microsoft, Amazon, and Google have powerful roles in all of our lives—yet most people don't really know how these companies work. The easy answers—"I work for Amazon" or "I work in high tech"—usually imply that you're a software engineer.

There's nothing wrong with being a software engineer, but there are so many "tech adjacent" jobs that are essential for these companies to operate. Product Managers, Developer Advocates, Technical Writers—these roles do a lot to ensure that a given product release delights our intended customers. I'm always drawn toward advocating for these different roles, toward raising awareness of them in the minds of the public. Not only because I'm proud of what we do, but because it opens the door for other people who see the world of these technology companies and want to find a way to be a part of it. Not everyone wants to be an engineer, and that's okay.

Of course, the folks in my family's kitchen weren't interested in a lecture. They were asking a polite question. I've tried several "elevator pitches" to convey what I do accurately, but so far they have all fallen short. "I write documentation" is too vague. "I explain how code works to developers" is better, but still not enough.

Standing there in that kitchen, surrounded by the sounds of my brother's saxophone drifting in from the backyard, I realized I was about to attempt yet another explanation that would probably fall flat. But I had to try.

"I'm a technical writer," I said.

The blank stares began almost immediately.

"Like instruction manuals?" someone asked.

"Not exactly," I replied, already feeling the familiar frustration building. "I write for software developers. I help them understand how to use the tools and services that other developers build."

More blank stares. I could see them trying to process this—developers helping other developers? Isn't that redundant?

I could tell I was running out of time. So I went to my default: "You know how, when you buy a new TV or microwave, it comes with a manual? A manual you never read?" That got a few smiles. "Well, it turns out developers also don't like to read. And I write manuals for them."

Everyone chuckled and the conversation moved on, their temporary curiosity satisfied.

Over the past few years, I've found that this humorous answer gnaws at me. Why don't developers like to read? It's a running joke among my technical writer friends: "We write things no one wants to read." In fact, when I mentor new writers, I often tell them:

"Remember, always assume that your audience is angry. They're angry because they couldn't figure out how to write the code they need to write, and now they have to read the documentation."

I still think this is good advice, but how did we get to this point?

As I've thought about this question over the years, I keep coming back to one idea: quality. We've spent so long not caring about content quality, treating technical docs like a box we need to check before shipping a new release. We don't focus on writing well, on maintaining existing documentation, on building thoughtful experiences. These shortcomings don't come from any malicious intent—even the largest companies care a lot about customer experiences. Instead, documentation is often a trade-off as teams put more focus on the things they can measure: deployment frequencies, shipped features, and so on.

This deprioritization of quality documentation is one reason that I both love and fear the introduction of AI tools. I love these tools because, when used correctly, they can help everyone create content faster. And I fear these tools because they are only as good as the data they're trained on. And we've had decades of bad documentation.

We need to think more about the quality of our technical writing. Like Phaedrus in Zen and the Art of Motorcycle Maintenance, we need to care about doing the work well, not just getting it done. Pirsig wrote about quality as something that can't be defined but is immediately recognizable—you know good work when you see it, whether it's a perfectly tuned engine or a piece of writing that actually helps someone solve their problem.

In technical writing, quality isn't just about correct grammar or following style guides. It's about understanding your audience so deeply that you can anticipate their questions. It's about organizing information in a way that matches how people actually work. It's about caring enough to test your instructions, to revise when something doesn't work, to maintain content as systems evolve.

Quality technical writing doesn't just inform—it empowers. It transforms frustration into confidence, confusion into clarity. When we commit to quality in our technical writing, we stop writing things people don't want to read and start writing things they're grateful to find.

While we may not be able to define quality in technical writing with a single sentence, we can identify its characteristics. We can recognize the patterns that separate documentation that helps from documentation that frustrates. We can name the qualities that make some technical writing indispensable and other technical writing ignored.

That's what this book is about: identifying quality's characteristics and learning how to achieve them.

Chapter 2: My journey

I never thought I'd become a technical writer. In fact, I didn't even know the job existed when I graduated college. But my path to technical writing—and my passion for advocating for content quality—started with a green pen.

One day, I was at a friend's apartment—a common gathering place for a whole host of different people—when another friend of theirs arrived with a stack of papers and a green pen. She sat down and began editing. I probably would have ignored it if I hadn't noticed the unusual color choice.

"Why are you using a green pen?" I asked.

"I'm editing," she explained. "And I don't want the engineers who wrote this to feel like they're back in high school. A red pen makes them feel like they just got a final exam back with the words 'See me after class' written on it."

"What are you editing?"

"I'm a technical writer," she said. "I'm editing some how-to topics."

Our conversation continued as I peppered her with questions about being a technical writer. What did she do all day? How did she learn about the technical topics she wrote about? What was it like working with engineers? By the time I went home that day, I knew I had found what I wanted to do as a career.

The Foundation

Looking back, I had two things going for me—one obvious and one not so obvious. The obvious thing was that I loved to write, and the University of Washington had a subcategory of its English degree that it called "English with a Writing Emphasis." That spoke to me. Basically, it meant that I had to focus on two writing styles; I opted for poetry and expository writing. I was even known for writing my essays in poetry form, because I found it amusing.

The non-obvious advantage? All of my friends were computer science majors. I didn't realize this was valuable at the time, but it taught me that code wasn't anything magical. It was challenging, sure, and required training and practice to do well. But my friends who were computer science majors weren't geniuses—and I mean no offense by this. Coding, programming, software engineering—these are hard things, but they're things you can learn. I just hadn't spent the time they had learning them.

When I graduated college, though, I had a shiny English degree and no idea what I wanted to do with it. I fell into working in computer sales, which I was terrible at. I didn't like making people buy things. But I was pretty good at explaining how things worked, and that seemed valuable. Still, I had no idea what I actually wanted to do.

After that green pen conversation, I went back to UW and took their Certificate in Technical Writing course. I lucked out a little—this was the 1990s, and tech companies were hiring all over the place. I ended up landing a job as a technical writer at a very small company.

Learning the Craft

That first job lasted until they shut down. Then I went to another small company, where I documented wireless access points and point-of-sale devices. Eventually I wanted to do more, so I found myself at a startup. That's where I cut my teeth on a whole lot of things. A few experiences stand out.

The startup had very little money. They couldn't even buy a copy of Word. So I learned about open source and XML and how to write transforms so that I could create PDFs and webhelp. I still have the book on XSL-FO that I used. I don't want to ever read it again, though.

I needed to learn more about how to write code, so one of the founders of the company handed me a book on Perl. I think it was "Perl for Dummies," but I don't remember. What I will never forget is that he wasn't gatekeeping his knowledge. He wanted me to understand how to write code—his goal was to help me learn how to create.

I also learned how to lead projects and think about documentation strategically, not just as isolated topics but as part of a larger system that needed to work together.

Unfortunately, the startup didn't quite make it. My next stop was F5 Networks.

Learning to Push Back

At F5, I owned the documentation for the Global Traffic Manager. I learned a lot here too, particularly how to really engage with subject matter experts. I'll never forget handing some new content over to a very senior engineer. In those days, we still printed everything out! A day or so later, I got his feedback back. Across two pages he had drawn a red line and written a single word: "No."

I remember being scared out of my mind—this was a pretty senior member of the team—but I had to go back to him and say, "That's not good enough feedback. You wouldn't accept a bug report that just said: 'this is broken.' Help me understand where I went wrong."

And you know what? He did. He just wasn't great at communicating his thoughts through writing. This taught me an important lesson that would serve me throughout my career: being an expert in one area doesn't automatically make you an expert in another. Engineers need technical writers just as much as technical writers need engineers.

I also learned how important it was to meet with users and get a sense of what they were struggling with and what they needed to know. You can't be a good technical writer or content creator and not engage with your users—and I stand by that statement.

Working with an Editor

When it was time to grow my career again, I found myself at Microsoft. And that was transformative.

At Microsoft I learned a lot, but probably the most important thing I learned was how to work with an editor. When I joined Microsoft, I was part of a very small team working on Windows Live: me, my manager, a couple of contract writers, and an editor. It was working with that editor that changed me the most.

At the time, the writing process was as follows: I would write the content and get subject matter expert approval. Then I would hand the content to the editor, who would edit for clarity, consistency, and all the other things that make writing good, and then publish it. Rarely would he come back to me with questions. But that meant I wasn't going to learn about the flaws in my writing.

So I set up a weekly meeting with him so he could share where I could improve. Those meetings were invaluable. I learned a lot about how to consider localization, how to think about the broader content ecosystem, and so on. After some time, I remember him telling me: "I save your work for the end of my day. It's like dessert, because you've already written things so well."

To this day, that's one of the highest compliments I've ever received in my career.

Understanding User Journeys

Amazon was another evolutionary leap. When I was first there—10+ years ago—cloud computing was new. I joined as a curriculum developer, which is another tech adjacent role. My job was to create courseware on Architecture on AWS. This is where I got firsthand experience that users don't use products in isolation.

The course covered how to set up an AWS account, how to create EC2 instances, how to connect those instances to RDS or DynamoDB, how to set up load balancing—all of these different products and services that had to work together. In fact, they really didn't do much all by themselves, when you think about it! I also learned the importance of exploring on your own. I would build out my own networks, try to set up a NAT server, and so on. To me, cloud computing was an amazing playground. And if you messed something up, you could just delete it and start over. Which I had to do a lot.

There's another piece that's important. At Amazon, I honed my belief that you don't need to be an expert to be a great communicator. And, in fact, being an expert can actually be a hindrance. I remember getting into a discussion with a colleague about this.

"We need experts," he argued.

"No, we don't," I said. "We need people who are knowledgeable enough that they can write about what we want to share. But if we become experts, we cease to become user advocates. We forget what it's like to be the novice. We have to be aware of that risk, all the time."

I'm not sure I convinced him, but I convinced me.

Learning Strategy

When I got to Google, I again learned so much.

One of my first experiences was when I was writing about a SQL dialect that Google used for several of its services. I remember being in a meeting with the project lead and telling that lead that the way he wanted to talk about a feature wasn't going to resonate with our users. After the meeting, a fellow writer took me aside.

"Do you know who that is?" they asked.

I was new to the company, so of course I didn't know. "Not a clue," I replied.

"He's a distinguished engineer at Google!" they replied in hushed tones. If you're not aware, distinguished engineer is one of the highest, if not THE highest, levels you can achieve.

"And you know what that means?" I said. "It means he's not a technical writer. He needs me to be knowledgeable about what I do, so he can focus on what he does. Just because you're a brilliant engineer doesn't mean you're a great writer. And just because I am a writer doesn't mean I don't know what I'm talking about."

Another major experience at Google was when I was the lead writer for Angular. That was when I needed to learn about strategy. The Angular docs were, at that time, an untended garden. It had probably been nice at one point, but it was now overgrown and difficult to sift through. I had to advocate for focusing more on strategy—cleaning up outdated docs, deleting unneeded content—instead of constantly chasing new features. I had to learn, again, how to deal with senior leadership who thought their programming expertise automatically made them writing experts. And I needed to do this as a collaborator, not as an adversary.

I also learned how to develop systems for maintaining content. I put together programs to help localization efforts, for example. And I developed policies for deprecating content.

The Quest for Quality

There's one more thing about my Google experience that I need to share. It was at Google that I got set on my quest to define and champion content quality. But it wasn't a great experience.

I had gone up for promotion one year, and I was declined. That isn't unusual. But the reason they declined really bothered me. I was told that they were concerned about my writing quality.

Imagine how that feels, as a writer, to be told your writing isn't any good. I was absolutely devastated and more than a little angry. It took me months to get myself back together. When I finally did, I started asking: "What do we mean by content quality?" It turns out, as in Zen and the Art of Motorcycle Maintenance, folks knew quality when they saw it, but couldn't articulate what quality writing was. So I started my own journey to find—at least for me—what quality writing was and how you could define it.

It's funny, but since that moment, my time at Google, and then later at Stripe, at a startup, and then back at Amazon—where I am now—has all been about answering a question: "How do we define quality content, and how do we ensure that content helps our customers succeed?"

That green pen conversation started me on a career path I never expected. But that promotion feedback at Google started me on something more important: a quest to understand what makes content truly valuable. This book is the culmination of that quest—an attempt to give concrete shape to something we all recognize but struggle to define.

Chapter 3: The Types of Technical Writers

When I tell people I work for Amazon, I usually get one of two responses. Either they assume I work in a warehouse, or they assume I'm a software engineer. Both assumptions reveal how little most people understand about how technology companies actually operate.

The reality is that companies like Amazon, Google, Microsoft, and Stripe employ thousands of people who aren't software engineers. And even within software engineering itself, there's incredible diversity. Frontend engineers build user interfaces and focus on how customers interact with applications. Backend engineers develop the server-side systems that power those applications. DevOps engineers build the infrastructure that keeps everything running. Security engineers build protections against threats. Data engineers build systems for processing and analyzing information.

But the misconception goes deeper than just the variety within engineering. Most people don't realize that technology companies need entire ecosystems of non-engineering roles to create products that customers actually want to use.

Everyone is a Builder

Andy Jassy, the CEO of Amazon, has a perspective I love: he views everyone at Amazon as a builder. This thinking transforms how you understand what it means to work in technology. Everyone at Amazon—and really, at any technology company—is a builder. The question isn't whether you're technical enough to work in tech. The question is: what do you like to build with?

Some folks build with software. We call them, unsurprisingly, software builders. They write code, design systems, and create the functionality that powers digital products.

Some build projects. Program managers and product managers identify customer needs and map out how to create services that meet those needs. They build roadmaps, coordinate teams, and ensure that technical capabilities align with business goals and user requirements.

Some build enthusiasm. Developer advocates and community managers share experiments and insights with different communities. They build relationships, create educational content, and help other developers succeed with specific technologies or platforms.

Some build experiences. UX designers and researchers build user interfaces and workflows that make complex technology accessible and useful. They build empathy for users into product development processes.

Some build understanding. Technical writers build with words. We take complex systems, processes, and concepts and build bridges between what engineers create and what users need to accomplish their goals. Documentation, after all, is code compiled by the human brain.

Let's focus on this last category—those of us who build with words—because it turns out there's remarkable diversity within technical writing itself.

The Spectrum of Technical Writing

That diversity shows up in what builders with words actually create. There's a temptation—even among technical writers—to think that one type of technical writer is better or more important than another. That's not true. Each specialization requires different skills and serves different but equally valuable purposes.

Technical Writers build understanding for people trying to accomplish specific tasks with products or services. At Stripe, for example, most technical writers focused on explaining how to integrate applications with payment systems—many of those solutions involved no code whatsoever, but required understanding complex business processes, compliance requirements, and system configurations. The audience might be developers writing integration code, but it could equally be business users setting up payment flows or network engineers configuring server access rules.

Developer-Focused Technical Writers build understanding specifically for software developers who need to integrate APIs, use frameworks, implement services, or troubleshoot technical issues. They create API documentation, SDK guides, integration tutorials, and architectural explanations. This writing assumes programming knowledge and focuses on helping developers implement technical solutions efficiently. One of the most important skills for these writers is the ability to talk effectively with software engineers who serve as subject matter experts for what they're documenting. Having a solid understanding of how software engineering works is essential for success in this specialization.

UX Writers build understanding directly into products. They craft button labels, error messages, onboarding flows, and interface copy that guides users through complex workflows. These writers often work with what's called microcopy—think about the UI experience on something as small as your phone, where there's very little room for words. UX Writers specialize in creating delightful customer experiences using very few words. If a regular technical writer is a novelist, then a UX Writer is a poet—specifically, one who specializes in haiku.

Content Strategists build coherent content ecosystems. This is a somewhat new role, or one that is still evolving. Content strategists subdivide in ways similar to technical writers in general. A "general" content strategist focuses on sharing messages and stories to communities—thinking holistically about how marketing materials, product messaging, and communications work together to serve business goals.

There's another type of content strategist that I think of as a technical content strategist. These strategists not only think about stories, but they think about how to support those stories throughout a customer's journey. For example, in my role as a Principal Content Strategist at Amazon, my responsibility is not only to identify and tell specific stories to our customers; it's also to ensure those stories are supported by training and documentation. In some cases, that means partnering with training and documentation teams. In other cases, I'm working to develop solutions to help product teams—who may need to create their own documentation—create quality content.

As for what quality content is, I promise I'll talk about that shortly.

Why This Matters

Understanding these distinctions serves three important purposes.

First, it pulls back the curtain on the different roles that exist within the technical writing community. (I'm not trying to provide an exhaustive list here—there are other specializations and hybrid roles that emerge as the field continues to evolve.) If you're someone considering a career in technical writing, or if you're curious about the breadth of opportunities in technology companies, understanding this diversity helps you see potential paths you might not have known existed.

Second, for those who have to work with technical writers, understanding these different types and their unique value is crucial. People tend to lump technical writers into one group, and that's not accurate. Understanding the types of technical writers can help others decide what they really need from a writing standpoint. A writer who is great at programming tutorials can certainly try to write microcopy, but they don't have the depth of knowledge that a dedicated UX writer has—just as a software engineer specializing in backend services can probably help solve a problem with a React component, but doesn't have the same expertise that a frontend engineer has.

Third, we need folks to understand context. The ways in which we tell stories to customers are vast—too vast for any one group to tackle. A beautiful customer experience doesn't just stem from one line of code, or even one service. And it certainly doesn't come from a single type of technical writing. A great experience for customers requires coordination across specializations—just as a home needs not only skilled carpenters but also foundation specialists, electricians, plumbers, roofers, and so on.

The Challenge Ahead

Now that we understand the breadth of technical writing specializations and why they matter, we face a more difficult question. With all these different types of technical writers doing such varied work across different contexts and audiences, how do we know if they're actually any good at what they do?

This isn't just an academic question. Organizations invest significant resources in content creation. Product teams depend on technical writers to help users succeed with their products. Developers rely on documentation to integrate services and build applications. Customers make decisions based on how well they can understand and use what companies build.

But evaluating the quality of technical writing turns out to be surprisingly difficult—much harder than you might expect. And that challenge is what we'll tackle next.

Chapter 4: A personal reckoning

I joined Google as a Technical Writer, which was technically a step down (level-wise) from previous roles. After about a year, I decided to try for promotion. I worked with my manager to put together my justification—what we called a 'promotion packet'—a comprehensive document listing my accomplishments, links to content I created, and testimonials from colleagues. Then I waited.

I didn't get the promotion.

That stung, but it's hardly unusual. Most people don't get promoted their first time through. I listened to the feedback, put together an action plan with my manager, and prepared to try again six months later.

I was declined again.

This time, it hurt worse. Not just because I had done what I thought the committee had asked, but because of the reason they gave me: "We have concerns about his writing quality."

As you read this, I hope you can imagine what I must have felt. There I was, at one of the top companies in the world, and I was being told my writing quality wasn't good enough. If that was the case, why was I still employed? Why hadn't they moved me to exit? Even more baffling, my performance reviews were great! By one measurement I wasn't doing well, but by another I was doing great.

I would like to say that I handled this feedback well. But I didn't. I spent a good half year being angry and upset. Don't do that. It neither helps you nor those around you.

When I finally moved into more of a growth mindset, I made a decision. If my writing wasn't of high enough quality, then I was going to fix that. And to fix that, I needed information. Specifically, I needed to know: How did we define content quality?

The Search for Standards

I thought I would find an easy, or at least well-defined, answer. After all, I was at Google, a company that prided itself on making data-driven decisions. Surely we had data that indicated when content was of high quality and when it wasn't. We certainly have such mechanisms for code. When you submit code into a repository—for those unfamiliar with the process, I'm basically saying when you save the code so it can be used in a production environment—that code has to go through all sorts of tests. We have all sorts of ways of measuring whether the code is functioning correctly, whether it's doing the right things in the right ways. Most companies even have coding guidelines that cover the subtle nuances of software engineering that are harder to measure—things like how to declare variables, how to name methods, and so on.
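
To make the contrast concrete, here's a minimal sketch of the kind of automated check code goes through before shipping. The function and tests are hypothetical, and the details don't matter; what matters is that the outcome is an unambiguous pass or fail.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        # Either the code produces the right answer or it doesn't.
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```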

But technical writing doesn't have such mechanisms. There isn't an easy way of determining if a conceptual topic actually explains its concepts effectively, or if a how-to guide actually helps a user perform a given task. Depending on how you create your API documentation, you might not have an easy way of determining if a method does what it says it does.

There are some ways to get a few insights into this question of content quality. You can add widgets to your pages so people can give a page a thumbs up or a thumbs down, even asking probing questions if a user is unhappy. Such widgets can give you some information, but they suffer from negative review bias: people who are unhappy with something are more motivated to share their unhappiness than those who are satisfied.

You can use page analytics, which include metrics such as bounce rates and pageviews. But those are also difficult to parse. A high bounce rate usually means that users are visiting the page and then leaving quickly. That would seem bad. But what if the topic is a landing page, with the purpose of leading users to the content they actually need? In that case, a high bounce rate might be good—you don't want your customers spending a lot of time on a page trying to find the right link.

A page with low pageviews might be seen as low quality. But maybe it's a page about a specific feature—one that, although not used often, is crucial to those who need it. Or what if the potential user base for your content simply isn't as large as another documentation set's? For example, I remember working on the Angular documentation. At the time, I was on a team that focused on internal developers. Angular was an outlier in that it was used internally but was also open source, so it was used by millions of external developers. Looking at the numbers, my content was viewed by ten times as many people as some of the content my teammates worked on. That didn't make their content lower quality—it meant they had a smaller audience.
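
To see how the same analytics number can point in opposite directions, here's a rough sketch with hypothetical figures:

```python
# Hypothetical analytics for two pages with identical numbers.
pages = {
    "landing-page":    {"sessions": 10_000, "single_page_sessions": 8_500},
    "troubleshooting": {"sessions": 10_000, "single_page_sessions": 8_500},
}

for name, stats in pages.items():
    bounce_rate = stats["single_page_sessions"] / stats["sessions"]
    print(f"{name}: bounce rate = {bounce_rate:.0%}")

# Both pages report an 85% bounce rate, but it means opposite things:
# on the landing page, users found their link and moved on (success);
# on the troubleshooting page, users may have simply given up (failure).
```

The metric is identical; the interpretation depends entirely on what the page is for.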

And you can conduct user research studies. These studies probably give you the most accurate insights into your content quality. You can actually sit with your users and see how they use your content, where it excels and where it falls short. But these studies take time and they take skilled professionals who know how to conduct them scientifically. Most technical writers aren't lucky enough to have access to a user experience team who can help conduct these studies.

I quickly found that we didn't have a clear idea of what content quality actually meant. We knew that some content was of higher quality than others, but we couldn't necessarily articulate why.

Why This Matters

This is a big problem for several reasons. One of them is obvious: if everything around you can be measured, but your own work cannot, how can you articulate your value to the team and the company? I think this issue alone is why a lot of product teams distrust or discount the value that technical writing brings. Software engineers and product managers all have ways of measuring the impact of their work, but they have no idea whether the quality of their product's documentation even matters. And if they don't know whether it matters, it's understandable that they might underappreciate its value. That may be one reason why technical writers often experience variations of the following conversation:

Product team: "We need docs!"

Technical Writer: "Sure! When?"

Product team: "We shipped yesterday!"

Another reason why the lack of clarity around content quality causes problems: there aren't enough technical writers to handle all the documentation needs. Every company I've worked at—from Microsoft to Amazon to Google to Stripe—has never had enough technical writers. That's often meant that some product teams have to write their own documentation. But how can they do that when we can't articulate what quality documentation is? Imagine being asked to build a house, but not having access to any building codes or instructions. You'd have no way of knowing if the house you built was stable, let alone safe. We ask product teams to own their documentation all the time. And most have the best intentions when it comes to creating that content. But, if I can paraphrase Jeff Bezos for a moment, intentions don't matter—mechanisms do. Customers don't care if you meant well when you wrote your documentation; they only care if the content tells them what they need to know, when they need to know it.

And speaking of customers, that's the third reason why being unable to define content quality is such a problem. If we can't define it, we risk creating subpar customer experiences. Poor documentation might be overlooked if the product is simple or intuitive enough, but as systems get increasingly complex, you need quality documentation for customers to succeed. No matter who your customer is, they're coming to you because you're helping them solve a problem. If they can't figure out how you solve that problem for them, you're not helping them; you're hindering them.

Technical writers contribute to this evaluation challenge too. We're well-intentioned but often struggle to articulate why we write the way we do or what we need to create good content. When product teams ask how long documentation will take, or why we need certain information, many of us give vague answers about "understanding the user journey" or "ensuring content flows well." We can assess whether we're ready to write, but we can't systematically evaluate whether what we've written is actually good.

The AI Complication

You'd think with the proliferation of AI and large language models, this issue of content quality would fix itself. Just have the AI write everything! But it's not that simple—and in some cases, adding AI makes things worse.

With the introduction of large language models and generative AI, we have the opportunity to measure and improve content quality. In fact, while I was at Stripe—admittedly at the beginning of a lot of the AI innovation going on today—we decided to focus our energies on what we thought AI could do best: determining whether the code examples in our documentation were functional, or at least syntactically correct.

This idea was great in theory, but in practice it proved more difficult. When you document code in a tutorial, for example, you don't always show all of the code at once. Sometimes you build out the example one snippet at a time. But it's hard for AI to know that a snippet is part of a larger example. So we'd get a lot of errors about undeclared variables or undefined functions, when in fact they were defined earlier in the tutorial. The number of false positives in our tests made identifying real issues challenging—save for more obvious errors, such as a missing semicolon. And even if the code example was correct, that didn't mean the documentation around that code was accurate.
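
Here's a minimal sketch of the problem, using a toy per-snippet checker and a hypothetical two-snippet tutorial (PaymentsClient is a made-up class):

```python
import ast

# Two snippets from a hypothetical tutorial that builds one example in stages.
snippet_1 = 'client = PaymentsClient(api_key="sk_test_example")'
snippet_2 = 'charge = client.create_charge(amount=1999, currency="usd")'

def undefined_names(snippet: str) -> set[str]:
    """Return names a snippet uses but never defines (a crude, per-snippet lint)."""
    defined, used = set(), set()
    for node in ast.walk(ast.parse(snippet)):
        if isinstance(node, ast.Name):
            (defined if isinstance(node.ctx, ast.Store) else used).add(node.id)
    return used - defined

# Checked in isolation, snippet 2 looks broken even though the tutorial is fine:
print(undefined_names(snippet_2))  # {'client'} -- a false positive
```

A checker with no memory of snippet 1 has no way of knowing that the client variable was defined two paragraphs earlier.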

There's another reason why AI can be problematic when it comes to documentation and content quality. These models are only as good as the data on which they're trained. How do you know if that data represents good documentation? You can run tests to see if code works, but we have had decades of writing content without a strong definition of content quality or decent ways of measuring it. And we all know about documentation that absolutely fails to meet customer expectations. Simply put, we're asking the models to be experts on a subject while giving them questionable data on which to base their decisions and analysis.

I'll be more blunt: We've spent decades de-prioritizing documentation, and now we want AI to write that documentation for us.

An Attempt at a Solution

As I continued to try to understand what content quality was while I was at Google, I had the opportunity to reflect on my career. It was then I realized that, while I had worked with amazing editors and writers across companies like Microsoft, Amazon, and Stripe, we had never really established a framework to discuss content quality. Every team had informal ways of recognizing good writing—we could point to examples we liked or disliked—but we couldn't systematically explain why one piece of documentation worked better than another.

I started thinking that such a framework could be transformative. It would bring technical writing closer to other software engineering disciplines in terms of metrics and measurement, giving us the language to articulate our value and process. We could use it to help product teams understand what they should think about when they needed to write their own content, moving beyond vague requests for "good docs" to specific, actionable guidance. Most importantly, we could better ensure that the products we created and the software we shipped were genuinely helpful to our customers.

A part of me thinks: "Who am I to determine what content quality is?" The best answer I can give you is: "I'm no more and no less qualified than anyone else who has spent their career writing technical content. But I know we have to do better than a definition that amounts to 'we know it when we see it.'" My promotion experience had forced me to think systematically about what makes writing work, and I'd seen firsthand how the absence of clear criteria hurt writers, our teams, and our customers. The status quo isn't good enough.

The rest of this book is my attempt to fix that. You might read my words and agree. And you might read them and think I'm full of it. That's not the point. If you read the rest of this book and come away just thinking about what content quality means, and understanding why it's important, I've done what I set out to do.

So let's get to it.

Chapter 5: The Six Characteristics of Quality

After years of searching for a clear definition of content quality at Google, Microsoft, Amazon, Stripe, and other companies, I had to accept an uncomfortable truth: there wasn't one. No framework, no rubric, no systematic way to evaluate whether documentation actually helped users succeed. We had intuition—we could point to examples of content we liked or disliked—but we couldn't explain why one piece of documentation worked better than another.

That realization forced me to start building my own framework. Not because I thought I was uniquely qualified to define quality, but because the alternative—continuing to rely on "we know it when we see it"—wasn't serving writers, teams, or users.

I approached this the way I'd seen engineers tackle complex problems: break it down into smaller, more manageable pieces. If quality as a whole was hard to define, maybe its individual components would be easier to pin down. The question was: what were those components?

Over the years that followed, I began to identify patterns in what made content genuinely useful versus merely accurate. I noticed that quality wasn't a single characteristic but rather multiple dimensions that worked together. Some content could be perfectly accurate but still fail users. Other content could be complete and well-organized but lack meaning for its intended audience.

Through analyzing what worked and what didn't across different contexts and products, a framework emerged—one built around six core characteristics that, when working together, create content that truly serves users: Accuracy, Completeness, Conciseness, Discoverability, Consistency, and Meaning.

The Six Characteristics

Accuracy ensures that what you're telling users is correct and appropriate for their context and your product's maturity level. But accuracy isn't binary—it's about being right in the right way for the right audience.

Completeness means providing everything users need to succeed in their specific workflows, not documenting everything that exists. It's about understanding the difference between comprehensive feature coverage and complete user journeys.

Conciseness balances efficiency with effectiveness. It's not about using the fewest words possible, but about respecting your users' time and cognitive load while still building the relationship and trust they need.

Discoverability recognizes that users don't read documentation linearly like a book. It's about creating content that works regardless of where users enter or exit, and guiding them toward valuable next steps.

Consistency operates across multiple layers—within individual topics, across documentation sets, and throughout entire product ecosystems. It reduces the mental overhead users face when navigating your content.

Meaning serves as the foundation for all the others. Content can be accurate, complete, concise, discoverable, and consistent, but still fail if it doesn't connect to what users are actually trying to accomplish.

Why This Framework Matters

This framework gives us the language to move beyond vague assessments of content quality. Instead of saying "this documentation feels confusing," we can identify that it has a discoverability problem—users can't find what they need or don't know where to go next. Instead of requesting "better docs," product teams can specify whether they need help with accuracy (getting the facts right), completeness (covering the right user scenarios), or meaning (connecting to user goals).

For technical writers, this framework provides a systematic way to evaluate and improve content. Rather than relying on intuition alone, we can assess content across each dimension and identify specific areas for improvement.

For organizations, it offers a way to think strategically about content investment. Not every piece of content needs to be optimized across all dimensions—sometimes you need laser-focused accuracy for a beta feature, while other scenarios call for broad completeness across multiple user types.

Unlike Zen and the Art of Motorcycle Maintenance, where Pirsig's pursuit of Quality was deeply personal and philosophical, I needed something more practical. While Pirsig sought a unified understanding of what Quality meant, I needed a framework that teams could actually use to evaluate and improve their content. We can break content quality down into these recognizable, achievable characteristics. While we may not be able to define quality in a single sentence, we can identify its components and learn how to develop them systematically.

In the chapters that follow, we'll explore each characteristic in detail: what it means, why it matters, how to achieve it, and how it relates to the others. You'll learn when to prioritize accuracy over speed, how to achieve completeness without overwhelming users, and why meaning serves as the foundation for all effective content.

Chapter 6: On Accuracy

When most people think about accuracy in technical writing, they imagine it as a binary state: information is either accurate or it isn't. In this view, good technical writing means getting every detail exactly right, while inaccurate writing is simply failed writing. But after years of working with products at different stages of maturity—from startup MVPs to enterprise platforms serving millions of users—I've learned that accuracy is far more nuanced and strategic than that simple binary suggests.

The traditional approach to accuracy creates impossible situations. Teams demand perfect documentation for imperfect products. Writers spend weeks documenting edge cases that may never ship. Users get frustrated when reality doesn't match the comprehensive promises made in documentation. Meanwhile, the core use cases that actually matter to users get lost in a sea of theoretical completeness.

There's a better way to think about accuracy—one that's strategic, user-focused, and aligned with how products actually evolve in the real world.

Accuracy Across Product Lifecycles

One of the most common mistakes I see teams make is demanding what I call "GA docs for beta code." Picture this scenario: A product team is preparing to launch a new API. The engineers are still fixing critical bugs discovered in testing. Key features might be delayed to the next release. Error handling is inconsistent across endpoints. But the product manager insists that documentation must be "complete and accurate" before launch.

This creates an impossible situation: writers are asked to document with certainty something that is inherently uncertain. The resulting documentation either becomes obsolete before it's published, or it makes promises the product can't yet keep.

I've seen writers spend weeks crafting detailed explanations of features that get cut the day before launch. I've watched support teams field angry customer complaints because the documentation confidently described functionality that was still experimental. I've observed engineering teams delay product launches because they felt the documentation wasn't "accurate enough," even though the core functionality worked perfectly well.

The reality is that products and services in different lifecycle stages require fundamentally different approaches to accuracy:

Early-stage products (beta releases, version 1.0, proof-of-concepts) only need to be deeply accurate for the specific scenarios and use cases the team wants to enable. If you're launching a beta API for processing payments, you need bulletproof accuracy for the standard payment flow. But you don't need to exhaustively document every possible error condition, edge case, or integration pattern that might theoretically be possible.

Consider Stripe's early API documentation. When they were starting out, they didn't try to document every conceivable payment scenario. Instead, they focused laser-sharp accuracy on the core use case: accepting a payment. The documentation was incredibly precise about that flow—every parameter, every response, every error code that mattered for the basic transaction. But they didn't pretend to have solved every edge case in e-commerce.

Growing products (version 2.0, expanding feature sets, new user segments) need accuracy that scales with their ambitions. As your user base grows and diversifies, the range of scenarios that require accurate documentation expands. But it expands strategically, following user demand rather than theoretical completeness.

Mature products (established platforms, enterprise solutions, widely adopted tools) need broader accuracy coverage because users will naturally try to push the boundaries of what's possible. When you're serving millions of users across thousands of different use cases, the long tail of edge cases becomes significant. But even then, not every edge case deserves the same level of documentation accuracy.

I sometimes flip the question to product teams: "Would you want beta docs for your GA product?" The answer is always no. Nobody wants their mature, stable product to feel experimental or unreliable. A customer evaluating your enterprise platform doesn't want to read documentation that hedges every statement with "this might work" or "we're still testing this."

This helps teams understand why the reverse is also problematic. Beta software with GA-level documentation promises creates expectations the product can't meet. It's not just misleading—it's strategically counterproductive.

The Right Depth of Accuracy

The second major misconception about accuracy is that it requires exhaustive technical depth. This leads to documentation that reads like engineering specifications rather than user guides, where every implementation detail is meticulously explained whether it's relevant to the user or not.

I don't need to understand the chemical reactions in a baking recipe to successfully bake a loaf of bread. I need to know the ingredients, the proportions, the temperature, and the timing. The fact that gluten proteins form networks when hydrated and agitated might be fascinating to a food scientist, but it's not necessary information for someone who just wants to make bread.

Similarly, users don't need to understand every technical detail of how a system works to use it effectively. They need to understand the parts that affect their decisions and actions.

Consider database documentation. A developer using your database API needs to know that certain operations are atomic, but they probably don't need to understand the specific locking mechanisms that make atomicity possible. They need to know that indexes improve query performance, but they don't need to understand B-tree algorithms unless they're doing database optimization work.
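To make that distinction concrete, here's a minimal sketch, in Python with the standard library's sqlite3 module, of the level of detail that developer actually needs. The table and column names are invented for illustration:

    import sqlite3

    conn = sqlite3.connect("app.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS accounts (id INTEGER PRIMARY KEY, balance INTEGER)"
    )

    # What the developer needs to know: everything inside this block is
    # committed together or rolled back together. That's the atomicity
    # guarantee; the locking machinery behind it never surfaces here.
    with conn:
        conn.execute("UPDATE accounts SET balance = balance - 100 WHERE id = 1")
        conn.execute("UPDATE accounts SET balance = balance + 100 WHERE id = 2")

    # What the developer needs to know: an index speeds up queries that
    # filter on this column. B-tree internals are irrelevant at this level.
    conn.execute("CREATE INDEX IF NOT EXISTS idx_balance ON accounts (balance)")
    conn.close()

Documentation written at this level tells users exactly what they can rely on without dragging them through the internals that make it true.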

The key is providing the right level of accuracy for the task at hand. This means focusing on what users need to know to accomplish their goals, not everything there is to know about the subject.

But here's where it gets tricky: different users have different depth requirements even for the same task. A database administrator setting up replication needs to understand consistency models in much more depth than an application developer writing simple queries. The same system, the same feature, but very different accuracy requirements.

I've seen technical writers get paralyzed by this variation. They try to accommodate every possible depth requirement in a single document, creating sprawling explanations that satisfy nobody. The beginner gets lost in details they don't need. The expert gets frustrated by explanations of concepts they already understand.

The solution isn't to find some middle ground that disappoints everyone equally. It's to be strategic about layering information. Start with the accuracy level that serves your primary user's immediate goals. Then provide clear paths to deeper information for users who need it.

Amazon Web Services does this well in their documentation. Their getting-started guides focus on the accuracy needed to complete basic tasks—creating resources, configuring settings, testing functionality. But they link extensively to deeper reference material, troubleshooting guides, and architectural best practices for users who need that additional depth.

The principle is simple: accurate enough to be useful, deep enough to be trustworthy, but no deeper than necessary for the immediate task.

User-Centric Accuracy

Perhaps most importantly, accuracy is inherently user-centric. What counts as accurate depends entirely on who's using the information and what they're trying to accomplish. This seems obvious, but it's one of the most frequently violated principles in technical documentation.

Information that's perfectly accurate for a seasoned developer might be misleading or incomplete for someone new to software development. Consider this statement in API documentation: "Authentication uses standard OAuth 2.0 flow." For an experienced developer, this is accurate and sufficient—they know what OAuth 2.0 is, how it works, and what they need to implement. For a junior developer or someone new to API integration, this statement is technically accurate but practically useless. They need to understand what OAuth 2.0 means, why it's used, and what specific steps they need to take.
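Here's roughly what those specific steps might look like for one common variant, the client credentials flow, sketched in Python with the requests library. The endpoints and credentials are placeholders, and real providers differ in the details:

    import requests

    # Step 1: Exchange your client credentials for a short-lived access token.
    token_response = requests.post(
        "https://auth.example.com/oauth/token",  # placeholder token endpoint
        data={
            "grant_type": "client_credentials",
            "client_id": "YOUR_CLIENT_ID",
            "client_secret": "YOUR_CLIENT_SECRET",
        },
    )
    access_token = token_response.json()["access_token"]

    # Step 2: Send that token in the Authorization header of every API call.
    response = requests.get(
        "https://api.example.com/v1/resources",  # placeholder API endpoint
        headers={"Authorization": f"Bearer {access_token}"},
    )
    print(response.status_code)

The statement "Authentication uses standard OAuth 2.0 flow" compresses all of this into six words. That compression is accurate for readers who can decompress it, and useless for readers who can't.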

The same technical detail that's essential context for one audience might be distracting noise for another. A system administrator needs to know about memory usage patterns when configuring a server. An end user of the application running on that server doesn't need that information—it would just create unnecessary anxiety about performance.

This user-centric view of accuracy explains why so much technically correct documentation fails to help users accomplish their goals. The information is accurate in an abstract sense, but it's not accurate for the specific person trying to use it in a specific context.

I learned this lesson the hard way early in my career. I was documenting a complex enterprise software system, and I prided myself on getting every technical detail exactly right. The engineering team praised the documentation for its technical accuracy. But user support was still overwhelmed with questions that seemed like they should have been answered in the docs.

The problem wasn't that the documentation was inaccurate—it was that it was accurate for the wrong audience. I had optimized for technical precision rather than user success. The documentation answered questions that engineers had about the system, not questions that users had about accomplishing their work.

This means accuracy isn't just about getting the facts right—it's about getting the right facts for the right audience. It's about understanding not just what is true, but what truths matter to the people who will use this information.

Consider the difference between these two accurate descriptions of the same software feature:

Engineer-accurate: "The system implements exponential backoff with jitter for retry logic, starting with a 1-second delay and doubling until reaching a maximum of 30 seconds, with randomization to prevent thundering herd scenarios."

User-accurate: "If your upload fails, the system will automatically retry several times with increasing delays between attempts. You don't need to manually retry—just wait and the system will handle it."

Both statements are factually correct. But they're accurate for completely different audiences and use cases. The first is accurate for someone who needs to understand the implementation (perhaps to configure it or troubleshoot it). The second is accurate for someone who just needs to know what to expect when using the feature.
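To see how differently those two audiences consume the same fact, here's roughly what the engineer-accurate description corresponds to in code. This is a sketch only, assuming a generic operation that raises ConnectionError on failure:

    import random
    import time

    def retry_with_backoff(operation, base_delay=1.0, max_delay=30.0, max_attempts=6):
        """Retry an operation with exponential backoff and jitter."""
        for attempt in range(max_attempts):
            try:
                return operation()
            except ConnectionError:
                if attempt == max_attempts - 1:
                    raise
                # Double the delay each attempt, capped at max_delay...
                delay = min(base_delay * (2 ** attempt), max_delay)
                # ...and randomize it so many clients don't retry in lockstep
                # (the "thundering herd" the first description mentions).
                time.sleep(delay * random.uniform(0.5, 1.5))

The user-accurate version hides every line of this, and that's the point: nothing in the implementation changes what the user should actually do, which is wait.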

The best documentation often includes both levels of accuracy, but clearly separated and targeted. The user-facing explanation focuses on what the user needs to know to be successful. The implementation details are available for users who need that deeper understanding, but they don't get in the way of users who don't.

Managing Accuracy in Practice

Understanding these principles is one thing; implementing them in real organizations with real constraints is another. The most effective approach I've found is to create what I call a "content accuracy hierarchy" that aligns with how users actually discover and consume information.

The foundation of this hierarchy is focusing canonical documentation on what's established and working reliably. This is your official documentation—the content that appears in your main doc site, gets linked from your product interface, and represents what your company officially supports.

For this canonical content, accuracy standards should be high but strategically focused. Document the scenarios you want users to succeed with. Be precise about the features that are stable and supported. Don't hedge or equivocate about functionality that works reliably.

But what about newer or experimental features? What about edge cases that might work but aren't fully supported? This is where the hierarchy becomes crucial.

Let blog posts, developer advocates, community content, and experimental documentation explore the cutting edge. These content types have different expectations and allow for more uncertainty. A blog post titled "Exploring Advanced Use Cases with [Product X]" signals that readers are venturing into less certain territory. A developer advocate's conference talk about "bleeding edge features" sets appropriate expectations about stability and support.

This creates a clear content hierarchy: official documentation represents what the company stands behind, while other content sources can acknowledge uncertainty and explore emerging possibilities.

I've seen this work particularly well at companies like HashiCorp. Their official Terraform documentation applies laser-sharp accuracy to core workflows and stable features. But their blog, community examples, and developer advocate content explore newer providers, experimental features, and complex architectural patterns that might not be ready for official documentation.

When documentation does need to cover less-established scenarios—and sometimes it must—the key is being transparent about the level of support. Users should understand when they're in well-supported territory versus when they're venturing into areas that might change or require troubleshooting.

Some effective ways to signal this:

Clear labeling: "Preview feature," "Beta functionality," "Advanced configuration"

Explicit support statements: "This workflow is supported by our customer success team" versus "This is a community-contributed solution"

Honest limitations: "This integration works well for datasets under 10GB" rather than claiming unlimited scalability

Update commitments: "This documentation is updated with each product release" versus "This guide was last updated in Q2 2023"

The goal isn't to make users feel uncertain about your product. It's to help them make informed decisions about which features to rely on for critical workflows and which ones to experiment with in non-production environments.

Common Accuracy Pitfalls

Even with these principles in mind, teams still make predictable mistakes when managing accuracy. Here are the patterns I see most often:

The Perfectionist Trap: Teams delay publishing documentation until they can make it "completely accurate." Meanwhile, users struggle with no documentation at all. Remember: accurate documentation covering 80% of use cases is infinitely more valuable than perfect documentation that doesn't exist.

The Kitchen Sink Problem: Writers try to document every possible scenario with equal accuracy and detail. This creates overwhelming documents where critical information gets lost among edge cases. Be strategic about what deserves detailed accuracy treatment.

The Oracle Fallacy: Documentation promises more certainty than the product actually provides. This is especially common with AI/ML products, where outcomes are inherently probabilistic. Don't let your documentation make promises your product can't keep.

The Static Mindset: Teams treat accuracy as a one-time achievement rather than an ongoing process. Product features evolve, user needs change, and business priorities shift. Accuracy requires maintenance and updates, not just initial precision.

The Expert Bubble: Subject matter experts review documentation for accuracy, but they're not representative of actual users. What seems accurate and complete to an expert might be confusing or insufficient for someone less familiar with the domain.

The Business Impact

Companies often struggle with content accuracy because they haven't connected documentation quality to business outcomes. They see accuracy as a "nice to have" rather than a strategic necessity. But inaccurate documentation creates measurable business costs that compound over time.

Increased Support Overhead: Every inaccurate piece of documentation generates support tickets. I've tracked cases where a single misleading sentence in API documentation generated dozens of support requests per week. The cost isn't just the support team's time—it's also the engineering time required to investigate issues that turn out to be documentation problems rather than product problems.

Slower User Adoption: Users who can't trust your documentation will be hesitant to adopt new features or expand their usage of your product. They'll stick with workflows they've already figured out rather than risk encountering more documentation that doesn't match reality. This directly impacts feature adoption metrics and expansion revenue.

Frustrated User Churn: Users who repeatedly encounter inaccurate documentation develop learned helplessness. They stop trusting your content and start looking for alternative solutions. In B2B contexts, this can mean losing entire accounts over documentation quality issues.

Reduced Team Velocity: When internal documentation is inaccurate, your own teams move slower. Engineers waste time trying solutions that don't work. Product managers make decisions based on outdated information. Sales teams make promises that can't be kept. The productivity cost ripples through the entire organization.

But measuring these costs is genuinely difficult, which is why accuracy often gets deprioritized. Unlike feature development, where you can track user engagement and conversion rates, documentation accuracy has indirect and delayed impact that's harder to quantify.

The companies that do prioritize accuracy have usually learned this lesson through painful experience. They've lost customers, wasted engineering cycles, or missed market opportunities because of documentation problems. The abstract concept of "quality" became concrete when it hit their revenue or their team's productivity.

Some of the contributing factors I see most often:

Feature Shipping Pressure: Teams focus too heavily on shipping features because that's what directly generates revenue. Documentation is seen as overhead rather than an enabler of that revenue. But this creates a false economy—rushed documentation often costs more in support and user confusion than it saves in development time.

Measurement Challenges: It's very difficult to test the effectiveness of documentation in traditional product metrics. How do you know if people are using your documentation effectively? How do you measure the counterfactual—the support tickets that didn't happen because documentation was accurate? The business impact is real but often invisible to standard analytics.

Technical Depth Mismatches: Technical writers aren't always technical enough to properly assess accuracy, especially for complex developer tools or enterprise software. Some writers would bristle at me saying this, but it's true and it's important. You can't accurately document what you don't understand. This doesn't mean every technical writer needs to be a software engineer, but there needs to be sufficient technical depth somewhere in the content creation process.

The most successful teams I've worked with address these challenges head-on. They've found ways to measure documentation effectiveness (user success rates, support ticket categorization, onboarding completion metrics). They've invested in technical writers who can engage meaningfully with the products they're documenting. And they've connected documentation quality to business metrics that leadership cares about.

Building Sustainable Accuracy

The goal isn't perfect accuracy—it's sustainable accuracy that serves your users and your business over time. This requires systems and processes, not just individual effort.

Establish Update Cycles: Different types of content need different accuracy maintenance schedules. API reference documentation might need updates with every release. Conceptual guides might be reviewed quarterly. Getting-started tutorials might need monthly verification. Don't treat all content the same way.

Create Feedback Loops: Build mechanisms for users to report accuracy problems, and more importantly, build processes for acting on that feedback quickly. A "report an issue" link that goes into a black hole is worse than no feedback mechanism at all.

Involve the Right People: Subject matter experts should review content for technical accuracy, but they shouldn't be the only reviewers. Include people who represent your actual user base in the review process. Their confusion often reveals accuracy problems that experts miss.

Design for Change: Accept that your product will change and design your documentation processes accordingly. This might mean focusing on principles rather than specific UI elements, or creating modular content that can be updated independently.

Track What Matters: Identify the business metrics that documentation accuracy affects—support ticket volume, feature adoption rates, user onboarding success—and track them over time. When you can connect documentation improvements to business outcomes, it becomes easier to justify continued investment.

The companies that achieve sustainable accuracy treat it as a product capability, not a content problem. They build systems, processes, and cultures that support ongoing accuracy rather than hoping it will emerge from individual effort and good intentions.

Chapter 7: On Completeness

Ask most technical writers what makes documentation complete, and they'll give you a laundry list: comprehensive feature coverage, exhaustive API references, detailed troubleshooting guides, multiple examples for every use case. This approach treats completeness as an inventory problem—if you document everything that exists, you've achieved completeness.

But this misses the fundamental question: complete for whom, and for what purpose?

After working with dozens of product teams across different industries and maturity stages, I've learned that completeness isn't about documenting everything that's possible. It's about documenting everything that's necessary for your users to succeed in their specific contexts. And those contexts vary dramatically based on where your product is in its lifecycle, who your users are, and what they're trying to accomplish.

The traditional approach to completeness creates several predictable problems. Teams exhaust themselves trying to document every feature and edge case, often before they understand which features actually matter to users. Writers create comprehensive reference materials that nobody reads because they don't match how people actually work. Documentation becomes a reflection of the product's complexity rather than a bridge to the user's success.

There's a more strategic way to think about completeness—one that adapts to your product's reality and serves your users' actual needs.

Completeness Across Product Maturity

Just as accuracy requirements change as products evolve, so does the definition of completeness. The completeness standard that makes sense for a mature enterprise platform would be wasteful and counterproductive for a startup's MVP.

For newer products, completeness means thoroughly documenting the specific scenarios that make up your minimum viable product. These are the user journeys that you've validated, tested, and committed to supporting. Everything else is speculation.

When working on documentation for early-stage products, completeness doesn't mean documenting every possible integration or advanced workflow. It means making sure users can successfully complete the core scenarios that define your product's value proposition. Those fundamental use cases need to be documented completely and clearly. Advanced features and edge cases can wait until the product and user base mature.

This focused approach to completeness serves both users and the product team. Users get reliable guidance for the workflows that actually work well. The product team avoids over-committing to features that might change or disappear. Resources go toward perfecting the core experience rather than documenting theoretical possibilities.

As products mature and stabilize, the definition of completeness naturally expands. Your user base grows more diverse, with different skill levels and use cases. Features that were experimental become foundational. Edge cases that affected few users in the early days now impact thousands of users.

But even for mature products, completeness remains strategic rather than exhaustive. Amazon Web Services has thousands of features across hundreds of services, but their documentation doesn't try to document every possible combination and configuration. Instead, they focus completeness efforts on the workflows that drive the most user success and business value.

The key insight is that completeness should scale with your product's proven value, not its theoretical capabilities. Document completely the scenarios that you know work well and that you support reliably. Be more selective about scenarios that are possible but not yet proven or prioritized.

User-Centric Completeness

Completeness also varies dramatically based on who your users are and what they're trying to accomplish. What feels complete to a power user will overwhelm a beginner. What seems comprehensive to a developer might be useless to a business user. Even for highly technical documentation, completeness is ultimately about user success, not feature coverage.

Consider database documentation. For a database administrator setting up a new cluster, completeness means detailed coverage of installation, configuration, security settings, monitoring, backup procedures, and disaster recovery. Missing any of these topics leaves them unable to deploy the database safely in production.

For an application developer who just needs to store and retrieve data, completeness means clear guidance on connecting to the database, executing queries, handling errors, and managing connections efficiently. They don't need the DBA-level details about cluster configuration—including that information actually makes the documentation less complete from their perspective because it obscures what they need to know.

For a data analyst who needs to extract insights from stored data, completeness means comprehensive coverage of query syntax, functions, performance optimization, and data export options. Installation and configuration details are irrelevant to their success.

Same database, same feature set, but three completely different definitions of completeness based on user goals and contexts.

This user-centric view of completeness explains why so much technically comprehensive documentation fails to help users accomplish their actual work. The documentation covers everything about the product, but it doesn't cover everything the user needs to be successful with the product.

The most effective documentation teams I've worked with start by mapping user journeys rather than product features. They identify the key scenarios that each user type needs to complete successfully, then ensure those scenarios are documented completely from the user's perspective. Features that don't serve those core journeys get secondary treatment, regardless of how sophisticated or impressive they might be from a technical standpoint.

The Content Void Problem

Even when teams understand that completeness should be user-focused and maturity-appropriate, they often fall into a predictable pattern that creates what I call the "content void" of documentation completeness.

Teams love to create quickstarts. These short, "hello world" topics give teams a quick adrenaline rush of writing something clearly valuable. And it's true—a good quickstart is genuinely helpful to new users. It proves that your product works and gives people confidence to explore further.

Teams also love to write deep technical tutorials. These are weighty, comprehensive topics that showcase the full power of whatever they're building. Teams love them because they demonstrate impressive capabilities and complex use cases. But if I'm being honest, they also love writing them because they get to show off their own knowledge and technical sophistication.

What about the content in between? That content frequently gets left behind, because it's harder than writing a quickstart, and nowhere near as exciting as writing an in-depth tutorial. The quickstart can be knocked out in an afternoon. The comprehensive tutorial feels like a significant accomplishment that demonstrates expertise. But the middle content requires understanding user progression, breaking down complex workflows, and creating stepping stones that aren't as flashy but are absolutely critical for user success.

So most documentation sets end up with this content void. On one side, you have quickstarts that get users started but don't help them progress. On the other side, you have in-depth tutorials that demonstrate advanced capabilities but assume massive leaps in user knowledge and confidence. And in between is a wasteland of missing content that users have to somehow navigate on their own to build their expertise.

Consider Angular documentation as an example. You might have a quickstart that shows users how to create their first component—a simple "Hello World" that displays some text and maybe handles a click event. Then you have comprehensive tutorials that walk through building a complete e-commerce application with routing, reactive forms, HTTP client integration, state management, authentication, and deployment strategies.

But what about the progression between these extremes? How do you go from displaying "Hello World" to building components that communicate with each other? How do you handle user input before you're ready for complex reactive forms? How do you make HTTP requests before building a full e-commerce checkout flow? These intermediate steps get skipped, leaving users to figure out the progression on their own.

This is why you see documentation sets that have topics like "Create your first component" that immediately jump to "Build a full-featured application with authentication, routing, and API integration." There's nothing in between to help users progress from basic component creation to sophisticated application architecture.

The gap creates several problems:

User Abandonment: Users complete the quickstart successfully, feel confident about the product, then hit a wall when they try to build something real. They can't bridge the gap between the simple example and the complex tutorial, so they either struggle with inadequate guidance or abandon the product entirely.

Skewed User Progression: The only users who successfully advance beyond the quickstart are those who already have significant expertise or unusual persistence. This creates a user base that skews toward advanced users, which can distort product priorities and feedback.

Wasted Advanced Content: Those impressive comprehensive tutorials often don't get used because most users never develop enough confidence and knowledge to attempt them. The content that teams are most proud of becomes least accessible to their actual user base.

But here's the silver lining: if you haven't documented how to do something users want to do, the users will tell you! This focused approach to completeness creates a natural feedback loop where real user needs drive documentation priorities rather than theoretical feature coverage.

Filling the Void

One of the most effective approaches I've found for addressing the content void is to start with those in-depth tutorials that teams want to write anyway, then deliberately break them down into standalone, progressive pieces.

Take a comprehensive tutorial, say one that walks through building a complete content management system. Instead of presenting it as a single intimidating guide, decompose it into discrete topics: database schema design, user authentication, basic CRUD operations, input validation, error handling, user authorization, automated testing, deployment considerations. Each piece should stand on its own while also serving as a building block for more complex scenarios.

This approach satisfies teams' desire to create impressive comprehensive content while solving the real problem of missing progression. Users can work through the components at their own pace, building confidence and expertise incrementally. They can also mix and match components based on their specific needs rather than following a single prescribed path.

The key is ensuring each middle-ground topic truly stands alone. It should have clear prerequisites, explicit learning objectives, and practical outcomes that users can validate. Avoid the temptation to assume knowledge from previous topics or to set up dependencies that force users through a rigid sequence.

Consider authentication and database connectivity as an example. This topic should cover everything needed to securely connect to a database and verify user credentials, including error handling for common failure scenarios. Users should be able to implement this functionality successfully without having read other topics in the series. But it should also integrate cleanly with more advanced topics like user authorization and session management.
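A sketch of the working example such a standalone topic might center on, using only Python's standard library (the users table schema here is hypothetical):

    import hashlib
    import hmac
    import sqlite3

    def verify_user(db_path, username, password):
        """Return True if the password matches the stored PBKDF2 hash."""
        try:
            conn = sqlite3.connect(db_path)
        except sqlite3.Error as err:
            # Connection failure is a normal, documented error path here,
            # not an edge case deferred to another topic.
            raise RuntimeError(f"Could not open database: {err}")
        try:
            row = conn.execute(
                "SELECT salt, pw_hash FROM users WHERE username = ?",
                (username,),
            ).fetchone()
            if row is None:
                return False  # Unknown user fails the same way as a bad password.
            salt, stored_hash = row
            candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
            return hmac.compare_digest(candidate, stored_hash)
        finally:
            conn.close()

The topic stands alone: a reader can run this without having read anything else in the series, yet the function slots cleanly into a later topic on sessions or authorization.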

Some teams resist this decomposition because they worry about repetition or redundancy. They don't want to explain basic concepts multiple times across different topics. But this concern misses the point—users don't read documentation linearly like a novel. They jump to topics based on immediate needs, often months apart. A little redundancy in service of standalone utility is almost always worth it.

Identifying What's Missing

The challenge is recognizing when you have a content void problem and systematically identifying what belongs in that missing middle ground.

The most reliable diagnostic is user behavior and feedback patterns. If you see a consistent pattern where users successfully complete your getting-started content but then struggle to progress to more advanced scenarios, you probably have a gap problem. If your support team repeatedly answers questions that seem like they should be covered in documentation, those questions often point to missing middle content.

Pay attention to the questions users ask in community forums, support tickets, and sales calls. Questions that start with "I've successfully completed the quickstart, but now I need to..." or "The advanced tutorial assumes I know how to..." are clear signals of missing progression content.

Another approach is to audit your existing comprehensive tutorials with fresh eyes. Look for assumptions, leaps in complexity, or points where the tutorial suddenly introduces multiple new concepts simultaneously. These are often opportunities to extract standalone topics that bridge the gap between basic and advanced content.

Consider involving users in content gap analysis. Users who have successfully progressed from beginner to intermediate or advanced usage can provide valuable insights about what information they wish they'd had at different stages. They remember the struggle points and knowledge gaps that your expert team members may have forgotten or never experienced.

The most systematic approach is to map actual user progression paths rather than theoretical feature coverage. Track how users actually move through your product and documentation. Where do they get stuck? What workflows do they attempt after completing basic tutorials? What combinations of features do they typically use together? This behavioral data reveals the natural stepping stones that your documentation should provide.

Sustainable Completeness

Achieving and maintaining appropriate completeness requires ongoing effort and strategic thinking. It's not a one-time documentation project—it's an ongoing alignment between your content strategy and your users' evolving needs.

Start with Core Journeys: Instead of trying to document everything, identify the 3-5 most important user journeys for your product and ensure those are completely and clearly documented. Everything else is secondary until those core paths work well.

Build Feedback Systems: Create mechanisms for users to identify gaps and report completeness problems. But more importantly, build processes for acting on that feedback systematically. A "suggest improvements" link that disappears into a backlog isn't useful—you need workflows that turn user feedback into content improvements.

Measure User Success, Not Content Volume: Track whether users can successfully complete the workflows your documentation describes, not how many topics you've published. Completion rates, success metrics, and user progression data are better indicators of completeness than content audits.

Design for Progression: Explicitly plan how users will develop expertise over time. What should they learn first? What builds on previous knowledge? What are the natural next steps after each major workflow? Design your content architecture to support this progression rather than hoping it emerges naturally.

Maintain Content Relationships: As your product evolves, keep track of how content topics relate to each other. When you update one piece of documentation, consider what other topics might need updates to maintain consistency and completeness across the user journey.

The goal isn't perfect completeness—it's strategic completeness that serves your users' success and grows appropriately with your product's maturity and user base. Focus your completeness efforts where they have the most impact on user outcomes, and resist the temptation to document everything just because it exists.

Chapter 8: On Conciseness

When most people think about conciseness in technical writing, they imagine it as a simple editing exercise: cut unnecessary words, shorten sentences, eliminate redundancy. This approach treats conciseness as purely about brevity—fewer words equals better writing.

But conciseness isn't just about using fewer words. It's about conveying your meaning with the fewest words possible while still achieving your communication goals. And those goals extend far beyond just transmitting information—they include building trust, maintaining engagement, and respecting your reader's time and cognitive load.

The challenge is that most writers, when they try to be concise, fall into one of two traps. They either strip away so much that their writing becomes cold and mechanical, or they swing in the opposite direction and add so much personality and explanation that they bury their message in unnecessary words. Both approaches fail because they misunderstand what conciseness is really about.

While our previous discussions of accuracy and completeness operated at the content and topic level, conciseness drops us down to the section, paragraph, and sentence level. This is where the rubber meets the road in terms of user experience—where individual sentences either help or hinder comprehension, where word choices either build or erode trust, and where tone either supports or undermines your message.

The False Economy of Extreme Brevity

Consider this example of a letter thanking an aunt for a gift:

"Dear Aunt Jane,

Thank you for the sweater. It's a lovely shade of blue. I love how warm it is, and it fits perfectly. I know I'll wear it for years to come.

Love,

Dave"

Following conciseness to its extreme, you might revise this to:

"Aunt Jane:

Thank you for the sweater. It is:

  • Warm

  • Blue

  • Well-fitting

I will wear it for at least two years.

Dave"

The second version is undeniably more concise. It uses fewer words, eliminates subjective language, and presents information more efficiently. But it's also a terrible letter. Aunt Jane isn't going to send you anything else any time soon.

This example illustrates the fundamental problem with treating conciseness as pure word reduction. The goal isn't to minimize word count—it's to maximize communication effectiveness. Sometimes that requires more words, not fewer. Sometimes personality and warmth are essential to your message, not obstacles to it.

The same principle applies to technical writing. Users aren't just trying to extract information from your documentation—they're trying to accomplish goals, solve problems, and build confidence in your product. Pure brevity can undermine these objectives just as much as excessive verbosity.

The Three Styles of Technical Writing

When it comes to conciseness in technical writing, I find it helpful to think about three distinct styles that represent different approaches to balancing brevity with communication effectiveness.

Academic Writing

Anyone who went to high school or college knows what academic writing sounds like. It's formal, precise, and often unnecessarily complex. Interestingly enough, many of the conventions we associate with academic writing exist because, historically, academics wanted to make people work to understand what they were saying. Complexity was a feature, not a bug—it demonstrated sophistication and seriousness.

Academic writing in technical documentation sounds like this:

"In order to facilitate the implementation of the authentication mechanism, it is necessary to configure the appropriate parameters within the configuration file, ensuring that the requisite security protocols are properly instantiated before attempting to establish a connection to the remote server."

This sentence contains accurate information, but it buries simple concepts under layers of unnecessary formality. "Configure authentication settings in the config file before connecting to the server" would convey the same information more effectively.

Teams often drift toward academic writing because it feels professional and authoritative. Writers who learned formal writing in educational contexts may default to this style without realizing how it affects their readers. The problem isn't that academic writing is wrong—it's that it optimizes for perceived authority rather than user success.

Casual Writing

On the opposite extreme, we have casual writing. This is what many novice writers default to when they want to avoid sounding academic. They've heard that technical writing should be conversational and accessible, so they swing hard in the direction of informality.

Casual writing is equally verbose but in a different way. It fails to respect the user's time by adding unnecessary personality, explanations, and conversational elements that don't serve the user's immediate goals.

Casual writing sounds like this:

"Alright, so now we're going to check out this really cool authentication function that we just wrote. It's pretty neat how it handles all the security stuff automatically! Let's dive in and see how it does all this magic behind the scenes. Don't worry if it seems complicated at first—we'll walk through it step by step, and I promise it'll make sense by the end!"

This style might feel friendly and approachable, but it's actually disrespectful to users who are trying to accomplish specific tasks. They don't need encouragement or entertainment—they need clear, actionable information. The chattiness becomes cognitive overhead that interferes with comprehension.

The casual approach often emerges from good intentions. Writers want to sound approachable and human rather than robotic. They've seen examples of technical writing that feels warm and engaging, and they want to replicate that connection with their readers. The desire to avoid sounding like a machine or a textbook is understandable—nobody wants their writing to feel cold or intimidating.

Some writers are genuinely skilled at using humor and personality to enhance their communication, creating content that's both informative and engaging. But developing that skill requires significant practice and careful attention to audience needs. These skilled writers understand when personality serves the message and when it becomes a distraction. They know how to add warmth without adding confusion, and they can gauge whether their voice is helping or hindering their readers' success.

Many writers attempt a casual tone without considering how their personality affects comprehension, especially for readers for whom English is a second language. Cultural references, idioms, humor, and conversational asides that feel natural to native speakers can create cognitive overhead and confusion for ESL readers who are already working harder to process technical information in their non-native language. This consideration becomes even more critical when content needs to be localized or translated. Casual writing that relies heavily on cultural context or wordplay often doesn't translate well, creating additional barriers for global audiences who depend on clear, direct communication to accomplish their technical goals.

Informal Writing: The Sweet Spot

Neither academic nor casual writing serves users effectively. What we want is what I call informal writing. Informal writing strips away unnecessary academic formality while still respecting the user's time and cognitive resources. It's neither pretentious nor chatty—it's direct, clear, and appropriately human.

Here's my test for informal writing: Pretend you're working with a colleague on a document. Your colleague is about to board a plane and won't have internet access during the flight. They have just a few minutes before boarding, and they need crucial information from you. How would you convey your message?

You wouldn't be formally academic—you know this person, and formality would waste precious time. You also wouldn't be excessively casual or chatty—they need to catch a flight, and joking around would be inappropriate and ineffective. Instead, you'd be clear, direct, and appropriately personal. You'd focus on what they need to know, expressed in the most efficient way possible. And you'd be supportive, because you want your colleague to be successful.

That's informal writing: respectful of the relationship, mindful of constraints, focused on effectiveness.

Applied to our authentication example, informal writing would sound like this:

"Configure your authentication settings in the config file before connecting to the server. Set the security_protocol parameter to 'TLS' and add your API key to the credentials section."

This version is concise without being terse, clear without being condescending, and direct without being robotic. It respects the user's time while providing the information they need to succeed.

Why Teams Fall Into These Traps

The academic and casual extremes aren't random choices—they emerge from understandable and well-intentioned thinking.

The Academic Trap often catches writers whose last significant writing experience was in college. Academic writing was rewarded in educational contexts, so it feels like "good writing" even when it's counterproductive for user documentation. Writers may also believe that formal language makes them sound more professional or authoritative, especially when documenting complex technical systems.

The Casual Trap catches writers who want to "sound cool" or make their content more engaging. They've seen examples of successful casual writing—often from skilled writers who have developed that ability over time—and they try to replicate the style without understanding why it worked in those specific contexts. They may also be reacting against overly formal documentation they've encountered, swinging too far in the opposite direction.

It's important to note that these patterns aren't tied to particular job roles. I've seen casual technical writers and overly formal developer advocates. The tendency toward one extreme or the other seems to depend more on individual background and intentions than on job title.

Recognizing Your Writing Style

Most writers have difficulty objectively evaluating their own tone and conciseness. We're too close to our own writing to hear how it sounds to others. Here are some practical techniques for developing awareness of your writing style:

Read Aloud with Minimal Inflection: Read your content out loud, but try to use as little vocal inflection as possible. Speak in a flat, monotone voice. This technique forces you to hear the actual words and sentence structure without the emotional coloring that your internal voice adds when reading silently. Academic writing will sound pretentious and unnecessarily complex. Casual writing will sound juvenile or condescending.

Try Hostile Inflection: As a follow-up test, try reading your content with deliberately sarcastic or hostile inflection. Quality writing should be resilient enough to survive this kind of stress test—the core message and logic should remain clear even when someone's trying to make it sound bad. If sarcastic delivery completely undermines your writing or reveals potential misinterpretations, it might indicate that your writing relies too heavily on assumed reader goodwill. This technique is particularly useful for identifying problematic word choices like "simply" or "just" that can sound condescending when read sarcastically. You don't necessarily need to change your writing based on this test, but it can help you anticipate how your content might be received and prepare you for potential reader reactions.

Use Text-to-Speech Tools: Screen readers and text-to-speech software provide an even more objective perspective on your writing. Hearing your content in a synthetic voice reveals patterns you might miss when reading with your own internal voice. The artificial delivery makes obvious any unnecessary complexity or chattiness that interferes with comprehension.

Apply the Coding Analogy: When working with developers, I often explain conciseness using programming principles they already understand. Code that is too terse becomes difficult to understand and maintain later. Code that is too verbose—using twenty lines when ten would suffice—becomes cluttered and hard to navigate. The best code is tightly written but verbose enough that it remains comprehensible and maintainable.

The same principles apply to technical writing. After all, content is essentially code that gets compiled by the human brain. Your readers need to parse your sentences, understand your logic, and execute your instructions. Unnecessary complexity creates cognitive overhead. Excessive casualness creates processing delays. Optimal writing minimizes both while maximizing comprehension and task completion.
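A tiny illustration of that spectrum, sketched in Python with an arbitrary data shape:

    # Too terse: one line, cryptic names; every reader pays a decoding cost.
    def tot(xs): return sum(x["p"] * x["q"] for x in xs if x["ok"])

    # Appropriately verbose: the same logic, but it explains itself.
    def order_total(line_items):
        """Sum price ("p") times quantity ("q") for items that passed validation."""
        return sum(
            item["p"] * item["q"]
            for item in line_items
            if item["ok"]
        )

Prose works the same way. The informal style described earlier in this chapter is the writing equivalent of the second version: no longer than it needs to be, but long enough to carry its own meaning.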

The Developer Perspective

The coding analogy resonates particularly well with technical audiences because it reframes writing quality in terms they already understand and value.

Maintainable Code vs. Maintainable Content: Just as code needs to be maintainable by future developers (including your future self), content needs to be understandable by future readers (including users with different backgrounds and expertise levels). Academic writing creates maintenance problems because it's unnecessarily complex. Casual writing creates maintenance problems because it includes too much irrelevant information.

Performance Optimization: Developers understand that code performance matters—inefficient code wastes computational resources and creates poor user experiences. Similarly, inefficient writing wastes cognitive resources and creates poor reading experiences. Every unnecessary word, every confusing sentence structure, every irrelevant tangent is like inefficient code that slows down the user's mental processing.

Clean Code Principles: The programming concept of "clean code"—code that is easy to read, understand, and modify—applies directly to technical writing. Clean writing follows consistent patterns, uses clear variable names (in writing, this means precise word choices), eliminates redundancy, and focuses on functionality over cleverness.

However, there's an important caveat when applying coding principles to writing. Developers often follow DRY (Don't Repeat Yourself) principles when coding, but this can be problematic when applied too strictly to documentation. It often leads to attempts to "single source" content so that the same message is included in multiple topics through shared snippets or references. This creates a maintenance nightmare—when you need to change a note or explanation, how do you know how many topics are impacted? Does the updated message apply equally to all those different contexts? Unlike code, where a function serves the same purpose everywhere it's called, content often needs slight variations based on context, user type, or specific workflow. Sometimes a little redundancy in documentation is actually beneficial for user comprehension and content maintainability.

Conciseness Beyond the Sentence Level

While conciseness primarily operates at the sentence and paragraph level, the principles can influence broader content architecture decisions. When you consistently write concisely, you might discover that you need fewer topics than originally planned, or that information can be organized more efficiently.

However, these broader implications are secondary to the primary goal of sentence-level clarity. The completeness framework we discussed earlier provides better guidance for topic-level decisions. Conciseness is most valuable when applied to how you express ideas within those topics, not whether those topics should exist at all.

Practical Application

Achieving effective conciseness requires ongoing practice and attention. Here are some approaches that work consistently:

Start with Clarity, Then Trim: Don't try to be concise in your first draft. Focus on getting your ideas down clearly and completely, then revise for conciseness. It's easier to cut unnecessary words from clear writing than to add clarity to overly brief writing.

Consider Your Voice and Tone: Spend time thinking about your tone and voice. How do you want to communicate? Are you a knowledgeable advisor, a respected expert, a visionary leader? Your words construct your personality in the mind of the reader. Think about that personality and make sure your words support it. This isn't about adding unnecessary flourishes—it's about ensuring that every word choice aligns with the relationship you want to build with your readers and the role you want to play in their success.

Test with Real Users: The ultimate test of concise writing is whether it helps real users complete real tasks more effectively. You don't need a full user research study—talking with just a few users can yield great insights about whether your attempts at conciseness are helping or creating new barriers to comprehension.

Consider Your Audience's Context: Conciseness isn't just about word count—it's about respecting your reader's cognitive resources and time constraints. A user troubleshooting a production issue needs different conciseness than someone learning a new concept. Adjust your approach based on the urgency and complexity of your reader's situation.

The goal isn't to achieve some arbitrary standard of brevity. It's to find the optimal balance between efficiency and effectiveness for your specific users in their specific contexts. Sometimes that means more words, sometimes fewer, but always with intentionality about how those words serve your reader's success.

Chapter 9: On Discoverability

In my twenty-five years of writing technical documentation and thinking about how users actually consume information, one metaphor persists over and over again: the book metaphor. Look at almost any documentation set and you'll find a left navigation that outlines the contents just as a book's table of contents would. There's an introduction, a number of chapters organized in logical sequence, and an underlying assumption—whether acknowledged or not—that users will read the content in that order, from start to finish.

That's not how people read technical documentation.

The book metaphor made sense when documentation was literally printed in books, when users had to flip through pages sequentially to find information. But in our digital world of search engines and AI assistants, where users can land on any page from any search query, this linear thinking actively works against user success.

The reality is that any topic might be the first topic a user reads, and any topic might be the last. Users don't start at your carefully crafted introduction and work their way through your logical progression. They arrive at your content through Google searches, AI queries, colleague recommendations, and support ticket links. They jump between topics based on immediate needs, not your intended narrative flow.

When we think about putting documentation systems together—because they are systems, not books—we need to acknowledge how users actually discover and navigate content.

Beyond Content Types

The traditional approach to documentation organization focuses on content types: reference materials, API documentation, conceptual overviews, tutorials, and troubleshooting guides, just to name a few. This approach assumes that users think in terms of content types—that they wake up in the morning and decide, "Today I need to read some conceptual material."

Users don't think this way. They think in terms of goals and problems: "I need to integrate payments into my application" or "Why is my API call failing?" They don't care whether the answer comes from a reference page, a tutorial, or a troubleshooting guide—they just want to accomplish their objective.

The best documentation sets don't restrict themselves to artificial content-type boundaries. Stripe's API documentation exemplifies this approach beautifully. They didn't limit their API reference to just describing objects and methods. They included comprehensive coverage of authentication workflows, checkout processes, webhook handling, and error management—topics that traditional thinking would categorize as "conceptual" or "tutorial" content that doesn't "belong" in an API reference.

This works because Stripe organized their documentation around user workflows rather than internal content taxonomies. When developers are implementing payment processing, they need to understand both the specific API calls and the broader context of how those calls fit into secure transaction flows. Stripe's documentation serves both needs in one coherent experience rather than forcing users to jump between different content types to piece together a complete understanding.

The Sherpa Approach

Great documentation doesn't just respond to user queries—it acts like a sherpa or guide. It's there to get you where you want to go, but also to help you discover interesting and valuable things along the way that you might not have known to look for.

A good sherpa doesn't just follow your exact instructions to get from point A to point B. They help you understand the terrain, point out important landmarks, suggest better routes based on current conditions, and alert you to opportunities or hazards you might not be aware of. They're proactive guides who enhance your journey rather than reactive responders who only answer direct questions.

Traditional documentation is more like basic signposts. It points you toward the specific destination you asked about, but it doesn't help you discover that you're asking the wrong question, or that there's a better route you haven't considered, or that there's a related destination that would solve your broader problem more elegantly.

But implementing the sherpa approach requires careful balance and respect for user intentions. There's a tension between guiding users toward valuable discoveries and respecting their immediate goals and cognitive load.

Respecting User Time and Intent

The first principle of discoverable documentation is respecting your users' time and intentions. Users aren't typically on a journey to find something new and exciting—they're trying to get something specific done, often under pressure or time constraints.

I learned this lesson clearly during my time at Stripe, where we faced constant pressure from product teams who wanted to promote beta features and new capabilities in our documentation. These teams were acting with good intentions—they wanted to share innovative solutions that could genuinely help users. But those users weren't browsing for innovations. They were trying to implement payment processing, resolve integration issues, or meet project deadlines.

Adding promotional content or feature callouts to task-focused documentation creates cognitive overhead that interferes with user success. It's like a sherpa asking if you want to climb Kilimanjaro instead when you're focused on reaching Everest base camp before dark. Kilimanjaro might be a genuinely rewarding expedition, but it's not what you signed up for and it doesn't help you accomplish your immediate goal.

This doesn't mean never exposing users to new capabilities—it means being strategic about when and how you do it. The key is understanding the difference between users who are in exploration mode versus execution mode, and designing your content experience accordingly.

Orientation and Self-Triage

Since any topic might be a user's first encounter with your documentation, every piece of content needs to help users quickly determine whether they're in the right place. This is like calling a support line and reaching the wrong department—a good support center quickly helps you identify where you actually need to be and gets you there efficiently.

Effective content orientation happens upfront, usually in the first paragraph or section. There's a simple pattern you can use to implement this approach consistently. For every topic, create a three-sentence introduction: The first sentence explains why the content matters—what value or outcome it provides. The second sentence describes what the topic will specifically cover. The third sentence offers links to related topics in case the user realizes they're in the wrong place.
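For example, a hypothetical topic on webhooks (the product and the linked topics are invented for illustration) might open: "Webhooks let your application respond to events as they happen, rather than polling for changes. This topic shows you how to register a webhook endpoint and verify incoming event signatures. If you're looking for a list of supported event types, see the Events reference instead."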

Good orientation also includes contextual information about prerequisites, scope, and related topics. If a user needs to complete other setup steps first, or if the content assumes familiarity with certain concepts, that should be clear from the beginning. Similarly, if there are alternative approaches or more appropriate starting points for their specific situation, those should be signposted early.

However, be careful not to create prerequisite chains that frustrate users. I've encountered documentation sets where I had to navigate through three levels of prerequisites before I could access the topic I actually wanted to read. Good orientation mentions essential prerequisites without creating a maze of dependencies that prevents users from reaching their goals.

This upfront investment in orientation saves time for both users who are in the right place (they can proceed with confidence) and users who are in the wrong place (they can redirect their efforts quickly rather than getting lost in irrelevant content).

No Dead Ends

One of my fundamental rules for documentation structure is that no topic should ever be a dead end. Every piece of content should give users logical places to go next, should they choose to continue their journey.

This principle flows naturally from thinking like your users. After completing a tutorial, users might want to learn more about the API calls they just implemented, or they might want to add additional features to what they built, or they might want to understand how to troubleshoot common issues. After reading a reference page, they might want to see practical examples of implementation, or understand how that feature fits into larger workflows.

Designing for logical next steps requires understanding user progression patterns and common workflow sequences. It also helps identify unintended gaps in your content—if you can't think of appropriate next steps for a topic, that might indicate missing content that would serve your users.

The "no dead ends" principle doesn't mean overwhelming users with every possible option. It means providing 2-3 thoughtful suggestions that represent the most common and valuable paths forward from that specific content. These suggestions should be based on actual user behavior and feedback rather than assumptions about what users might find interesting.

The Information Architecture Problem

Despite the importance of user-centric organization, information architecture remains crucial even in an age of search and AI-powered content discovery. I've seen many documentation sets—Angular was one example when I worked on it several years ago, though it has improved since—where the categories of content simply didn't make sense. They were generic, poorly defined, and worst of all, there was often content that didn't fit into any of the established buckets.

To this day, when I see a section of documentation labeled "Advanced Topics," I know what that really means: "We didn't know where to put this information, so we shoved it here." Advanced Topics is the kitchen junk drawer of documentation—a catch-all category that serves no one well.

The broader problem is relying on complexity-based categories at all. Terms like "Fundamentals," "Intermediate," and "Advanced" are meaningless without user context. What's fundamental to a database administrator is radically different from what's fundamental to an application developer. What Stripe considers basic payment processing might be incredibly advanced for someone who's never handled financial transactions before.

These vague labels are actually a symptom of not understanding your users well enough. Effective information architecture uses categories that describe actual user goals and contexts: "Setting up authentication," "Handling payment failures," "Multi-party transactions," "Compliance requirements." These labels help users identify relevant content based on what they're trying to accomplish rather than forcing them to guess which arbitrary complexity level matches their needs.

The Garage Cleanout Process

When I work with teams on information architecture, I often use the typical garage that you'd find anywhere in the United States as a metaphor. In the US, people store all sorts of things in their garages—tools, holiday decorations, sports equipment, old furniture, boxes of miscellaneous items. So much accumulates that many people can't even park their cars in their garages anymore.

Sooner or later, a homeowner decides they need to clean their garage out. This process has two essential steps. First, you pull everything out and ask yourself: "Do I actually need this? Why?" You ruthlessly get rid of things that aren't important or useful anymore. Second, when you put things back, you organize them not only so you know where everything is, but so there's room for additional items as you acquire them.

The same principle applies to documentation. Review everything in your current information architecture and honestly assess whether it's still helpful to users. You can determine helpfulness partially through user feedback and analytics, but you can also evaluate it by observing how often content gets updated and maintained. Content that consistently falls behind or gets ignored probably isn't serving an important user need.

When you rebuild your content system, look at your product roadmap and planned releases. Can you quickly identify where future content would fit in your new information architecture? If you can't immediately see where new features or capabilities would belong, your IA needs more work.

This is a continuous process, not a one-time project. Every couple of years, you should review your documentation and clean out the garage. Products evolve, user needs change, and content that was once valuable may become obsolete or redundant.

The Development Analogy

When working with developer audiences, I often explain information architecture problems using programming concepts they already understand. Just as novice developers might create a "helper" class that seems logical initially but quickly becomes a dumping ground for unrelated properties and methods, documentation teams create categories like "Advanced Topics" that seem reasonable but become incoherent grab bags over time.
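If you've written much code, you've seen the pattern. Here's a minimal sketch; the class and its methods are hypothetical, invented for illustration:

    from datetime import date

    # A hypothetical "helper" class: it starts with one legitimate
    # utility, then unrelated methods accumulate because nothing
    # else seems like the right home for them.
    class Helpers:
        @staticmethod
        def format_date(d: date) -> str:
            return d.strftime("%Y-%m-%d")  # the original, sensible utility

        @staticmethod
        def retry(fn, attempts: int = 3):  # a networking concern, added later
            for i in range(attempts):
                try:
                    return fn()
                except Exception:
                    if i == attempts - 1:
                        raise

        @staticmethod
        def to_cents(amount: float) -> int:  # a billing concern, dumped here too
            return round(amount * 100)

Each addition felt reasonable in the moment; together they form a grab bag with no coherent purpose.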

Both problems stem from taking the easy categorization path instead of doing the harder work of understanding actual relationships and usage patterns. It's much easier to create an "Advanced Topics" category than to figure out why certain content doesn't fit your existing structure, or whether your structure needs to evolve to accommodate new types of content.

Like helper classes, these vague documentation categories seem harmless at first. "Advanced Topics" might start with 2-3 legitimately complex pieces of content. But over time, anything that doesn't obviously fit elsewhere gets tossed there, until it becomes an incoherent collection that serves no user need effectively.

The maintenance problems parallel programming as well. Helper classes become harder to refactor over time because dependencies become unclear and interconnected. Similarly, the longer you let content accumulate in vague organizational buckets, the harder it becomes to reorganize properly because you lose track of user relationships and workflow connections.

The Roadmap Test

I mentioned earlier that, when cleaning out your garage, you should not only remove what you don't need, but think about what you might need in the future. For documentation, I call this the Roadmap Test. Look at your product's planned features and releases for the next 12-18 months. For each new capability or enhancement, ask yourself: "Where would documentation for this feature belong in our current IA? Can I identify the logical location immediately, or would I be tempted to create a new top-level category or throw it into a miscellaneous section?"

If you constantly struggle to place future content in your existing structure, that's a strong signal that your IA is organized around your current product state rather than user workflows and goals. Good information architecture should be flexible enough to accommodate product evolution without requiring constant restructuring.

The roadmap test also helps you identify emerging content themes that might warrant new organizational approaches. If you notice that several upcoming features all relate to a specific user workflow or use case that isn't well-represented in your current structure, that might indicate an opportunity to reorganize around that user journey.

And just as you don't clean out your garage only once, you should continuously examine your information architecture. The roadmap test isn't a one-time evaluation—it's an ongoing practice that helps you stay ahead of organizational problems before they become entrenched.

Discoverability in the Age of AI

Even as users increasingly find content through AI queries rather than browsing hierarchical navigation, underlying information architecture remains crucial. AI systems need to understand the relationships between concepts, the logical progression of user workflows, and the context in which different pieces of content are most valuable.

Well-structured information architecture actually enhances AI-powered discovery by providing clear semantic relationships that help AI systems surface the most relevant content for specific queries. When your IA is organized around user goals and workflows, AI tools can better match user intent with appropriate content, even when users don't know exactly what they're looking for.

The sherpa principle becomes even more important in AI-mediated discovery. When users ask an AI assistant for help, they want guidance that goes beyond just answering their immediate question—they want to understand the broader context, learn about related concepts that might be relevant, and discover solutions they hadn't considered.

Building for Discovery

Effective discoverability requires thinking systematically about user journeys while designing for the reality that users will enter and exit your content at unpredictable points. This means:

Every topic must be able to act as an entry point into the rest of the documentation set while connecting meaningfully to the broader system. Users should be able to understand and act on the content regardless of where they came from or what they read previously.

Navigation should reflect user workflows rather than internal product organization. Categories and labels should match how users think about their work, not how your company organizes its feature development.

Content relationships should be explicit rather than assumed. If topics build on each other or relate to common workflows, those connections should be clearly surfaced through strategic linking, contextual suggestions, and logical progression cues.

Discovery should be progressive rather than overwhelming. Instead of presenting users with every possible option, focus on the 2-3 most valuable next steps based on common usage patterns and user feedback.

The goal isn't to control how users navigate your content—it's to support their natural discovery patterns while gently guiding them toward information that will help them succeed. Like a good sherpa, effective documentation gets users where they want to go while helping them discover valuable things they didn't know they needed.

Chapter 10: Consistency

No single piece of writing exists on its own. Every topic, every document, every help article is part of a larger ecosystem of information that users navigate to accomplish their goals. For users to navigate this ecosystem effectively, we need to think about another aspect of quality: consistency.

Consistency in technical documentation means that users can rely on predictable patterns, terminology, structure, and voice as they move between different pieces of information—whether within a single topic, across a documentation set, or throughout an entire product ecosystem.

Unfortunately, consistency remains one of the most overlooked aspects of documentation quality, particularly as organizations scale and teams become more distributed.

To address this shortcoming, it helps to look at consistency as having three layers: topic, documentation set, and ecosystem. Understanding these layers—and the unique challenges each presents—is essential for creating documentation that truly serves users.

Topic Consistency

At the first layer, we have topic consistency. This means referring to things the same way and maintaining the same voice throughout a single piece of content. More importantly, it means speaking at the same level of detail throughout the topic.

You'd think that maintaining the same level of detail from the start of a topic to its end would be straightforward. But even the best documentation sets often struggle with this problem. It affects everyone who writes content—not just professional writers, but product managers, engineers, designers, and anyone else who contributes to documentation. It's natural to explain things that are easy in great detail, while glossing over concepts that are difficult or complex.

Consider a tutorial that walks through making an API call. The author might spend three paragraphs explaining how to construct a basic HTTP request—something they understand well and can articulate clearly. But when they reach the authentication section, suddenly the explanation becomes abstract and hand-wavy: "Configure your authentication as needed." The level of detail drops dramatically because authentication is harder to explain, or because there are too many options available, or because they're concerned about legal ramifications if they give the wrong information.

This inconsistency in detail level makes content feel unreliable. If users learn to expect thorough explanations in the early sections, when they encounter vague guidance later—precisely when they're likely to need more help, not less—they lose confidence in the documentation. Users need to know they can trust the content to support them evenly from start to finish.

Maintaining consistent detail level requires conscious effort and often multiple revision passes. It means identifying the appropriate level of explanation for your audience and maintaining that level even when covering topics that are harder to write about or less familiar to the author.

Documentation Set Consistency

The second layer is consistency across an entire documentation set. Here we need to ensure that we structure content in similar ways across all sections and that readers can develop reliable mental models for how to find information.

If you establish a pattern—perhaps QuickStart → How-to Guides → Concepts → Reference—you need to stick to that pattern consistently. Keep your how-to guides focused on procedures across all sections, and organize your reference material using the same principles from page to page.

This is similar to how grocery stores work. When I go to my local grocery store, I expect things to be laid out in the same way every time—produce near the entrance, dairy along the back wall, checkout lanes at the front. I also expect this consistency when I visit another location of the same store chain. When the layout differs significantly from one location to another, I get disoriented and frustrated, even though both stores carry the same products I need.

Documentation works the same way. Users develop mental maps for how to navigate your content, and they appreciate documentation most when they can rely on those learned patterns. Inconsistent organization forces them to relearn how to use your documentation system for each new section they visit, which increases cognitive load unnecessarily.

This level of consistency extends beyond just structural patterns to include:

Voice and tone consistency: The language and character should remain stable across all content in the set. A conversational tone in tutorials followed by terse language in the API reference creates jarring transitions that disrupt the user experience.

Terminology consistency: Technical terms, product names, UI labels, and conceptual language should be used consistently throughout. If you call something a "workspace" in one section, don't refer to it as a "project area" in another.

Formatting and style consistency: Code examples, screenshots, callouts, warnings, and other formatting elements should follow predictable patterns that users can recognize and interpret quickly.

Assumption consistency: The level of technical knowledge you assume from readers should remain stable, or change predictably with clear signaling when it does shift.

Ecosystem Consistency

The third and most challenging layer is consistency across multiple documentation sets within a larger product ecosystem. This is where things get particularly difficult for product teams, and where the most significant user experience problems often emerge.

Teams naturally want to customize their documentation in ways that make sense to them and their customers—and this instinct is often correct, as these teams do know their users best. But I often find I have to remind teams that they're thinking of their users only in the context of their own product. At companies like Stripe, Google, and Amazon, users don't stay confined within single product boundaries. They use EC2 alongside Redshift alongside SageMaker. They implement authentication while setting up payments while configuring monitoring. They're trying to accomplish larger goals that require multiple products working together seamlessly.

This ecosystem-level inconsistency creates several problems:

Terminology conflicts: When different teams use different names for the same concepts, or worse, use the same names for different concepts, users get confused and make mistakes.

Structural inconsistency: When each product team organizes their documentation completely differently, users have to relearn navigation patterns for each new product they encounter.

Tone and voice fragmentation: Dramatic shifts in communication style between related products make the overall experience feel disjointed and unprofessional.

Integration gaps: When teams focus only on their own product's documentation, the critical information about how products work together often falls through the cracks or becomes outdated.

The Cognitive Biases That Break Consistency

Understanding why consistency breaks down requires recognizing two cognitive biases that affect how teams approach documentation:

Proximity bias in documentation contexts means that teams prioritize their own product's documentation needs because those needs are immediately visible and urgent to them. Meanwhile, they de-emphasize or ignore the documentation needs of other teams—even when users frequently need both products to work together to accomplish their goals.

This isn't done with malicious intent. It's natural human focus combined with organizational structures. Teams think, "I'll care about my product, and other teams will care about theirs, and everything will work out fine." But users don't experience products in isolation like that.

The curse of knowledge occurs when writers and product teams assume that users understand concepts, terminology, and workflows that seem obvious to the team but are actually specialized knowledge. Teams become so expert in their own domain that they forget what it's like to encounter their product for the first time, or to use it in combination with unfamiliar tools.

These biases compound each other. Teams are both too focused on their immediate concerns AND too expert in their specific domain to recognize how their documentation fits into the broader user experience.

The Critical Role of Dedicated Writers

More than anywhere else, ecosystem-level consistency requires having dedicated writers, content strategists, or editors who can look at content holistically. Someone needs to be able to spot inconsistencies that are masked by proximity bias and the curse of knowledge.

These roles are critical because they can:

See across team boundaries: While product teams are naturally focused on their specific area, dedicated content professionals can maintain awareness of how different products and services connect in real user workflows.

Maintain user perspective: Professional writers are less likely to be blinded by expert knowledge because maintaining the user perspective is literally their job.

Advocate for consistency: When consistency conflicts with team preferences or convenience, someone needs to have the authority and responsibility to advocate for the user experience over internal team dynamics.

Identify pattern opportunities: Dedicated content professionals can spot opportunities to create reusable patterns, shared terminology, and coordinated approaches that individual teams might not notice.

Maintain style and voice: Ensuring consistent tone and voice across multiple teams requires someone whose primary focus is on the content experience rather than product features.

AI as a Consistency Tool

Having just advocated for dedicated technical writers, content strategists, and editors, I should also mention that AI is a very useful tool for ensuring consistency—provided you remember its limitations. AI tools can quickly identify terminology conflicts, flag formatting inconsistencies, and spot structural deviations from established patterns. They can scan large volumes of content far faster than humans and catch mechanical issues that might otherwise slip through.
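Some of these mechanical checks don't even require AI. As a minimal sketch, a short script can flag candidate terminology conflicts for a human (or an AI) to review; the docs/ path and the variant terms here are assumptions for illustration:

    import pathlib

    # Flag Markdown files that mix two variant terms for the same
    # concept. A human still decides whether the mix matters to users.
    variants = ("workspace", "project area")

    for path in pathlib.Path("docs").rglob("*.md"):
        text = path.read_text(encoding="utf-8").lower()
        if all(term in text for term in variants):
            print(f"{path}: uses both {variants[0]!r} and {variants[1]!r}")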

However, AI isn't a complete solution for consistency challenges. AI can tell you that one document uses "workspace" while another uses "project area," but it can't evaluate whether that inconsistency actually matters to users in context. It can't understand the nuanced relationships between products in an ecosystem, make strategic decisions about when consistency should be flexible versus rigid, or navigate the organizational dynamics that often drive consistency problems.

Most importantly, AI can't replace the human judgment needed to understand user workflows that span multiple products and teams. The ecosystem-level consistency challenges that cause the most user friction require understanding business context, user goals, and cross-product relationships that go well beyond what current AI can effectively evaluate.

From a consistency perspective, AI works best as a supporting tool for dedicated content professionals, helping them identify potential issues more efficiently so they can focus their human judgment on the strategic consistency decisions that matter most to users.

The Compounding Effect of Inconsistency

Inconsistency problems compound as they move outward through the layers. A small terminology inconsistency within a single topic might cause momentary confusion. The same inconsistency across a documentation set creates ongoing navigation difficulties. But when that inconsistency extends across an entire product ecosystem, it can create fundamental trust and usability problems that affect user success and business outcomes.

Users notice these problems subconsciously because they're trying to accomplish real work under real constraints. They feel frustrated or confused even when they can't articulate exactly why the experience feels harder than it should. When they encounter inconsistent levels of detail between related products—one topic that explains everything thoroughly while another glosses over critical steps—they lose confidence in their ability to successfully complete complex workflows that span multiple products. When navigation patterns change dramatically between related documentation sets, their learned efficiency gets reset and they have to invest cognitive energy in relearning how to find information.

These might seem like minor friction points, but just as small, regular investments compound into a large sum over time, they accumulate into significant barriers to user success, particularly for complex workflows that span multiple products or services.

Building Consistency Systems

Effective consistency doesn't happen accidentally—it requires intentional systems and processes:

Establish clear patterns early: Define your structural, stylistic, and tonal patterns when you have fewer competing interests and less content to reorganize.

Document your standards: Style guides, voice and tone guidelines, and structural templates need to be accessible and actively maintained, not just created once and forgotten.

Create feedback loops: Regular content reviews, user feedback analysis, and cross-team collaboration should specifically look for consistency issues across all three layers.

Assign responsibility: Someone needs to be explicitly responsible for maintaining consistency at each layer—this can't be an everyone-and-no-one responsibility.

Plan for scale: Consider how your consistency approaches will work as your team, your product, and your content volume grow over time.

Measure what matters: Track consistency-related user feedback, support ticket themes, and user behavior patterns that might indicate consistency problems.

The goal isn't to make everything identical—it's to reduce the mental overhead users face when moving between your products. Users shouldn't have to relearn how to navigate documentation or guess whether they'll get the help they need. When they can rely on consistent patterns and detail levels, they spend less time reading your documentation and more time getting their work done.

Chapter 11: On Meaning

For content to have any quality, it has to have meaning for its intended user.

This seems obvious, almost trivially true. Of course documentation needs to be meaningful to users. But meaning in technical writing is more complex and fragile than most people realize. Content can be perfectly accurate, strategically complete, efficiently concise, easily discoverable, and rigorously consistent—and still fail completely if it doesn't connect to what users are actually trying to accomplish.

Meaning isn't just about having relevant information. It's about creating content that resonates with users' mental models, supports their workflows, and helps them make progress toward their goals. Without meaning, all the other characteristics of quality become irrelevant.

I've seen this failure of meaning countless times: API documentation that meticulously describes every parameter but doesn't explain when you'd use the API. Tutorials that walk through every step of a process but never clarify what problem the process solves. Reference guides that comprehensively catalog features but don't connect those features to user outcomes.

The cruel irony is that teams often create meaningless content while believing they're being user-focused. They conduct user research, gather requirements, and carefully document what users asked for. But they miss the deeper layer of meaning that connects information to purpose.

The Difference Between Information and Meaning

Information is what your product does. Meaning is why it matters to your users.

Consider these two approaches to documenting the same authentication feature:

Information-focused: "The authenticate() method accepts a username string and password string as parameters and returns a boolean value indicating success or failure."

Meaning-focused: "Before users can access protected resources in your application, you need to verify their identity. The authenticate() method takes their login credentials and confirms whether they should be granted access."

Both are accurate. The first is more technically precise. But the second creates meaning by connecting the technical capability to the user's broader goal of controlling access to their application.

The information-focused approach treats documentation as a catalog of capabilities. The meaning-focused approach treats documentation as a bridge between capabilities and accomplishments.

This distinction becomes critical as systems grow more complex. Users can memorize information about individual features, but they need meaning to understand how those features work together to solve their problems.
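The same contrast applies at the level of code comments and docstrings. As a sketch, using the hypothetical authenticate() method from above, a meaning-focused docstring leads with the user's goal before the mechanics:

    def authenticate(username: str, password: str) -> bool:
        """Verify a user's identity before granting access to protected resources.

        Returns True if the credentials are valid and the user should be
        granted access; False otherwise.
        """
        ...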

The Three Levels of Meaning

Meaning in technical documentation operates at three interconnected levels: task-level, workflow-level, and strategic-level. Understanding these levels helps explain why some documentation feels immediately useful while other documentation requires users to do significant translation work.

Task-Level Meaning

At the most granular level, meaning connects individual actions to immediate outcomes. When you document a specific API call, configuration setting, or user interface element, task-level meaning answers the question: "What does this accomplish?"

Poor task-level meaning sounds like this: "Set the retry_count parameter to control retries." This tells users what the parameter does but not why they'd want to control retries or how to decide what value to use.

Strong task-level meaning sounds like this: "Set retry_count to 3 to automatically recover from temporary network failures without overwhelming your servers with repeated requests." This connects the technical action to a meaningful outcome users care about.

Task-level meaning requires understanding not just what your product does, but what problems that functionality solves for users. It requires connecting features to outcomes that matter in users' contexts.
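In code samples, task-level meaning often lives in a single comment. A minimal sketch, assuming a hypothetical ApiClient class:

    class ApiClient:
        """A hypothetical client, included only to illustrate the point."""

        def __init__(self, retry_count: int = 0):
            self.retry_count = retry_count

    # retry_count=3 recovers from temporary network failures without
    # overwhelming the server with repeated requests.
    client = ApiClient(retry_count=3)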

Workflow-Level Meaning

The second level connects individual tasks to larger workflows that users are trying to complete. This is where many documentation sets struggle, because it requires understanding how users actually work, not just how your product works.

Workflow-level meaning answers questions like: "When would I use this?" and "What do I do next?" It acknowledges that users don't invoke features in isolation—they're following sequences of actions to accomplish larger goals.

I learned the importance of workflow-level meaning during my time documenting AWS architecture patterns. Individual services like EC2 and RDS were well-documented at the task level—users could learn how to launch instances or create databases. But users struggling to architect complete applications needed to understand how these services connected to support real-world workflows.

The breakthrough came when we started organizing content around workflow patterns: "Building a scalable web application," "Processing batch data reliably," "Implementing disaster recovery." Each pattern showed how multiple services worked together to solve a complete problem, not just how each service worked individually.

This workflow-level meaning transformed our documentation from a collection of service manuals into guidance for accomplishing business goals.

Strategic-Level Meaning

The highest level of meaning connects workflows to business outcomes and strategic objectives. This level answers the question: "Why does this matter to my organization?"

Strategic-level meaning is often overlooked in technical documentation because it seems "too high-level" or "too business-focused." But for decision-makers evaluating tools and approaches, this level of meaning is crucial.

When Stripe documents their payment processing capabilities, they don't just explain how to charge credit cards (task-level) or how to build a checkout flow (workflow-level). They connect these capabilities to business outcomes like reducing cart abandonment, expanding to international markets, and maintaining PCI compliance (strategic-level).

This strategic meaning helps users understand not just what they can build with Stripe, but why they should invest time and resources in building it.

The User Journey Connection

Meaningful content aligns with how users actually discover, evaluate, and use your product. This requires understanding user journeys not just within your documentation, but within their broader context of solving problems and accomplishing goals.

Most documentation fails at meaning because it's organized around product capabilities rather than user journeys. Teams create content that mirrors their internal organization—separate sections for each feature, organized by the team that built them—rather than content that matches how users approach problems.

I experienced this challenge firsthand while working on Angular documentation. The framework had dozens of features: components, services, directives, pipes, routing, HTTP clients, testing utilities, and more. The natural inclination was to document each feature thoroughly in its own section.

But users weren't trying to learn about Angular features in isolation. They were trying to build applications. Their journey started with problems like "I need to display dynamic data" or "I need to handle user input" or "I need to make API calls."

We discovered that meaningful documentation needed to start with these user problems and then explain how Angular's features solved them. Instead of a section called "HTTP Client" with comprehensive coverage of every method and option, we created content organized around user needs: "Fetching data from APIs," "Handling loading states," "Managing authentication tokens."

This shift from feature-focused to journey-focused organization dramatically improved the meaning our documentation provided to users.

The Context Problem

One of the biggest threats to meaningful content is what I call the context problem: teams create documentation that makes sense within their context but loses meaning when users encounter it in different contexts.

This happens because teams know too much about their own product. They understand the assumptions, background knowledge, and workflow patterns that make their content meaningful. Users approaching the same content without that context struggle to extract meaning from it.

Consider this common example: "Configure your webhook endpoint to handle payment notifications." To the team that wrote this, the meaning is clear—they understand what webhooks are, why payment notifications matter, and what "handling" them entails. To a user who's never worked with webhooks before, this instruction is meaningless without additional context.

The context problem becomes more severe as organizations scale. Different teams develop different contexts and assumptions. What seems obviously meaningful to the team building a feature may be incomprehensible to users (or even to other teams within the same company).

The solution isn't to provide exhaustive context for every piece of content—that would make documentation overwhelming and inefficient. Instead, it's to understand which contextual knowledge is essential for meaning and which is optional for your specific users.

Testing for Meaning

Unlike the other characteristics of quality, meaning can't be evaluated purely through analytical review. You can audit content for accuracy, completeness, or consistency, but meaning requires observing how real users interact with real content in real contexts.

The good news is that you don't need massive user research studies to test for meaning. Jakob Nielsen's research showed that testing with just 5 users can identify 85% of usability problems, and similar principles apply to content meaning. The most striking truth is that zero users give zero insights. As soon as you collect data from a single test user, your insights shoot up and you have already learned almost a third of all there is to know about whether your content creates meaning for users.

For testing content meaning specifically, you can get valuable insights by observing 5-8 users attempt to apply what they've learned from your documentation. The key questions are: Can they successfully use the information to accomplish their goals? Do they understand not just what to do, but why they're doing it? Can they adapt the guidance to their specific context, or can they only repeat the exact steps you provided?

AI as a Meaning Test

There's also a surprisingly effective technique using AI tools to test whether your content has clear meaning. Here's how it works:

  1. Write your documentation as you normally would
  2. Write a separate summary of what you think the main purpose and key takeaways of that documentation should be
  3. Ask an AI tool to summarize your documentation without showing it your intended summary
  4. Compare the two summaries, either manually or by asking the AI to compare them

If the AI's summary aligns with your intended purpose and takeaways, there's a good chance your content successfully conveys meaning. If the summaries diverge significantly, it often indicates that your content isn't clearly connecting information to purpose.

This technique works because AI tools are reasonably good at extracting apparent meaning from text, but they're not good at inferring meaning that isn't explicitly present. If an AI can identify the same key purposes and takeaways that you intended, it suggests that those meanings are clearly embedded in your content rather than just existing in your head.

The AI technique isn't a replacement for user testing, but it's a useful preliminary check that can help you identify meaning problems before you invest time in user research. It's particularly helpful for quickly testing multiple drafts or revisions to see which version more clearly conveys your intended meaning.
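If you're checking multiple drafts this way, it's worth scripting the comparison. Here's a minimal sketch; summarize() is a placeholder for whatever AI tool you use, not a real library call:

    # Placeholder: wire this up to your AI tool of choice.
    def summarize(text: str) -> str:
        raise NotImplementedError("call your AI tool here")

    def meaning_check(doc_text: str, intended_summary: str) -> None:
        ai_summary = summarize(doc_text)
        print("Intended summary:", intended_summary)
        print("AI's summary:", ai_summary)
        # Compare manually, or send both summaries back to the AI
        # and ask it to describe where they diverge.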

When Meaning Conflicts with Other Characteristics

Sometimes creating meaningful content requires trade-offs with other aspects of quality. Meaning might require more explanation than pure conciseness would suggest. It might require organizing content in ways that feel less complete from a feature-coverage perspective. It might require inconsistency in how deeply different topics are covered.

These trade-offs can be uncomfortable for teams used to optimizing for other characteristics. But meaning should usually win these conflicts, because meaningless content can't achieve its purpose regardless of how well it performs on other dimensions.

I learned this lesson during a project documenting complex data processing workflows. The most accurate and complete approach would have been to document each processing step in isolation, with comprehensive coverage of all options and configurations. But this approach would have made it nearly impossible for users to understand how the steps connected to solve their actual data problems.

Instead, we organized the content around common data processing scenarios: cleaning customer data, aggregating sales metrics, preparing data for machine learning. Each scenario was less comprehensive than a complete feature reference would have been, but far more meaningful to users trying to accomplish specific goals.

The result was documentation that sacrificed some theoretical completeness to gain practical meaning. Users could successfully apply what they learned because they understood not just how to use individual features, but why those features mattered in their context.

Building Meaning Systematically

Creating meaningful content requires intentional design and ongoing attention. It's not something that emerges naturally from accurate, complete information.

Start with User Goals: Before documenting features, understand what users are trying to accomplish. What problems are they solving? What outcomes do they need to achieve? How does your product fit into their broader workflows?

Connect Features to Outcomes: For every capability you document, explicitly connect it to user benefits. Don't just explain what a feature does—explain why users would want that outcome.

Provide Context Appropriately: Identify what background knowledge users need to extract meaning from your content. Provide essential context upfront, but don't overwhelm users with information they don't need for their specific goals.

Test with Real Users: Regularly validate that users can extract meaning from your content by observing them attempt to apply what they've learned. Look for gaps between what you think you've communicated and what users actually understand.

Maintain Connection to Purpose: As products evolve and expand, regularly review whether your content still connects clearly to user purposes. Feature additions and changes can gradually erode the meaning of existing content.

The Foundation of Quality

Meaning serves as the foundation for all other aspects of content quality. Accurate information that doesn't connect to user goals is meaningless precision. Complete coverage that doesn't help users accomplish anything is meaningless comprehensiveness. Concise writing that doesn't serve user purposes is meaningless efficiency.

But when content has strong meaning—when it clearly connects to what users are trying to accomplish—the other characteristics of quality become powerful amplifiers of that meaning. Accuracy ensures that the meaningful connections you've created are reliable. Completeness ensures that users can follow meaningful paths to completion. Conciseness ensures that meaning isn't buried under unnecessary information. Discoverability ensures that users can find meaningful content when they need it. Consistency ensures that meaning remains reliable across different contexts.

Without meaning, technical writing becomes merely technical information. With meaning, it becomes a tool that empowers users to solve problems and accomplish goals. And that transformation is what quality in technical writing is ultimately about.

Chapter 12: The Quality Trajectory

When I interviewed at Google, I was asked the following question:

“We have 6 projects that need to get done. You only have time to do 3 of them. What do you do?”

I answered without hesitation: “I do 3 of them and I make sure the people I work with understand this.”

The interviewer—a dev manager—replied: “But we need all 6 things done.”

“Are we hiring more staff—assuming there’s time to do so?” I asked.

“No.”

“Are we willing to move the deadline?”

“Can’t.”

“Well then,” I said. “We’re doing 3 things. Let’s figure out which 3 are the most important.”

“They’re all important.”

“I’m sure they are, but they’re not all equally important,” I continued. “Look, you’re a dev manager, right?”

“Yup!”

“I’m sure the list of feature requests and bug fixes exceeds what your team can handle in a given period of time. You have to triage too.”

“Good point,” my interviewer said. We were both enjoying the conversation. “But is there any way you can do more than 3?”

This is where the conversation got interesting. To me, at least.

“Sure,” I said, after taking a moment to think. “Let’s say we have 3 months until our deadline. Each project takes 1 month to complete. That includes one week for tech reviews and testing.

“If we agree that we don’t need tech reviews or testing, then we can get 4 projects done instead of 3. How does that sound?”

My interviewer thought for a moment. “Hm. I’m not sure I want to publish something that we haven’t reviewed.”

“It’s not my preferred way of doing things either,” I replied. “But sometimes it happens, and it’s sometimes the right call to make. But it does come at a cost, so we have to think about what the trade-offs are.”

We had to move on to other topics, but this exchange always stuck with me.

The Iron Triangle

Many of us know about the Iron Triangle for project management. If you’re not familiar with it, the triangle places the quality of work at its center, bounded on three sides by scope, schedule, and budget.

The triangle makes the case that quality is constrained by those three criteria. As you saw in my exchange with my interviewer (and I’m sure my interviewer knew where I was going with my questions), I probed two of the three constraints: schedule (“Can we push out the date?”) and budget (“Can we hire more people?”).

The Iron Triangle isn’t a perfect analogy for quality—more on that in a moment. But it is a good way of determining how different constraints impact the quality of a given documentation project.

A missing piece: Trajectory

One way the Iron Triangle isn’t perfect is that it assumes quality is a constant, fixed state. It’s not—it has a trajectory that changes over the lifetime of the documentation. And if you don’t pay attention to that trajectory—if you only focus on how scope, schedule, and budget affect a project in the immediate or near term—you risk enabling a downward trajectory of quality that becomes increasingly difficult to recover from.

On the flip side, it’s equally important to remember that you can always bend your quality trajectory upward. This idea is captured in the saying, “Don’t let perfect be the enemy of good.” No release will ever be perfect, and some parts of the product will always be closer to perfection than others. When it comes to documentation, I find that I need to remind myself that this is okay! Unevenness in content quality is very normal.

But that fact also reminds me that I should remain committed to equalizing the state of all the documentation—from API references to tutorials. For example, sometimes you need to prioritize improving the documentation for an existing feature over documenting a brand new feature. Sometimes you need to be laser-focused on the API and set that cool tutorial you’re working on aside. And sometimes, you need to take a step back to make sure the whole documentation experience is working as it should.

By keeping this focus, I can help make sure that the quality trajectory of our content is always trending up.

The documentation decay cycle

Here's a pattern I've seen play out repeatedly across multiple companies: A team launches a new product or feature with great documentation. They've invested time and energy into creating comprehensive, well-organized content. Users are happy. The documentation is genuinely helpful.

Then the team moves on to the next priority. The product continues to evolve—new features get added, APIs change, workflows get optimized. But the documentation updates become sporadic. Small inaccuracies creep in. Gaps in content start to appear but don’t get addressed. The information architecture that made sense at launch becomes strained as content gets tacked on without strategic planning.

For a while, nobody notices. The core documentation still mostly works. Users can usually figure things out. Support tickets increase gradually, not dramatically.

Then the complaints reach a critical mass. Users are frustrated. Support is overwhelmed. Leadership demands action.

I experienced this firsthand early in my time at Google, when complaints hit that critical mass and triggered a complete documentation overhaul across Google Cloud Platform, an effort we called a "code purple." In hospitals, a code purple is when the hospital stops accepting new patients to focus on critical cases—and that's exactly what we did. For months, the entire documentation team ceased any new work and focused exclusively on improving existing documentation. New templates were defined; new styles and patterns were implemented. It was a long, grueling effort, and we were all exhausted by the end of it.

I saw a similar pattern with the Angular documentation. When I joined that project, we found entire sections that had been around for years but were no longer relevant or even accurate. I lost count of the times I asked a question about a given Angular topic, only to get the response: “How long has THAT been there? That’s not even true anymore!” To address this, I implemented a rule: if you needed to update a topic, you had to review the entire topic, not just the part you changed. There were simply too many instances of old or outdated content to trust that any topic was current.

The good news: for Google Cloud Platform, some of the best practices defined during that code purple experience continue to benefit customers to this day. And the Angular team recently revamped their entire documentation set. But each is still an unfortunate example of what happens when an engineering organization de-emphasizes documentation quality for too long. And these efforts are no guarantee that the problem won’t repeat itself, as the demand for more documentation for more features continues to grow.

This boom-and-bust approach to documentation quality is expensive, exhausting, and ultimately inefficient. It’s like only taking your car in for maintenance when the engine breaks down. Sure, you can do it, but regular maintenance would have made everyone’s lives a lot easier. (Well, maybe the mechanic is okay with how things are, but that’s taking the analogy too far.)

Maintaining Upward Trajectory

Think about how a plane stays in the air. The engines don't fire once during takeoff and then shut off. They run continuously throughout the entire flight, providing constant thrust to keep the plane aloft. The moment the engines stop, the plane begins to descend.

Documentation works the same way. You can't achieve quality once and then stop paying attention. Quality requires continuous energy and focus. Every product update, every new feature, every API change affects your documentation's trajectory. If you're not actively maintaining and improving your content, it's decaying—even if you can't see it happening yet.

Maintaining an upward quality trajectory means always thinking about documentation. Is it accurate as the product evolves? Is it complete for the workflows users actually need? Is it still relevant, or has it been superseded by new approaches? Does it maintain consistency with newer content? Does it still connect to what users care about accomplishing?

This is where the six characteristics framework becomes essential for sustainable quality. Each characteristic degrades over time if not maintained:

  • Accuracy drifts as products evolve. APIs change, features get deprecated, recommended practices shift. Documentation that was perfectly accurate at launch can become misleading or wrong months later.
  • Completeness develops gaps as new features get added. Each product release potentially creates new user workflows that need documentation. The content set that felt complete last quarter may have significant holes today.
  • Conciseness suffers as content accumulates. Teams add new information without removing obsolete content, leading to bloated topics that bury important information under outdated material.
  • Discoverability breaks down as the information architecture strains under content growth. The navigation structure that worked for 50 topics becomes unwieldy with 500 topics.
  • Consistency fragments as different people contribute content over time, especially if standards aren't actively maintained and reinforced.
  • Meaning erodes as the gap between what documentation describes and what users actually need grows wider. Content written for last year's user journeys may not serve this year's use cases.

The solution isn't occasional heroic efforts—it's continuous, sustainable attention to documentation quality. This requires:

  • Partnership with engineering: Documentation cannot be an afterthought that happens after code is complete. Writers need to be involved in planning conversations, understand what's changing and why, and have time allocated for documentation updates in every release cycle.
  • Regular content audits: Systematically review existing content to identify accuracy problems, completeness gaps, and relevance issues before they compound into crisis-level problems. The rubric from Chapter 13 can help teams assess whether content is ready for production—and whether existing content is still production-ready.
  • Deprecation policies: Just as engineering teams deprecate old code, documentation teams need clear policies for retiring outdated content. Keeping obsolete information around creates confusion and erodes trust.
  • Quality gates: New content should meet the same quality standards as initial documentation. It's tempting to accept lower quality for "just one more feature" when deadlines loom, but each compromise accelerates the decay cycle.
  • Investment in foundations: Sometimes you need to prioritize improving the documentation for existing features over documenting brand new features. Sometimes you need to refactor your information architecture rather than adding more content to a strained structure. These foundational investments maintain trajectory even though they don't produce visible new content.

The goal is to make documentation quality a continuous practice rather than a periodic project. Small, consistent investments in maintenance prevent the decay that leads to crisis-level overhauls.

Summing up

"That interview conversation always reminds me that quality is a trajectory, not a fixed state. It reminds me that I need to balance my drive to improve the documentation experience (something that I will always consider to be of high priority) with my understanding that this experience may not always align with my team's objectives. Simultaneously, it is part of my role to ensure that the quality trajectory of our content is always trending up—through continuous attention to the six characteristics, through partnership with engineering, through regular maintenance rather than periodic heroics—making our users' lives better and their efforts more successful.

Chapter 13: The Weighted Rubric

One of my pet peeves in technical writing is the question of when you should engage a technical writer, or even start thinking about technical documentation at all. Often, when I bring this matter up, folks rightfully assume it’s because I’m annoyed that, once again, documentation has been left to the very last minute. I admit that’s annoying—but it’s a situation that can befall a lot of engineering-adjacent disciplines. What makes this a pet peeve for me is that we—technical writers—haven’t done a great job of explaining to teams when they should think about their project’s documentation.

At a prior company, this situation was causing no end of stress for everyone—writers, engineers, product managers, and so on. To help, I built a rubric that teams could use to figure out whether they were ready for production content. I thought I’d share a revised version of that rubric here, along with some guidance about how to use it.

Dave’s documentation rubric

The rubric is pretty basic. There are four categories. In each category, you score your project—be it a release, a feature, or whatever—on a scale of 1 to 4, for a total score between 4 and 16.

Use cases
  1. Use cases are unknown or unclear. Timeline to clarify use cases is to be determined.
  2. Some use cases are known, but ambiguity remains. It may be weeks or months before use cases are fully established.
  3. Most use cases are known and little ambiguity remains. Timeline to get additional clarity is measured in days and weeks.
  4. Use cases are clear and unambiguous.

Deadlines
  1. Deadline is unknown, as is when the deadline will be established.
  2. Deadline is known but highly tentative.
  3. Deadline is established, but there may still be factors that change the date.
  4. Deadline is firmly established, and there are few, if any, risks that the date will change.

Code stability
  1. Code is under heavy development and is highly unstable.
  2. Code is under significant development. A few features might be stable, but there’s still a lot more work to be done.
  3. Code is mostly stable. Some additional work is still in flight, but you can at least run and test the code from a user’s perspective.
  4. Code is stable and testable. Only remaining work relates to bug fixes, performance enhancements, and other changes that don’t affect overall functionality.

Subject matter expert availability
  1. No subject matter expert is available (usually because they’re busy writing the code).
  2. Subject matter expert may be available, but most of their time is still spent on building the feature or figuring out user needs.
  3. Subject matter expert is available to answer questions and provide clear guidance on intended use of the feature or product.
  4. Subject matter expert is available to answer questions, provide guidance, collaborate on code samples, and provide technical reviews of content.

Interpreting the rubric

To me, this rubric helps teams quickly figure out if they’re ready for production content. When I used a previous version of this rubric, I provided this guidance for scoring:

  • 16: The product is built and ready to ship. Chances are, you’re shipping within days. Depending on the size of the project, you might have waited too long to think about documentation.
  • 12-15: Most of the product is ready for documentation. The combination of use case clarity, code stability, and subject matter expertise is at a point where creating production content is efficient.
  • 8-11: There’s probably still too much ambiguity to think heavily about production content. However, there are other options. See Content prep below.
  • Below 8: A great time to think about what the user journeys should be, but too soon for production content. Again, see Content prep below.
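If you want to bake this check into a planning tool, the scoring logic is easy to automate. Here’s a minimal sketch in Python; the thresholds come straight from the table above, while the function and category names are my own shorthand, not part of the rubric itself.

    # A minimal sketch of the rubric scoring logic described above.
    # The thresholds mirror the table; function and key names are
    # illustrative shorthand.
    RUBRIC_CATEGORIES = (
        "use_cases",
        "deadlines",
        "code_stability",
        "sme_availability",
    )

    def readiness(scores: dict[str, int]) -> str:
        """Total the four category scores (1-4 each) and map to guidance."""
        if set(scores) != set(RUBRIC_CATEGORIES):
            raise ValueError(f"Expected one score per category: {RUBRIC_CATEGORIES}")
        if any(not 1 <= score <= 4 for score in scores.values()):
            raise ValueError("Each category is scored on a scale of 1 to 4")

        total = sum(scores.values())
        if total == 16:
            return "Built and ready to ship; docs may already be late."
        if total >= 12:
            return "Ready for production content."
        if total >= 8:
            return "Too ambiguous for production content; see Content prep."
        return "Too soon for production content; see Content prep."

    # Example: clear use cases, but unstable code and busy experts.
    print(readiness({
        "use_cases": 4,
        "deadlines": 3,
        "code_stability": 2,
        "sme_availability": 2,
    }))  # Scores total 11: "Too ambiguous for production content..."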

Content prep

One benefit of this rubric is that you can quickly determine whether a given project is ready for production content. But what if your project isn’t ready? What if it scores an 8, for example?

In those cases, it’s been my experience that the biggest issues revolve around code stability and subject matter expert availability. The codebase is still in flux—engineers are still figuring stuff out. And those engineers usually end up being the subject matter experts. It’s tremendously difficult—maybe impossible!—to explain something while you’re creating it.

However, there’s still an opportunity to think about other content work:

  • It might be a good time to think about the information architecture. Where will the content live in our documentation set?
  • It may be a good time to clarify who the intended audiences are for the product. The content author can then spend some time learning more about the needs of those audiences and plan how to create content in the most accessible way possible.
  • There may be concepts that we’ll need to explain to users to help ensure their success, and it may be possible to write those conceptual topics ahead of time.

There’s almost always content work that can help get the team ready for production content, or help get production content ready faster.

Just the beginning

The rubric I’ve shared here is based on my own experience. It’s proven helpful to teams, giving them a bit more clarity about when to start thinking about documentation and when to engage a technical writer. That said, it reflects how I look at projects, and I’m sure there are many other viewpoints that are just as valid. But I think having some way of measuring and tracking where your project stands can help ensure that we efficiently create documentation that helps users succeed with our products.

Chapter 14: AI and the Future of Technical Writing

I have a small confession: I'm excited about AI's role in technical writing. Not because I think it will replace technical writers—quite the opposite. I think AI will free us to focus on the content that truly matters, the content that makes users think "that's really cool."

For decades, technical writers have been buried under the sheer volume of necessary but routine documentation. Every API needs reference docs. Every feature needs a how-to guide. Every product needs a quickstart. This essential content consumes enormous amounts of time and energy, leaving little capacity for the kind of writing that genuinely transforms user experiences.

AI changes this equation fundamentally. For the first time, we have tools that can handle much of the routine content creation, freeing human writers to focus on what Steven Brust calls "something really cool"—the connections, insights, and strategic guidance that only humans can provide.

The Liberation from Routine

The average documentation set contains thousands of pages of content that, while necessary, follows predictable patterns. API reference pages describe endpoints, parameters, and responses using consistent formats. How-to guides walk through step-by-step procedures. Troubleshooting docs catalog common problems and solutions. Getting-started tutorials introduce basic concepts and workflows.

This content is essential—users absolutely need it. But creating it manually is often a mechanical exercise that doesn't require the strategic thinking, user empathy, and creative problem-solving that make technical writers valuable. We spend our days documenting individual trees instead of helping users navigate the forest.

AI shows significant promise for this routine content creation. Large language models can generate draft API documentation from code comments and specifications. They can create initial procedural how-to guides from product requirements and feature descriptions. Early experiments show they can even help produce troubleshooting guides from support ticket patterns and known issue databases.
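As a concrete, if simplified, illustration of that idea: much of the raw material already lives in the code. The sketch below uses Python's standard inspect module to pull a function's signature and docstring into a drafting prompt. The example resize function and the prompt wording are invented for illustration, and the actual model call is deliberately left out, since LLM clients and their APIs vary.

    # A sketch of assembling an API-reference drafting prompt from
    # the code itself. The example function and prompt wording are
    # illustrative; the LLM call is intentionally omitted.
    import inspect
    import textwrap

    def build_prompt(func) -> str:
        """Build a drafting prompt from a function's signature and docstring."""
        signature = f"{func.__name__}{inspect.signature(func)}"
        docstring = inspect.getdoc(func) or "(no docstring)"
        return textwrap.dedent(f"""\
            Write a draft API reference page for this function, following
            our house style: one-sentence summary, parameter descriptions,
            return value, and a short runnable example.

            Signature: {signature}
            Docstring: {docstring}""")

    # Example target: any function in your codebase.
    def resize(image: bytes, width: int, height: int) -> bytes:
        """Resize an image to the given dimensions, preserving format."""
        ...

    # Hand the result to whatever LLM client your team uses, then route
    # the draft into your normal review process.
    print(build_prompt(resize))

The point of grounding the prompt in the code itself is that the draft inherits its facts from the source of truth, leaving the human reviewer to edit rather than transcribe.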

The potential for AI to maintain this content as products evolve is particularly exciting. AI could update documentation automatically when API endpoints change, regenerate reference pages when new parameters are added, and flag related content for review when features are deprecated. While these capabilities are still emerging, the foundational technology exists.
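Here's one way such a maintenance check might look, assuming your API is described by an OpenAPI spec: diff two versions of the spec and flag any docs pages that mention endpoints that changed. This is a deliberately naive sketch; the file paths are made up, and a real pipeline would key on richer structure than a substring match.

    # A naive sketch: flag docs pages that mention API endpoints that
    # changed between two versions of an OpenAPI spec. All paths here
    # are illustrative.
    import json
    from pathlib import Path

    def changed_endpoints(old_spec: dict, new_spec: dict) -> set[str]:
        """Return endpoints added, removed, or modified between two specs."""
        old_paths = old_spec.get("paths", {})
        new_paths = new_spec.get("paths", {})
        changed = set(old_paths) ^ set(new_paths)  # added or removed
        for endpoint in set(old_paths) & set(new_paths):
            if old_paths[endpoint] != new_paths[endpoint]:  # definition changed
                changed.add(endpoint)
        return changed

    old = json.loads(Path("specs/v1/openapi.json").read_text())
    new = json.loads(Path("specs/v2/openapi.json").read_text())
    stale = changed_endpoints(old, new)

    for page in Path("docs").rglob("*.md"):
        mentions = [ep for ep in stale if ep in page.read_text()]
        if mentions:
            print(f"Review {page}: mentions changed endpoints {mentions}")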

This isn't just about efficiency—though the time savings are substantial. It's about focus. When AI handles the routine documentation, technical writers can invest their expertise where it creates the most value for users.

The Content We Couldn't Write Before

Here's what gets me excited about this shift: we can finally tackle the content we never had time to create. The sophisticated, strategic content that helps users succeed with complex systems and workflows.

Consider the connections between services. In a typical cloud platform like AWS, users don't just use S3 for storage or Lambda for computing—they build architectures where these services work together to solve business problems. But documentation traditionally covers services in isolation because it's too resource-intensive to document every possible integration pattern.

AI could potentially generate the routine documentation for individual services, freeing technical writers to create content about architectural patterns, integration strategies, and design principles. Instead of documenting what each service does, we could focus on why you'd combine them and how they work together in real-world scenarios.

Or consider the user journey content that falls between traditional documentation categories. Users don't progress linearly from "beginner" to "advanced"—they develop expertise in specific areas while remaining novices in others. A database expert might be a complete beginner with machine learning. A frontend developer might need help with infrastructure concepts.

Traditional documentation struggles with these complex user journeys because they don't fit neatly into product-focused organization. But as AI becomes more capable of handling routine feature documentation, technical writers could create content organized around user progression patterns, workflow sequences, and cross-functional challenges.

The "Something Really Cool" Principle

Steven Brust, one of my favorite fantasy authors, keeps a sign on his desk: "And now I'm going to tell you something really cool." He tries to live up to that statement in everything he writes.

This principle transforms how I think about technical writing. Instead of asking "What do users need to know about this feature?" I ask "What's genuinely exciting about what users can accomplish with this feature?"

AI could enable this shift in mindset by handling the "what users need to know" content. When the routine documentation exists, I can focus on the "here's something really cool you can do" content that creates genuine enthusiasm and drives adoption.

Consider a tutorial about deploying machine learning models. The traditional approach documents the deployment process step-by-step—configure the environment, upload the model, set up the inference endpoint, test the deployment. This content is necessary but not particularly inspiring.

But what's really cool is the broader concept: "You've spent weeks training a model that can predict customer behavior, detect fraud, or recommend products. Now you're going to put that intelligence directly into your users' hands through your application." That transformation from trained model to user-facing capability—that's exciting. That's worth writing about with enthusiasm.

Beyond Individual Products

One of the most significant opportunities AI could create is documentation that spans product boundaries. Users don't think in terms of individual products—they think in terms of workflows and outcomes that often require multiple tools working together.

Traditional documentation is organized around individual products because that's how companies are structured and how development teams are divided. But users are trying to accomplish goals that cross these boundaries: "I need to collect user data, process it for insights, and present those insights in my application."

As AI becomes more capable of maintaining individual product documentation, human writers could focus on the cross-product workflows that deliver complete solutions. Instead of explaining how each service works in isolation, we could show how they combine to solve real problems.

This shift is already happening in forward-thinking companies. AWS has architectural guidance that shows how multiple services work together for common use cases. Stripe has integration guides that assume you're building complete applications, not just processing payments in isolation. Google Cloud has solution architectures that demonstrate end-to-end workflows.

But these examples represent a tiny fraction of the cross-product content that users actually need. As AI becomes more capable of handling routine documentation, technical writers could invest in the strategic content that helps users architect complete solutions.

The Quality Amplification Effect

Here's where AI becomes particularly promising when combined with the quality framework we've been discussing throughout this book. AI doesn't just create more content—it has the potential to help ensure that content meets our quality standards systematically.

Accuracy: AI can verify that code examples compile and run correctly. It can check that API documentation matches actual service behavior. It shows promise for flagging inconsistencies between different pieces of content. (A sketch of this kind of automated sample checking follows this list.)

Completeness: AI could potentially identify gaps in documentation coverage by analyzing user workflows and support ticket patterns. It might suggest missing content based on product roadmaps and feature releases.

Conciseness: AI tools can help identify verbose explanations and suggest more efficient alternatives. They also show promise for maintaining a consistent voice and tone across large documentation sets.

Discoverability: AI could generate appropriate metadata, tags, and cross-references to improve content findability. It might suggest logical next steps and related content.

Consistency: AI shows particular promise for maintaining consistent terminology, formatting, and structural patterns across thousands of pages of content.

Meaning: This is where human judgment remains essential, but AI can help test whether content successfully conveys intended meaning through summarization techniques and other analytical approaches.
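To make the accuracy point concrete, here's the sample-checking sketch promised above: extract fenced Python blocks from a Markdown page and confirm that they at least compile. The path and fence convention are assumptions; real doc-testing setups go further and actually execute the samples, often as part of CI.

    # A minimal sketch of automated code-sample checking: pull fenced
    # Python blocks out of a Markdown page and verify they compile.
    # The docs path is illustrative; real pipelines also run samples.
    import re
    from pathlib import Path

    FENCE = re.compile(r"```python\n(.*?)```", re.DOTALL)

    def check_samples(page: Path) -> list[str]:
        """Return compile errors found in the page's Python samples."""
        errors = []
        for i, block in enumerate(FENCE.findall(page.read_text()), start=1):
            try:
                compile(block, f"{page}:sample-{i}", "exec")
            except SyntaxError as exc:
                errors.append(f"{page} sample {i}, line {exc.lineno}: {exc.msg}")
        return errors

    for error in check_samples(Path("docs/tutorial.md")):
        print(error)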

The result isn't just more content—it's more consistent, maintainable, and user-focused content than we could create manually.

The Strategic Role Evolution

This shift positions technical writers as content strategists and user advocates rather than content generators. Instead of asking "How do I document this feature?" we ask "How does this feature fit into user workflows?" and "What strategic guidance do users need to succeed with this capability?"

This evolution requires developing new skills:

User journey mapping: Understanding how users progress through complex workflows that span multiple products and teams.

Content ecosystem thinking: Designing information architectures that support user goals rather than mirroring internal product organization.

Strategic prioritization: Identifying which content investments will have the greatest impact on user success and business outcomes.

Cross-functional collaboration: Working with product managers, engineers, and designers to ensure that user needs drive content strategy rather than internal convenience.

AI tool proficiency: Understanding how to leverage AI effectively while maintaining editorial oversight and strategic direction.

But these aren't entirely new skills—they're extensions of what the best technical writers already do. We're already user advocates who think strategically about information architecture and content impact. AI just gives us more capacity to apply these skills where they matter most.

The Implementation Reality

The transition to AI-augmented technical writing isn't automatic or simple. Organizations need to develop processes for AI content generation, review, and maintenance. Teams need to learn which content types work well with AI assistance and which require human creativity and judgment.

There are also legitimate concerns about AI-generated content quality, particularly around accuracy and meaning. AI can produce content that looks professionally written but contains subtle errors or fails to address user needs effectively.

The solution isn't to avoid AI tools but to implement them thoughtfully within robust editorial processes. AI-generated content should go through the same quality review processes as human-generated content. The six characteristics framework we've discussed throughout this book provides a systematic approach for evaluating AI content just as it does for human content.

Most importantly, AI should augment human judgment, not replace it. Technical writers remain responsible for content strategy, user advocacy, and ensuring that documentation serves real user needs effectively.

Looking Forward

I believe we're at the beginning of a golden age for technical writing. For the first time, we have tools that can handle routine content creation at scale, freeing human writers to focus on strategic, creative, and genuinely valuable content.

This doesn't mean fewer technical writing jobs—it means more impactful technical writing jobs. Instead of being buried under routine documentation tasks, technical writers can focus on the user experience challenges, strategic content initiatives, and cross-product guidance that drive real business value.

The writers who thrive in this environment will be those who embrace AI as a powerful tool while doubling down on the uniquely human skills that make technical writing valuable: empathy for user needs, strategic thinking about content impact, and the ability to find and convey what's genuinely exciting about complex technical capabilities.

As Steven Brust reminds us, our job is to tell users something really cool. AI just gives us more time and capacity to figure out what that cool thing is and share it effectively.

The future of technical writing isn't about competing with AI—it's about leveraging AI to do what humans do best: understand user needs, think strategically about information architecture, and create content that transforms how people accomplish their goals.

And that, I think, is really cool.

Chapter 15: What I Do Now

I'm still asked what I do for a living. But lately, I think I've had a better answer—especially as my role has evolved from technical writer to content strategist.

"I help tell stories that matter to users. And I help my colleagues in documentation and training find ways to make sure we tell our customers what they want and need to know, and not just what we want to tell them."

And when people want to know more—and increasingly, they do—I have a real framework to share with them.

The Framework That Changed Everything

My quest for quality started with my failure to get promoted at Google, when I was told my writing quality wasn't good enough. That feedback devastated me, but it also sparked the most important question of my career: How do we actually define content quality?

The answer, as it turns out, isn't a single definition—it's a systematic approach to understanding quality through six interconnected characteristics:

Accuracy that's appropriate for your product's maturity and your users' needs, not just technically correct in the abstract.

Completeness that covers what users need to succeed in their workflows, not everything that exists in your product.

Conciseness that respects users' time and cognitive load while maintaining the warmth and context they need.

Discoverability that works with how users actually find and navigate content, not how we wish they would.

Consistency that reduces mental overhead across topics, documentation sets, and entire product ecosystems.

Meaning that connects information to purpose, helping users understand not just what to do but why it matters.

These characteristics don't exist in isolation—they work together to create content that truly serves users. When content has strong meaning, accuracy becomes more valuable because it supports something that matters. When content is discoverable, consistency becomes more important because users will encounter multiple pieces in unpredictable sequences. When content is complete and concise, it creates the space for meaning to emerge.

From Individual Craft to Systematic Practice

What excites me most about this framework isn't that it helps individual writers create better content—though it does. It's that it transforms technical writing from individual craft to systematic practice.

Teams can use these characteristics to evaluate content objectively rather than relying on subjective preferences. Product managers can specify what kind of quality they need rather than asking vaguely for "better docs." Organizations can invest in content improvements that align with business outcomes rather than hoping that more writing automatically means better user experiences.

Most importantly, these characteristics scale. They work whether you're writing a single API reference or architecting content experiences across dozens of products and services. They apply whether you're a solo technical writer at a startup or part of a content organization at a company with millions of users.

The Questions That Drive Quality

Throughout this book, I've shared stories and frameworks, but at its core, quality technical writing comes down to asking better questions:

Instead of "Is this information correct?" ask "Is this accurate for my users in their specific context?"

Instead of "Have I documented everything?" ask "Have I provided everything users need to succeed in their workflows?"

Instead of "How can I make this shorter?" ask "How can I respect my users' time while giving them what they need?"

Instead of "Where should this content live?" ask "How will users actually discover and navigate this information?"

Instead of "Does this match our style guide?" ask "Will users have a consistent experience as they move through our content ecosystem?"

Instead of "What does this feature do?" ask "Why should users care about this capability?"

These questions shift the focus from internal concerns to user outcomes. They move us from documenting products to serving people. They transform technical writing from a necessary evil into a competitive advantage.

The AI Amplification

As I was finishing this book, I kept thinking about how AI changes everything I've written here. Does a systematic approach to content quality matter when machines can generate documentation at unprecedented scale and speed?

The answer is: more than ever.

AI amplifies whatever approach you bring to content creation. If you don't have clear standards for quality, AI will produce more content that fails to help users succeed. But if you have systematic ways to evaluate and improve content, AI becomes an incredibly powerful tool for achieving quality at scale.

The six characteristics framework works just as well for evaluating AI-generated content as human-generated content. AI can help with accuracy by checking code examples and flagging inconsistencies. It can support completeness by identifying content gaps and suggesting coverage improvements. It can enhance discoverability through better metadata and cross-referencing.

But AI can't replace human judgment about meaning, user workflows, and strategic content priorities. If anything, as AI handles more routine documentation tasks, human technical writers become more valuable for the strategic, creative, and empathetic work that creates genuinely useful content experiences.

Where to Start

If you're feeling overwhelmed by everything we've covered, start small. Pick one piece of content you've written recently—a tutorial, a how-to guide, an API reference—and evaluate it against the six characteristics:

  • Is it accurate for your intended users in their specific context?
  • Does it provide everything those users need to succeed?
  • Does it respect their time while giving them sufficient context?
  • Can users find it and navigate from it to related information?
  • Is it consistent with other content they might encounter?
  • Does it connect to something users actually care about accomplishing?

Don't try to optimize across all characteristics simultaneously. Pick the one or two that seem most problematic and focus your improvement efforts there. Quality is an iterative process, not a one-time achievement.

If you're working with a team, start conversations about these characteristics. What does accuracy mean for your product and users? How do you currently evaluate completeness? What consistency standards matter most for your content ecosystem? These discussions will reveal assumptions and priorities that can guide your content strategy.

If you're leading content efforts across an organization, consider how these characteristics could inform your investment decisions. Which characteristic improvements would have the greatest impact on user success and business outcomes? How could you measure progress against these quality dimensions over time?

The Real Answer

Back to that conference conversation. The product manager I was talking with wanted to know more about measuring content quality, so I walked her through the six characteristics. By the end of our conversation, she was thinking about how to apply them to her own team's documentation challenges.

"This is exactly what we've been missing," she said. "We keep asking our writers for 'better docs' but we've never been able to explain what that means."

That's when I realized I finally had the real answer to that original question from my parents' dinner party. I'm not just a technical writer who creates content. I'm someone who helps teams systematically understand and achieve content quality. I help organizations move beyond hoping their documentation will be useful to ensuring it actually serves users effectively.

I help teams build documentation that users are grateful to find rather than frustrated to need.

And that transformation—from necessary evil to competitive advantage, from afterthought to strategic asset, from something users tolerate to something they value—that's what technical writing can be when we stop accepting "good enough" and start demanding quality.

The framework exists. The tools are available. The only question is whether we'll use them to create content that truly serves the people who depend on our work.

I think we should. I think we must. I think we can.

And, most of all, I think we should care.

Because when care leads, quality follows.