Tag Archives: extreme programming

How to be an Agile Technical Writer with a cool acronym like XTW

One reason I like the show Dirty Jobs so much is that Mike Rowe, the host, is so respectful and honest about the work that people do each day. I recently discovered that Sarah Maddox, a technical writer at Atlassian (makers of the Confluence wiki engine), has written two great posts about being an Agile technical writer, or an eXtreme Technical Writer (XTW). Her posts on Agile technical writing offer a similar kind of window into the work that technical writers are doing around the world, plus some down-to-earth how-tos that make sense to apply in a modern technical writing career. If you’re thinking about technical writing as a career, check out these two posts in your research, because given the direction this career path is heading, I believe agility is one of the best skills you can bring to it.

  • Plus don’t miss the great photo art mashup in the agile technical writer part II, another excellent post that really describes what it’s like to write in the Agile development environment.

Here are some highlights from each that I could identify with:

  • Responding to IMs from all over. Carrying on multiple conversations intelligently is a gift.
  • Concerns about information overload – it’s daunting, but do-able. (Okay, funny side note – My typo for overload was “overlord.” Yipes. That can happen too.) My advice is to ride the serendipitous river of information. Sounds like hippy advice but somehow it works.
  • A wonderful look at how greatly the daily life of a technical writer has changed over the years. I listened to Linda Oestriech’s podcast at Tech Writer Voices about the direction the STC is heading, and one great quote from her was, “We’re not the technical writer from the 70s.” I’d say we’re not even the technical writer from the 90s.
  • Love the Swaparoo idea – similar to pair programming, but call it a swaparoo when you want to trade tasks with another writer to get cross-product knowledge.
  • Respond to customers through the varied means of communication offered to you in this awesome world of documentation. And if it doesn’t have to do with the doc, don’t meddle; pass it along to the support team.

Thanks Sarah, for sharing such a great “day in the life” slice for technical writers.

Authoring and delivery tools for the Agile Technical Writer?

I’ve had a question via email recently, asking something like, “What is the ideal toolkit for writing in an Agile environment?” Or, “What would you choose if you had to write in an Agile environment in order to be most effective as a technical writer?”

It’s tempting to answer that question with a single, straightforward tool recommendation, but of course the answer is not that easy. The product, audience, and company that you’re writing for all dictate the documentation deliverable with far more weight than the “manufacturing” process used to build the product. Sarah’s posts don’t directly mention their toolkit, but her “eat your own dog food” bullet point hints that the doc is delivered via their wiki engine. (Sarah, do correct me if that’s an inaccurate leap in logic.)

But, if the product you’re documenting isn’t itself a wiki, you’re going to need to evaluate tools. I borrow directly from Don Day’s editor evaluation heuristic for a methodology for evaluating a writing and publishing toolkit to fit in an Agile environment. Evaluate a tool (no matter what you’re trying to deliver or how you’re authoring it) using “cost/benefit judgments on the features that mean most to your intended scenarios, business rules that need to be supported, and the willingness of your team to learn some new ways of doing things.” Well stated, Don, and whether you’re trying to find the right DITA toolkit or the right Agile toolkit, scenarios and business processes are quite useful. Anyone have great authoring or publishing scenario or business process suggestions for the XTW?

What makes a doc plan Agile?

Does working in an Agile development environment change the documentation plan, or user information plan?

For many writers, the purpose of a doc plan is to tell stakeholders what you (or your team) intend to deliver for docs for a particular product release, and when you will meet key milestones.

Here’s an outline of a typical user information plan that includes signoffs from interested parties, with the goal being that you work with your team to agree to what information deliverables are necessary for a software release to be complete.

  1. List of deliverables
  2. Risks and dependencies
  3. New features
  4. Documentation milestones and deadlines
  5. Techpubs expectations and assumptions for meeting the deadlines

That outline was from an actual doc plan from about three years ago. It was feature-dependent and waterfall-driven.

In an Agile environment, the doc plan might be more minimalist, listing only:

  1. key players
  2. release theme and top features in the release
  3. dates

Then the real work of planning the details of what the doc needs to address happens during iteration planning. That detailed planning may never be written down in a doc plan; the time is spent writing the end-user doc instead.

I suppose the most Agile doc plan would be a simple verbal agreement to what will be in the doc each iteration.

If there’s a sliding scale, what is the least Agile doc plan? What is the most Agile doc plan?

Making the documentation cruft calculation more user-friendly

Our tech pubs group enjoyed Scott W. Ambler’s “Calculating Documentation Cruft” article in Dr. Dobb’s Journal related to Agile methods and documentation. We had some fun in an email thread and I’d like to capture some of our discussion here.

Scott cleverly turns CRUFT (a geek term that either combines crust with fluff or refers to a lab where useless objects piled up in the hallways) into a measuring stick for the effectiveness of documentation:

  • Correct = The percentage of the document that is currently “correct”.
  • Read = The chance that the document will be read by the intended audience.
  • Understood = The percentage of the document that is actually understood by the intended audience.
  • Followed = The chance that the material contained in the document will be followed.
  • Trusted = The chance that the document will be trusted.

Typically, I focus on Agile and end-user documentation, but Scott dwells on the type of documentation that you use while developing the software. Given that the Agile manifesto values working software over comprehensive documentation, the fact that he’s calculating cruft in a document at all is refreshing and unexpected. However, my co-worker Shannon Greywalker and I both thought his algorithm could use some refinement. Shannon, being a math whiz, re-mixed it and gave me permission to post the idea here.

Shannon says, “His formula is essentially 1 – (C * R * U * F * T) = a flat percentage ‘cruft’ rating, wherein higher values are worse.

Let’s assume a perfect score for all 5 values (100% or 1.0 for each). The document is 100% correct; has a 100% chance of being read; and will be 100% understood, followed, and trusted.

1 – ( 1 * 1 * 1 * 1 * 1) = 0.0 = 0% CRUFT. An ultimate low score and so far so good.

Now let’s assume a 99% score for all 5 values:

1 – (.99 * .99 * .99 * .99 * .99) = 0.049 = 4.9% CRUFT. Our score has jumped almost 5% from 0. Still looks okay, mostly…

But now let’s assume scores of 40%, and then 30%, for all 5 values:

1 – (.4 * .4 * .4 * .4 * .4) = 0.990 = 99.0% CRUFT.

1 – (.3 * .3 * .3 * .3 * .3) = 0.998 = 99.8% CRUFT.

The curve produced by his formula is sharply nonlinear. Such curves are hard for most of us to use in everyday life, simply because we can’t natively compare one point on the curve to another.

The classic example of a formula that is not human-intuitive, and therefore not all that useful to laypeople, is the Richter scale for earthquakes. The average layperson has no idea what those numbers really mean, because 6.0 doesn’t sound that much worse than 5.0 – but it is much worse, because the Richter scale is logarithmic.

A more straightforward, intuitive, and linear results curve would be expressed by the formula:

1 – ((C + R + U + F + T ) / 5)

Examples:

With a 60% score in all 5 factors: 1 – ((.6 + .6 + .6 + .6 + .6) / 5) = 0.4 = 40% CRUFT
With an 80% score in all 5 factors: 1 – ((.8 + .8 + .8 + .8 + .8) / 5) = 0.2 = 20% CRUFT
With the scores from Scott’s example: 1 – ((.65 + .9 + .8 + 1.0 + .9) / 5) = 0.15 = 15% CRUFT.”
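To make Shannon’s comparison concrete, here is a minimal Python sketch of both formulas; the function names are my own, for illustration:

```python
def cruft_multiplicative(c, r, u, f, t):
    """Scott's original formula: 1 - (C * R * U * F * T), higher is worse."""
    return 1 - (c * r * u * f * t)

def cruft_linear(c, r, u, f, t):
    """Shannon's re-mix: 1 minus the average of the five factor scores."""
    return 1 - (c + r + u + f + t) / 5

# 99% on every factor still jumps to ~4.9% cruft under the original formula:
print(round(cruft_multiplicative(.99, .99, .99, .99, .99), 3))  # 0.049
# ...while the linear version reports the intuitive 1%:
print(round(cruft_linear(.99, .99, .99, .99, .99), 3))  # 0.01
# Scott's example scores under the linear formula:
print(round(cruft_linear(.65, .9, .8, 1.0, .9), 3))  # 0.15
```

Running both side by side makes the difference in the curves obvious: the multiplicative version punishes any imperfection in any factor, while the linear version moves in proportion to the scores.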

I would take it one step further, with these descriptive ratings:

  • 40% CRUFT: the “document is getting a little crufty” rating
  • 20% CRUFT: the “cruft hasn’t yet overtaken your doc” rating
  • 15% CRUFT: the “this document is approaching cruft-free” rating
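If you wanted to automate those descriptive ratings, a tiny helper might look like this. Note that the cut-off ranges are my own invention, since I only gave three point examples above:

```python
def cruft_label(cruft_pct):
    """Map a linear CRUFT percentage to a descriptive rating.

    The thresholds are illustrative guesses, not from Scott or Shannon.
    """
    if cruft_pct >= 40:
        return "document is getting a little crufty"
    elif cruft_pct >= 20:
        return "the cruft hasn't yet overtaken your doc"
    else:
        return "this document is approaching cruft-free"

print(cruft_label(40))  # document is getting a little crufty
print(cruft_label(15))  # this document is approaching cruft-free
```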

Shannon’s argument is a good one: the best way to increase the understandability of the CRUFT rating for documentation is to make the calculation linear.

Shannon also argues that there should be a weighting factor for each of the five factors used in scoring a cruft rating. My counterargument is that people would disagree on which factors are the most valuable in a document, so weighting might overly complicate the calculation.

I also find weighting too subjective, because the factors aren’t independent: Correct influences Trusted (docs that are found to be incorrect are then distrusted). And Understood is elusive for a writer to measure, let alone target specifically: if a selected audience member doesn’t understand what you’ve written, is that always the fault of the writer?
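For what it’s worth, Shannon’s weighted variant could be sketched as a weighted average. The weights below are purely illustrative, which is exactly the subjectivity problem I’m describing:

```python
def cruft_weighted(scores, weights):
    """Linear CRUFT with per-factor weights: 1 - weighted mean of the scores."""
    return 1 - sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# C, R, U, F, T scores from Scott's example, weighting Correct and
# Trusted twice as heavily as the other factors (an arbitrary choice):
scores = [.65, .9, .8, 1.0, .9]
weights = [2, 1, 1, 1, 2]
print(round(cruft_weighted(scores, weights), 3))  # 0.171
```

With equal weights this reduces to Shannon’s plain linear formula, so the weighting is strictly an extra knob – and any two teams would likely turn it differently.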

I do like Scott’s tips for creating less crufty documentation, most of which can also be translated for end-user doc. Here’s my attempt at that translation.

Scott says, “Write stable, not speculative, concepts.” He’s talking about writing conceptual information only when it’s factual and can be trusted. For end-user doc, though, writing the concepts out sometimes helps you uncover problems, so I would argue that it’s better to write some concepts even early on.

Scott says, “Prefer executable work products over static ones.” He’s talking about using test executables for code rather than documenting the code and its limitations. For end-user doc, an executable work product could be a screencast, or a live demo sandbox that lets people try out a feature safely. It could even be a link in the online help that opens a dialog box in context, the way Windows 98 Help would open the Control Panel described in a task about setting display resolution.

Scott says, “Document only what people need.” Amen, brother; you’re preaching to the minimalism choir. Know your audience before you write down words that will become cruft.

Scott says, “Take an iterative approach.” All writers know that rewriting is as essential as writing.

Scott says, “Be concise.” Agreed.

What are your thoughts about calculating documentation cruft? Are we taking it way too seriously, or is there a good tool in this (possibly crufty) post?