Tag Archives: content

STC2008 – From Nightclub DJ to Content Management Consultant

Subtitle: Developing a Business Career The Content Wrangler Way
Scott Abel’s career path at STC Summit in Philadelphia, June 2008
From the ever entertaining Scott Abel, this was an invigorating session that still kicks you in the butt to get out of your whiney mode and into a winner mode. Sounds cheesy to repeat, but it worked. Here are my notes from the session. I’d love to hear your thoughts and critique on my “live blogging” style – too much information, not enough information, not the right information? Let me know.

Routes to tech comm – English major or developers accidentally become tech writers

scottabel.com – crafted a career – but Scott didn’t grab that URL (he’s obviously not That Scott Abel :)

He earned 146 credits in four different programs, but didn’t earn a degree;
he could have gotten a college degree, but decided not to pay the “fees.”

Still takes classes like knowledge-enabled information management – Indiana University, 8-5 every day for three days, with a presentation to 200 people as a capstone, and you fail if you’re late or don’t play by their rules. But it’s three credit hours.

Herron School of Art in Indianapolis – foundational school – you should have drawing or sculpting skills, though.
Business School, next stop – he lasted one semester, it wasn’t about the answers, it was about how you get the answers – answers are on the back of the syllabus

Next stop, photography – first working with digital photography, won some photography contests by accident.

Journalism school – at Indiana University – and he worked there too. He went to and helped with a computer-assisted journalism conference: use computer technology to cull through all the data.

He started in entertainment journalism, friend of Margaret Cho, has interviewed Elton John, other celebs.

Started a local alternative magazine… fun, exciting, and profitable. Assignment in journalism school – business plan for a magazine… they just did the magazine, didn’t do a plan. 72-page monthly publication, two guys with too much time on their hands – sold upscale ads and actually made revenue.

He waited tables to get through school, learning that he could make 200-300 bucks a night, and he met influential people. The Pan Am Games, a miniature Olympics hosted in Indy, got him more experience.

He had the attention span of a worm – didn’t lead to very many opportunities.

Became a bartender – clocked in at midnight, clocked out at 3-4 am. But he felt he lost time during those “young” years even though he had flexibility and enough money.

Age 14: his first gig as a DJ. Learned how to mix, which taught him about content reuse and personalization… play the wrong song and everyone hides like roaches. Or perhaps it’s on purpose – when the music sucks, beer and drink sales go up.

Wrong song, wrong version of the song. He had a remix of Chitty Chitty Bang Bang that got played on Chicago radio.

Remixes were user-generated; 45s were all they had to work with, and they’d buy two copies of a single because they needed songs longer than 3 minutes. So… two turntables and a mixer – you had to understand tempo, tone, and feel of a song, but tempo control was the key. The Technics 1200 turntables are still the instrument of choice for many DJs.

Reuse is in the remix… that’s how tracks were laid down… vocals reused identically but combined with different styles of music.

Madonna explained how her voice could be changed – the tools allowed her voice to stretch the way a square stretches proportionally when you hold down the shift key…

DJ mixing and its increasing complexity are similar to the content choreography that we do with content – the technology is increasingly complex.

1999 – employment counselor said, you’d be an excellent technical communicator with your skill set.

Put together a portfolio

First job, documenting mortgage loan automation software, $45,000 – he could buy groceries and kick out his roommates. Bedazzled by corporate America… benefits, paycheck, vacation.
Had folders called “Betsy’s documents” – totally disorganized, inefficient, wasteful; later they were sued out of business. Their automated software was…

Started reading Ann Rockley, Bob Glushko, JoAnn Hackos, all of whom had really good best practices towards fixing the mess of content he was seeing at work.

Ann Rockley sent Scott a draft of her book, Unified Content Strategy, and he became technical editor on the book.

He needed a way to get organized, get away from notes on paper in his backpack, started a blog to be a storage container for his knowledge.

(Side note – I have to enter my “cringe” essays from grad school)

Once he got attention for his blog, more people started talking to him, asking questions, and seeking his help solving problems.

Started speaking at events, but then had to define his value proposition. Rebranded himself as a Content Management Strategist.

Tools that can tell management that content is valuable and that the product can’t ship without it. Your value proposition can’t circle around your job – the content itself needs to be valued.

Syndicate Conference 2006 – encouraged to think bigger, he started commoditizing the site. Conferences are a natural extension of what he was writing about; his readers wanted to learn more about it.

Presenters seek attention – same folks who speak at conferences write articles and participate in groups.

Need for a community – 1900 members of the Content Wrangler community… there needed to be a way for people to connect to one another without Scott’s help.

Being an individual consultant is not scalable – and this is good news for you. You can create your own value proposition.

The discipline of Document Engineering – Bob Glushko, no future in commodity writing – the future is in solving content challenges. Structured content, XML, move content around, but not just documents – documents married with data from databases. Opens up a brand new world.

Road to success – don’t allow others to define you, no one right way to become a content management expert.

Questions?
He’ll post to slideshare.net (youtube for ppt)

scribd.com (youtube for pdf) ipaper service

http://thecontentwrangler.ning.com Community site

Harmonizer product – will eventually let you analyze content using a web page

Acrolinx acrocheck product

How much coding does Scott know?
If you don’t know how to model content, you shouldn’t be coding. You have to be able to analyze content before you model it, even.

What’s next for Scott – providing service designs, such as RSS feeds. Problem solving by providing services that give people answers before they ask – such as a mortgage payment being due, or governments issuing fishing licenses.

Another question – any certificate programs you’d recommend? None, says Scott. Writing for reuse isn’t part of these certification programs; even DITA-related programs are often focused on tools, not skill differentiators.


Putting content into context in a wiki – especially in a large environment

An interesting read on the front page of wordpress.com of all places. I enjoy random clicking, and this one came up with a great commentary on the difficulty of using a wiki to get how to information.

From Learning about Second Life from Google:

Over at SL, the main source of information is on the WIKI, which in my opinion has some great information but because Linden primarily lets the users run the show isn’t as helpful as some sort of information clearing house. Trying to sort out how to sculpt, for example, is an exercise in total frustration. There are some wonderful tutorials, but SL does nothing to properly aggregate and put these tutorials into context.

I wonder what Second Life could do to properly aggregate those tutorials to meet this user’s needs? I suppose long-time wiki writers would answer: use categories and encourage tagging, while looking out for orphans. Any other ideas?

I got a great question from Tom Johnson of I’d Rather Be Writing:

I’m just wondering if you have any thoughts on the WordPress Codex, http://codex.wordpress.org/Main_Page. Yesterday I was looking at this Codex wondering what to make of it all. I think I want to be a contributor, but there are so many topics. It’s chaotic. The organization is like a maze. I don’t know if I should go in there with a wrecking ball and renovate, or not. Probably 25% of it is outdated. What happens to those outdated pages? Will I offend people if I just delete things that are outdated?

Can you recommend a book or strategy for making sense of massive wikis? Where should I start? I spent a good hour editing a page of it last night that I considered critical. It’s then that I realized this is a huge project and I have no sense of direction. Any insight you can give me would be much appreciated.

With the OLPC wiki, David Farning on the Library list went through the wiki and said he found these categories. It’s quite an accurate content analysis from what I’ve seen, so I was impressed. At the same time, it also helped explain my initial wonderment at how to wrap my arms around the entire wiki – and in fact, it is barely possible to do.

Content
1 Philosophy
2 Contributing
3 Creating
4 Curatoring

5 Projects
Deliverable
In progress
Ideas

6 Management

Once David came up with these categories, he then asked SJ Klein, director of community content and long-time Wikipedian, if he thought the wiki needed structure.

SJ said that the wiki is purposefully without hierarchy – flat, especially for projects, to not force a parent or sibling sense for projects. He also said, however, if you have a specific tree hierarchy in mind, feel free to develop the idea in some temporary space.

So, when working on a large wiki, if you have good organization ideas, set them up and then ask for community feedback. That seems like an appropriate approach to a large wiki.

Other ideas for starting out in a large wiki environment:

While it might seem like it’s a question similar to “how do I get started on a huge writing project?” in my experience, wiki editing has some subtleties due to the collaboration and community vibe already present behind the pages. You have to work harder to figure out that vibe, and then determine your course.

For new people, there’s the whole question of getting a feel for the community so you can start to answer “who am I going to potentially irritate by editing this” and “as a newbie do I have the confidence I’m right?”

So, knowing your role within the wiki community is a first step. Take a while to get to know who’s there, what their roles are, and where you might best fit in. Introduce yourself with your profile page, following the MySpace wiki pattern – see http://www.wikipatterns.com/display/wikipatterns/MySpace.

Just like a newbie on a writing team, find out if there’s some scut work that you can do to get your feet wet, if needed, to gain the community’s trust.

Deletions are going to bring much more wrath in a wiki situation, I would guess, so they seem risky to do to start out. If you do think something needs deletion, message or email the original author or the big contributors and ask if it’s okay to mark it for deletion. Then, mark it, and hope that someone else (a wiki admin) determines if it should be deleted.

Start small, like tagging, or applying templates. That’ll help you get a feel for the bigger picture.

Let us know your ideas for wrapping your head around a large wiki – we’d love to hear them.

Wiki as online help source

A response to the question, Wiki-to-Help? on the Help Authoring Tool Yahoo Group.

One of our test engineers (and the lead developer of our company wiki) just approached me with the idea of using our company’s internal wiki as the central repository for all company material and using it to generate online help.

I’m following the discussion with interest. I, too, had a similar question asked of me from a developer when we were working in an Agile development environment at BMC Software. In that case, which was at least three years ago, the matchup between the wiki HTML output and the HTML output I needed for our particular help system just wasn’t a good fit. But today, there are better pairings, input to output. I think it’s feasible to go from a wiki to an online help system. It really depends on what output you need, and what you’re willing to do to ensure that the wiki source is worthy of publishing (tested, vetted, trusted, and so on).

I’ve been working on wikis as source for manuals, where the output is a PDF file. In general, yes, wikis are a little clumsy to work in for authoring. For example, some wikitext doesn’t understand that you want a numbered step list with images in between each step and that you want the numbering to continue after each image. So if you’re accustomed to a nice HTML authoring interface, a wiki authoring interface will “feel” like a step about 10 years back in time. 🙂

On the more interesting issue, the cultural issue (or the career issue, depending on how you think about it), I think the basis of most arguments against using wikis as source is the fear of loss of authoring control. See wikipatterns.com for the many anti-people patterns that wikis tend to foster if you don’t take steps to avoid them. I especially liked one of the responder’s comments to the list that he didn’t want to become an editor for a wiki. I think he’s right – that “magazine editor” is one of the roles you could take as a wiki-based author. You could also consider your role to be “community director” if you think you can motivate others to contribute to your wiki that will eventually be the help system. There are different roles that will evolve, and it’s up to you to figure out what role might work well in your environment (or if it would work at all). I wrote up a blog post last week about determining where your role as technical writer is most valued in the company, and building from that role.

I believe the cultural or social difficulties are the more difficult hurdle – you have to ensure that the community surrounding a wiki (those that can and will edit) is a group that is willing to work together and collaborate towards the common goal of publishing a customer-facing help system from the wiki. In a SXSW Interactive session titled “Edit Me! How Gamers are Adopting the Wiki Way” one panelist said that a core group of five editors on a wiki may be the best practice for the size of the group. This type of small number is represented and described in the 90-9-1 theory on wikipatterns.

A solution that might help you wrap your arms around the wiki as source is to set aside only one area or category of the wiki as the articles from which the online help gets generated. Again, without knowing the wiki engine you’re working with and the types of output you’d require, it’s difficult to know if a “wikislice” solution could help in your situation.
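That “one area of the wiki” idea can be as simple as a label that marks which pages feed the help build. A minimal sketch of the selection step – the page structure and label name here are made up, since the details depend entirely on the wiki engine:

```python
def help_slice(pages, label="online-help"):
    """Select only the wiki pages tagged for the help deliverable --
    the wikislice idea: one labeled slice of the wiki feeds the build."""
    return [p["title"] for p in pages if label in p["labels"]]

# Hypothetical page records, shaped as a wiki API might return them.
pages = [
    {"title": "Install guide", "labels": ["online-help", "reviewed"]},
    {"title": "Team lunch notes", "labels": ["internal"]},
    {"title": "FAQ", "labels": ["online-help"]},
]
```

Everything outside the labeled slice stays community content; only the vetted slice is pulled into the help build.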

Anyway, I could go on and on (and I believe I just did go on and on) about using wikis as source for end-user documentation. I’m pleased that Sarah O’Keefe has just published a white paper titled “Friend or Foe? Web 2.0 in Technical Communication” that should be helpful as we begin to define our roles in each company and how we integrate user-generated content with our own on our product’s web sites.

I hope this information can help you build an argument for or against the use of wikis as source for online help. Please let me know the eventual outcome, and I’d love to hear your thoughts on my response.

DITA and wiki hybrids – they’re here

Combinations - DNA and dice, relevant to Darwin?

Lisa Dyer and Alan Porter presented at last week’s DITA Central Texas User Group meeting, and both told tales of end-user doc written and sourced in DITA, with wikitext in mind as an output. About 20 people attended and we all enjoyed the show. I wanted to post my notes to follow up, and I’ll post a link to slide shows as well.

This post covers Lisa Dyer’s presentation on a wiki sourced with DITA topics. I’ll write another post to cover Alan’s presentation.

Actually, first, Bob Beims shared Meet Charlie, a description of Enterprise 2.0. It seems very appropriate for the discussions we’ve had at recent Central Texas DITA User Group meetings about wikis, RSS subscriptions, and web-based documentation.

Lisa has made her presentation available online. My notes are below the slideshow.

DITA source to wiki output case study

Lisa Dyer walked us through her DITA to wiki project. Their high level vision and business goals merged with a wiki as one solution, and Lombardi has customers who had requested a wiki. Lombardi’s wiki is available to customers that have a support login, so I won’t link to it, but she was able to demo the system they’ve had in place since July 2007.

What wiki toolset – open source or enterprise wiki engine?

On the question of choosing an open source or enterprise wiki engine, Lisa said to ask questions while evaluating, such as where do you want the intellectual property to develop? Will you pay for support? Who are your key resources internally, and do you need to supplement resources with external help? They found it faster to get up and running and supported with an enterprise engine and chose Confluence, but she also noted that you “vote” for updates and enhancements with dollars rather than, say, community influence. (Editorial note – I’m opining on whether you get updates to open source wiki engines through community influence. I’m certain you can pay for support and enhancements to open source efforts with dollars.)

Run a pilot wiki project

She recommends a pilot wiki, internal only at first, to ferret out problems while building in time to fix them. Though Michele Guthrie from Cisco had to bow out of the panel at the last minute, she too has found that internal-only wikis helped them understand the best practices for wiki documentation.

Meet customer needs – or decipher what they want and need

Lisa said that customers wanted immediate updates, knowledge of what’s new with the product and doc (800 pages worth), and wanted to tell others what they had learned. She found that all of these customer requests could be met with a wiki engine – RSS feeds, immediate updates, and the ability to share lessons learned. At her workplace, customers work extensively with the services people and document the implementation specifically, and that information could be scrubbed of customer-specific info. They found that rating and voting features give good content more exposure. Also, by putting the information into wikis, they found that there were fewer “I can’t find this information” complaints.

Intelligent wiki definition and separate audiences for each wiki

They have two wikis – one is for end-user documentation, one is for Services information. In the screens she showed us, Wiki was the tab label for the Services wiki, Documentation was the tab label for the doc wiki. The Documentation wiki does not allow anyone but the technical writers to edit content, but people can comment on the content and attach their own documents or images. The Services wiki allows for edits, comments, and attachments. The customers and services people wanted a way to share their unsanctioned knowledge such as samples, tips, and tricks, and the wiki lets them do that. The Services wiki has all the necessary disclaimers of a community-based wiki, such as “use this info at your own risk” type of disclaimers. Edited to add: The search feature lets users search both wikis, though.

Getting DITA to talk wiki

There are definite rules they’ve had to follow to get DITA to “talk wiki” and to ensure that Confluence knows what the intent is for the DITA content. For one, when they want to use different commands for UNIX and Windows steps in an installation or configuration task, they wrap ditaval metadata around the command-line text (using the “platform” property) and use conditional processing for that topic. However, because of the Confluence engine’s limitation of one unique name for each wiki article, they had to create separate Spaces for each condition of the deliverable (UNIX Admin guide or Windows Admin guide, for example). This limit results in something like 12 Spaces, but considering it’s output for several books for separate platforms, 32 individual books in all, that number of Spaces didn’t seem daunting to me. She uses a set of properties files during the build process to tell Confluence what file set to use and what ditavals they’re seeking, and then passes the properties to the ant build task. The additional wiki Spaces do mean that your URLs aren’t as simple as they could be – but in my estimation, they’re not completely awful either.
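The ditaval mechanism she describes boils down to attribute-based filtering: tag elements with a platform property, then include or exclude them per deliverable. Here is a rough Python sketch of that idea – the sample topic is invented, and real builds do this with the DITA Open Toolkit and ditaval files, not hand-rolled code:

```python
import xml.etree.ElementTree as ET

# Invented sample topic with platform-conditional steps.
TOPIC = """<task id="install">
  <taskbody>
    <steps>
      <step platform="unix"><cmd>./install.sh</cmd></step>
      <step platform="windows"><cmd>setup.exe /silent</cmd></step>
      <step><cmd>Verify the installation log.</cmd></step>
    </steps>
  </taskbody>
</task>"""

def filter_topic(xml_text, platform):
    """Keep steps whose platform attribute matches (or is absent),
    mimicking a ditaval exclude rule on the platform property."""
    root = ET.fromstring(xml_text)
    for steps in root.iter("steps"):
        for step in list(steps):
            prop = step.get("platform")
            if prop is not None and prop != platform:
                steps.remove(step)
    return root

# One filtered output per condition: build the UNIX deliverable.
unix_cmds = [c.text for c in filter_topic(TOPIC, "unix").iter("cmd")]
```

Running the same filter with the "windows" condition yields the Windows deliverable – one filtered output per Space, which is essentially what her properties files select for the ant build.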

While I was researching this blog post further, Lisa also added these details about the Spaces and their individual SKUs (Stock Keeping Units, or individual deliverables). “Building on this baseline set of spaces, each new SKU would add 1 to 7 spaces hosting 3 to 21 deliverables, depending on the complexity of the ditaval rules and the product. Obviously, the long pole in this system is ditaval. A more ideal implementation would probably be to render the correct content based on user preferences (or some other mechanism to pass the user’s context to the engine for runtime rendition). Or, a ditaslice approach where you describe what you need, and the ditaslice is presented with the right content. Certainly innovation to be done there.”

Creating a wiki table of contents from a DITA map

She creates a static view of the TOC from the DITA map as the “home page” of the wiki, currently using a sort ID assignment in a DITA map XSLT transform to generate the TOC. She said they implemented a dynamic TOC based on the logical order of the ditamap by dynamically adding a piece of metadata to each topic – a sort ID – using a {set-sort-id} Confluence macro. The IDs are used to populate a page tree macro (the engine involved is Direct Web Remoting, or DWR… an Ajax technology). Currently, their dynamic TOC is broken due to a DWR engine conflict, which should be fixed in the next release. In the meantime, they are auto-generating a more static but fully hyperlinked TOC page on the home page of each Space. A functional solution – not great for back-and-forth navigation, but it shows the logical order, which is pretty critical for a decent starting point.

Dynamic TOC created with sort-id attribute
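The sort-id approach can be sketched outside Confluence: walk the ditamap in document order, number each topicref, and emit an indented, wiki-style list. The map below is made up for illustration:

```python
import xml.etree.ElementTree as ET

# Invented ditamap with nested topicrefs.
DITAMAP = """<map title="Admin Guide">
  <topicref navtitle="Installing" href="install.dita">
    <topicref navtitle="Prerequisites" href="prereqs.dita"/>
    <topicref navtitle="Silent install" href="silent.dita"/>
  </topicref>
  <topicref navtitle="Configuring" href="config.dita"/>
</map>"""

def toc_lines(map_text):
    """Walk the ditamap in document order, assigning a sequential
    sort ID to each topicref and emitting an indented wikitext TOC."""
    root = ET.fromstring(map_text)
    lines, counter = [], 0
    def walk(node, depth):
        nonlocal counter
        for ref in node.findall("topicref"):
            counter += 1
            lines.append("%s [%03d] %s" % ("*" * depth, counter, ref.get("navtitle")))
            walk(ref, depth + 1)
    walk(root, 1)
    return lines
```

Sorting pages by the embedded ID reproduces the ditamap’s logical order even though the wiki itself stores pages flat.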

DITA conref element becoming a transcluded wiki article

Another innovation she wanted to demonstrate was the use of DITA conrefs output as transclusions in the Confluence wiki engine, so that in the wiki, the transcluded content can’t be edited inside an article that transcludes it. I don’t think it quite behaved the way she wanted it to, but knowing it’s a possibility is exciting. Edited to add: This innovation really does work; Lisa simply was looking at the wrong content (she admits, red-faced). 🙂

Wikitext editor view of a conref referenced into a wiki page with a wiki macro
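Conceptually, the transform rewrites each conref as the wiki engine’s transclusion macro (Confluence’s {include:}), so reused content stays single-sourced on the wiki side as well. A toy sketch – the regex handles only one element shape, and the page-naming convention is invented:

```python
import re

def conref_to_include(dita_fragment):
    """Rewrite each element carrying a conref attribute as a Confluence
    {include:} macro, so shared content is transcluded rather than copied.
    The page-name mapping here is a made-up convention for the sketch."""
    def repl(match):
        # "warnings.dita#warnings/disclaimer" -> page "warnings-warnings"
        topic_id = match.group(1).split("/")[0].replace(".dita#", "-")
        return "{include:%s}" % topic_id
    return re.sub(r'<p conref="([^"]+)"/>', repl, dita_fragment)
```

Because the shared text lives on its own wiki page, readers see it inline but can’t edit it from the referencing article – which matches the behavior Lisa demonstrated.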

Burst the enthusiasm bubble, there are limitations and considerations

One limitation that I observed is that when you transform the DITA source to Confluence wikitext, there are macros embedded, so when someone clicks the edit tab in the wiki, they must edit in wikitext, not the rich-text editor, to make sure the macros are preserved. In the case of the Documentation wiki, they can instruct their writers to always use the wikitext editor. But for the Services wiki, one attendee asked if users prefer the wikitext editor, and Lisa believes they do. Someone running MoinMoin at their office said they finally just disabled the rich-text editor because they didn’t want to risk losing the “cool” things that they could do with wikitext. The problem at the heart of this issue is that if users really like the wikitext editor and do a lot of “fancy” wikitext markup (like macros), then another wiki user using the rich-text editor can break the macros by saving over in rich text. Edited to add: Lisa wrote me with these additional details, which are very helpful – “Actually, the macros are preserved when in Rich Text Editor (RTE) mode. The problem is that it looks ugly as heck – and if the user is not techie, potentially confusing. The RTE does add all kinds of escape characters to the content – in a seemingly random way – and can negatively impact the formatting in general when viewing, but it doesn’t seem to affect our macros. However, if a user wants to use macros to spiffy up the content, then wiki markup mode is definitely recommended.”

What does DITA have to do with wiki?

We tackled this question and then some at the January Central Texas DITA User Group meeting. I’m a little tardy in writing up my notes and thoughts about the presentation, but it went really well and I appreciate all the attendees’ participation. We had a high school teacher in the audience, and I applaud him for wanting to learn more about DITA to pass that knowledge on to high school students.

I brought along my XO laptop since I was talking about my work with wiki.laptop.org and Floss Manuals and found some more Austin-based XO fans, so that was a great side benefit to me as well.

One of Ben’s answers to the question “What does DITA have to do with wiki?” is “Maybe nothing.” Love it!

Ben introduced another triangle of choices – you have likely heard of “cheap/good/fast, pick two.” How about “knowledge/reuse/structure, pick one.”

I have to do some thinking about that one and his perception of the limitations and tradeoffs offered by those choices or priorities. Reuse and structure are particularly difficult to pair but also give you the most payoff. Structure and knowledge are another likely pair, but it could be difficult to find subject matter experts who are also able to organize their writing in a very structured manner, and writers who know DITA really well and also have specific content knowledge may be equally hard to find. His workaround for the difficulty you’d face while trying to come up with a structured wiki is a sluice box – where raw, unstructured data is the top input, some sort of raw wiki is the next filter, and the final, tightest filter of all is a topic-oriented wiki.

Sluice box, by Tara, http://flickr.com/people/wheatland/
Original photo of a sluice box by t-dawg.

My take on the question is that there are three potential hybrid DITA wiki combinations, and Chris Almond at this presentation introduced the fourth that I have seen, using DITA as an intermediate storage device, interestingly.

The three DITA-wiki combination concepts I’ve seen are:

  • Wikislices – using a DITA map to keep up with wiki “topic” (article) changes. Michael Priestley is working on this for the One Laptop Per Child (OLPC) project.
  • DITA Storm – web-enabled DITA editor, but not very wiki-like. However, with just the addition of a History/Revision and Discussion tab, and an RSS feed, you could get some nice wiki features going with that product. Don Day had an interesting observation that sometimes when you add in too many wiki features on a web page you can hardly tell what’s content and where to edit it. I’d agree with that assessment.
  • DITA to wikitext XSLT transform – but no round trip; writers determine what content goes back to DITA source. Lisa Dyer will describe this content flow in the February session.
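For the third approach, the core of a one-way transform is plain element mapping; the production versions are XSLT and cover far more of the DITA vocabulary. A toy Python rendition for a couple of elements, assuming Confluence-style wikitext output and an invented sample topic:

```python
import xml.etree.ElementTree as ET

# Invented DITA topic for the demonstration.
TOPIC = """<topic id="backup">
  <title>Backing up the database</title>
  <body>
    <p>Run the backup before every upgrade.</p>
    <ul><li>Stop the server.</li><li>Copy the data directory.</li></ul>
  </body>
</topic>"""

def to_wikitext(topic_xml):
    """One-way DITA-to-wikitext conversion for a few common elements:
    title -> h1. heading, p -> paragraph, ul/li -> * bullet lines."""
    root = ET.fromstring(topic_xml)
    out = ["h1. " + root.findtext("title").strip()]
    for el in root.find("body"):
        if el.tag == "p":
            out.append(el.text.strip())
        elif el.tag == "ul":
            out.extend("* " + li.text.strip() for li in el)
    return "\n".join(out)
```

The “no round trip” caveat is visible even here: nothing in the wikitext output records which DITA element it came from, so edits made on the wiki side have to be folded back into the source by hand.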

The slides are available on slideshare.net. Here are the slides that Ben Allums, Ragan Haggard, and I used.

Here are Chris Almond’s slides and his blog entry about the presentation. I described Chris’s project to Stewart Mader of wikipatterns.com and he blogged about our presentation as well at his blog ikiw.org.

Community support – don’t think of yourself as a customer but as a member of a movement

I’ve signed up for the Give 1 Get 1 program for One Laptop Per Child, and just received the email today, November 12, 2007, with the link to the site, www.laptopgiving.org.

I read the terms and conditions with interest because I am seriously considering purchasing a laptop either for my son, who is four, or for his classroom of four-year-olds. Plus, I’ve been volunteering to help with their end-user documentation.

I’d love to buy one for every classroom at my son’s preschool but that’ll take some fundraising. I’ll boldly propose here that you can contact me if you’re interested in buying enough for a small preschool in Austin, Texas in addition to kids in least developed countries around the world.

I absolutely LOVE the spirit of the support statement. It reads as follows:

Neither OLPC Foundation nor One Laptop per Child, Inc. has service facilities, a help desk or maintenance personnel in the United States or Canada. Although we believe you will love your XO laptop, you should understand that it is not a commercially available product and, if you want help using it, you will have to seek it from friends, family, and bloggers. One goal of the G1G1 initiative is to create an informal network of XO laptop users in the developed world, who will provide feedback about the utility of the XO laptop as an educational tool for children, participate in the worldwide effort to create open-source educational applications for the XO laptop, and serve as a resource for those in the developing world who seek to optimize the value of the XO laptop as an educational tool. A fee based tech support service will be available to all who desire it. We urge participants in the G1G1 initiative to think of themselves as members of an international educational movement rather than as “customers.”

I’ve been working on documentation for the XO laptop in the wiki at wiki.laptop.org/go/Simplified_user_guide and then taking the wiki content over to an Author-it instance. I’ll write more later about a wiki-based workflow, especially with translation in mind, and we are putting a process in place. Please, feel free to edit that page or contact me if you are interested in contributing.

Personally, the most difficult part so far has been my limited ability with design and layout. I have grand visions but feel my layout skills are inadequate for a kid- and parent-friendly look within Word. Nonetheless, it is an exciting time to be a small part of such an influential project.

I’m one of the friends, family, and bloggers who is willing to help with the XO laptop. So I urge you to go to www.laptopgiving.org and put your US$399 to good use.

Interview about wikis for tech doc with Dee Elling of CodeGear

While researching my STC Intercom article about wikis and technical documentation to be published in a few months, I interviewed Dee Elling via phone and email because she left a helpful comment on my talk.bmc.com blog entry about a DITA and wiki combo. Dee’s the manager of the documentation group at CodeGear and she blogs at http://blogs.codegear.com/deeelling/.

Anne – What are some of the factors for selecting a wiki software package?
Dee – I’ve encountered hesitation from some writers about using a markup interface. Many writers preferred a Word-like GUI interface, such as Confluence provides. Another consideration is cost, since there is not always a budget for new systems; at CodeGear we use MediaWiki. Primarily we manage internal information such as schedules and doc plans; lately we are collaborating with engineers to write FAQs and release notes.

At my previous employer, one engineering team was writing the documentation themselves on the wiki (using outlines provided by the writer), and the writer cleaned it up and converted it to PDF for distribution with the product. That is a great use case which I believe could seed the adoption of wikis into the documentation process, especially in companies where there are limited doc resources.

At CodeGear I can post copyrighted material to our Developer Network. The Developer Network technology allows comments on postings, which is not the same as wiki but a good start. Since joining, I have started to post traditional doc content as “articles” there. I’ve already fixed a few doc issues due to rapid customer feedback! We are also working on a design to make the website interface more wiki-like.

Anne – How do you get legal approval for such an open-edit site?
Dee – At my previous company I never got to the stage of implementing a public wiki. However, I had many discussions about the legal aspects related to the product documentation. The legal aspects seem complex, but lawyers can write new terms for new situations.

At CodeGear the issue will involve intellectual property, but the user base is so active on the internet that there are few “secrets”. More important will be the issue of releasing information too soon or otherwise getting in trouble with SOX compliance. (That makes my head spin!)

Anne – What are the considerations when choosing where the wiki is hosted?
Dee – Cost and reliability are factors, but most important is buy-in from the IT department, who would likely manage the hosting.

Anne – Which types of products are best targeted for a wiki?
Dee – Complex software products are a good example. There is so much flexibility in software that product documentation cannot cover every use case. The wiki lets customers add content that is relevant to their own use cases, and that will benefit others.

Anne – How can you encourage your users to contribute?
Dee – Keeping up a dialog with the customers is helpful. If you respond to them, a dialog develops, and they are more likely to contribute again.

Anne – What are some of the success factors for the wikis that contain technical documentation?
Dee – Does it result in more positive feedback from customers? Do customers help each other and contribute to strengthening the installed base? Does it increase product visibility and mindshare in the market? Is it perceived as a strategic advantage over competitors? Does it cut down on tech support phone calls?

Anne – What traps should be avoided?
Dee – The trap of not responding or not paying attention. Writers must diligently track the “living documents” they have created, and they must truly collaborate. If customers contribute and their contributions go unrecognized, they may think that the company is not fully supporting them.

Public wiki documentation must be actively managed and that takes more writer effort than in the past, when documents were forgotten as soon as they went to manufacturing. Contrary to the fear that with wikis you don’t need writers anymore, I believe that with wikis the role of the writer will grow.

Thanks so much for your help, Dee, and for sharing your experience with all of us who are interested in the best ways to harness the power of wikis for end-user doc.