
STC2008 – Mining Web 2.0 Content for Enterprise Gold

Most definitions of Web 2.0 are given as illustrations, but Michael Priestley prefers text.

He picked two core Web 2.0 concepts to discuss in today’s talk – wikis and mashups – but blogs, tagging, and social networking could also be mined.

Problems with wikis:

  • Content is unstructured – you don’t know whether a page contains the elements of, say, a tutorial, because there’s no validation.
  • Content is non-standard.
  • Content is tangled – links are easy, but selecting just a subset of wiki content results in broken links.

Problems with mashups – the sources of content aren’t standardized, and you can’t share mashup definitions.

Sum of it all – wikis don’t mash well.
You just get faster creation of siloed content, faster creation of redundant content, and faster creation of content that you can’t reuse.
So true – “If we want others to collaborate with us on content, we usually make them use our tool.”

Scenarios he has done or is doing at IBM:
Create DITA, publish to wiki

Create DITA, feed to wiki – make those DITA pages non-editable. Example: a tech support database where an answer eventually moves into the product docs with a stamp of approval.
Example: One Laptop Per Child is working on collecting Wikipedia articles with DITA maps to let teachers make custom curricula that are small, lightweight, and portable.

Create DITA, migrate to wiki (with roundtripping in mind). Migrating back to DITA is more difficult because of version history tracking.
Unfortunately, you throw away formerly semantic content. He made a funny comparison to an archaeology dig – why did our predecessors bold this text? It must have had some meaning, about something. Here, the example is porting scenarios from previous releases.

Create wiki, publish to DITA – the wiki redirects edit actions to the CMS, which houses the DITA, and then republishes the DITA XML to wikitext using an XSLT transform. Invision is doing something like this: you edit the wiki page in a DITA editor, store it back as DITA, and publish it to the wiki page. WebWorks Publisher will also publish source to wikitext (although I don’t know about getting back to DITA).
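
To make that republishing step concrete, here’s a minimal sketch of the idea – not IBM’s or Invision’s actual pipeline – that applies an XSLT transform to a stored DITA topic and emits Confluence-style wikitext. The sample topic, the stylesheet rules, and the wiki syntax are all simplified assumptions on my part.

    # Sketch: republish a stored DITA topic as wikitext via XSLT (illustrative only).
    from lxml import etree

    DITA_TOPIC = b"""<topic id="install">
      <title>Installing the product</title>
      <body>
        <p>Run the installer and follow the prompts.</p>
        <p>Restart the server when the installer finishes.</p>
      </body>
    </topic>"""

    # Hypothetical stylesheet: <title> becomes a wiki heading, <p> becomes a paragraph.
    TO_WIKITEXT = b"""<xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="text"/>
      <xsl:template match="/topic">
        <xsl:text>h1. </xsl:text>
        <xsl:value-of select="title"/>
        <xsl:text>&#10;&#10;</xsl:text>
        <xsl:apply-templates select="body/p"/>
      </xsl:template>
      <xsl:template match="p">
        <xsl:value-of select="normalize-space(.)"/>
        <xsl:text>&#10;&#10;</xsl:text>
      </xsl:template>
    </xsl:stylesheet>"""

    # Apply the transform and hand the resulting wikitext to the wiki engine.
    transform = etree.XSLT(etree.XML(TO_WIKITEXT))
    print(str(transform(etree.XML(DITA_TOPIC))))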

Or: a native DITA wiki – portable content that you can move in and out.

With standardized sources, you can dependably point a tool at a wiki and get reliable source.
With added semantics, you could make customizable travel guides in PDF format by combining Google Maps and travel sites.

A common source can feed multiple wikis based on audience, products, or platforms.
This scenario provides a forum for comments on the source (this is basically what Lisa Dyer is doing at Lombardi Software).

When they engaged with the community while creating the content, there was a lot more activity – people wanted to “watch” the superstars create content.

Portable content means repeatable collaboration.
Just one tool will not cut it – insist on standards-compliant tools. Blog about it, ask about it on wikis, log requirements on SourceForge – this isn’t just for vendors selling tools but also for the open source community. When you get something working, share your experiences with others.

IBM has a Custom Content Assembler in beta that you can try out. It uses Lotus product docs as source; you can build your own custom guides and then choose to publish them to PDF or HTML.

The conflict between structure and collaboration is solvable – use DITA as a common currency.

DocTrain West 2008 – Joe Gollner, XML in the Wilderness

Here are my notes from the morning keynote with Joe Gollner.

This session was a wonderful kickoff for the conference. For the first time, someone connected for me how XML enabled Web 2.0 connectivity: the social web is a direct result of XML allowing for easy combination of content and a participatory web. He had many nice diagrams from throughout history to prove his point, and I really appreciated him making that connection.

He began with a description of Saint Jerome, calling him the patron saint of content management, er, librarians. Saint Jerome was a monk librarian.

A funny game that Joe played with an XML group was, “Who came to XML from the most unusual background?” This game came after Joe showed a picture of his car with an XML license plate, humorously proving he had “arrived” in XML. The third-place winner was probably Joe himself, who had been part of the Canadian artillery. Second place went to a former prison guard, and first place went to a former surfer and pottery maker.

During this session, I was reminded of the Washington Post article that John Hunt pointed out at the March DITA User Group meeting, “Re-Created Library Speaks Volumes about Jefferson.” Jefferson did mashups of books by tearing them apart, even books in different languages, and then binding them into new volumes – reassembly of content 200 years ahead of his time. In 1815, after a fire destroyed the congressional library, Jefferson sold his own library to Congress for about $24,000, the price that Congress felt was reasonable. It became the core of the Library of Congress; as one librarian says in the article, “These are the books that made America.”

Jefferson had created his own taxonomy, using the terms memory, reason, and imagination. Wow, are there parallels to reference, concept, and task? Well… task may be a stretch of the imagination for some products, but hopefully tasks are grounded in fact.

A great start to an excellent conference.

Building a DITA-Wiki Hybrid

Article PDF for Building a DITA-Wiki hybrid

The April 2008 issue of the STC Intercom magazine is dedicated to DITA (Darwin Information Typing Architecture).

I’m pleased that the Building a DITA-Wiki Hybrid article that I co-authored with Lisa Dyer and Michael Priestley is available online for free to anyone, STC member or not. The article discusses these three ideas for merging DITA and wiki technologies:

  • DITA Storm, an online DITA editor with an edit button on each page. While it’s not quite a DITA wiki, it seems like it could become one with some RSS notification and comment or discussion ability on each page.
  • Wikislices are cross-sections of a wiki such as Wikipedia, currently created with school curricula in mind. Michael Priestley and I are working on a team to find ways to use DITA maps to manage and build wikislices.
  • Lisa Dyer has implemented DITA as a single-source with wiki as output for a documentation site housed behind a Lombardi customer support login.

I’d love to hear your comments on the article here and any other ideas you have seen for a DITA-wiki hybrid.

April 16 Central Texas DITA User Group meeting

From http://dita.xml.org/book/central-texas-dita-user-group

Using DITA Content for Learning Content Development

John Hunt, of IBM, will give a presentation on the use of DITA for learning content. He’s been working on a new Learning and Training Content specialization that will be part of the OASIS DITA 1.2 release. If you’d like to do a little pre-work, check out this article about using XML (such as DITA) for learning content: http://www.ibm.com/developerworks/xml/library/x-dita9a/

A map domain for a learning object

Presenter

John Hunt, DITA Learning and Training Content Specialization SC chair. John is also DITA Architect and Learning Design Strategist in the Lotus Information Development Center at IBM.

I’m also looking forward to Mike Wethington’s presentation to the DITA user group at the May 21 meeting, where he will talk about Agile development and its effect on technical communications. Mike is the manager of technical communications at Troux Technologies here in Austin.

Quadralay, a wiki-driven company

Alan Porter’s presentation at the Central Texas DITA User Group meeting covered Quadralay’s use of wikis internally and their external wiki at wiki.webworks.com.

They have four wikis in operation right now, with one more to come. Back in 2003, they started their first wiki for the development team. The company is a small one, based in Austin, and absolutely every employee except one has contributed to the wiki at some point or another. Currently, of their staff of 15 people, half contribute several times a week.

When the company-wide wiki came out, they held a brown-bag training session for the whole company to help people get comfortable with editing.

At their WebWorks RoundUp user forum last year, they demonstrated a proof of concept that took a mix of FrameMaker, DITA XML, and Word source and turned it into wikitext. I was at the demo, and it had such a nice “cool” factor even though it was a simple proof of concept (PoC).

Another case study: they use a wiki to communicate with clients and customers during the bid and contract process, and people say it makes things go smoothly with great communication. They use the MoinMoin wiki engine, which they consider very secure, and it is locked down with tight controls.

The WebWorks Services wiki:

  • is used to create and track task tickets
  • offers single point of contact
  • facilitates interaction between customers and engineers
  • gives a timeline for edits on a page
  • gives them milestones and percent completion

In the next six months or so, they’re planning a new doc site, docs.webworks.com (not yet live), to be authored in DITA using structured FrameMaker and then published to wikitext using WebWorks.

DITA and wiki hybrids – they’re here

Combinations - DNA and dice, relevant to Darwin?

Lisa Dyer and Alan Porter presented at last week’s DITA Central Texas User Group meeting, and both told tales of end-user documentation written and sourced in DITA, with wikitext in mind as an output. About 20 people attended, and we all enjoyed the show. I wanted to post my notes as a follow-up, and I’ll post a link to the slide shows as well.

This post covers Lisa Dyer’s presentation on a wiki sourced with DITA topics. I’ll write another post to cover Alan’s presentation.

Actually, first, Bob Beims shared Meet Charlie, a description of Enterprise 2.0. It seems very appropriate for the discussions we’ve had at recent Central Texas DITA User Group meetings about wikis, RSS subscriptions, and web-based documentation.

Lisa has made her presentation available online. My notes are below the slideshow.

DITA source to wiki output case study

Lisa Dyer walked us through her DITA-to-wiki project. Their high-level vision and business goals pointed to a wiki as one solution, and Lombardi has customers who had requested one. Lombardi’s wiki is available to customers who have a support login, so I won’t link to it, but she was able to demo the system they’ve had in place since July 2007.

Which wiki toolset – open source or enterprise wiki engine?

On the question of choosing an open source or enterprise wiki engine, Lisa said to ask questions while evaluating, such as: Where do you want the intellectual property to develop? Will you pay for support? Who are your key resources internally, and do you need to supplement them with external help? They found it faster to get up and running and supported with an enterprise engine and chose Confluence, but she also noted that you “vote” for updates and enhancements with dollars rather than, say, community influence. (Editorial note – I’m only speculating that updates to open source wiki engines come through community influence. I’m certain you can pay for support and enhancements to open source efforts with dollars.)

Run a pilot wiki project

She recommends a pilot wiki, internal only at first, to ferret out problems while building in time to fix them. Although Michele Guthrie from Cisco had to drop off the panel at the last minute, she has also found that internal-only wikis helped Cisco understand best practices for wiki documentation.

Meet customer needs – or decipher what they want and need

Lisa said that customers wanted immediate updates, knowledge of what’s new with the product and the doc (800 pages’ worth), and a way to tell others what they had learned. She found that all of these customer requests could be met with a wiki engine – RSS feeds, immediate updates, and the ability to share lessons learned. At her workplace, customers work extensively with the services people and document their specific implementations, and that information can be scrubbed of customer-specific details. They found that rating and voting features give good content more exposure. Also, by putting the information into wikis, they found there were fewer “I can’t find this information” complaints.

Intelligent wiki definition and separate audiences for each wiki

They have two wikis – one is for end-user documentation, one is for Services information. In the screens she showed us, Wiki was the tab label for the Services wiki, Documentation was the tab label for the doc wiki. The Documentation wiki does not allow anyone but the technical writers to edit content, but people can comment on the content and attach their own documents or images. The Services wiki allows for edits, comments, and attachments. The customers and services people wanted a way to share their unsanctioned knowledge such as samples, tips, and tricks, and the wiki lets them do that. The Services wiki has all the necessary disclaimers of a community-based wiki, such as “use this info at your own risk” type of disclaimers. Edited to add: The search feature lets users search both wikis, though.

Getting DITA to talk wiki

There are definite rules they’ve had to follow to get DITA to “talk wiki” and to ensure that Confluence knows the intent of the DITA content. For one, when they want to use different commands for UNIX and Windows steps in an installation or configuration task, they put ditaval metadata on the command-line text (using the “platform” property) and use conditional processing for that topic. However, because of the Confluence engine’s limitation of one unique name for each wiki article, they had to create a separate Space for each condition of the deliverable (a UNIX Admin Guide or a Windows Admin Guide, for example). This limit results in something like 12 Spaces, but considering the output covers several books for separate platforms – 32 individual books in all – that number of Spaces didn’t seem daunting to me. She uses a set of properties files during the build process to tell Confluence which file set to use and which ditavals apply, and then passes the properties to the Ant build task. The additional wiki Spaces do mean that your URLs aren’t as simple as they could be – but in my estimation, they’re not completely awful either.
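
As a rough illustration of the conditional-processing side of this – not Lombardi’s actual Ant build, and with element names and rules simplified to my own assumptions – a ditaval-style “exclude” boils down to dropping any element whose platform value is filtered out, which is why each platform condition ends up as its own build and its own Space:

    # Sketch: ditaval-style platform filtering of a DITA task (illustrative only).
    from lxml import etree

    TOPIC = b"""<task id="configure">
      <title>Configure the server</title>
      <taskbody>
        <steps>
          <step platform="unix"><cmd>Run ./configure.sh</cmd></step>
          <step platform="windows"><cmd>Run configure.bat</cmd></step>
          <step><cmd>Restart the server.</cmd></step>
        </steps>
      </taskbody>
    </task>"""

    def apply_filter(xml_bytes, excluded_platforms):
        """Mimic a ditaval 'exclude' rule on the platform property."""
        root = etree.XML(xml_bytes)
        for element in root.xpath("//*[@platform]"):
            if element.get("platform") in excluded_platforms:
                element.getparent().remove(element)
        return etree.tostring(root, pretty_print=True).decode()

    # One filtered build per deliverable and Space, e.g. a UNIX Admin Guide:
    print(apply_filter(TOPIC, excluded_platforms={"windows"}))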

While I was researching this blog post further, Lisa also added these details about the Spaces and their individual SKUs (a SKU being a stock keeping unit, or individual deliverable): “Building on this baseline set of spaces, each new SKU would add 1 to 7 spaces hosting 3 to 21 deliverables, depending on the complexity of the ditaval rules and the product. Obviously, the long pole in this system is ditaval. A more ideal implementation would probably be to render the correct content based on user preferences (or some other mechanism to pass the user’s context to the engine for runtime rendition). Or, a ditaslice approach where you describe what you need, and the ditaslice is presented with the right content. Certainly innovation to be done there.”

Creating a wiki table of contents from a DITA map

She creates a static view of the TOC from the DITA map as the “home page” of the wiki, currently using sort ID assignment and a DITA map XSLT transform to generate the TOC. She said they implemented a dynamic TOC based on the logical order of the ditamap by dynamically adding a piece of metadata to each topic – a sort id set with a {set-sort-id} Confluence macro. The IDs are used to populate a page tree macro (the engine involved is Direct Web Remoting, or DWR, an Ajax technology). Currently, their dynamic TOC is broken due to a DWR engine conflict, which should be fixed in the next release. In the meantime, they are auto-generating a more static but fully hyperlinked TOC page on the home page of each Space. It’s a functional solution – not great for back-and-forth navigation, but it shows the logical order, which is pretty critical for a decent starting point.

Dynamic TOC created with sort-id attribute
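
Here’s a minimal sketch of that approach – walking a DITA map in its logical order, assigning an incrementing sort id, and emitting a hyperlinked, Confluence-style TOC page. The map content and the topic-to-page naming convention are assumptions of mine; {set-sort-id} is the macro mentioned above.

    # Sketch: generate a static wiki TOC page from a DITA map (illustrative only).
    from itertools import count
    from lxml import etree

    DITAMAP = b"""<map title="Admin Guide">
      <topicref href="installing.dita" navtitle="Installing">
        <topicref href="prereqs.dita" navtitle="Prerequisites"/>
      </topicref>
      <topicref href="configuring.dita" navtitle="Configuring"/>
    </map>"""

    def toc_lines(element, depth, ids):
        """Yield one wiki list item per topicref, in map (logical) order."""
        for ref in element.findall("topicref"):
            page = ref.get("href").replace(".dita", "")   # assumed page-name rule
            yield "%s [%s|%s] {set-sort-id:%d}" % (
                "*" * (depth + 1), ref.get("navtitle"), page, next(ids))
            yield from toc_lines(ref, depth + 1, ids)

    print("\n".join(toc_lines(etree.XML(DITAMAP), depth=0, ids=count(1))))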

DITA conref element becoming a transcluded wiki article

Another innovation she wanted to demonstrate was outputting DITA conrefs as transclusions in the Confluence wiki engine, so that in the wiki, the transcluded content can’t be edited inside an article that transcludes it. I don’t think it quite behaved the way she wanted it to during the demo, but knowing it’s a possibility is exciting. Edited to add: This innovation really does work; Lisa simply was looking at the wrong content (she admits, red-faced). 🙂

Wikitext editor view of a conref referenced into a wiki page with a wiki macro
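
As a small sketch of what that output step might look like – the macro name and the one-wiki-page-per-reused-chunk convention are my assumptions, not necessarily what Lisa’s transform emits – the conref is rewritten as an include-style macro so the shared text is transcluded into the page rather than copied into it:

    # Sketch: render a DITA conref as a wiki transclusion macro (illustrative only).
    from lxml import etree

    TOPIC = b"""<topic id="backup">
      <title>Backing up the repository</title>
      <body>
        <p conref="warnings.dita#warnings/data-loss"/>
        <p>Run the backup utility nightly.</p>
      </body>
    </topic>"""

    def conref_to_macro(conref):
        """Map 'warnings.dita#warnings/data-loss' to a transclusion macro."""
        element_id = conref.split("#")[1].split("/")[-1]
        return "{include:%s}" % element_id   # assumes a wiki page per reused chunk

    for p in etree.XML(TOPIC).findall(".//p"):
        print(conref_to_macro(p.get("conref")) if p.get("conref") else p.text)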

Burst the enthusiasm bubble – there are limitations and considerations

One limitation I observed is that when you transform the DITA source to Confluence wikitext, there are macros embedded, so when someone clicks the edit tab in the wiki, they must edit in wikitext, not the rich text editor, to make sure the macros are preserved. In the case of the Documentation wiki, they can instruct their writers to always use the wikitext editor. But for the Services wiki, one attendee asked whether users prefer the wikitext editor, and Lisa believes they do. Someone running MoinMoin at their office said they finally just disabled the rich text editor because they didn’t want to risk losing the “cool” things they could do with wikitext. The problem at the heart of this issue is that if users really like the wikitext editor and do a lot of “fancy” wikitext markup (like macros), then another wiki user using the rich text editor can break the macros by saving over them in rich text.

Edited to add: Lisa wrote me with these additional details, which are very helpful – “actually, the macros are preserved when in Rich Text Editor (RTE) mode. The problem is that it looks ugly as heck – and if the user is not techie, potentially confusing. The RTE does add all kinds of escape characters to the content – in a seemingly random way – and can negatively impact the formatting in general when viewing, but it doesn’t seem to affect our macros. However, if a user wants to use macros to spiffy up the content, then wiki markup mode is definitely recommended.”

Can DITA train writers? Or does it require too much programming?

DITA for writers (content creators)

I just did a search on amazon.com for books for beginning technical writers, partly to investigate what books are being written for our profession and for others wanting to join it. I came across a book called Writing Software Documentation: A Task-Oriented Approach that suggests three categories of writing:

  • writing to teach (for eager learners)
  • writing to guide (for reluctant users of the product)
  • writing to provide a reference (for experts who need only occasional support)

I immediately saw a connection to the three content types that DITA prescribes:

  • concepts to teach understanding
  • tasks to guide performance
  • reference to offer facts or lists of information

Because writers have to immediately place the information they want to record into one of these three types, DITA trains them to write in a task-oriented, performance-based manner. I am especially interested in this “training” for wiki authors, and I talked about the idea at our recent presentation at the Central Texas DITA User Group meeting.

DITA for publishers (formatters)

Recently a few techpubs bloggers have been talking about DITA and its weaknesses, such as a lack of online help outputs and how technically difficult it can be if you don’t already have a staff with pseudo-programming skills. Gordon McLean writes “DITA is not the answer,” and I think the question he is trying to answer is, “What is a single-sourcing tool we can use in our environment (which includes Technical Communications, Training, Pre-Sales, and Marketing) with our current resources?” Instead of DITA, it looks like he’ll go with Author-it.

Since I moved this past year from BMC, which is still moving to DITA, to a small techpubs group that uses Author-it, I can understand his reasoning and agree with his business-case assessment. The toolchain for DITA is very nearly there, but a CMS-based approach often has too much overhead for small companies. It can be cost overkill when you have few topics to manage.

Scott Nesbitt followed up with his post, “DITA’s not THE answer for single sourcing.” I think he’s spot on with the analysis that “it’s difficult to get good PDF or online help from DITA without extensively customizing XSL stylesheets or passing DITA source files through tools like FrameMaker, Flare, or WebWorks.” One of his commenters said something about consultants smelling blood in the water – yikes. In other words, I think he meant that XML consultants knew how much customization would be desired and could have a feeding frenzy on the potential work. My guess is that the people who have been around XML for years know that there are still basic needs for output, and their experience has shown them that nothing structured is an “out of the box” experience. So much of the success depends on your content to begin with.

I’ve come to the same conclusions about output in my own experience. When you dig into single sourcing, be it with DITA or another tool (MadCap Flare, Author-it, FrameMaker, RoboHelp, or the Adobe Technical Communication Suite), the real business-case killer still seems to be: where can I get pretty PDFs that are formatted just as I like them? With DITA, one answer is to get the Mekon FrameMaker plug-in for the DITA Open Toolkit – no XSL-FO knowledge required.

People love the tools that get them their pretty PDFs or sleek online help systems. Plus, many employers out there have a lot of content that already looks pretty nice in a specific tool. That legacy documentation may be one reason why DITA hasn’t helped our industry get away from tool love. Tech writers and their employers fall in love with tools. I’m not saying Gordon or Scott are tool lovers, but certainly some of the people they’re hiring will be. There is probably also an element of “if it ain’t broke, don’t fix it.”

DITA for all?

Sarah O’Keefe has a thought-provoking analogy in her comments on her post signed “DITA Dissident.” The analogy is that creating desserts with a frozen pie crust is one method of getting results. If a pretty PDF is your ultimate dessert, then for some, DITA is a bag of flour, meaning you’d better be a skilled baker if you’re going to use it to make the best pie (PDF) ever. For others, DITA is a frozen pie crust that makes a perfectly good cherry pie (PDF), apple pie (plain HTML), or chocolate creme pie (Eclipse help). Although isn’t the filling the content and the pie crust the DITA map?

Their conversation first started with Eliot Kimber discussing DITA’s use for narrative documents. Alan Porter talks about DITA use for narrative writing as well, but in a different line of thought in his post, Is DITA Just a Story?

All the posts I’ve linked to are enjoyable for me to read, point to, and think about. I’ve said it before, and I’ll say it again: along with others, I believe DITA has the potential to transform our industry. Just last night I said to the San Antonio STC group that today we all speak HTML tags pretty fluently. In ten years, will we all speak DITA tags just as fluently? “I wrote the shortdesc according to the guidelines and it works for the topic, but I am not sure if my conref target is going to be there every time. I guess I should rewrite the concept topic.” Heed the warnings and experiences of others before making the leap to topic-oriented single sourcing, or your expectations and those of your customers may not be met.