Tag Archives: CMS

STC2008 – From Nightclub DJ to Content Management Consultant

Subtitle: Developing a Business Career The Content Wrangler Way. Scott Abel’s career path, presented at the STC Summit in Philadelphia, June 2008.
From the ever-entertaining Scott Abel, this was an invigorating session that still kicks you in the butt to get out of your whiney mode and into a winner mode. Sounds cheesy to repeat, but it worked. Here are my notes from the session. I’d love to hear your thoughts and critique on my “live blogging” style – too much information, not enough information, not the right information? Let me know.

Routes to tech comm – English major or developers accidentally become tech writers

scottabel.com – he crafted a career – but Scott didn’t grab that URL (he’s obviously not that Scott Abel :)

He earned 146 credits in four different programs and didn’t earn a degree.
He could have gotten a college degree, but decided not to pay the “fees.”

Still takes classes, like knowledge-enabled information management at Indiana University – 8-5 every day for three days, a presentation to 200 people as a capstone, and you fail if you’re late or don’t play by their rules. But it’s three credit hours.

John Herron School of Art in Indianapolis – a foundational school – you should have drawing or sculpting skills, though.
Business school, next stop – he lasted one semester. It wasn’t about the answers, it was about how you get the answers – the answers are on the back of the syllabus.

Next stop, photography – first working with digital photography, won some photography contests by accident.

Journalism school at Indiana University – and he worked there too. He went to and helped with a computer-assisted journalism conference. Use computer technology to cull through all the data.

He started in entertainment journalism, friend of Margaret Cho, has interviewed Elton John, other celebs.

Started a local alternative magazine… fun, exciting, and profitable. Assignment in journalism school – write a business plan for a magazine… he just did the magazine, didn’t do a plan. A 72-page monthly publication, two guys with too much time on their hands – sold upscale ads and actually made revenue.

He waited tables to get through school, learning that he could make 200-300 bucks a night, and he met influential people. The Pan Am Games, a miniature Olympics hosted in Indy, got him more experience.

He had the attention span of a worm – didn’t lead to very many opportunities.

Became a bartender – clock in at midnight, clocked out at 3-4 am. But felt he lost time during those “young” years even though he had flexibility and enough money.

Age 14: his first gig as a DJ. Learned how to mix, which taught him about content reuse and personalization… play the wrong song and everyone hides like roaches. Or perhaps it’s on purpose – when the music sucks, beer and drink sales go up.

Wrong song, or wrong version of the song. He had a remix of “Chitty Chitty Bang Bang” that got played on Chicago radio.

Remixes were user-generated; 45s were all they had to work with, and they’d buy two copies of the single because they needed songs longer than 3 minutes. So… two turntables and a mixer – you had to understand the tempo, tone, and feel of a song, but tempo control was the key. The Technics 1200 turntables are still the instrument of choice for many DJs.

Reuse is in the remix… that’s how tracks were laid down… vocals reused identically but combined with different styles of music.

Madonna explained how her voice could be changed – the tools allowed her voice to stretch the way a square stretches proportionally when you hold down the Shift key…

DJ mixing and its increasing complexity are similar to the content choreography that we do – and the technology is increasingly complex.

1999 – employment counselor said, you’d be an excellent technical communicator with your skill set.

Put together a portfolio

First job, documenting mortgage loan automation software, $45,000 – he could buy groceries and kick out his roommates. Bedazzled by corporate America… benefits, paycheck, vacation.
Had folders called “Betsy’s documents” – totally disorganized, inefficient, wasteful, later they were sued out of business. Their automated software was

Started reading Ann Rockley, Bob Glushko, JoAnn Hackos, all of whom had really good best practices towards fixing the mess of content he was seeing at work.

Ann Rockley sent Scott a draft of her book, Unified Content Strategy, and he became technical editor on the book.

He needed a way to get organized, get away from notes on paper in his backpack, started a blog to be a storage container for his knowledge.

(Side note – I have to enter my “cringe” essays from grad school)

Once he got attention for his blog, more people started talking to him, asking questions, and asking for help solving problems.

Started speaking at events, but then had to define his value proposition. Rebranded himself as a Content Management Strategist.

Tools that can tell management that content is valuable and that the product can’t ship without it. Value proposition can’t circle around their job – content needs to be valued.

Syndicate Conference 2006 – he was encouraged to think bigger. He started commoditizing the site. Conferences are a natural extension of what he was writing about; his readers wanted to learn more about it.

Presenters seek attention – same folks who speak at conferences write articles and participate in groups.

Need for a community – 1900 members of the Content Wrangler community… there needed to be a way for people to connect to one another without Scott’s help.

Being an individual consultant is not scalable – and this is good news for you. You can create your own value proposition.

The discipline of Document Engineering – Bob Glushko, no future in commodity writing – the future is in solving content challenges. Structured content, XML, move content around, but not just documents – documents married with data from databases. Opens up a brand new world.

Road to success – don’t allow others to define you, no one right way to become a content management expert.

He’ll post to slideshare.net (youtube for ppt)

scribd.com (youtube for pdf) ipaper service

http://thecontentwrangler.ning.com Community site

Harmonizer product – will eventually let you analyze content using a web page

Acrolinx acrocheck product

How much coding does Scott know?
If you don’t know how to model content, you shouldn’t be coding. You have to be able to analyze content before you model it, even.

What’s next for Scott – providing service designs, such as RSS feeds. Problem solving: providing services that give people answers before they ask, such as a notice that a mortgage payment is due, or governments issuing fishing licenses.

Another question – any certificate programs you’d recommend? None, says Scott. Writing for reuse isn’t part of these certification programs. What about DITA? Programs are often focused on tools, not skill differentiators.

The Rockley Blog – wikis as delivery mechanisms

I’m thoroughly enjoying the new Rockley Blog at http://rockley.com/blog/. I’m so glad Steve Manning and Ann Rockley are blogging, especially about wikis.

I appreciated Steve’s post “Wikis for Documentation” especially where he says that the delivery side is the weak point still. Agreed, but I have seen pockets of improvement there and need to be blogging heartily about them.

About three years ago I was equally unconvinced of a wiki’s usefulness for end-user doc. An Agile-advocating developer mentioned the idea of using wikis so that the end-user doc could stay in sync with their fast Agile iterations. Yipes, I thought! Wikis did work well for convincing the developers to contribute their knowledge, though, and internally they became a useful knowledge sharing (and finding) system.

Now I’m starting to see more and more actual customer need for them. I just had a great discussion about the difference between what a customer needs and what a customer thinks he or she wants, though, so some of what’s necessary is interpreting whether a wiki can fulfill a customer need. I got a small chuckle out of the title of this blog entry – Wikify Documentum Already – but he’s talking precisely about the gains you and your customers make when documentation is in a wiki. The interesting momentum going now is whether the current large enterprise content management systems can start to see the value in a wiki output, or whether the wiki engine providers themselves are going to catch up with the full feature set available in the large enterprise systems.

But wow, I’m learning how difficult wiki maintenance and trust patterns can be while using the wiki.laptop.org site to help out with their end-user doc. In some ways, wiki doc is more difficult than using the real CMS tools that we’ve become accustomed to (read: spoiled by). But I’m also learning how amazing the collaboration opportunities are when using a wiki. I’m still marveling at the communication going on in the discussion pages as well as the volunteer spirit that has come through a request on an art network.

So, what to do to get decent output for content delivery using the multiple channels that we advanced single-sourcers are already accustomed to? I’m planning to move the XO’s end-user doc towards the Flossmanuals.net model of a highly customized Twiki implementation where you can get and print a PDF from the wiki. I can’t wait to learn more and I’ll blog about it as I find out. It’s a step in that direction, though, where you deliver user guides and online help and web sites tailored to the needs of specific audiences. Language translations in a highly distributed environment are going to be an important part of the project, and I’m curious about how Flossmanuals provides for that aspect.

I’ve learned that you can write a Confluence plug-in that will take DITA source and turn it into wiki text. Confluence has PDF output capability as well, so it’s another step in the right direction to get that just-in-time content delivery that a customer needs (but doesn’t know that they want.)
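I haven’t written that plug-in myself, but the gist of the transformation is easy to sketch. Here’s a minimal Python illustration (not the actual Confluence plug-in; it handles only the basic DITA topic/title/body/p/ul elements) of what going from DITA source to wiki text involves:

```python
# Minimal sketch: convert a simple DITA topic (title, paragraphs,
# bulleted lists) into Confluence-style wiki markup.
import xml.etree.ElementTree as ET

def dita_to_wiki(dita_xml):
    topic = ET.fromstring(dita_xml)
    lines = []
    title = topic.find("title")
    if title is not None:
        lines.append("h1. " + (title.text or "").strip())
    body = topic.find("body")
    for el in body if body is not None else []:
        if el.tag == "p":
            lines.append("".join(el.itertext()).strip())
        elif el.tag == "ul":
            for li in el.findall("li"):
                lines.append("* " + "".join(li.itertext()).strip())
    return "\n".join(lines)

topic = """<topic id="example">
  <title>Installing the widget</title>
  <body>
    <p>Unpack the archive.</p>
    <ul><li>Check permissions</li><li>Run setup</li></ul>
  </body>
</topic>"""

print(dita_to_wiki(topic))
```

A real plug-in would of course cover tables, cross-references, conrefs, and the rest of the DITA vocabulary, but the round trip is the same idea: parse the structured source, emit the wiki markup.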

Putting documentation in a wiki (or any really-well-indexed web location, really) can increase findability. If you get internal comments that say “you haven’t documented this particular feature enough” and you feel the feature is sufficiently documented, examine the findability of your documentation.

Also in our user advocacy role we are learning how to listen to customers and then interpret their needs. As information acquisition continues to gather speed, we not only provide the information but should also make informed choices about delivery methods.

Examples of what a customer wants but might not know that a wiki can deliver:

  • What’s new with the product?
  • How do I interact with documentation, support, the company represented as an actual live person?
  • I want immediate information updates.
  • I want to discuss the nuances of an implementation decision.
  • I need to find others who are attempting what I am in the same type of field (insurance or banking).

What are your thoughts? Are we spoiled by our advanced delivery systems and waiting for wiki engines to catch up? Or are wiki authoring and delivery systems already giving us collaborative opportunities that are unparalleled?

Podcast about wikis with Tom Johnson at TechWriterVoices.com

Everyone, please give the TechWriterVoices.com site a look or two or three or more. It’s such a great site. I’m not just saying that because I have a podcast there now, Answering Tough Questions about Wikis – Anne Gentle, but because I do want to tell everyone what a great interviewer Tom Johnson is.

Tom sent ten excellent questions ahead of time via email and I spent some time answering them by writing out some notes. I’ve sharpened them up for those of you who, like me, don’t really have the auditory learning style that listening to podcasts serves best. I always want to be able to scan my media and pluck out the pertinent tidbits, and few podcasts have that feature. (Side note: with QuickTime movies you can make chapters which allows you to scan the chapter titles for the content you’re most interested in.)

So, here’s the interview in text form, but not a complete transcript. We skipped around, Tom edited around the audio difficulties, and sometimes I just answered the question differently.

1. How can we manage and re-use content when it’s wrapped up in mediums like wikis? Can you single source topics? Can you export the content to Word? Or is it locked in its own format?
Nice lead in. This is the ultimate question that many people are asking me and I am having lots of conversations via email and lunches, trying to come up with ideas.

Wiki engines are becoming more and more powerful and CMS-like. WikiMatrix.org has 95 wiki engines listed as of this week. Yes there are still very simple wikis but there are also people making wikis run really powerful websites with amazing wiki extras – like threaded comments, conflict resolution, and so on.

But about single sourcing and wikis…

One idea is that you export your source to a wiki, and then when any changes come in, you have a human cull through all the edits and figure out what should go into your source files and what the next version of the wiki should be, like book publishing. This method is similar to going through support forums or support call logs to see what you’ll include in the next version of your book. Lots of work and thinking.

Another idea for single sourcing a wiki is that the wiki files themselves are the source – if we could invent a DITA wiki where a webform is used to edit DITA topics then we’re there. The wiki platform would also become a publishing engine. Also Confluence appears to work on this idea, that the wiki files are the source and you can publish out to PDF.

Paul Kandel at Intel has this idea that you’d have an offline wiki, with synchronizations every once in a while (you’d set the time parameters). He’s talking about the Dojo Offline Toolkit (http://dojotoolkit.org/offline), which is based on Google Gears (http://code.google.com/apis/gears/) and could provide an outstanding solution to the issue of generating offline doc formats. You could enable the wiki itself to be viewed offline, and user additions could automatically synchronize with the online version. I’m not sure I fully understand the advantages and disadvantages here, but the source could be “frozen” in time every once in a while.
Another idea is to be very selective in what you put in a wiki, and make the wiki the source for that selective info only.

Many wiki platforms have import and export capability. However, as my co-worker Mary said on the Author-it users Yahoo group, try not to have your precious content be downstream, meaning, if someone asks you to export all your Author-it topics to Word so that they can be imported into Wikimedia, try to find a way to have the content go the other way around (export Wikimedia content and then import it into Author-it).

I’m also working on the One Laptop Per Child project which is an education project to provide opportunities for children to explore and express themselves. It’s a neat project where we would really like to figure out how to go from wiki content to Author-it and then use Idiom for translation… but it’s probably a “copy from the OLPC Wiki and paste into Author-it” method because the wiki.laptop.org content is not written for the audience we have in mind. The docs for the kids are translated into at least seven languages. It’s an interesting project to be on, and I’m learning so much as I go.

2. You found that with Wikipedia, fewer than 1 percent of users contribute more than half the edits. Should technical writers expect the same 1 percent user contribution effort with the wikis they create?
I just read on keycontent.org that even the MSDN wiki is dominated by its top 5 contributors. Of the 1876 contributors, 5 (three from Microsoft) made about 1500 of the edits (out of 5800 edits). I believe you should expect a similar contribution effort as long as those numbers continue to be displayed in other wikis. I suppose the key is recruiting and maintaining relationships with those users. And really, you probably want just a few core inner-circle type of people so that you can maintain positive relationships. It’s like an active listserv – you can probably count on one or two hands who the inner circle of contributors are. Those are your experts who are also helpful and giving.
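Just to put those MSDN wiki numbers in perspective, here’s the back-of-envelope arithmetic:

```python
# Concentration of edits on the MSDN wiki, per the numbers cited above.
contributors = 1876
top_contributors = 5
edits_total = 5800
edits_by_top = 1500

share_of_people = top_contributors / contributors * 100
share_of_edits = edits_by_top / edits_total * 100

print(f"{share_of_people:.2f}% of contributors made {share_of_edits:.0f}% of the edits")
```

So roughly a quarter of a percent of the contributors made about a quarter of the edits – the same lopsided curve as Wikipedia’s.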

3. If it’s only 1 percent, and your audience is 500 users, that’s only 5 people making edits. Is the wiki even worth it then?

It’s certainly worth it to the 5 contributors who are motivated for whatever reason. And it’s difficult to judge the value from audience size alone – if your manuals had an audience of only 5 people, would you write them anyway? In some cases, yes, because many products need the manual for quality purposes – it’s just an expectation of a software or consumer product. For some products, a wiki is an expectation (probably as an extension of highly visible/visited forums).

I write more about motivations for contributing to wikis in this article on The Content Wrangler. (Think of Reciprocity, Reputation, Efficiency, Attachment to a group).

4. What type of information is a wiki best suited for? Reference? Troubleshooting? Living documents? Why?

I believe that there are several reasons to use a wiki for certain types of information. For Reference information, a wiki is searchable and allows for easy upload of large log files (although if the reference info is in tables, well, I’m not sure how each wiki engine would handle table formatting). Troubleshooting info in a wiki follows the “better than a support forum” model for wiki building. A document that changes often or should change often might be suited for a wiki. Internally, developers can use wikis to explain why the software was designed the way it was, what gotchas in the code to be aware of. Wikis are quick methods for team meeting note-taking and that sort of thing, which would be reference info. So I guess the type of information is the type that people want fast and want to collaborate on.

One person on Gordon McLean’s website onemanwrites said that a wiki plus a Google search appliance is a really good thing. I found that to be the case at BMC with the internal wikis and searching; the combo was killer for finding info or for finding collaborators quickly. That commenter also had the good sense to say “it’s worth watching what goes into it to guard against people setting bad advice in stone.”

5. Is the typical wiki editor robust enough to support all the complicated styling that technical writers do? Can you create your own styles? How hard is it to work with graphics?

Some wiki engines are robust enough… lots of technical writers that I talk to like the Confluence editor’s robustness. I’m not sure if technical writers do much complicated styling really, compared to magazine layouts and glossy brochures. Or if we do need complex styling, the copy is sent to layout.

I’m also not sure how complex tables are in most wiki engines. I think for the most part, you get headings, paragraphs, and lists. Beyond that and levels of those, what else do you need for much technical writing?

Graphics can be a complete bear to get in… apparently one of the Motorola writers had a heckuva time importing the graphics from the user manual into the Mediawiki-based wiki with the time crunch she was up against.

6. Who is actually using a wiki? Have you personally used a wiki successfully for a product?

Many people are using wikis internally. I myself was more interested in wiki use externally for product documentation, and interviewed Emily Kaplan at Motorola and Dee Elling at Borland for information about technical product documentation housed in a wiki or wiki-like structure. Harry Miller at Microsoft also interviewed one of the PMs for the MSDN wiki.
Wikis are a favorite quick website set up for many open source-type projects. Also the gaming communities seem to have flocked to wikis either as a replacement or enhancement to customer support or game discussion forums.

I’m using www.imiscommunity.com for my current job, and it’s highly effective for making quick web pages for internal or external use. Plus the search engine is useful. It’s a Drupal site but nearly every page has an Edit tab, so it’s wiki-like.

7. Wikis have been around for 10 years. Wouldn’t they take off if they were going to?
I think they have taken off in certain circles, just not yet in ours. I think that with a set of best practices or push from customers or employers, wikis will be in demand, and it’s a matter of positioning your tech pubs department as the overseers of that content. Without a strong guiding hand, a wiki isn’t that useful because people don’t know what to contribute or how to handle revisions and so forth.

8. What if a user makes a change you don’t like? Do you change it back, offending the contributor? Or do you leave it in, offending the other users? How long do you stick around making these decisions?

I think that you almost need to think of the contributor as a team member, and then behave as you would with a colleague writer. If it’s inaccurate, you change it and reason with the person as to why it has changed. If you’re looking to save time, or don’t have the time to arbitrate, then don’t take long to make those decisions, or don’t get involved with that particular change. But don’t expect to build a wiki and have all these contributions come for free or little resource or time investment. That’s just not realistic.

9. How does a wiki build community?

In itself, a wiki can’t build community. There’s a great quote from Wikis For Dummies under a heading entitled something like “don’t go on a wiki suicide mission.” It says, “Wikis don’t have magical powers. They cannot create camaraderie where none exists, nor can they streamline an out-of-control operation. They are not powerful information magnets, nor will they make your team better writers, more organized, or more intelligent. In short, without a strong guiding hand, wikis are useless.

Wikis cannot promise instant returns or unbelievable creativity. Wikis allow users to quickly and easily update and upload information.”

I enjoyed the heck out of that quote. But, many web workers are finding that a wiki is a place to find other like-minded individuals trying to tackle the same problems and offering similar solutions, much like a customer support forum. So a wiki can help build community by offering information and identity to the contributors.

10. As a technical writer, are you ever done with a wiki project?

I’d say No. Wiki building is a lot more about relationships and connections so you’d never want to sever those ties if there’s still a bond there. Anyone who thinks that starting a wiki will make less work for their techpubs team by crowdsourcing the writing is fooling themselves. But if you want to serve and connect with a certain set of customers, you’ll do what it takes to keep the wiki alive and kicking.

Author-it webinar on version 5.0

Our team is so excited for the new Author-it version 5.0 that we invited other Austin techpubs teams over to our office to watch the US & Canada webinar. It was like a movie premiere. Okay, we’re not really that big of dorks. But we had a good time with it.

By my calculations, it was about four AM Australia time but both the presenters were troopers, even when the typed-in license key didn’t “take” during the migration portion of the demo. All of us in the room empathized with her and she smoothly avoided any delays.

Ribbon bar and organized styles and templates

I’m mostly excited about the new interface. And New Zealanders say “ribbon bar” so sweetly. It’s such a nice update. I’m really looking forward to using it. The organization of styles and templates makes sense as well, and I am glad to have separation between paragraph styles and character styles.

The search bulk-ups contain the features my co-worker was looking for – search within a folder and match case or whole word. Also the search within a topic as a customizable panel pop-out is going to be highly useful. That new editor interface is especially exciting. It reminds me of XMetal’s editing environment.

Author-it publishing profiles

We can’t wait to start trying publishing profiles. We wanted to just start clicking in the dialog boxes displayed during the demo. Their knowledge base says that eleven profiles are shipped right out of the box (mapped directly to publishing outputs). The output types I see in version 4.5 are DITA, Word, PDF, HTML, XHTML, HTML Help, Microsoft Windows Help (RTF-based), Java Help, Oracle Help, XML, and Author-it Website Manager format. Since that list adds up to eleven types, I guess there are no new outputs with this release.

Author-it Xtend

While we don’t currently have a use for Author-it Xtend, I found it fascinating as a concept. Why would a techpubs department pay money to a vendor for an embedded search engine to try to encourage writers to re-use? Why not just spend that money on training (or hiring) writers to think more about topic orientation and re-usability?

My co-worker pointed out that in a translation situation, Xtend might pay for itself in one translation round. It seems so very Google-like. There’s this sliding bar for more Fuzzy matches and more Relevant matches. There’s color coding for matches. I’m automatically drawn to it like a moth to a back porch light, yet I’m not sure of the best applications for the search hits nor what problem teams should expect to solve with this functionality. Perhaps someone can tell me the best scenarios for this add-on?

We copied the Questions and Answers from the webinar, and people asked plenty of questions. Here’s just a sampling:

  • Does 5.0 run on SQL Server 2005? Yes.
  • Does it run on Windows Vista? Yes.
  • What about the new Project Manager? It’s purchased as a separate module but is integrated into 5.0.
  • Is the 5.0 upgrade covered by a maintenance contract? Yes.
  • Can you upgrade from 4.3 to 5.0? Yes.
  • Are presentations part of 5.0? Yes.
  • And the final question was a good comparison: Can I use the Filters (Variables) on the Publishing Profiles as I would have used conditional build tags in RoboHelp to include/exclude content from specific output types? Am I understanding this correctly? The answer: yes, that’s correct.

So, exciting new features abound and we can’t wait to get our hands on them. I’ll keep you posted on our progress.

Author IT – boldly climbing the learning curve

I thought I’d continue posting about my experience with Author IT since my initial review of AuthorIT. This past month I’ve been exploring ways to help out with the Author IT nuts and bolts, doing maintenance-type and infrastructure tasks such as changing the on-screen style formatting and revising a book template.

I’ve also been learning more about what is involved with the overall task of maintaining a single-source library with nearly 20,000 objects in it. Objects can be topics or books or hyperlinks or index entries or graphics or… many other items, so I’ll have to dig deeper to get a sense of how many topic objects we deal with daily. Ah, here we go – do an Advanced unqualified search filtered for results of type “Topic Object,” then select the ones not marked “Obsolete” or “Orphaned” (meaning not used in a book object), and the answer is: we have over 7,000 topic objects in our library.

I maintain that the learning curve is steep but I’m fortunate (or sometimes unfortunate) that I’m approaching these tasks with an idea of how I think it might work. Plus I have an Author IT expert sitting in the office next to me who still answers my IMs when I ask Author IT questions. (Thanks, Mary!)

Changing the state

It is still taking me a while to get accustomed to the workflow that requires that I change the state of an object before making edits to it. If the topic I want to edit isn’t in a writeable state, then I can’t make my edits until I locate the object so I can right-click it to change the state. Maybe I’m missing some shortcut to how to change the state while editing a topic’s text. I’ll have to poke around the tabs a while. It’s more likely that I need to make a shift in my workflow and remember to select the topic objects I want to edit, change the state, and then begin the edits.

Search mechanisms

My understanding is that there are two basic search mechanisms and both are rather underpowered for the amount of legacy information we have stored. (I’ll have to get a topic count to give real numbers here.) The first search mechanism is searching the entire collection of topics and books and sub books. The Advanced Search checkbox is always checked in my environment.

The second search mechanism is on the actual text within topics – you can search for text within a topic, within a book, or within the entire library. This mechanism is found from the Edit > Find menu command in AuthorIT Enterprise Edition.

What I’ve found recently, however, is that you cannot replace formatting on the found items. This limitation means that you could have semantically tagged items that cannot be retagged. For example, if you had tagged all your menu items as “menucascade” but needed to change the tagging to “breadcrumbnav,” you would have to export the topics to an XML editor and do the search and replace there. I don’t yet know how to batch export, say, thousands of topics to do this search and replace and get the semantic tagging you wanted. This analysis and potential workaround are based on searching within the Author-it Yahoo Group’s messages, so perhaps there is another way to search for both text and formatting and change both, but I haven’t found it yet.
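To make that workaround concrete, here’s a hypothetical Python sketch of the retagging step once the topics are exported to XML files. The folder name and the sample markup are purely illustrative – real Author-it exports look different – but the menucascade-to-breadcrumbnav rename is the scenario from above:

```python
# Hypothetical sketch: after batch-exporting topics to XML, rename a
# semantic tag (menucascade -> breadcrumbnav) in every exported file.
# The folder, file, and markup here are illustrative, not real
# Author-it export output.
import pathlib
import tempfile
import xml.etree.ElementTree as ET

def retag(path, old_tag, new_tag):
    """Rename all old_tag elements in one XML file; return the count."""
    tree = ET.parse(path)
    changed = 0
    for el in tree.iter(old_tag):
        el.tag = new_tag
        changed += 1
    if changed:
        tree.write(path, encoding="utf-8", xml_declaration=True)
    return changed

# Demo on a throwaway export folder containing one sample topic.
export_dir = pathlib.Path(tempfile.mkdtemp())
sample = export_dir / "topic001.xml"
sample.write_text("<topic><menucascade>File &gt; Open</menucascade></topic>")

for xml_file in sorted(export_dir.glob("*.xml")):
    count = retag(xml_file, "menucascade", "breadcrumbnav")
    print(f"{xml_file.name}: {count} element(s) retagged")
```

You would then re-import the corrected topics; whether the round trip preserves all the object metadata is exactly the part I still need to test.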

Even with these two search mechanisms at our disposal, we find it easier to use a Google search tool on our external database at docs.imis.com, then right-click on the HTML page to get the topic ID, then use that topic ID to search the AIT object database.

Author IT Yahoo Group

Now, I just went through the Yahoo Group messages again to learn more about the searches in AIT, and I really do like the community there. People are very helpful and maintain a nice sense of humor and goodwill. That’s an important aspect of any tool selection, I think. Anyway, there is a way to search within a set of found items: do a search using the Search tab first, then press Ctrl+A to select all the topics that match the search criteria, and then run the Find and Replace command on the selected topics. That search also revealed a potential limitation – AIT’s inability to find period-space-space and replace it with period-space (see The PC Is Not A Typewriter for the explanation of why a single space after a period is correct).
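Outside the tool, that period-space-space cleanup is a one-line regular expression. A quick sketch:

```python
# Collapse two (or more) spaces after a period down to one space.
import re

def single_space_after_period(text):
    return re.sub(r"\. {2,}", ". ", text)

print(single_space_after_period("First sentence.  Second sentence.   Third."))
```

So an export, a pass through a script like this, and a re-import could serve as a stopgap until the tool can do it natively.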

A third search mechanism that we could make use of, but that doesn’t yet exist, would be the ability to search within a folder. We can use the trick mentioned above where you select all the objects in a folder and then do a Find. A Folder in AIT is just a representation of the objects in a collection – basically another view of the database, and not searchable folder by folder. But that’s a find on text, not a find on objects or on the metadata of those objects.

Variables to substitute text values

I find that the variable mechanism is a little bit clumsy. Variables are simply text enclosed in angle brackets <substitutethisforthat>. So you still have to do a search and replace for text when you want to choose a different variable name. If you use angle brackets in your documentation, AIT has to be told specially that you meant to do that and that those should not be resolved to a variable name. So, if you really want angle brackets to appear as angle brackets and not resolve to a variable, you have to use the HTML trick of ampersand lt semicolon.
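A tiny sketch of that escaping trick (the function name is mine, and a real pass would need to skip any brackets you do intend as variables):

```python
# Escape literal angle brackets so they are not mistaken for
# <variable> placeholders -- the "&lt;" trick mentioned above.
def escape_literal_brackets(text):
    return text.replace("<", "&lt;").replace(">", "&gt;")

print(escape_literal_brackets("Type <Enter> to continue"))
```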

Running AIT publishing from the command line

One nice feature is the publishing engine’s batch processing, which will even output the commands for you so that you can include them in a batch process. We found that the outputs are always placed in the folder of the user who is logged in to AIT, despite using a documented command-line parameter where you feed in a user and password for the batch processing. Mary found a nice workaround where she just copies the files she needs out of another folder (the _Output folder in our environment), but it seems like a waste of disk space to me to have a second copy of the output in each user’s folder. We can do some cleanup using the batch files to ensure that disk space is freed up, however.
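Here’s a hypothetical sketch of that copy-then-clean-up step. Every path, pattern, and folder name below is illustrative (the real ones depend on your AIT installation and build share); the point is just: collect the outputs you need, then delete the duplicate copies to reclaim the disk space:

```python
# Hypothetical post-publish cleanup: copy the files we need out of the
# logged-in user's output folder, then remove the duplicates to free
# disk space. All paths and patterns are illustrative.
import pathlib
import shutil
import tempfile

def collect_and_clean(output_dir, dest_dir, patterns=("*.pdf", "*.chm")):
    output_dir, dest_dir = pathlib.Path(output_dir), pathlib.Path(dest_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for pattern in patterns:
        for f in sorted(output_dir.glob(pattern)):
            shutil.copy2(f, dest_dir / f.name)
            f.unlink()          # reclaim the disk space
            copied.append(f.name)
    return copied

# Demo with throwaway folders standing in for _Output and the build share.
out = pathlib.Path(tempfile.mkdtemp())
(out / "UserGuide.pdf").write_text("pdf bytes would go here")
dest = pathlib.Path(tempfile.mkdtemp()) / "builds"

moved = collect_and_clean(out, dest)
print(moved)
```

A script like this could run as the last step of the lights-off batch build.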

Magical price point

Let’s face it, since it’s in the four figures for a seat license, Author IT is a relatively inexpensive all-in-one single sourcing tool that has both a straightforward editor and a content management system. A small techpubs department looks pretty darn good when they can deliver manuals and help as part of an automated build in a lights-off no-touch system. And the savings in translation costs when you single source are unmatched.

Where AIT “feels” inexpensive though, is in the slightly outdated interface (why can’t it remember the window size after being shut down?), somewhat underpowered search methods, and so far, I just can’t shake the general feeling that you’re not really owning or editing “source” files but rather some Word-like representation of the source.

Still, it works wonders and lets our small techpubs department output some high-quality professional content, more content than possible without a single-sourcing tool. So I’ll face the learning curve and continue to climb it.

Hyperviews Online article about CMS for website

Here’s a link to my latest article on Hyperviews Online, the newsletter/blog for the Society for Technical Communication Online Special Interest Group (STC Online SIG). It’s called Using a Content Management System (CMS) for your STC community web site.

The STC Austin chapter is re-designing the community website, and I volunteered to help. We started researching CMS use and I found that quite a few sites were using WordPress, so I emailed the webmasters to learn more. The article is a result of that email survey where I learned more about WordPress as a CMS.