October 7, 2012

Rob Echlin
Talk Software
» Review: DITA for Practitioners, Vol. 1

Author: Eliot Kimber. Publisher: XML Press.

I will start this review with my conclusion:
I recommend this book for the DITA developer, and also for the person designing your DITA Information Model. Some parts are useful for the writer who is using DITA, but there are other books for them.

In this review, I will describe how I think the software developer for a DITA implementation, someone whom Kimber would call a DITA Practitioner, should use this book for the first year or more of their first DITA deployment. Then I will describe what parts of the book I think an Information Model designer should concentrate on.

For the DITA developer:

When I got Eliot’s book in April 2012, I thought I had a good grounding in DITA development.
However, in “DITA for Practitioners” I found a lot of useful information that I had not seen before. In some areas I thought the writing was a bit pedantic, but valuable nuggets of information were often hidden even inside those parts.

First, Chapter 1 is a good summary of DITA; I refer new DITA developers to it. It helps define what you need to know when you move to DITA from other technologies. It explains that DITA takes a very different view of XML processing than previous systems did. In particular, DITA approaches all processing with two goals: extending DITA, and allowing data interchange with other sites. This is explained further in Chapter 5.

Chapter 2 is about setting up your DITA environment. I did not set my systems up exactly the same way as Eliot did, but I made some changes after reading this chapter.

I wish I had been able to read Chapter 3 before I set up my company’s DITA prototype, because I would have set up new local shell document types (DTDs) instead of modifying the existing DTDs for our use. For instance, when we later opened files from our first DITA prototype, we could not immediately process them, because our modified DTDs removed some elements that those DITA files used. Setting up our own DTDs from the start would have avoided that problem.

Chapter 3 looks like it is aimed at authors, but don’t skip it. Like most of the book, it is mainly for DITA implementers, and a lot of useful information that authors don’t need is squirreled away here. In particular, don’t skip the last half of this 100-page chapter: task steps for conditional processing (Authoring Step 15), using DITAVALs for filtering (Authoring Step 16), and guidelines for CCMS requirements are all hidden there.

In Chapter 4, you need to read “Introduction to Open Toolkit Customization and Extension”, pages 145-154, even if you have already written one or two plugins. It describes the XML elements of several plugin files. Even after reading it while creating my first plugins, I still refer to it occasionally.

When you are planning a new plugin, the examples at the end of Chapter 4 for HTML and PDF are required reading. They are sufficient to create simple working plugins, a bit more than what you need for “Hello World”.

Chapter 5, “Vocabulary Composition and Specialization”, tells how DITA creates the vocabulary of elements you use in your DTDs. This is very useful background information, but to create a new DTD shell yourself, you need the information in Eliot Kimber’s online tutorials, which will be in Volume 2 of this book. Read this chapter before you start, then keep it handy while you use those aids. The last section, “DITA and Namespaces”, is academic material, only useful if you want to know why DITA uses namespaces in some cases but not others.

Chapter 6 “Maps and Topics” is mainly theoretical. Skim it anyway. It starts by describing why the DITA approach is so different from previous XML document standards. It describes the importance of splitting topic order and linking (maps) from topic content, metadata use and precedence, generating several files from one input file and vice versa, and the various elements found in maps. You should find and read sections of it in detail when you use those parts of DITA.

Chapter 7, “General Structural Patterns in DITA”, describes how DITA files are organized, presenting theoretical material for practical use. First key take-away: just because the DTD lets you do it doesn’t mean you can do it in DITA. This is a good summary of how DITA files are structured, and why the DITA standard made these choices. Read this chapter once.

Chapter 8, “Pointing to things: Linking and Addressing in DITA”, describes DITA’s various linking methods and how to use them. You need to know this so you can constrain out the elements your organization doesn’t need, and so you can help your writers.

Chapter 9, “Reuse at the Element Level, the Content Reference Facility”, is about reuse. My team avoided using conrefs until we had tried out DITA on a prototype project. That still seems like a good decision to me. Add conrefs when you are comfortable with DITA.

Chapter 10, “Conditional Processing: Filtering and Flagging”, describes a topic that will inevitably arise in most writing teams. Again, like conrefs, we put it off at first. You need to learn how it works, implement examples, and help the team decide when to add it to your DITA implementation.

Chapter 11, “Value Lists, Taxonomies, and Ontologies: SubjectScheme Maps” is one I have only begun to understand. However, I think it addresses a problem I see at work, where we want to have different lists of “conditional” topics for different projects. I need to point it out to my Information Model Designer, and see if it solves our problem.

Appendix A: “Character Encodings, or What Does UTF-8 Really Mean?” presents useful information about character encodings. Kimber starts it as a rant, with the most useful information at the end of the chapter.

Appendix B: “Bookmap: An unfortunate design” is valuable reading for a Sunday afternoon, but in Technical Publications, you probably want to start out by using Bookmaps as the top map of your document. The bookmap should include only other maps, so it is one small file in each project. You can replace it with something better if you run into its limitations.

For the Information Model designer:

There is a lot of information here that isn’t in the books for DITA writers, but which is very useful, or required, for the Information Model designer.

If you are the Information Model designer, or the first DITA author on your team, you need to read Chapters 3, 6, and 7 first. When you are ready to worry about links, read Chapter 8. Chapter 8 also introduces reusable strings, because DITA uses its linking technology as one way to store strings. Because these strings can be used conditionally, as described in Chapter 10, Chapter 8 also spends a couple of pages describing conditional processing as it applies to keys.

For content reuse, read Chapter 9, about “conrefs”, the Content Reference facility. You use conrefs to reuse whole topics and common portions of a topic. This includes DITA’s second method for storing strings, this time strings that can contain other XML elements.

You will need to consider Chapter 11 when you have lists to track that are used in XML editing. This would include different lists for each of your projects to use for conditional processing.

Summary:

I find this book is often a better choice than searching the web, because it presents information that is not described or summarized as well in the DITA Users list or other venues. It has a mass of reliable, usable information. If you are a DITA developer, you need this book. If you are an Information Model designer, you will also find information here that you need and won’t get elsewhere. Your DITA developer will almost certainly forget to tell you about some of it, if they actually know about it. I know I found info while preparing this review that my Information Model designer needs!


Tagged: DITA, software, software team, tools

February 17, 2012

» Your daily tools: Tortoise and ls

GUIs are cute, and sometimes productive, but the GNU command line saved my sanity today.

Tortoise is a good GUI for using Subversion on Windows. It nicely flags all the files with status symbols on their icons.

Usually.

Sometimes it gets confused when a change is made 2 or more folders deeper, below the one on display. I don’t know whose cache is causing this – Microsoft’s or Tortoise’s, but it’s a minor issue.

It’s been worse since I upgraded to Tortoise 1.7.5. I jumped from 1.6.x to 1.7.5 the other day while writing docs for some tech writers, including how to install Tortoise.

I have several checkouts (OK, working copies) from the same corporate repository, all checked out in C:\svn. (OK, creativity didn’t seem necessary in this case, OK?)

Today the checkout I am most interested in was mostly not displaying its status icons. Yesterday I wasn’t as worried about it. Usually the entire tree showed no status icons at all; sometimes a folder would light up until I changed something. Then I noticed that all the “.svn” folders were missing, except in the top folder of the tree. Weird. I checked a couple of settings to make sure hidden folders were visible. For a while I harboured a grain of doubt that maybe the .svn folders really were gone.

So I went to the command line. “dir” didn’t see any .svn folders at all. That was because they were “hidden” by a Microsoft flag on them. “dir /ah” showed them, but not any of the other files/folders. Two dir commands required. Painful.

I have GNU Win32 tools installed, which is a port of the regular GNU tools to Windows.

So the answer was “ls -Al”, or “ls -A” for that economical look.
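For comparison, here is a minimal sketch on a Unix-like system, where dotfiles stand in for the Windows hidden attribute (the `lsdemo` folder name is just an illustration). One `ls -A` shows the ordinary and the hidden entries together:

```shell
# Dotfiles play the role that the Windows "hidden" flag played above.
mkdir -p lsdemo/.svn        # hidden metadata folder, like Subversion's
touch lsdemo/readme.txt     # an ordinary, visible file
ls lsdemo                   # plain ls hides .svn, like plain "dir"
ls -A lsdemo                # one command shows .svn and readme.txt
```

One command instead of two, and both kinds of entry in a single listing.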

Thanks to all the GNU developers and those who ported and packaged it for Windows. You help me stay sane on the MS platform.

The site to download for Windows is getgnuwin32.sourceforge.net.


Tagged: frustration, Linux, software, tools, Windows

February 28, 2011

» Yet Another Dave, A?

I appreciate Dave’s sense of rumour, his up-to-wait technical knowledge, and  his assistance.

His wiki has some (ok lotsa) stuff I don’t have.


November 16, 2010

» Redmine, please copy Bugzilla

And other projects with multiple dependencies that depend on other things, please copy Bugzilla.

Redmine, like Bugzilla, has this library installation problem. I installed Redmine yesterday, and installed it and installed it and installed it.

I had just installed Xubuntu, so I didn’t even have ruby on the system, and then I couldn’t install the gems, until I found out the package name for “gem” is “rubygems”. That was my second guess, so I didn’t even have to go to Google.

You might think that a project like Redmine would have the list of dependencies in order, but no: the installer page lists “gem install rake” and “gem install rails” before it lists “Rubygems 1.3.1 is required”. It’s something you only notice when you are new to installing apps for a specific language. (Note to self: set up an account on their bug site and tell them directly. :-) )

Yup, I just said I found the answer on their web page, one inch below the question, or maybe 3 cm.
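For the next person, the order the page should have given, as shell commands. This is a sketch, not a tested recipe; the package names are the Ubuntu names of that era:

```shell
# Install order implied above (Xubuntu, circa 2010):
sudo apt-get install ruby rubygems   # the "gem" command lives in the rubygems package
gem install rake                     # only works once rubygems is in place
gem install rails
```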

Solution

After only 10 years, Bugzilla finally solved this problem in version 3.2, with a little help from Perl and CPAN. When you install Bugzilla, or update it, it checks if all the requirements are in place. That’s where it used to stop, with a list of work for you. Now it installs them for you. Yup, it asks first, because it wants to know if you want them in the “global” site library for Perl, or the “local” one for the Bugzilla user.

And then it goes off and gets them, and their dependencies, and their dependencies’ dependencies, and compiles things that need to be compiled, and generally works hard while you work on something else, checking back occasionally for any questions it might have.

Redmine, and many other apps, could learn from this.

Maybe Ruby can go one better, as a community, and create a generic dependency tool that takes your list of dependencies and calls “gem” for you, installing the rubygems package first if necessary. You could call it an obvious name like “gemcase”, if that’s available, or come up with something creative.

This may seem obvious, but I will say it anyway: this gem installer thing can’t be a gem. More obvious stuff: it needs to know how to install rubygems on several OSes or platforms, including Microsoft platforms, by whatever name or means necessary on that platform. It should be available in your top-level folder when you untar your new toy, kind of like configure, although hopefully not as slow.

And no, you don’t want me to scratch this itch, or you might end up with a shell script. Although, maybe that’s just what you need here? ;-) Nah, you can assume Ruby is installed.
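For what it’s worth, the shell-script version of that itch might look something like this. Everything here is hypothetical, including the deps.txt file name and the GEM_CMD dry-run hook; it is a sketch of the idea, not a real tool:

```shell
# Hypothetical "gemcase" sketch: one gem name per line in deps.txt,
# installed in order. GEM_CMD is a dry-run hook for this illustration.
cat > deps.txt <<'EOF'
rake
rails
EOF
GEM_CMD="echo gem install"   # on a real system: GEM_CMD="gem install"
while read -r dep; do
    [ -n "$dep" ] && $GEM_CMD "$dep"   # prints "gem install rake", etc.
done < deps.txt
```

A real version would also have to bootstrap rubygems itself when the “gem” command is missing, which is the whole point of the complaint above.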


Tagged: frustration, review, software, tools

» Short URLs – Reduce the risk

Short URLs are valuable in instant messaging or other text length limited places. They are also a valuable method for marketers to track which marketing tool produced the best results. I actually didn’t know that second bit until I checked out http://bit.ly/pro/.

The risks

First
Short URLs mask the real URL from the user. When a bot or a stolen account inserts a link into an email to a list, or a comment on a web page, or anywhere else, users may follow the link to a dangerous web page or an unexpected porn site.

Second
Short URLs may hide the real URL from link-checking tools that try to safeguard users from dangerous web sites.

Third
Link rot. Most link checkers today do not identify short URLs, so they don’t check if the targeted URL exists. (Do any actually do this?)

Some fixes

Our apps and sites need to protect the web user from dangerous pages linked from comments and our own content. We also need automated tools to prevent hidden link rot.

Fix 1
Take control of the short links used on our web sites. Value added short URL services allow us to control the links that we insert on our own sites, and fix them ourselves.

Fix 2
Your site should provide really short links for itself. This would be like the permanent links that blogs provide, but really short. Since the domain name would be the same, it would be obvious what site the link goes to, at least. Some blogging software sort of provides this capability now, in the form of customized perma-links, but I want both normal long permalinks with the title in them, and really short links. This might not replace normal short links for 140 character SMS messages (Tweets) if your domain name is long, but would definitely help for other purposes.

Fix 3
Ask for link checking services. Ask:

  • The virus checking service that filters your email
  • The site monitoring service that checks that your site is up.
  • Google
  • Black list services
  • Any other suggestions?

Opportunity

If you want to implement Web 2.0 services inside your company, or if you want to provide them for other companies, this suggests:

  • a short URL service inside your firewall, for the Instant Message service you provide between employees.
  • a private short URL service available externally, for marketing. This follows from Fix 2. You can do all the analysis you want on the links that are followed. You may not be able to link to all the other info that major services have on (almost) everyone who clicks on the web, like their name, address and phone number ;-) , but your data will be private.
  • A short URL button to go in the major browsers, available to all employees
  • A button on the button bar that you provide to all employees
  • A short URL app for major cell phones, linked with your internal server

References

Problem

  • http://www.brighthub.com/computing/smb-security/articles/58970.aspx
  • http://www.csoonline.com/article/496920/new-spam-trick-shortened-urls


Tagged: internet tools, short urls, tools, web 2.0

May 13, 2010

Pythian
» An SSH tool to make your life easier

A MySQL user group member saw that I use Poderosa as my ssh-on-Windows tool, and asked why I did not use PuTTY. My response was that I like having tabbed windows and hate having to keep opening another PuTTY program every time I want to open another connection. With Poderosa I can open a new [...]

July 5, 2009

» Getting the real Vim on Debian/Ubuntu

Did you notice that vi is pretty much feature-free on newer Debian systems? It doesn’t even have syntax coloring.

The default vi package installed on Debian Lenny and Ubuntu is vim-tiny, which is really restricted, and very appropriate for really, really small environments.

To update, install vim-nox for the real thing.

If you want the GUI version, install vim-full, which basically installs vim-gnome.
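As apt commands, with the package names given above (a sketch for Debian Lenny / Ubuntu of that era; names may differ on newer releases):

```shell
sudo apt-get install vim-nox     # the real thing, console only
# or, for the GUI build (pulls in vim-gnome):
sudo apt-get install vim-full
# quick sanity check: vim-tiny reports "-syntax", the real thing "+syntax"
vim --version | grep -o '[+-]syntax'
```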
