
Industry insights

WritersUA: Wireframing tools and techniques

Michael Hughes, IBM ISS Security Systems

Yay, I finally get into a session.

Wireframes can be high fidelity (rendered dialog box that looks like the real thing) or low fidelity (sketch on a bar napkin). Fidelity actually has several components: appearance, medium, and interactivity.

Low fidelity appearance is something that looks (or is) hand drawn. High fidelity looks like a finished UI. Low fidelity appearance can be advantageous because people don’t get distracted by visual details.

Low fidelity medium is paper; high fidelity medium is an actual user interface.

Low fidelity interactivity is static—a picture of the thing. Then, you have scripted interactivity, where you take people through a scripted, controlled sequence. Next is intervention: the user says what they would do, and then the UX designer shows them the next result. This can be done with paper prototypes. Finally, you have functional interactivity, where the various UI components actually work.

Low fidelity advantages: Quicker, easier, and cheaper to create and modify. More importantly, people are more willing to give feedback on something that looks unfinished. People are afraid to give feedback on something that looks polished because they don’t want to hurt your feelings, but if you provide a low-fidelity wireframe, you will get much more candid feedback.

Low fidelity disadvantages: You might get detailed feedback on irrelevant details (“this button should be square and not rectangular”). Limited ability to watch users interact. Some users cannot visualize the final product from a low-fidelity version.

High fidelity advantages: The prototype is more realistic. Easier to understand and less room for misinterpretations. You can watch the users interact with the design.

High fidelity disadvantages: More expensive to create, less encouraging of feedback, people focus on minutiae, and it’s easy for designers to become emotionally involved.

(“You might throw in lorem ipsum text and then have people correct your Latin.”)

As you move farther into development, fidelity generally needs to increase.

Higher fidelity is important when you have higher usability risks: lots of interactivity, a complex UI, interactions and content that are new (to the dev team or to users), and a UI that appears early in the user’s task flow (earlier is riskier).

Tools & their best uses

Bar napkins: Good for early conceptual designs; not so good with felt-tip pens or under a wet beer glass.

Paper prototypes: Can create the various interfaces and do some paper-based flow testing. Not so good for a sense of scale or for assessing content.

PowerPoint: Can do hyperlinks and action buttons. Create each interface on a slide and then link the slides with PowerPoint features. Use the slide sorter and rearrange slides to simulate various user workflows. For web design, put a browser window on the slide master to force yourself to stay in the browser space. Good for a sense of physical navigation, planning layout, producing paper output, and presenting the look and feel of interactive web pages. Not so good for complex interactions or for the look and feel of applications.

Visio: Pretty good set of widgets for making realistic-looking dialog boxes. Similar pluses and minuses as PowerPoint, but also good for look and feel of applications. Can use to incorporate wireframes with flowcharts, use case diagrams, and other macro-design tools.

Balsamiq Mockups: The presenter’s favorite tool (mine, too); he gave an extended demo. If you’re interested, you can try it online for free. Realistic enough to help the designer imagine what the user experience will be.

Pencil (Firefox plug-in): “they have the world’s worst online help”

Axure demo: Can build tooltips. Higher fidelity than Balsamiq. Lets you take notes, annotate the fields, and then print the result as a Word file. Use it to lay out business rules, alternate text, and more. Suitable for Web 2.0 interactions, which are difficult or impossible in Visio.

Industry insights

Cardinal sin of blog (and technical) writing: making the reader feel stupid

Want to get me riled up? You can easily achieve that by making me feel stupid while reading your blog.

I read a lot of blogs about technology, and I’ll admit that I’m on the periphery of some of these blogs’ intended audiences. That being said, there is no excuse for starting a blog entry like this:

Everyone’s heard of Gordon Moore’s famous Law, but not everyone has noticed how what he said has gradually morphed into a marketing message so misleading I’m surprised Intel doesn’t require people to play the company jingle just for mentioning it.

Well, I must not be part of the “everyone” collective because I had to think long and hard about “Gordon Moore’s famous law,” and I drew a blank. (Here’s a link for those like me who can’t remember or don’t know what Moore’s Law is.)

Making a broad generalization such as “everyone knows x” is always dangerous. This is true in blog writing as well as in technical writing. In our style guide, we have a rule that writers should “not use simple or simply to describe a feature or step.” By labeling something as simple, you’re guaranteed to offend someone who doesn’t understand the concept. For example, while editing one of our white papers about the DITA Open Toolkit, I saw the word “simply” and took great delight in striking through it. From where I stand, there is little that is “simple” about the toolkit, particularly when you’re talking about configuring output.

Don’t get me wrong: I’m not saying that a blog entry, white paper, or user manual about very technical subjects has to explain every little thing. You need to address the audience at the correct level, which can be a delicate balancing act with highly technical content: overexplaining can also offend the reader. For example, in a user manual, it’s probably wise to explain up front the prerequisite knowledge. Also consider offering resources where the reader can get that knowledge: that way, you get around explaining concepts but still offer assistance to readers who need it.

In the case of online content and blog entries, you can link to definitions of terms and concepts: readers who need help can click the links to get a quick refresher course on the information, and those who don’t can keep on reading. The author of the blog post in question could have inserted a link to Moore’s Law. Granted, he does define the law in the second paragraph, but he lost me with the “everyone has heard” bit at the start.

That “everyone has heard” assumption brings me back to my primary point: don’t make your readers feel stupid, particularly by making sweeping assumptions about what “everyone” knows or by labeling something as “simple.” Insulted readers move on—and may never come back.


Industry insights

Information as a right

Bear with me in a post that’s going to be even less coherent than usual. (And that’s on the heels of the Great Graphic Debacle.)

Is access to information a right or a privilege?

In democracies, we believe that citizens have a right to their government’s information.

U.S. citizens are likely familiar with the Freedom of Information Act (FOIA) and the various sunshine and open meeting laws. In 2005, India passed a Right to Information Act, which “requires every public authority to computerise their records for wide dissemination and to proactively publish certain categories of information so that the citizens need minimum recourse to request for information formally.” Other countries have similar legislation; the Right2Info organization “brings together information on the constitutional and legal framework for the right of access to information as well as case law from more than 80 countries, organized and analyzed by topic.”

In the absence of a compelling government interest (the FOIA has nine exemptions, which include national security and personnel privacy issues), governmental information should be available to citizens. (This does assume, of course, that we are talking about governments that acknowledge they are accountable to their citizens.)

If governments have an obligation to make information accessible to their citizens, does that equate to a right to the information? What about equal access to information? Is that a right?

For example, if certain public information is readily available only on the Internet, does it follow that a citizen has a general right to Internet access? This question was actually considered by the European Parliament last year, in the context of a new French law that cuts off Internet access to repeat offenders who infringe on copyrights with file-sharing:

Opponents of the legislation have responded by suggesting that Internet access is fundamental to liberty, an argument that suffered a setback on Wednesday as the European Parliament voted against codifying Internet access as a basic human right. (Is Internet Access a Fundamental Right?, CNet.com, May 6, 2009)

There are also interesting developments in financial information. The U.S. Securities and Exchange Commission (SEC) requires publicly traded companies to make certain information available to the public. This information is delivered through the EDGAR (Electronic Data Gathering, Analysis, and Retrieval) system.

Currently, the official submission format for EDGAR data is plain text or HTML, but the SEC is phasing in the use of an XML vocabulary called XBRL (Extensible Business Reporting Language).

“The purpose of the XBRL mandate is to make corporate financial information more easily available to stockholders.” (The XBRL mandate is here: Is IT ready?, Ephraim Schwartz, InfoWorld, November 25, 2008)

So in addition to mandating disclosure of corporate financial information, the SEC is now mandating easier access to the disclosed information. (A simple implication of XBRL is that you could more easily find executive compensation numbers.)
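
To make that concrete, here is a rough sketch of what XBRL-style tagging looks like. The element name acme:ExecutiveCompensation, the namespace prefix acme, and the numbers below are invented for illustration; real filings use the official taxonomies and stricter context and unit structures.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Hypothetical, simplified XBRL-style instance fragment (illustrative only). -->
    <xbrli:xbrl xmlns:xbrli="http://www.xbrl.org/2003/instance"
                xmlns:iso4217="http://www.xbrl.org/2003/iso4217"
                xmlns:acme="http://example.com/taxonomy">
      <!-- The context identifies the company and the reporting period. -->
      <xbrli:context id="FY2008">
        <xbrli:entity>
          <xbrli:identifier scheme="http://www.sec.gov/CIK">0000000000</xbrli:identifier>
        </xbrli:entity>
        <xbrli:period>
          <xbrli:startDate>2008-01-01</xbrli:startDate>
          <xbrli:endDate>2008-12-31</xbrli:endDate>
        </xbrli:period>
      </xbrli:context>
      <xbrli:unit id="USD">
        <xbrli:measure>iso4217:USD</xbrli:measure>
      </xbrli:unit>
      <!-- The fact itself: a machine-readable number instead of a figure
           buried in a paragraph of a filing. -->
      <acme:ExecutiveCompensation contextRef="FY2008" unitRef="USD"
          decimals="0">4500000</acme:ExecutiveCompensation>
    </xbrli:xbrl>

Because the number is tagged with its meaning, period, and currency, software can pull compensation figures out of thousands of filings without anyone reading the documents.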

But what about non-governmental, non-regulated information? Is there a right to access? The business model of analyst firms (Gartner Group), business research companies (Dun & Bradstreet, Hoover’s), and, for that matter, the entire publishing industry (!!) says no. If you want information, you pay.

But look at the evolution of government philosophies and with that, content disclosure requirements. A king who reigns by divine right discloses what he wants to. A democratically elected leader must justify a lack of disclosure. It seems clear that we have shifted to the idea that access to government information is a right.

Will commercial information evolve in the same direction? There are actually some developments that point toward information as a right. In particular, the idea that information must be accessible—that information presentation should not exclude those with visual impairments or other disabilities—begins to build a foundation for equal access to information as a right.

What do you think? Will the right to information access be considered a bedrock principle in 50 or 100 years?

Industry insights

Sleepless in Seattle—our agenda at WritersUA

Simon Bate and I will be attending WritersUA this year.

I will be mainly camped in Scriptorium’s exhibit booth. Hours for that are Monday 8:00 am – 6:00 pm and Tuesday 8:00 am – 5:30 pm. Please stop by when you get a chance. Simon will be joining me, but is also presenting on XSL Techniques for XML-to-XML Transformations on Monday at 3:25. Here’s a bit of the description:

In a recent project, we used XSL to correct markup and fix conversion errors in 55,000 XML files containing 2000-year-old Greek texts. The clean-up work included correcting errors in the Greek numbering system, converting text-based markup to XML, replacing or repairing missing markup, and ensuring the accuracy of our work in such a large document set. This session uses this work to illustrate how XML-to-XML transforms differ from XML-to-output transforms. Along the way we describe some XSL techniques we created for processing XML data in which there is a close relationship between the content and the markup.
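
As a taste of the difference (this is my sketch, not the session’s actual code): an XML-to-XML clean-up transform usually starts from the identity template, which copies everything through unchanged, and then overrides it only where markup needs repair. The element names badname and goodname below are hypothetical.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Minimal XML-to-XML clean-up sketch (illustrative only). -->
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

      <!-- Identity template: copy every attribute and node as-is. -->
      <xsl:template match="@* | node()">
        <xsl:copy>
          <xsl:apply-templates select="@* | node()"/>
        </xsl:copy>
      </xsl:template>

      <!-- Hypothetical override: repair one piece of bad markup by renaming
           a misused element while keeping its attributes and content. -->
      <xsl:template match="badname">
        <goodname>
          <xsl:apply-templates select="@* | node()"/>
        </goodname>
      </xsl:template>

    </xsl:stylesheet>

Where an XML-to-output transform rebuilds the document in a different vocabulary, the identity template preserves everything you don’t explicitly touch, which is exactly what you want when the goal is repair rather than conversion.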

This year, we’re bringing swag in the form of free copies of The Compass, a printed compilation of Scriptorium white papers. For WritersUA, we have two new white papers, and the book is now almost 200 pages long. (Our white papers are also available, for free, in HTML and PDF format.)

If that’s not a sufficiently sweet enticement, you can also expect local chocolates. The leading contender is currently Fran’s, but I’m open to suggestions, especially from Seattle locals. (We generally pick up chocolate once we arrive rather than attempting to ship it. Ask me sometime about the Great Truffle Shipping Debacle.)

Simon and I are both scheduling private meetings during the event. If you are a current or prospective client of ours, or if you just want to talk, let us know and we’ll set something up.

Industry insights

Conferences versus social media

The information you can get from a conference presentation is usually available online—in blogs, webcasts, forums, and/or white papers. So why should you invest the time and the money to attend an event in person? In the end, there’s something very powerful about eating and drinking with a group of people. (And no, alcohol is not required, although it doesn’t hurt. Until the next day, when it hurts a lot.)

The value of conferences, which is not (yet) replicated online, is in the “hallway track”—the discussions that happen between the formal sessions:

“[B]eing able to establish a one-to-one personal connection with other professionals in your field is critical to being a success.” (Dave Taylor in The Critical Business Value of Attending Conferences)


“I’ve found that time and again, I’ll hear speakers or audience members or participate in conversations and lie awake that night jam-packed with new ideas (some that don’t even correspond remotely to the concepts discussed that day). Conferences are a brainstorming paradise and a terrific opportunity for new ideas to come bubbling to the surface.” (Rand Fishkin, The Secret Value of Attending Conferences)

Scriptorium has quite a few social media “features”:

  • This blog, started in 2005
  • Webcasts, 2006 (recordings available for recent events)
  • Forums, this week (currently in the “awkward silence” phase. Help us out by posting, please!)
  • Twitter

But there’s something missing. I’ve attended and presented quite a few webcasts, and I can tell you that it’s actually far more difficult to deliver a compelling webcast than a compelling conference presentation. As the presenter, you lose the ability to gauge your audience’s body language. As an attendee, you have the temptation of your email and other distractions. The audio coming through your computer or phone is somehow not real—it’s easy to forget that there’s an actual person on the other end giving the presentation online. (There’s also the problem that many webcasts are sales pitches rather than useful presentations, but let’s leave that for another time.)

In my experience, it’s much easier to sustain online friendships with people that I have met in real life. Even a brief meeting at a conference means that I will remember a person as “that red-haired woman with the funky scarf” rather than as an email ID or Twitter handle. So, I think it’s important to go to conferences, meet lots of people, and then sustain those new professional relationships via social media.

In other words, conferences and social media complement each other. Over time, I think we’ll see them merge into a new interaction model. For example, we are already seeing Twitter as a real-time feedback engine at conference events. (Here’s an excellent discussion of how presenters should handle this.) Joe Welinske’s WritersUA is experimenting with a community site tied to the conference.

What are your thoughts? How important are conferences to your career?

Food & fun Industry insights

Finding the blogging superhero in yourself

Power blogger.

That’s a new phrase to me, and it was new to Maria Langer, too, as she noted in her An Eclectic Mind blog. As part of a podcast panel, she was asked to offer advice on how to become a power blogger. Some of her fellow panelists mentioned the quantity of posts, but Maria’s focus was elsewhere:

The number of blog posts a blogger publishes should have nothing to do with whether he’s a power blogger. Instead, it should be the influence the blogger has over his readership and beyond. What’s important is whether a blog post makes a difference in the reader’s life. Does it teach? Make the reader think? Influence his decisions? If a blogger can consistently do any of that, he’s a power blogger.

I couldn’t agree more. I appreciate reading any blog that gives me useful information or analysis that hadn’t occurred to me. For example, I recently had issues with a new PC I’m using at home as a media center. It was not picking up all the channels in my area, and an excellent blog post helped me solve the problem with little fuss. To me, that author is a power blogger.

What I frankly find irritating—and certainly not worth my time—are blogs that are basically what I’ll call “link farms”: posting links or excerpts from other blogs with no valuable information added. I’m quite the cynic, so when I stumble upon such a blog, I figure the blogger is merely trying to generate Google hits and ad revenue, is lazy, or both. Quantity—particularly when said quantity is composed of rehashed material from other bloggers—does not a power blogger make.

When it comes to contributing to this blog, I try to write posts that have at least one nugget of helpful information, analysis, or humor, and I think that’s true of the posts from my coworkers. (At the risk of sounding like I’m bragging about them, I can’t tell you how many times I’ve read one of their posts and thought, “That’s smart!” or “That’s cool!”) Frankly, I’d rather not write anything at all than publish something just because it’s been a few days since I posted. And I have more respect for bloggers who write quality posts once in a while than for those who put out lots of material borrowed from elsewhere.

And on that note, I’ll leave you with a short clip showing superheroes using their powers for a practical solution. (See, I’m trying to entertain you, too!)

Industry insights

Unedited content will get you deleted

flickr: Nics events

The abundance of information today forces content consumers to filter out redundant and unworthy information—much like an editor would. That, however, doesn’t mean content creators can throw up their hands and send out unreviewed content for readers to sort through. Instead, authors (and particularly their managers) need to understand how editing skills can ensure their information doesn’t get filtered out:

[A]re we getting any better at editing in a broader context, which is editing ourselves? Or to rephrase it, becoming a better critic of our own work? Penelope Trunk (again) lists the reasons why she works with an editor for whatever she writes in public:

  • Start strong – cut boring introduction
  • Be short – and be brave
  • Have a genuine connection – write stuff that matters to the readers
  • Be passionate – write stuff that matters to you
  • Have one good piece of research – back your idea up

They have one thing in common: difficult to do on our own.

Granted, some of those bullet points don’t completely apply to technical writing, but it is hard to edit your own work, regardless of the kind of content. For that very reason, folks at Scriptorium get someone else to review their writing. Whether the content is in a proposal, book, white paper, important email to a client, or a blog post, we understand that somebody else’s feedback is generally going to make that information better.

The same is true of technical content. Many documentation departments no longer hire dedicated editors, so peer reviewers handle editing tasks. Electronic review tools also make it easier than ever to offer feedback: even a quick online review of content by another writer will likely catch some potentially embarrassing typos and yield suggestions to make information more accessible to the end user. (You can read more about the importance of editing in a PDF excerpt from the latest edition of Technical Writing 101.)

With so much competing information out on the Internet, companies can’t afford to have their official documentation ignored because it contains technical errors, misspellings, and other problems that damage the content’s credibility. Even if you don’t have the time or budget for a full-blown edit, take just a little time to have someone do a quick technical review of your work. Otherwise, end users seeking information about your product will likely do their own editing—in their minds, they’ll delete you as a source of reliable information. And that’s a deletion that’s hard to STET.

PS: Software that checks spelling and grammar is helpful, but it’s not enough: it won’t point out technical inaccuracies.

Industry insights

2010 predictions for technical communication

It’s time for my (apparently biennial) predictions post. For those of you keeping score at home, you can see the last round of predictions here. Executive summary: no clear leader for DITA editing, reuse analyzers, Web 2.0 integration, global business, Flash. In retrospect, I didn’t exactly stick my neck out on any of those. Let’s see if I can do better this year.

Desktop authoring begins to fade

Everyone else is talking about the cloud, but what about tech comm? Many content creation efforts will shift into the cloud and away from desktop applications and their monstrous footprints (I’m looking at you, Adobe). When your content lives in the cloud, you can edit from anywhere and be much less dependent on a specific computer loaded with specific applications.

I expect to see much more content creation migrate into web applications, such as wiki software and blogging software. I do not, at this point, see much potential for the various “online word processors,” such as Buzzword or Zoho Writer, for tech comm. Creating documents longer than four or five pages in these environments is painful.

In the ideal universe, I’d like to see more support for DITA and/or XML in these tools, but I’m not holding my breath for this in 2010.

The ends justify the means

From what we are seeing, the rate of XML adoption is steady or even accelerating. But the rationale for XML is shifting. In the past, the benefits of structured authoring—consistency, template enforcement, and content reuse—have been the primary drivers. But in several newer projects, XML is a means to an end rather than a goal—our customers want to extract information from databases, or transfer information between two otherwise incompatible applications. The project justifications reach beyond the issues of content quality and instead focus on integrating content from multiple information sources.

Social-ism

Is the hype about social media overblown? Actually, I don’t think so. I did a webcast (YouTube link) on this topic in December 2009. The short version: Technical communicators must now compete with information being generated by the user community. This requires greater transparency and better content.

My prediction is that a strategy for integrating social media and official tech comm will be critical in 2010 and beyond.

Collaboration

The days of the hermit tech writer are numbered. Close collaboration with product experts, the user community, and others will become the norm. This requires tools that are accessible to non-specialists and that offer easy ways to manage input from collaborators.

Language shifts

There are a couple of interesting changes in language:

  • Content strategy rather than documentation plan
  • Decision engine (such as Hunch, Wolfram Alpha, and Aardvark) rather than search engine

What are your predictions for 2010?


Industry insights

Friend or foe? Web 2.0 in technical communication

The rise of Web 2.0 technology provides a platform for user-generated content. Publishing is no longer restricted to a few technical writers—any user can now contribute information. But the information coming from users tends to be highly specific, whereas technical documentation is comprehensive but less specific. The two types of information can coexist and improve the overall user experience.

Industry insights Structured content

The State of Structure

In early 2009, Scriptorium Publishing conducted a survey to measure how and why technical communicators are adopting structured authoring.

Of the 616 responses:

  • 29 percent of respondents indicated that they had already implemented structured authoring.
  • 16 percent indicated that they do not plan to implement structured authoring.
  • 14 percent were in the process of implementing structured authoring.
  • 20 percent were planning to do so.
  • 21 percent were considering it.

This report summarizes our findings on topics including the reasons for implementing structure, the adoption rate for DITA and other standards, and the selection of authoring tools.

Download PDF file (2 MB, 56 pages)

Discuss this document in our forum

Industry insights

Are you ready for mobile content?

A report from Morgan Stanley states that mobile Internet use will be twice that of desktop Internet and that the iPhone/smartphone “may prove to be the fastest ramping and most disruptive technology product / service launch the world has ever seen.” That “disruption” is already affecting the methods for distributing technical content.

With users having Internet access at their fingertips anywhere they go, Internet searches will continue to drive how people find product information. Desktop Internet use has greatly reshaped how technical communicators distribute information, and having twice as many people using mobile Internet will only push us toward more online delivery—and in formats (some yet to be developed, I’d guess) that are compatible with smaller smartphone screens.

The growing number of people with mobile Internet access underscores the importance of high Internet search rankings and a social media strategy for your information. If you haven’t already investigated optimizing your content for search engines and integrating social media as part of your development and distribution efforts, it’s probably wise to do that sooner rather than later. Also, have you looked at how your web site is displayed on a smartphone?

If you don’t consider the impact of the mobile Internet, your documentation may be relegated to the Island of Misfit Manuals, where change pages and manuals in three-ring binders spend their days yellowing away.

Industry insights

Fear the peer

(This post is late. In my defense, I had the flu and the glow of the computer monitor was painful. Also, neurons were having trouble firing across the congestion in my head. At least, that’s my medical explanation for it. PS I don’t recommend the flu. Avoid if possible.)

Which of these scenarios do you think is most intimidating?

  1. Giving a presentation to a dozen executives at a prospective client, which will decide whether we get a project or not
  2. Giving a presentation to 50 people, including half a dozen supportive fellow consultants
  3. Giving a presentation to 400 people at a major conference

I’ve faced all three of these, and while each scenario presents its own set of stressors, the most intimidating, by far, is option #2.

In general, I’m fairly confident in my ability to get up in front of a group of people and deliver some useful information in a reasonably interesting fashion. But there is something uniquely terrifying about presenting in front of your peers.

Torches // Flickr: dantaylor

At LavaCon, I faced the nightmare—a murderers’ row of consultants in the back of the room, fondling various tweeting implements.

Here are some of the worst-case scenarios:

  • No new information. I have nothing to say that my colleagues haven’t heard before, and they could have said it better.
  • Disagreement. My peers think that my point of view is incorrect or, worse, my facts are wrong.
  • Boring. I have nothing new to say, my information is wrong, and I’m not interesting.

Of course, my peers were gracious, participated in the session in a constructive way, and said nice things afterwards. I didn’t even see any cheeky tweets. (I’m looking at you, @scottabel.)

All in all, I’d have to say that it’s a lot more fun to sit in the back of someone else’s presentation, though. Neil Perlin handled his peanut gallery deftly, asking questions like, “With the exception of the back row, how many of you enjoy writing XSLT code?”

Rahel Bailie said it best, I think. After completing her excellent presentation, she noted that presenting in front of peers is terribly stressful because, “I really want you to like it.”

Industry insights

Free the webcast!

In addition to our November event on localization, we are adding another webcast in December. I’ll be presenting Strategies for coping with user-generated content on December 8 at 11 a.m. Eastern time via GoToWebinar. This event is free but registration is required.

Here’s the description:

The rise of Web 2.0 technology provides a platform for user-generated content. Publishing is no longer restricted to a few technical writers—any user can now contribute information. But the information coming from users tends to be highly specific.

The two types of information can coexist and improve the overall user experience. User-generated content also offers an opportunity for technical writers to participate as “curators”—by evaluating and organizing the information provided by end users.

Remember, there’s no charge to attend, but you do need to register.

Date: December 8, 2009
Time: 11 a.m. Eastern
Topic: Strategies for coping with user-generated content
Registration: https://www2.gotomeeting.com/register/583647346

PS Depending on the response to this event, we are going to consider additional free events.

DITA Industry insights

Coming attractions for October and November

On October 22nd, join Simon Bate for a session on delivering multiple versions of a help set without making multiple copies of the help:

We needed to generate a help set from DITA sources that applied to multiple products. However, serious space constraints prevented us from using standard DITA conditional processing to create multiple, product-specific versions of the help; there was only room for one copy. Our solution was to create a single help set in which selected content is displayed when the help is opened.
In this webcast, we’ll show you how we used the DITA Open Toolkit to create a help set with dynamic text display. The webcast introduces some minor DITA Open Toolkit modifications and several client-side JavaScript techniques that you can use to implement dynamic text display in HTML files. Minimal programming skills necessary.
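
The webcast covers the actual implementation; as a rough sketch of the general idea (my guess at the approach, not Simon’s code), a DITA Open Toolkit XHTML override could emit product-conditional phrases in classed spans instead of filtering them out at build time, leaving the show/hide decision to the browser. This template rule would live in an override stylesheet:

    <!-- Hypothetical override (illustrative only; not the webcast's code).
         Instead of filtering product-conditional phrases at build time,
         emit them in <span> elements keyed by product, so client-side
         JavaScript can decide which spans to display when the help opens. -->
    <xsl:template match="*[contains(@class, ' topic/ph ')][@product]">
      <span class="product-{@product}">
        <xsl:apply-templates/>
      </span>
    </xsl:template>

A few lines of JavaScript would then read the user’s product choice (from a query parameter or cookie, say) and hide every span whose class doesn’t match.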

Register for dynamic text display webcast

I will be visiting New Orleans for LavaCon. This event, organized by Jack Molisani, is always a highlight of the conference year. I will be offering sessions on XML and on user-generated content. You can see the complete program here. In addition to my sessions, I will be bringing along a limited number of copies of our newest publication, The Compass. Find me at the event to get your free copy while supplies last. (Otherwise, you can order online Real Soon Now for $15.95.)

Register for LavaCon (note, early registration has been extended until October 12)

And last but certainly not least, we have our much-anticipated session on translation workflows. Nick Rosenthal, Managing Director of Salford Translations Ltd., will deliver a webcast on cost-effective document design for a translation workflow on November 19 at 11 a.m. Eastern time:

In this webcast, Nick Rosenthal discusses the challenges companies face when translating their content and offers some best practices for managing your localization budget effectively, including XML-based workflows and ways to integrate localized screen shots into translated user guides or help systems.

Register for the translation workflow webcast

As always, webcasts are $20. LavaCon is just a bit more. Hope to see you at all of these events.

Industry insights

Liberated type

(or should that be “Liberated typoes?”)

We have opened up free access to two of our white papers:

  • Hacking the DITA Open Toolkit, available in HTML or PDF (435 KB, 19 pages)
  • FrameMaker 8 and DITA Technical Reference, available in PDF (5 MB, 55 pages)

These used to be paid downloads.

Why the change of heart? Most of our business is consulting. To get consulting, we have to show competence. These white papers are one way to demonstrate our technical expertise.

(By this logic, our webcasts should also be free, but I’m not ready to go there. Why? We have fixed costs associated with the webcast hosting platform. Plus, once we schedule a webcast, we have to deliver it at the scheduled time, even if we’d rather be doing paying work. By contrast, we can squeeze in white paper development at our convenience.)

What are your thoughts? We are obviously not the only organization dealing with this issue…

Industry insights

Is this thing on?

If you are reading this, then we have succeeded in migrating our web site over to WordPress.

Of course, the process of managing our own content always takes a back seat to working with our customers’ content, so the process took longer than you might expect. 

We did learn a couple of things, most of which should sound awfully familiar if you are working on your own content strategy:

  • It’s not until you try to move into a new system that you recognize all the mistakes you made in the previous system.
  • PHP stands for Picky Hypochondriac Programming. I had several cases where code absolutely refused to work for no apparent reason. I had the resident PHP expert (Simon) look it over. Eventually, I gave up and retyped the code, and then it worked.
  • Learn to work with the tool and not against it. I have to credit a former coworker, Bruce Bicknell, for this little gem, which he originally applied to Word versus FrameMaker. When moving from Dreamweaver-based HTML to WordPress, take some time to learn best practices for WordPress. Don’t try to impose your existing Way of Doing Things onto the new system. It’s inefficient and it probably won’t work.
  • Content migration is always awful. To transfer our blog, I found a blogger-to-WordPress converter. That worked pretty well, except that a couple of posts now have my name on them even though I didn’t write them. Transferring comments was a travesty that involved the support people at Haloscan (helpful) and cleaning out random comment triplication (gross manual labor).

But I hope you like the new site and blog. Please poke around and leave us feedback.
