Artificial neural networks as cognitive tools for professional writing

  • Author: Patricia A. Carlson
  • Affiliation: Center for Advanced Media, International Centers for Telecommunication Technologies, Terre Haute, Indiana
  • Venue: SIGDOC '90 Proceedings of the 8th annual international conference on Systems documentation
  • Year: 1990


Abstract

Computers are cognitive tools — they extend the capabilities of the human mind. Paper and pencil are also cognitive tools — they enhance human memory by acting as a permanent record, and they mediate the formation of thought by serving as a scratchpad or rehearsal device. However, there is a qualitative difference between these cognitive tools: the computer as a writing environment can become an active participant in the process, while paper and pencil must remain passive instruments. Because this fundamental difference seems obvious to me, I'm surprised that most computer-aided writing (CAW) software available today is based on a model derived from writing with traditional tools.

Such computer tools as spelling checkers, “style” checkers (actually, they check usage), and outliners have been around in one form or another for a long time now. Yet they have not really had much of an impact. Most of these packages share three major drawbacks: their analysis is based on statistical measurements of simple surface features of writing; they provide “after-the-fact” profiling; and, in general, they treat all text as equal.

As Shoshana Zuboff points out (In the Age of the Smart Machine), the computer — unlike the tools of the Industrial Revolution — not only automates, it also informates. When it comes to text technology, we've done a great deal to automate the process — as illustrated by word processing and desktop publishing. But we're only starting to use the computer's capacity to informate the process.

A quick review of four categories of CAW tools indicates the state of the art.

Statistical Text Analysis: I'll use Bell Labs' Writer's Workbench™ to cover a broad category of software which makes use of patterns to analyze prose. The Workbench is a collection of small programs to measure surface features of writing.
For example, the user can find out readability level, average sentence length, word length, percentage of sentence types, percentage of passive-voice verbs, jargon, wordiness, sexist language, spelling errors, and improper usage. Other programs are intended to analyze rhetorical structures. For example, a program displays only the first and last sentence in each paragraph, with the idea that this representation will allow the user to check for logical transitions.

Though the package has been around for some time now, it never really caught on. Its UNIX requirement and relatively high cost lessened the likelihood of widespread use. Additionally, the forty-or-more programs are all discrete — meaning that a truly cumulative analysis of a passage, taking into account the interaction of stylistic features, isn't possible. And, as a third limitation, because the analysis works at such a low level in the composing process, writers sometimes become obsessed with the accidents (surface errors) of their prose rather than the essence (strengthening the logic and content).

Prewriting: Planning what to say and how to say it takes time. Professional writers have developed strategies for making this “prewriting” stage of composing more efficient. The journalist's “who, what, when, where, and why” litany is an example of a heuristic intended to help focus thoughts. There are many such heuristics, some dating back to Aristotle.

Typically, software in this category automates an established strategy. The earliest invention software was modeled on Joseph Weizenbaum's ELIZA. Even today, implementation frequently takes the form of a dialogue, with the program asking significant questions and making appropriate comments on the writer's responses. The writer then uses the recorded information as the raw materials for the paper.

Two drawbacks show up in current systems.
First, many strategies, when used by professional writers, are comparable to rules of thumb; thus, the effectiveness and appropriateness of a heuristic varies with the task, and such strategies lose most of their spontaneity and flexibility when automated. Second, the dialogue metaphor — which attributes a personality to the computer — is an embarrassing affectation in most programs.

Outliners: This category has generated more commercial interest than the other three. Though frequently called “idea processors,” the label seems more honorific than earned. Most use a top-down (general-to-specific) knowledge representation as their basis. In other words, the writer is encouraged to find hierarchical relationships in her raw material by filling in an open-ended tree structure. On some systems, levels can be hidden, thus focusing attention and reducing the cognitive load inherent in the writing process.

Writing Environments: The more interesting of these programs are still in their infancy, and can be represented by WE (Writing Environment, developed at the University of North Carolina—Chapel Hill) and CICILE (developed at the Center for Applied Cognitive Science in Toronto, Ontario). In something like the Writer's Workbench™, analytical programs are separate entities, and the writer is free to pick and choose among them. On the other hand, the suite of tools in a “writing environment” is integrated and part of a rigorously structured cognitive model of the writing process. In essence, a well-designed writing environment orchestrates the writing process by emulating stages of thinking. Few, if any, writing environments include AI applications as we normally define them (e.g., expert systems). Nevertheless, because the whole system supports and guides the activities of thinking, these knowledge-making habitats should be characterized as “intelligent.”

I like the concept, but I am just a bit uneasy with the implementation.
First, all of the examples I am aware of are theory-laden and exist as heavily funded projects at large research universities or government-sponsored laboratories. In fact, these systems seem to be testbeds for doing high-powered research on the writing process more than practical tools for professionals. Second, because of the amount of “scaffolding” each system provides for the writer, they seem more appropriate as learning environments. In short, they are more tutors than tools. And third, their heavy commitment to a definitive cognitive model of writing seems to ignore what historians of technology have taught us: new tools engender new habits of mind, and the tool — over time — can change the nature of the task.

In summary, then, word processing precipitated interest in computer-aided writing (CAW). Once text could be represented as bits and bytes, computational software for analyzing prose patterns became feasible. My objection is that the patterns used are too fine-grained and that the evaluation is too rigorous to qualify as a comfortable cognitive tool. I have never used a CAW product that, eventually, didn't pinch and constrain my writing process by being distractingly intrusive, nit-pickingly atomistic, or downright tedious and misleading in the advice it returned.

I view writing as one manifestation of the controlled creativity we call design. The cognitive activities of design take place in a cyclical rather than linear fashion. First, we decompose or partition the task into its components to get an idea of what we are trying to do. Then, we work on the pieces for a while, step back to compare interim results with higher-level goals, consolidate gains, jettison unrealistic expectations or excessive constraints, reorder plans, and move back to working on the pieces again. The cycle takes place over and over during the writing session.
Good writers excel where poor writers fail because of this flexibility, this ability to move smoothly between top-down and bottom-up strategies. To my mind, a good cognitive tool for composing has two functions: (1) to serve as a peripheral brain that helps with the cognitive overload inherent in a complex task, and (2) to act as an expert associate that provides counsel in the iterative, “prototype and feedback” process of design.
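The kinds of surface measurements the abstract attributes to the Writer's Workbench — average sentence length, passive-voice percentage, and the program that shows only the first and last sentence of each paragraph — can be sketched in a few lines. This is an illustrative approximation, not the Workbench itself: the function names and the crude passive-voice heuristic are my own.

```python
import re

def sentences(text):
    """Naive sentence split on terminal punctuation followed by whitespace."""
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', text.strip()) if s.strip()]

def avg_sentence_length(text):
    """Average words per sentence -- one Workbench-style surface metric."""
    sents = sentences(text)
    return sum(len(s.split()) for s in sents) / len(sents)

def passive_ratio(text):
    """Fraction of sentences flagged by a crude passive-voice heuristic:
    a form of 'to be' followed by a word ending in -ed or -en."""
    sents = sentences(text)
    flagged = [s for s in sents
               if re.search(r'\b(is|are|was|were|be|been|being)\s+\w+(ed|en)\b', s)]
    return len(flagged) / len(sents)

def first_last(paragraphs):
    """Show only the first and last sentence of each paragraph, as in the
    transition-checking program described in the abstract."""
    return [(sentences(p)[0], sentences(p)[-1]) for p in paragraphs]
```

Note how well such metrics illustrate the abstract's complaint: they measure only the accidents of prose, and a sentence like "The report was written by the committee" is flagged without any sense of whether the passive is rhetorically justified.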