One Department's Guidelines for Evaluating
Computer-Related Work

Seth R. Katz
Department of English, Bradley University
1501 West Bradley, Peoria, IL 61625
(309) 677-2479

In October 1996 my department revised its guidelines for Tenure, Promotion, and Renewal (TP&R) to incorporate language on evaluating computer-related work. I served on the committee responsible for writing the revisions. My position on the committee was an odd one: though I teach composition and literature with computers and do research on the use of computers in teaching, I am also a pre-tenure assistant professor. It was reasonable for me to be on the committee because of the kinds of work I do; however, it was strange to be in the position of writing the guidelines by which I will be evaluated for tenure and promotion. In serving on the committee, I found it hard to negotiate between my own interest and investment in working with computers and the requirements and standards of fair evaluation of academic work--a dilemma that Janice Walker addresses in her contribution to this CoverWeb. I felt that I was on the committee to represent the interests of technology users, and therefore that I had to take some kind of strong stand in support of valuing the technology. At the same time, whatever policy I advocated had to have continuity with traditional standards and practices: the committee, and the department as a whole, would not agree to a radical revision. I learned quickly that, given the constraints of a traditional department and the traditional tenuring process, it is difficult to create language that will serve as an effective basis for evaluating new kinds of work.

There are also external, professional constraints on creating guidelines for evaluating computer-related work in English language, literature, and composition studies. The most significant of these is that, despite numerous discussions on such online discussion groups as HUMANIST, WebRights, and MBU-L of how computer-related activity should be evaluated, none of the discussions I have seen has led to strong proposals for actual guideline language. Most departments have not yet added official language concerning computer-related activity to their evaluation guidelines. The NCTE and CCCC are still working on their guidelines; and though the MLA guidelines are now available, they fail to specify or even suggest in any detail how specific kinds of computer-related activity should be evaluated within the traditional categories of Research, Teaching, and Service--or how the traditional categories might best be revised to accommodate work with new technologies.

In serving on a committee actually drafting language for evaluating computer-related work, I thus learned that my department was doing a remarkably progressive thing. Of course, we had particular reasons for taking this action now. At the same time, I was forced to realize that, despite all the burgeoning computer-related activity in the profession, no one as yet seems to have arrived at satisfactory categories under which to organize that activity for evaluation. Part of the problem is that the technology and its applications are still evolving rapidly: it is hard to get a firm grip on the shape of "computer-related activity." Evaluation, it seems, has to be based on some stable ground consisting of categories of activity and criteria for understanding what constitutes a strong example of each category. Faced with having to propose categories, and faced with the double bind of making the activity fit the traditional categories or proposing to remake the tradition, the committee realized that we simply did not yet know enough to do either.

We therefore opted to propose language that, while it would be included in a revised version of our TP&R Guidelines, would also be recognized by all members of the department as provisional, with all the virtues and faults that attend on a partial solution. In settling for a temporary revision, we have recognized the reality of our situation: we are caught between tradition and transition, attempting to evaluate a technology and practice with which we have inadequate experience, and which keeps evolving as we watch. Our department has accepted that the best we can do is to openly recognize, first, that many fine teachers and bright researchers are doing good and interesting academic work with computers; second, that in the course of time, as members of our department and colleagues, friends, and acquaintances at other institutions do more work with computers, the whole field of computer-related activity in English studies will take shape for us; and, third, that through argument, conversation, and compromise, a consensus will develop as to how that activity is to be evaluated and rewarded.

Having recognized these realities, we have taken the first steps towards creating guidelines for evaluating computer-related work: deliberately beginning a dialogue, without defensiveness or acrimony; getting a committee working on drafting language for the evaluation of computer-related activity; and repeatedly admitting, at each step of the process, that our language is provisional, to be tested case by case and revised towards a consensus only through collaboration, revision, and ongoing discussion. Bradley University's English Department may be unusual in being able to take such an attitude and such an approach to creating standards for evaluating new kinds of academic activity. But, at least for the first steps, this seems to be the approach that will work.


Last revised February 22, 1997