Wednesday, March 16, 2016

Fischer’s Prescience in 1970 on the Use of Computers for Textual Criticism

I just read an article by Bonifatius Fischer on the use of computers in New Testament studies. What makes it interesting is that it was written in 1970. Here are a couple of points that stood out to me. I especially liked this quote near the beginning:
It is strange in general that the use of a computer is taken in the public mind as a proof of scholarly thoroughness. Why does the same not hold for the use of a fountain-pen or a typewriter, especially an electric one? 
Fischer does not think computers hold much promise for questions of authorship. But he is enthusiastic about their use for textual criticism.
After so much pessimism we come at last to a field where the computer is of great importance to the student of the New Testament, indeed where it opens up a new dimension and makes possible what hitherto the scholar had not even dared to dream of: that is, in textual criticism.
He distinguishes between manuscripts “with all their peculiarities” and the “purely abstract sequence of readings” that can be fed into computers.
In textual criticism a strict distinction must be made between the logical, abstract, order and the concrete, historical, order: one might say, between the abstract textual criticism of variants and the concrete study of the tradition which is rooted in the historical environment. The various manuscripts with all their peculiarities and casual errors belong to the concrete, historical, order, and with them the whole indirect tradition in quotations, translations, etc. In the logical order there corresponds to every manuscript a particular series or combination of readings, which are quite abstracted from space and time, from the question of what is true or false, original or derived, given or received. This is not the current distinction between the manuscript and the text it transmits: the text itself is here a purely abstract sequence of readings, not a historical object. So in the logical order we have only sequences of readings, not real but only nominal manuscripts. But these and all their mutual relationships can be represented in quantitative, mathematical, terms in set-theory by means of Venn diagrams. The same holds good for all the groups or sub-groups of these ‘nominal’ manuscripts. And since they can be mathematically represented, they can also be grasped and processed by a computer. 
Especially interesting is that near the end of the article he anticipates the basic procedure of both the CBGM and Stephen Carlson’s use of cladistic software for Galatians: the computer provides the basic structure of the textual forms and the human editor gives it direction by making judgments about the “truth or falsity of the readings.” In hindsight, this may seem fairly obvious, but this was 1970, when computers still ran on punch cards.
Two stages must be distinguished. In the first the relations between the manuscripts and the texts are defined on the basis of all their readings, irrespective of whether these readings are true or false: this stage is a purely mathematical process which can be done by a computer—indeed in so complicated a case as the New Testament it should be done by a computer. Then follows the second stage, the proper task of the textual critic, the judgement of the truth or falsity of the readings, the recension of the original text and perhaps also of its more important subsequent forms, and the reconstruction of the history of its transmission. This is a task that only a man can perform: it is beyond the capacities of a computer. But it rests on the firm basis that the computer supplies.  
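To make the two stages concrete, here is a minimal sketch in Python of what Fischer’s “first stage” amounts to. This is only my own toy illustration, not Fischer’s actual procedure or the CBGM’s; the sigla, variant units, and readings are invented:

    from itertools import combinations

    # Each "nominal manuscript" is reduced to an abstract sequence of readings:
    # a mapping from variant unit to the reading it attests. The sigla and
    # readings here are invented for illustration.
    manuscripts = {
        "A": {1: "a", 2: "b", 3: "a", 4: "a"},
        "B": {1: "a", 2: "b", 3: "b", 4: "a"},
        "C": {1: "b", 2: "a", 3: "b", 4: "b"},
    }

    def agreement(ms1, ms2):
        """Proportion of shared variant units where two witnesses agree."""
        shared = set(ms1) & set(ms2)
        return sum(ms1[u] == ms2[u] for u in shared) / len(shared)

    # Stage one, the computer's task: the relations between the texts,
    # irrespective of whether any reading is true or false.
    for x, y in combinations(sorted(manuscripts), 2):
        print(f"{x}-{y}: {agreement(manuscripts[x], manuscripts[y]):.0%} agreement")

    # Stage two (judging the readings and reconstructing the history of
    # the transmission) is left to the textual critic, as Fischer insists.

Everything up to the agreement table is purely mechanical; deciding which readings are original is not, and that division of labor is exactly Fischer’s point.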
Source: Fischer, Bonifatius. “The Use of Computers in New Testament Studies, with Special Reference to Textual Criticism.” JTS 21.2 (1970): 297–308.

6 comments

  1. Nice post. Dom Froger had anticipated these points two years earlier.

  2. Yep. He cites Froger in that line about Venn diagrams.

  3. I'm pretty sure there is a logical flaw here somewhere. Fischer in 1970 said a load of things about computers, much of which is obviously complete tosh (computers will be pretty useless for NT outside of textual criticism; people using computers are perceived as smart etc.). A few things he said remain valid/important. But it is not prescient if there are lots of misses and a few hits. It is selective reasoning, survivor bias, or somesuch problem.

    Replies
    1. I was only thinking of the hit as prescient. Tosh is a good word though.

    2. Yes, very English: you can be rude about people/ideas/things while sounding polite

    3. Peter (Head), I'm not sure that you're being fair. As Peter (Gurry) notes, the article is from 1970, and it is hedged round with 'at present' etc. It seems to me that the basic processes which Fischer describes are pretty accurate. The logic hasn't changed even though computers have come a long way since then.
      It's also worth remembering that Fischer himself was responsible, a few years later, for one of the biggest ever computer collation projects in the NT (almost 500 Latin manuscripts in around 16 chapters from the Gospels), which is of lasting value.
Mike Holmes and Bart Ehrman noted with regard to their Status Quaestionis volume that the chapter about computers was the first to go out of date. Reading staging posts like this article (particularly with its anticipation of the CBGM, as Peter notes in his post) is fascinating for the history of scholarship. Yes, prescience may be a step too far, but in deflating contemporary hype with carefully constructed observations, I think much of this is spot on!
