Wednesday, August 06, 2008

Münster Colloquium on the Textual History of the Greek New Testament Day 1-2

Unfortunately, I am now under time pressure, since I have a train to catch, and it will take me over 20 hours to get home, so I will not be able to give extensive reviews of the other papers at this point. [PMH: I shall add some comments in brackets]

The next session treated “Causes and forms of variation,” with two papers presented by David Trobisch and Ulrich Schmid. In Trobisch’s paper, “What is there in a picture? Analyzing scribal practices of structuring the text,” he mainly pointed out that the documentation of the evidence has to come first, not the authoritative interpretation.

[He also argued that manuscripts are produced not only by scribes, but by a combination of author, scribe, editor, publisher and reader/corrector. Variants can originate at any of these levels - but they ought to be carefully distinguished, especially in the construction of stemmata. He argued that a critical edition ought to facilitate the reconstruction of ‘the text of the first edition’ (other methods would be required for determining the ‘authorial’ text). He urged that transparency of method, display and evaluation of material was very important, so that errors would not be hidden but would be identifiable. He began to ask the question: what if there is more than one textual archetype, e.g. the DFG text in Paul alongside the 01 ABC text - how should a critical edition handle this? But unfortunately he ran out of time and we never heard the answer.]

He posed a number of critical questions and heartily welcomed the new initiative of the Virtual Manuscript Room under development at the INTF in Münster (in fact, I think Trobisch was involved in the idea of the VMR). Read more on the INTF homepage.

Then it was time for Ulrich Schmid, ”Conceptualising ‘scribal’ performances.” Schmid began by describing the default assumption: what we physically find in a MS is the work of a scribe. Everything can be used to describe the scribe; everything is a kind of scribal performance, and the variants produced are the most obvious traces of it. However, not everyone who left marks in the MS was acting as a scribe; there are other roles. Schmid pointed to the need for criteria for distinguishing scribal and non-scribal activity. One important criterion is to distinguish between different scripts: book-hand (readability, regular letter forms, few abbreviations) vs. documentary hand (speed of writing, effective use of space, varying letter forms and ligatures, more abbreviations). Marginal comments and readers’ notes would probably be written in a more casual, informal hand. Schmid then presented some compelling examples of readers’ notes. This phenomenon would also provide some explanation for the introduction of certain variants into the textual tradition, which may otherwise have been interpreted as theological interpretations and creations of the scribes (e.g., acting as “orthodox corruptors” - my remark).

[So Ulrich also outlined the process of literary production/reproduction in antiquity:
1. the authorial stage
2. the editorial stage (which places authored material into the public domain)
3. the manufacturing stage (the primary scribal stage)
4. the users’ stage]

[Some of this material is also covered in Ulrich’s paper at the 2007 Birmingham Colloquium, for our report, see here]

The next two papers were presented by Michael Holmes, ”Working with an open textual tradition: Challenges in theory and practice,” and Gerd Mink. Holmes described the nature of a closed textual tradition (one without contamination through cross-pollination between branches) as opposed to an open tradition; the Greek New Testament textual tradition is an open tradition. Then Gerd Mink read his paper, ”What does coherence tell us about contamination and coincidental emergence of variants,” a description of the CBGM method. Here I may refer the readers to a detailed account available on-line here. There is also a discussion of the concept of the “initial text” as opposed to the archetype and the original text.

In the final session there were two more papers treating the criteria used in textual criticism: Eldon J. Epp, ”Traditional ‘canons’ of New Testament textual criticism: Their value, validity, and viability (or the lack thereof),” and J. K. Elliott, ”What should be in the apparatus criticus to a Greek New Testament?” [Epp ran through a survey of the development of lists of criteria and then distributed a sheet with his proposed summary of the key criteria. In his paper he proposed that in the final analysis exegesis would be the final arbiter, citing a study of Rom 11:31 which dealt with Paul’s logic throughout the passage as a vital contribution to the old textual problem of the NUN there.] I will only cite Elliott’s opening sentences: “We text-critics are greedy people. We want all evidence in an apparatus criticus. That is the ideal.” [The selectivity of the NA edition could be misleading: e.g., you cannot reconstruct the reading of a particular manuscript through a passage, and some types of variants, such as spelling, are excluded. Rigorous eclecticism also needs more variants, since its practitioners hold that the genuine reading could be preserved in any manuscript.]

I am sorry that I have no time to summarize all the papers from the first day (perhaps some of my co-bloggers could help out?). Let me just say that the conference was of major interest and importance, and very well organized. One of the most important objectives was to present the CBGM method, developed by Gerd Mink. This was the main event of day two, and I think the presentation was successful, although many questions of course remain. Unfortunately, there was little time or opportunity for me to take notes during day two, because all my brain cells were very busy trying to understand more about the method.
