Thursday, November 30, 2023

Where should textual criticism be discussed in systematic theology?

This is a question I have been pondering since giving my paper at ETS this year on textual criticism in the Reformation. Note carefully that the question is not how or whether to discuss textual criticism, but where. (For more on how, see here.)

The obvious answer, of course, is under the heading of bibliology. But where within that? Most modern theologies—certainly evangelical ones—discuss textual criticism under the subtopic of inerrancy. That makes sense, given that textual criticism deals with errors and so does inerrancy. (More on that below.)

But this has not always been where textual criticism is treated. In my reading of Reformation writers, textual criticism is almost always addressed under the heading of authenticity. There are good historical reasons for this that were the subject of my paper, but they are not my focus here. What interests me here is the other place I found textual criticism discussed: under the topic of the clarity of Scripture. Let me give some examples.

Ussher (1581-1656)

James Ussher, in his Explication of the Body of Christian Religion, written in Q&A format, starts with a large section on Scripture, or the “grounds of the Christian religion.” After discussing things like inspiration and canonicity, he moves to objections. Textual criticism is discussed under objections, not to inspiration or infallibility, but to perspicuity. The broader question is, “Are the Scriptures then plain and easy to be understood?” (p. 18). The specific question that leads directly to textual criticism is this:

How can the certain understanding of the Scriptures be taken out of the Original Tongues, considering the difference of Reading, which is in diverse Copies both of Hebrew and Greek; as also the difficulty of some Words and Phrases upon which the best Translators cannot agree?

I’ve discussed his answer before on the blog here. What I want to draw attention to here is the question itself, where “What about difficult word meanings?” is treated right alongside “What about textual variants?” For Ussher, both present problems for perspicuity. His answer, in part, is that not all the Scriptures are equally clear in themselves or to all readers. Just before this, Ussher had given no fewer than six reasons why God leaves some places in Scripture “obscure.” But my point here is to draw attention to the fact that Ussher puts uncertainties resulting from textual variation in the same category as uncertainties of interpretation.

Baxter (1615-1691)

Writing a little later, Richard Baxter does the same. In discussing the “many parts” of the Scriptures that have uncertainty, his first example comes from the fact that “many hundred texts are uncertain, through various readings in several copies of the original.” (By original, he means original language.) And what does Baxter discuss in the next paragraph? “Many hundred words in the Scripture that are ambiguous, signifying more things than one.” So, again, we have text-critical uncertainty dealt with in the same context as lexical or interpretive uncertainty. (For more on Baxter and textual criticism, see here.)

KJV Preface (1611)

One final and perhaps unexpected place one finds these two types of uncertainty combined is in the preface to the King James Bible. Among the things Miles Smith needed to accomplish in the preface was to defend its use of marginal notes. The problem with notes is that they showed the reader where the meaning was uncertain, and that might seem to undermine the Bible’s authority. But such uncertainty has a theological justification in that “it hath pleased God in his divine providence, here and there to scatter words and sentences of that difficulty and doubtfulness, not in doctrinal points that concern salvation ... but in matters of less moment” (source).

It’s in this context that Smith mentions how Pope Sixtus V expressly forbade “that any variety of readings of their Vulgar edition should be put in the margin; (which though it be not altogether the same thing to that we have in hand, yet it looketh that way;) but we think he hath not all of his own side his favourers for this conceit.” It’s that parenthetical remark (emphasis mine) that shows that Smith draws only a slight distinction between textual and lexical ambiguity. The KJV marginal notes themselves do even less to distinguish them. (For more on that, see my chapter here.) Again, then, we see textual uncertainty treated in the same category as semantic ambiguity.

So what?

Having surveyed these three examples, let’s tweak our initial question by asking instead, “What benefits might we gain by a return to this older way of treating textual criticism?” Instead of treating it as a problem for inerrancy or inspiration, what if we treated it as a problem for the clarity of Scripture? Here are some initial thoughts:

  1. It avoids the problem of conflating textual errors with theological ones. As noted above, one reason people naturally put textual criticism together with inerrancy is that both trade in the language of “error.” The problem is that most scribal errors do not produce a text that is in theological or historical error. If we discussed textual criticism as a problem for the clarity of Scripture, this would probably be more obvious to people. Textual variants do affect the meaning of Scripture even if the meaning they affect is not a matter of larger theological consequence.
  2. It makes the problem ours rather than God’s. What I mean here is that treating textual criticism in relation to the clarity of Scripture connects the issue to hermeneutics (our role) more than inspiration (God’s role). This follows from the previous point. We have to decide whether Goliath was four cubits tall or six, and that has some effect on our interpretation of the narrative. But that is just one among several interpretive questions that affect our reading of the story. We can still understand the story, still appreciate it, still apply it even when we have unanswered questions left over. Just as we can all live with uncertainty in interpretation, so we can live with uncertainty in textual criticism. Hence the final point:
  3. It reminds us that the clarity of Scripture has never required absolute certainty or total clarity. If you read the full context from the three examples I gave above, you will see that they all go into detail about how not everything in Scripture is equally plain to every reader at all times and in all places. One thing that still surprises me in reading the Reformers is just how readily they admit that there are “dark” or uncertain places in the Bible. They never hide from this. And yet, they were equally clear that the existence of such dark places doesn’t require a magisterium because it doesn’t nullify the Bible’s magisterial authority. If that’s true in interpretation, surely it’s true in textual criticism too.

Objections?

If these are some of the benefits, are there any drawbacks to treating textual and interpretive uncertainty together? Here are two I can think of:

  1. It obscures the knock-on effect of textual uncertainty. Someone might object that, since interpretation depends on textual decisions, uncertainty about the text has greater ramifications than uncertainty about interpretation. Treating them together obscures this. Perhaps, but I would venture that uncertainties of interpretation (and of translation with them) are far more common and typically more serious than textual ones. The places where commentators disagree about the meaning must far outnumber those where they disagree about the original text. That seems true across time—it’s not just a function of our modern manuscript discoveries. Interpretive disagreements have always outweighed and outnumbered textual ones. Whatever uncertainties we can tolerate when it comes to interpretation must be more than enough to encompass the textual ones.
  2. Because inerrancy is tied to the autographs, we should treat textual criticism in relation to inerrancy. I’m sensitive to this one, since this is usually how textual criticism comes up in theological contexts. When an evangelical student learns about textual criticism, his next question is never “But what about perspicuity?” It’s usually “What about inerrancy?” But, while this is the case, should it be? Must it be? Maybe we wouldn’t field so many concerned theological questions if we were in the habit of treating textual criticism as a question about the clarity of Scripture. We could still talk about the distinction between the autographs and the copies, but that distinction itself might not need to bear so much weight (see point one above under So what?).

Well, these are my initial thoughts, and I’d be very happy to hear yours.

Wednesday, November 29, 2023

Call for Papers: 2024 CSNTM Text & Manuscript Conference

From Dan Wallace and CSNTM:

The Center for the Study of New Testament Manuscripts (Plano, Texas) welcomes proposals for the second CSNTM Text & Manuscript Conference, scheduled to take place on 30–31 May 2024 in Plano.

The theme of the conference will be “Intersection.”

In keeping with this theme, we invite papers that explore ways in which New Testament Textual Criticism interacts with other disciplines (paleography, art history, exegesis, paratext, linguistics, conservation, etc.) to inform our understanding of the New Testament transmission in its broad context.

Each paper will be allotted 35 minutes, including time for Q&A. Titles and 300-word abstracts should be submitted via email to Denis Salgado at dsalgado@csntm.org and Mark Gaither at mgaither@csntm.org. Deadline: 31 January 2024.

This call for papers is also available at TextAndManuscript.org.

Selected papers from the previous conference (Pen, Print, & Pixels, 2022) have been published by Hendrickson and are available for purchase here.

For more information about the conference, including venue and registration, visit the "Intersection" registration page here.

You may also secure your hotel accommodations at the conference discount rate here.

I spoke at their inaugural conference and really enjoyed it. It was the perfect size for a conference like that. If you're interested, make sure you apply.

Friday, November 24, 2023

Distigmai Revisited: When Material Analysis Catches Up with Common Sense

Fourteen years ago, on this blog, I summarized co-blogger Peter Head's paper at the SBL in New Orleans in 2009, "The Marginalia of Codex Vaticanus: Putting the Distigmai (Formerly Known as 'Umlauts') in Their Place," in two blogposts here and here. Head argued that the double dots now known as distigmai, which mark textual variation in Codex Vaticanus, belong to one unified system added some time in the 16th century. This was contra Philip Payne, who discovered the distigmai in the first place and had published several articles since 1995 arguing that they, or most of them, are original to the scribe working in the 4th century.

Right after the publication of my summaries on the blog, Philip Payne contacted me and asked if he could post a full response, to which Peter Head and I agreed. That rather long response was published in five parts and then made available in full here. The main pillar of Payne's theory was that a group of approximately 50 (now 54) distigmai had the same ink colour, described as "apricot," and had not been reinked – most scholars had assumed that the reinking of the codex took place in the 10th–11th century. So the "chocolate"-colored distigmai were presumably reinked, but the apricot-colored distigmai, Payne argued, proved that these signs were from the fourth century.

Already in 1995 – fourteen years before Head's paper – Payne had noted the distigmai (then "umlauts") in his article "Fuldensis, Sigla for Variants in Vaticanus and 1 Cor 14.34-5," NTS 41 (1995): 240-62. There he went a step further by arguing that there were not only distigmai (umlauts) but also a combination he called "bar-umlauts," subsequently renamed "distigme-obelos" (a distigme in combination with a horizontal bar, which in reality is a paragraphos sign marking out a new paragraph). This argument reached a climax in Payne's article "Vaticanus Distigme-obelos Symbols Marking Added Text, Including 1 Corinthians 14.34–5," NTS 63 (2017): 604–25. I, together with many colleagues, wondered at the time how this article could have passed peer review. Peter Gurry wrote this blogpost in reaction to the article.

Subsequently, Jan Krans published a response in the same journal, "Paragraphos, Not Obelos, in Codex Vaticanus," NTS 65 (2019): 252–57. In his article, Krans concludes that the "distigme-obelos" does not exist, and that whereas Payne "seems to be correct on the text-critical status of the distigmai . . . their date and the identification of variant readings are clouded in uncertainty."

As of 21 November 2023, there is no longer any cloud of uncertainty regarding the date of the distigmai. On that day, at the SBL in San Antonio, Ira Rabin of the BAM Federal Institute for Materials Research and Testing presented her paper "Beyond Chocolate and Apricot: Using Scientific Techniques to Determine the Relationship of the Inks of Codex Vaticanus," followed by a presentation by Nehemia Gordon of the École Pratique des Hautes Études (Paris) on "The Scribes of Codex Vaticanus." These two papers marked the climax of a wonderful conference for those interested in New Testament textual criticism.

In her paper, Ira Rabin explained how the application of micro X-ray fluorescence (µXRF) has revolutionized the study of ancient and medieval cultural artifacts, making it possible to examine them using objective criteria. She demonstrated how the application of "such objective scientific measurements to ancient and medieval manuscripts shows how color, hue, and appearance of ink to the human eye can be highly misleading" (cited from the abstract). I do not remember all the details, but there is a highly instructive open-access chapter on this topic by Rabin, "Material Studies of Historic Inks: Transition from Carbon to Iron-Gall Inks" (2021).

Nehemia Gordon (pictured – photo taken from Rabin's presentation) had spent four weeks, over three trips to the Vatican, examining the inks of the codex using ultraviolet-visible-near-infrared (UV-vis-NIR) reflectography and micro X-ray fluorescence (µXRF) in order to test as many samples as possible of the original ink, the reinking, corrections, and paratextual notations. He made scans of whole lines in order to avoid unreliable measurements. All in all, 587 µXRF line scans and 3,730 micrograph images were captured. It was then possible to measure the intensity of copper and iron in the unreinked and reinked text.

For example, on folio 1390, column A, line 5, there is an omicron on the line which was left unreinked and can be compared to the darker omega above it.

Without going into too many details: the result of the scan gives a fingerprint of the ink in the form of its copper/iron and zinc/iron ratios. The very surprising result was that the unreinked text had the same fingerprint as the reinked text – the inks were based on a similar (but not identical) recipe. This led Rabin to conclude that the reinking was done already in antiquity, because the original ink had soon begun to degrade (apparently a bad mix). I asked in the Q&A how soon this degradation could have happened, and Rabin answered that it could be as soon as after ten years (!). Rabin had not yet said anything about the distigmai, which added to the drama – could both the "apricot"- and the "chocolate"-colored distigmai be dated to the fourth century? Jan Krans, who now had to leave for the airport to catch his flight, and others in the room were getting nervous – and this drama paved the way for Nehemia Gordon's presentation.

Gordon went through several examples from Payne's publications, including an interesting case of Payne's apricot "distigme-obelos" in the left margin at the end of the Lord's Prayer in Matthew 6:13 (folio 1241, column B, line 9), which very likely was related to the doxology.

If this "distigme-obelos" went back to the fourth century, there must have been ancient manuscripts that included it. (Gordon seemed to be unaware of the fact that versions of the doxology are attested in the Didache, in many important Greek minuscules, and in the Old Latin, Syriac, Coptic, and other versions, which shows that it is indeed early.)

What made this example so interesting was that part of the paragraphos sign was not reinked, so here we can measure at several points – the unreinked and reinked horizontal bar, and the apricot distigme – and compare them to the unreinked and reinked main text from another place (like the o/ω above). This test revealed not two but three fingerprints at the crime scene!

Ink 1 (Original ink)
Unreinked main text: Cu 0.12
Unreinked horizontal line: Cu 0.12

Ink 2 (Reinker)
Reinked main text: Cu 0.17
Reinked horizontal line: Cu 0.20

Ink 3 (Apricot Distigme): Cu 0.02 (under 3% approaches the limit of detection)

Then Gordon turned to an example of Payne's "chocolate distigme + original obelos" (folio 1243, column A, line 12):


Ink 1 (Original ink)
Unreinked main text: Cu 0.10
Unreinked horizontal line: Cu 0.09–0.10

Ink 2 (Reinker, p. 1244)
Reinked main text: Cu 0.16–0.17

Ink 3 (Chocolate Distigme): Cu 0.00–0.01

In sum, the distigmai, whether apricot or chocolate brown, had the same very distinct fingerprint, which showed that their ink composition contained far less copper than the unreinked and reinked text and horizontal line. This demonstrated clearly that the "distigme-obelos" exists only in fantasy, in spite of Payne's strenuous attempts to show, with advanced statistical methods, that it must exist.
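To make the three-fingerprint result concrete, here is a minimal Python sketch – my own illustration, not the team's analysis code – that sorts the reported Cu ratios into the three ink groups. The cutoff values are invented for this sketch, chosen only so that they separate the numbers reported above:

```python
# Illustrative only: the cutoffs below are made up for this sketch and are
# not values used by Rabin and Gordon.

# Cu ratios as reported in the post (first example, folio 1241)
measurements = {
    "unreinked main text":      0.12,
    "unreinked horizontal bar": 0.12,
    "reinked main text":        0.17,
    "reinked horizontal bar":   0.20,
    "apricot distigme":         0.02,
}

def classify(cu_ratio):
    """Assign a sample to a rough ink group by its Cu ratio."""
    if cu_ratio <= 0.03:        # at or near the limit of detection
        return "Ink 3 (distigmai)"
    elif cu_ratio <= 0.13:      # clusters with the original ink
        return "Ink 1 (original)"
    else:                       # noticeably more copper: the reinker
        return "Ink 2 (reinker)"

for sample, cu in measurements.items():
    print(f"{sample}: Cu {cu:.2f} -> {classify(cu)}")
```

Run on these numbers, the distigmai fall into a group of their own, which is the whole point: whatever their colour to the eye, apricot and chocolate distigmai share one fingerprint, distinct from both the original ink and the reinking.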

Rabin and Gordon explained that the ink composition used for the distigmai, original and reinked, can be assigned to the 16th century at the earliest. This speaks in favor of Pietro Versace's proposal, in his masterful examination of the marginalia of Codex Vaticanus, that in a final phase in the 16th century Arabic numerals were added to mark Vulgate chapters, as were the distigmai to mark textual variation in the NT. In this connection, Versace also observed that certain marginalia, including distigmai, occur on (supplement) pages written in minuscule in the 15th century (I Marginalia del Codex Vaticanus [2018]: 8-9). The new analysis of the ink confirms the date of the marginalia, but it cannot prove that the same scribe who added the Vulgate chapters also added the distigmai. It is, however, the most economical hypothesis.

The dating of the distigmai to the 16th century further confirms the proposals of Curt Niccum and Peter Head. In his article "The Voice of the Manuscripts on the Silence of Women: The External Evidence for 1 Cor 14.34–5," NTS 43 (1997): 242–55, Niccum had suggested that the distigmai were added in the 16th century by Juan Ginés de Sepúlveda (1490-1574), who had access to the codex and, in a letter exchange, supplied Erasmus with 365 readings to show that these readings agreed with the Vulgate against the TR, and that Erasmus should revise his edition. (As Jan Krans has pointed out to me, Erasmus preferred to go with the pope's opinion and refused to carry through this revision – for clarification, see Krans's comment to this blogpost.)

Following Niccum (and Head), James Snapp has more recently suggested that the 365 variations in Vaticanus should be reread as 765, changing just one Roman numeral (CCCLXV → DCCLXV) and thus better matching the actual number of distigmai (see Peter Gurry's blogpost here and Snapp's post here).
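Snapp's proposed emendation is easy to check arithmetically; a small sketch (my own illustration) with a standard Roman-numeral converter shows that changing the single initial numeral turns 365 into 765:

```python
# Values of the individual Roman numerals
ROMAN = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral):
    """Convert a Roman numeral, subtracting a value when a larger one follows it."""
    total = 0
    for i, ch in enumerate(numeral):
        value = ROMAN[ch]
        if i + 1 < len(numeral) and ROMAN[numeral[i + 1]] > value:
            total -= value   # subtractive notation, e.g. IV = 4
        else:
            total += value
    return total

print(roman_to_int("CCCLXV"))  # 365, the number as transmitted
print(roman_to_int("DCCLXV"))  # 765, Snapp's proposed rereading
```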

To come full circle, we are back to Peter Head's paper from SBL 2009, in which he compared the locations of the distigmai with the published text of Erasmus, reflecting MSS available in his time, and found that in the gospels there was a 92% match between Erasmus's edition and the distigmai. If one included Erasmus's notes, the rate went up to 98%! Unfortunately, Head never published this paper. However, when I posted the breaking news from SBL 2023 on Facebook here, Head made just one comment in his characteristic fashion, which I have also included in the title of this blogpost:

"When material analysis catches up with common sense."


Epilogue: As we await the full publication of the results by Ira Rabin, Nehemia Gordon, and the rest of the team (P. Andrist, P. Vasileiadis, N. Calvillo, O. Hahn), which will actually contain more surprises (implied by the presenters), I sent some follow-up questions and comments to Gordon, who had contacted me (I am still waiting for his reply):

Did you take any samples of the accents in Codex Vaticanus? They must be later.
Why did the reinking not degrade like the original ink did? What was wrong with the original ink?
Also, it is a pity that we could not have discussed some other interesting places to test the ink (I guess it is now too late).
 
Update: In reference to this blogpost, Stephen Carlson made an astute comment on X: "Payne is to be commended to bringing the double dots to scholarly attention, but they are not as old as he claimed. Rather, they tell us something important about the reception of Vaticanus in the 1500s."
 
Update 2: Philip Payne has posted long comments to this blogpost which reveal that he is not going to admit that he was wrong. I will not respond to any of these, since I consider it a waste of time. The theories that any distigme is from the fourth century, whether apricot in color or not, and that there is a combination of signs, the "distigme-obelos," are stone dead.