Tuesday, November 25, 2014

Conjectural Emendation: Yea or Nay?

I am testing the polling function on the new design and thought I would start with something that came up last night at the ETC blog dinner: conjectural emendation (discussed before on this blog).

Do you approve of conjectural emendation for the New Testament?


31 comments

  1. Shouldn't there be a "no in principle and yes in some cases" option as well? I know some scholars who would fit that description ...

    ReplyDelete
    Replies
    1. Jan, that seems nonsensical. Can you explain?

      Delete
    2. Well, if someone explicitly states that conjectures are ruled out, but in reality adopts a fair number of them ... But I agree: it sounds nonsensical, and no one would admit to it.

      Delete
    3. I think the language of "in principle" is causing the problem. If one remains highly skeptical of conjectures as they depart from the manuscript traditions, but is willing to acknowledge specific conjectures should they meet a high bar, then one does in principle approve of conjectural emendation. The only difference between options 1 and 2 in the survey, then, is whether we have or have not been convinced by any, not what one's attitude is toward conjectures beyond merely allowing that they may be valid.

      Delete
    4. (For me, it's Bruce's omission of χωρις/χαριτι θεου in Hebrews, for the record.)

      Delete
  2. Joel Wildberger, 11/25/2014 6:37 am

    Perhaps there could be an undecided? My thoughts on the subject are up in the air after Dr. Krans' presentation this weekend.

    ReplyDelete
  3. It depends on the available evidence and the text. In the case of the NT, I feel that very, very rarely is conjectural emendation really necessary. In the OT, with the rather more limited ms data, there is a bit more need/room for conjecture. In some classical texts, conjectures are almost necessary to make sense of some thorny issues for which there are 3 or 4 extant texts. (In the case of one-off ancient texts, such as inscriptions, there is even more need for some light conjectures in the case of scribal foibles.)

    ReplyDelete
  4. I find the discussion of CE for the NT extremely curious, in light of the abundance of manuscript evidence available. I am sure that I have not seen even the majority of CEs that have been proposed, yet I have not been convinced by any that I have seen, nor have I encountered a particular passage for which the available evidence does not already provide a reasonable solution.
    I answered the survey as no in principle, but I might more accurately be described as yes in principle, though seeing no need in the case of the NT because of the available evidence.

    Tim

    ReplyDelete
  5. I agree with Jan Krans. In principle, I'm against it, but (especially in ANE and Classical texts and to a lesser degree the HB/OT) there are times when it seems required. But in principle, I'm against it.

    James

    ReplyDelete
  6. Why do we use 'emendation' only in the expression 'conjectural emendation'? For one reason or another, editing a text does not feel as if we are 'emending' it, though perhaps that is what we do.

    ReplyDelete
    Replies
    1. I can't remember what I was reading the other day but the author made the point that all textual criticism involves emending the evidence of manuscripts.

      Delete
  7. Obviously a subject near and dear to my heart!

    One thought: I am a bit intrigued by the "in principle" stance. To me that sounds like it has been sworn off a priori. The heart of the scientific method is the commitment to follow the evidence wherever it leads. Isn't ruling out one possible conclusion in advance antithetical to that? On the other hand, we cannot remain infinitely open to all possibilities: as the knowledge base of a field is built, we do need to settle on some conclusions simply to be able to move the field forward. No scientist, for example, debates gravity any more; now they simply accept it and work on building on that conclusion. Accepting one conclusion, however, usually means ruling out - in principle? - the competing options. So I guess my question is this: where is the line? At what point do we get to settle on a conclusion and move on? How do we decide when a conclusion is probable enough that it's not worth debating anymore?

    I'm wondering, of course, because the "in principle" option seems to assume that we have reached that point in regards to the rejection of conjectural emendation. Since I disagree with that, I am curious as to how we reached that point, and how those who chose that option know that we reached that point.

    My second thought is this: I think it might be helpful to make a distinction between the need for conjecture and the conjecture itself. Most people, for example, can identify when a car is broken down. But usually only a trained mechanic is capable of fixing it. In a similar way, I should think that there will be far more people able to spot a deficiency in the received text that likely marks a point of corruption than there will be people clever and skilled enough to propose a solution. I myself once presented a paper where I argued that a certain part of the text was nonsensical and therefore likely corrupt. I spent the whole paper arguing that singular point: that the text there was corrupt. After the presentation, some people criticised me for not offering a conjecture to solve the problem. But I didn't have a solution to offer! I could see that the car was broken, but I was not clever enough to fix it. And I don't think I'm unique in that regard.

    So I guess this thought applies to those who would say that they reject CE because they haven't been persuaded by any of the proposals yet. That may be, but a lack of good solutions shouldn't keep us from admitting the problem, yes?

    ReplyDelete
    Replies
    1. “The heart of the scientific method is the commitment to follow the evidence wherever it leads.” Does that include theological evidence?

      Delete
    2. I guess that would depend on how we define "theology". Is it evidence itself, or is it a collection of conclusions properly based on evidence? I haven't put a lot of thought into it yet, but I think I would tend towards the latter. Maybe that makes me a modernist or something. Good question though.

      Delete
  8. I think we should distinguish between entertaining a conjecture within scholarly discourse (within the sandpit that scholars play in) and approving it for dissemination as scripture amongst lay people, whether in a printed edition or a translation that will reach them.

    If scripture is by definition sacred text handed down in writing, then the conjecture would probably not qualify until confirmed by an ancient source.

    Of course, scholarly conjectures, such as those of the brilliant Hugo Grotius, can sometimes be vindicated by discovery. This shows that conjectures have value within scholarly discourse (the sandpit). However, scholars do have some ethical responsibilities either to filter (or pasteurize?) their ideas or to label their ideas correctly when disseminating them to lay people. Unfortunately many conjectures in Bible translations foisted on the general public are not labelled as such. I personally believe that there is a large space for editions and translations which purposely eschew conjecture.



    ReplyDelete
    Replies
    1. Peter, I think that is an excellent distinction, and reminds us that, in my opinion anyway, the ultimate purpose of scholarship should always be the service of the church.

      I agree that there are some things that we can and should profitably explore in the academic sandbox that are not suited for wider dissemination. On the other hand, I think we will always have to be careful that by simplifying or omitting something for public consumption, we do not mislead the other way.

      What I mean is, yes, I agree with you that it would be misleading to print a conjecture in a popular bible translation without labeling it as such. But I think it would also be just as misleading to print an unblemished received text without noting that (some) scholars have serious reservations about the text at that point. That kind of thing simply fosters an idealised notion of the status of the text, which then enables the construction of overly-tidy theologies of the text, which are then in turn used as a reason for rejecting the evidence about the real state of the text when readers are eventually exposed to it!
      I think a better option would be to incorporate a *very* simplified apparatus in popular bibles. These could show not just strong variant readings drawn from extant mss, but strong conjectural proposals as well, and in either case would communicate to the reader the points where scholarship does think there may be some textual uncertainty. In other words, it would give the reader a more accurate and realistic impression of the state of the biblical text. Some may initially find that theologically troubling (just as they did when critical Greek editions were first published) but I think that goes to my point above about theology following the evidence rather than influencing it.

      Delete
  9. The NET Bible notes are a great example of bridging the scholarly "sandpit" and the lay Christian community. If conjectures are implemented in GNT editions and then transmitted to the public through translations, they should be noted in footnotes as in the NET edition. I agree with Dr. Williams that readings supported by the manuscript tradition should be represented in a text meant for the Christian community.

    ReplyDelete
    Replies
    1. I think it goes back to the purpose of a critical Greek edition. Is it to present the manuscript tradition and the earliest representation of the text (as supported by the manuscript transmission history)? Or is it a completely theoretical construct? Arguably, any critical edition, with or without conjecture, is a theoretical construct. But is it fair to present a conjecture as representing a hypothetical initial text without manuscript evidence? It seems that conjectures are better dealt with in commentaries where arguments for the conjecture can be made.

      Delete
    2. But doesn't that assume that the conjectural proposal isn't ultimately correct? Because, of course, if a conjecture is correct, then by definition it isn't "not supported by mss" but rather "no longer supported by mss" (or, you might say, no longer written...). I'm just asking, because to me your distinction demands that we make a judgement on the ultimate correctness (or lack thereof) of those conjectures, and since mss discoveries are ongoing, I wonder if that wouldn't be a little premature?

      Delete
  10. Ryan, I would be happy with conjectures in translations for lay people, provided they are correctly labelled, e.g. 'scholarly conjecture' or 'none of the 600 surviving manuscripts read this way, but several scholars think that this reading is better'. The key is transparent labelling. This also means that subjective notes like 'Hebrew uncertain' need to be avoided, because these are really comments about the scholars' own level of certainty about something.

    However, I don't see any need for translators of ecclesial editions to suggest that the text (i.e. 'wording') is not pristine. Many ecclesial editions are seeking to represent the text which is in God's mind, as testified by the manuscripts. To demonstrate that the text in God's mind is not pristine is somewhat difficult and is likely to be unscholarly because it will not be based on evidence.

    Translators may also work with a rational presumption that the wording in God's mind is extant in some manuscript. I do not think that this can be deduced from scriptural texts, but a presumption can be rational (based on rational inferences from evidence) without being provable.

    So we have to start with the purpose of an edition before we decide whether it should contain conjectures and how they should be represented.

    I think that some of the best cases for conjecture are 2 Peter 3:10 (NT) and 1 Samuel 13:1 (OT), but I actually do not think that any of the proposed conjectures have greater probability than the extant wording.

    Therefore I think that placing a conjectural reading in the main text is probably not appropriate, except in works whose express purpose is to serve the sandpit!

    ReplyDelete
    Replies
    1. Ryan noted (with a convenient plug at the end):

      "If a conjecture is correct, then by definition it isn't "not supported by mss" but rather "no longer supported by mss" (or, you might say, no longer written...)."

      Aye, but there's the rub . . . .for if a conjecture is not correct, then it may never have been supported by mss (which could lead to a new book: "Never having been Written -- Why Conjectural Emendations tend not to become Widely Accepted").

      Also, what to do in cases where a conjecture does finally show up in a particular ms or two, but still fails to gain approbation from most scholars and textual critics? Such certainly would become a "validated conjecture" in such instances (think p66* at Jn 7:52), but still not one that garners a level of acceptance equal to autograph (or Ausgangstext) integrity. So what then?

      Perhaps there actually might be good reasons to reject conjecture in principle within a widely disseminated textual tradition (as opposed to a tradition with only a limited degree of textual support). And if so . . . .

      Delete
  11. Maurice,

    Besides providing me with a *most excellent* title for a follow-up volume, I think you do cut right to one of the key - if not the key - questions: does the quality of the surviving ms base justify the rejection of conjectural emendation in principle?

    I don't think we'll answer that here, of course. Already in this thread we've seen reference to the great quantity of surviving mss. I discussed this at length in the book, of course, but I do think that the - admittedly impressive - high number of surviving mss does tend to blind us to the reality of the situation: the greater number that were lost.

    We have almost certainly lost at least half of all the mss that were ever produced, and I think there is reason to think that we may have lost as many as two-thirds.

    And yet, despite that loss of the majority of the evidence, so many of us look at the surviving minority and optimistically conclude not just that the original must have survived in there somewhere, but conclude that much with such certainty that we are willing to reject primitive corruption (and the consequent conjectural emendation) in principle.

    That does surprise me, it really does. Is there any other field of study where you could lose more than half of the evidence but still - with no concrete proof for doing so - claim that you certainly hadn't lost anything significant? Was the redundancy of the earliest textual dissemination really that great that it could endure the loss of the majority of its strains without losing even a single reading at a single point? Do we really think such preservation is likely? I realise most of us here are people of faith, but I don't think it follows that we should be scholars with leaps of faith.

    ReplyDelete
    Replies
    1. Ryan, I would probably put the percentage of lost witnesses closer to 90% which is near the number that some classicists and medievalists offer in their field. (I think Tim Finney has something on this in his dissertation.)

      In any case, an argument about conjecture (whether yea or nay) which is based on percentages is still the old problem of counting rather than weighing MSS. Shouldn't the argument for conjecture be that we have lost so many early witnesses?

      Delete
    2. Peter,

      I think that is an excellent point. Of course, I would think that though!

      Delete
  12. Whether conjectures are valid or not, the reason to include them in a critical edition would depend on the purpose of a critical edition. Is the critical edition meant to represent the text as represented by the MSS? Or is a critical edition meant to represent a purely hypothetical text where readings are taken from the MSS and from conjectures?

    ReplyDelete
    Replies
    1. Timothy, I agree with the principle - that our answer should be determined by what we see as the goal of the edition.

      Myself, I am old fashioned enough to believe that the goal of the edition - and indeed, the primary goal of our field - is the restoration of the authorial text.

      That is my main motivation for pushing for conjectures: because I am convinced, as were Westcott & Hort, that in the extant ms base there are points of primitive corruption which pervade all surviving mss, and at those points, I therefore see conjecture as our best hope for restoration.

      At such points, by definition, all the variant readings offered by the extant mss have been found to be - in all likelihood - deficient. Almost any good and reasonable conjecture, therefore, would automatically have a greater likelihood of reflecting the authorial text than any of the extant offerings. In an edition whose purpose was to present our best attempt at a restoration of the authorial text, printing conjectures at those points - noted as such - would, in my opinion, be completely appropriate.

      Delete
  13. A related question came up during one of the SBL NTTC sessions concerning conjectures and the NA apparatus. Should conjectures be included in a critical apparatus? Well the answer to the question depends on the purpose of the apparatus. If the apparatus is meant to show readings within the MSS tradition, then it would follow that conjectures should not be included in the apparatus.

    ReplyDelete
  14. To continue the conversation, Ryan asked, “Does the quality of the surviving ms base justify the rejection of conjectural emendation in principle?” At the very least the answer in regard to the NT documents ought to be “perhaps” — but a divergent illustration might demonstrate the point more clearly:

    As a limited experiment, assuming that our existing witnesses were limited to only the uncials Aleph B C D L, would the skilled conjecturists (1) even choose to approach the variant units listed below; and if so, (2) would they determine that the readings of all five existing MSS were faulty; and (3) as a result, would they conjecture the reading that actually appears as the NA/UBS critical text? (The variant units in question at this point, taken only from the Gospels, are the following: Mk 7:28; Lk 23:34 [klhrous]; Jn 6:17; 11:20; 18:1). Thus, given a smaller and textually more limited base where conjecture normally would be considered necessary, the question then becomes whether any such conjectures, if made at the points cited, and even if they might concur with the current NA/UBS text — would receive any general or widespread scholarly acceptance? Somehow I think not, even though with the far more complete evidence available the NA/UBS editors obviously thought differently.

    I also don’t very much buy into the argument ex silentio; even granting that at least 2/3 of all MSS that ever existed are no longer extant (I would even accept 3/4 as quite likely, but not Peter Gurry’s 90% assumption in view of the theological nature of the NT documents and their consequent tendency to be more frequently and carefully preserved than various classical works) — to me there would seem little reason to expect that some “original” readings supposedly may have existed within that silent hypothetical mass that somehow were not simultaneously preserved in the actual tangible quantity of evidence that remains. I don’t consider this an “optimistic conclusion”, but one that coincides with the level of evidence we actually do possess.

    Certainly (as Ryan points out in his book), if the NA/UBS editors choose at points to follow some conjectures (Ac 16.12; 2Pe 3.10) and also some readings supported by only a single MS or two or three MSS — then for them there is no reason to oppose conjectural emendation on principle. Others of us who do not follow such a procedure, however, would not share the logic of such an assertion.

    As Ryan asked, “Was the redundancy of the earliest textual dissemination really that great?” To which I would answer, “Yes”, given that the same existing MSS in their overall redundancy already reproduce the vast bulk of the text (ca. 94%) without any serious variation or question, and that for almost every remaining unit of variation a reading acceptable to virtually all scholars remains present in our apparatuses.

    So again, in principle . . . .

    ReplyDelete
    Replies
    1. Maurice, as usual, I so much enjoy reading your responses! Your thought exercise in your first paragraph looks very interesting to me; I'm going to save it for tomorrow morning and work through it over tea.

      Till then, I was thinking about your last point, and so far I have two questions in return, the first of which is an actual question, the second of which is one of those questions which are really more of an argument disguised as an innocent sounding question.

      But the first one, how would we know that the level of redundancy was that high? I'm operating on the presumption that it was not, you're suggesting that you think it was. My question is what method could we use to answer that question? What kind of test could we devise? Interested to hear your thoughts on that.

      Second, wouldn't we have to first agree on what level of redundancy was high enough?
      You mention 94%. That sounds great. But - and stop me if my thinking has gone sideways here - isn't such a high sounding percentage suddenly much less impressive when we consider the vast number of potential variants to which it is applying? If I remember right, there's about 138,000 words in the NT, yes? Even if I granted you 99% redundancy - even more impressive yet than 94% - wouldn't that still leave 1% of text not supported? Because 1% of 138,000 is still an awful lot. If I can still do math, it's 1380 possible points of primitive corruption. You could reduce that yet again by another factor of 10, to say only 138 conjectures, and to me that would still be an awful lot - more than enough to justify the field of study.
      In other words, if I'm doing that math right, 99.9% redundancy would still not be high enough to justify rejecting conjecture "in principle," not even enough to reject it with one of those partial rejections you often read such as "oh, in the NT, it's rarely necessary" since 138 occurrences is far greater than "rarely", yes?
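      For reference, here is a minimal worked version of that arithmetic (taking the ~138,000-word count and the redundancy percentages as the rough estimates they are, not exact tallies):

      \[
      \begin{aligned}
      138{,}000 \times 1\%    &= 1{,}380 && \text{(words left unsupported at 99\% redundancy)} \\
      138{,}000 \times 0.1\%  &= 138     && \text{(at 99.9\%)} \\
      138{,}000 \times 0.01\% &= 13.8    && \text{(at 99.99\%)}
      \end{aligned}
      \]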

      Delete
  15. Ryan,

    Your two questions of course are challenging; perhaps more so to those within reasoned or thoroughgoing eclecticism than to my own position.

    >1) "How would we know that the level of redundancy was that high?"

    To which I would counter, how would we know that it was not? . . . But again, I think the key issue in NT textual criticism is whether we appeal to the evidence we actually possess as opposed to hypothetical assumptions regarding evidence that no longer exists.

    >I'm operating on the presumption that it was not, you're suggesting that you think it was.

    That is correct.

    >My question is what method could we use to answer that question? What kind of test could we devise? Interested to hear your thoughts on that.

    I don't think that anyone could devise an empirical method whereby to "test" material that once may have existed but which exists no longer (another reason why I don't become overly engaged with arguments from silence or data presumed hypothetically). I think the existing MS material does provide sufficient data whereby to assume a reasonable level of textual perpetuation and preservation of variants — the "test" for such basically being a careful extended examination of those data we do possess.

    >2) Second, wouldn't we have to first agree on what level of redundancy was high enough?

    To some degree, yes. Obviously, were our extant MS witnesses in hopeless disarray, with no two MSS sharing even 50% agreement, there would be major difficulties within NT textual transmission, leaving it in an unsecured state. But when the level of redundancy is 94% or greater, matters become quite different, demonstrating a generally secure transmission of the bulk of the NT text, the primary remaining issue being the detection and removal of error among our extant MS base.

    (continued on another post due to length)

    ReplyDelete
  16. Continuing...

    Ryan said,

    >You mention 94%. That sounds great. But - and stop me if my thinking has gone sideways here - isn't such a high sounding percentage suddenly much less impressive when we consider the vast number of potential variants to which it is applying?

    I would say no, and for a reason similar to what has been noted previously: this "vast number of potential variants" once more becomes primarily an appeal to silence, since it extends beyond the currently existing variant base into the unknown world of what might have been.

    >If I remember right, there's about 138,000 words in the NT, yes? Even if I granted you 99% redundancy - even more impressive yet than 94% - wouldn't that still leave 1% of text not supported? Because 1% of 138,000 is still an awful lot.

    But the analogy is false, and the statistics unnecessarily bloated. By far the majority of variants among our extant MS base are either totally inconsequential, weakly supported, or can be rejected almost immediately (e.g., the presence of KAI for DE in a single 13th century MS or whether a particular orthographic spelling is preferable). If numerous inconsequential "variants" are not seriously considered to be "original" and can readily be rejected, the total number of serious variant readings drops dramatically.

    Nor is there any need to overly reduce the number of variant units. Certainly, the several thousand units provided in the NA27/28 apparatus tend to represent most significant instances of variation (though they also include many other units that are primarily trivial); von Soden and Tischendorf also can be consulted for the remaining bulk of most variant units. None of this alters the primary issue: establishing the basic NT text based on the material actually possessed as opposed to speculation as to what we do not possess (and which may never have existed in the first place).

    >If I can still do math, it's 1380 possible points of primitive corruption. You could reduce that yet again by another factor of 10, to say only 138 conjectures, and to me that would still be an awful lot - more than enough to justify the field of study.

    The leap of faith is questionable — not all such places of variation can be considered locations where "primitive corruption" has occurred, particularly when most of these can satisfactorily be resolved by appeal to the existing MS base. Even among those variant units that are more substantial, the majority of them do not suggest "primitive corruption" (even if they might demonstrate "early variation"), and do not require conjectural resolution. For the most part, the problem of conjectural emendation is non-existent.

    So, for me, your query ("more of an argument disguised as an innocent sounding question") fails to convince, even though I agree that — for those practicing certain varieties of eclecticism — conjecture becomes at least a theoretical part of those processes, even if not necessarily requisite or practiced.

    ReplyDelete