Tuesday, January 03, 2017

Thinking about the Implications of the CBGM with Greg Lanier

Greg Lanier, a former compatriot at Cambridge, has two recent essays in the journal Reformed Faith & Practice (part 1 and part 2) doing one of the things he does best: explaining what’s happening in NT scholarship to students and pastors. Here his focus is mostly on Greek grammar and lexicography, but he also touches on textual criticism in part two.

What I want to highlight here are Greg’s closing comments on the CBGM. (I’ve added the numbering and clipped these slightly.) I’m especially interested in helping to address Greg’s fourth point in future work because seminarians are going to need a lot of help using the CBGM. I hope we can give it to them. But he asks other questions about method and theology which seminary teachers will want to think about as they teach their students about the CBGM.

I’d like to hear what blog readers think of these. I’ve given some of my own thoughts in brackets.
  1. There are a lot of positives with the CBGM. The data set alone is a substantial improvement over what we had previously. The project has made great strides towards the previously unicorn-like dream of having thousands of manuscripts digitized, collated, and analyzable.... Moreover, the results for the Catholic Epistles indicate just how high-quality prior editions of the GNT (going back to Westcott and Hort and their contemporaries) have been. I would argue that our confidence in the text has, in the end, gone up with the ECM’s findings. [Generally agreed. I would just say that there is greater uncertainty overall in the results, judging by a comparison of brackets to diamonds in NA27 and NA28. But uncertainty can be better than unwarranted confidence.]
  2. The ECM project began with the Catholic Epistles in part due to their relatively more stable textual tradition. [Not actually true from what I’ve read, but many seem to think so. See the essay mentioned here from Aland.] Additionally, one could argue that the implications of modifying the critical text (which had been unchanged for nearly forty years) in this section of the GNT pose the least risk of ruffling feathers. One wonders, however, just how substantial the revisions may be in the ECM for Acts, the Gospels, and Paul — which, for most in the evangelical world, tend to harbor more emotional/theological investment. We can only wait to find out. [True, the Catholics don’t get much attention. Klaus Wachtel said at SBL that there are about 40 changes in Acts so far. Just remember, textual changes may be the easiest way to measure progress in TC, but they are not the only way.]
  3. Most contemporary English translations (outside the KJV-tradition) have used NA-26 or NA-27 as their base text. Presumably at some point the English translation committees will update their volumes, and when they do so, how will they approach the changes made to NA-28 (or NA-29 and beyond)? Will they embrace them? How will they signal the ◆ readings in the English text and footnotes, if at all? [This remains to be seen. Hopefully, they will find that their responsibility includes weighing the NA28’s decisions and rejecting them where appropriate. As for diamonds, we should note how few of them are even given space in the UBS5 apparatus. Clearly the UBS is showing their opinion that most are not relevant to translators.]
  4. How will (or should) students learn to do textual criticism in the future? This issue is particularly challenging. As outlined above, for decades students have been taught a fairly straightforward method for weighing major manuscripts and internal evidence to determine whether they agree with the NA/UBS critical text. However, the CBGM producing the critical text that future Greek students will purchase is operating according to an entirely different method. This method is, as all readily admit, rather complex to understand, let alone teach. More importantly, one would need to have access to significant analytical tools — and abandon a manuscript-focused mentality (and text-types) in favor of the more abstract text-focused mentality — in order to reproduce the thought process behind a given judgment on a textual variant in the ECM/NA-28/UBS-5. Take the 2 Pet 3:10 example shown above. The old-school approach would look at the various options, weigh א, B, papyri, minuscules, and Byzantine witnesses (most of which disagree) and come to some conclusion. However, this conclusion is quite unlikely to be that the lone attested witness for +οὐκ (sa in NA-27; the Syriac is not even mentioned) offers the best reading. Yet that is precisely what NA-28/UBS-5 print in the main text! The student is at a loss, then, for explaining why that reading is preferred when, on the traditional approach, it seems to be the least preferred! .... In short, we are facing a situation in which the method currently being taught to students (and taught to scholars/pastors in the past) will no longer correspond to the method underlying the new editions of the critical GNT they are/will be working with! It is encouraging that the total number of changes to the text itself, at least for the Catholic Epistles, was fairly small; however, the underlying method is, nevertheless, changing substantially. [I agree completely that the use of the CBGM will change how we as scholars and students interact with and critique the NA text. No longer can we engage that text on its own methodological terms with just the print edition. You now need a laptop. Pete Head and I discussed this a number of times during our supervisions. As for helping students, see my recent JETS article for a starting point and stay tuned for more.]
  5. Related to the prior point, one wonders what use Metzger’s justly famous Textual Commentary will have in the future. It constitutes, in essence, the editorial committee’s notes from how they decided among variations in the 1970s and 1980s; its A-B-C ratings (in the UBS volumes only) have also been a helpful data point for years. However, as Elliott rightly notes, for those portions of the NA/UBS editions that incorporate the outcome of the CBGM/ECM project, “the tried and trusted vade mecum of old, Metzger’s Commentary … is only partially useful.” It may have helpful things to say about the internal evidence that might have impacted the ECM team’s decision for a given local stemma, but any appeal it makes to specific manuscripts is, now, almost entirely outdated. [For my part, I don’t see this as a bad thing. Metzger is great but he too easily becomes a crutch and an excuse to avoid TC rather than engage in it. But it would be nice to have a commentary on the new changes.]
  6. Finally, how will the shift in goal from “original” text to “initial” text impact the way Reformed/evangelical folks who hold to biblical inspiration approach the critical GNT? Majority-text/KJV-only debates aside, most inerrantists who make use of the NA/UBS volumes have functionally equated the eclectic text found therein with, for all intents and purposes, the inspired autographs. Yes, we know that the critical edition is not itself inerrant or infallible — hence the need to make one’s own text-critical judgments — but we have embraced it as the next-best-thing we have (much like our approach to the Masoretic Text). The philosophical shift underlying the ECM project, however, is meaningful. The goal is no longer positioned as “getting back to what Mark wrote” but, rather, “getting back as early as possible, given the extant data, to what the early church received as coming from Mark.” Much effort needs to be devoted to thinking through the epistemological and doctrine-of-Scripture implications of such a change with respect to the GNT text coming out of the project. [I have some thoughts on this but will save them for another time. I would only add that this is a concern that has emerged among some American Lutherans. See the recent debate on TC between Jeff Kloha and John Warwick Montgomery.]
Read the rest here

31 comments

  1. Thanks, Peter. I appreciate the clarification regarding the Catholics. The point about diamond readings is a good one; I was trying to avoid being overly alarmist while still raising the issue, since for folks in my neck of the woods, precision in ETs is pretty important regardless of how minor a variant may be (and, as you know, points of uncertainty about readings are perfect fodder for those wanting to throw evangelicals under the bus re: Scripture, whether Muslim apologists or the Ehrmans of the world). Also good point about confidence; I was mainly suggesting (at a high level—given the audience) that our confidence in a stable text endures even with the relatively comprehensive new crack at it. If the ECM produced hundreds/thousands of changes, that would be a different story.

    (The RF&P blog format of the article leaves a lot to be desired; a cleaner PDF is available here—https://drive.google.com/file/d/0B0PKVajrrOv9T0VWUXV0Si03YTQ/view?usp=sharing)

    Replies
    1. Thanks, Greg. The interesting thing about the change to diamonds is that there are more of them than there were brackets AND they now mark a heightened type of uncertainty in comparison to the brackets. But your basic point is right, I think. There is a tendency for people to feel a bit insecure when people start messing with their text and it's good to remind them of the overall stability of the edition from NA27 to NA28.

  2. I cannot avoid the impression that in Greg's analysis, too, there is a confusion between the CBGM as a method, and the text-critical decisions made by the editors of the ECM. Things such as the increased data set are part and parcel of how the CBGM works, but individual decisions are made on the basis of considerations that are or are not or are partly informed by the CBGM. This applies likewise to the use of the diamond. One should not equate the resultant ECM text with a 'result' produced by the CBGM; they are different things.

    Replies
    1. Dirk, by "resultant ECM text" do you mean the edition as a whole, including the apparatus, or do you mean the primary line text only? If the latter, why wouldn't you equate this with the result of the CBGM (as used by humans, obviously)?

    2. The latter, the text itself, not the sum total of the edition.

      At its best, the CBGM tells the practitioner to what extent their reconstruction of the inner relations in a variation unit relates (i.e. the level of consistency or lack thereof) to the ones already made. Does a rice cooker 'produce' cooked rice? I use it as a tool to produce cooked rice, but the end result is based on decisions I make as to the amount of water and rice I put in.
      The problem with allocating results to the CBGM instead of the editors is that it misdirects responsibility, which muddles the situation when discussing some of the more contentious decisions, such as 2 Pt 3:10 (referred to above by Greg). The reading printed there is not a result of the CBGM (or 'suggested' by the CBGM) in the normal sense of the word. Note that I do not deny that the CBGM may have provided an argument in rejecting any existing Greek reading as the source of the others. It is just that the CBGM didn't produce the printed text.

    3. This comment has been removed by the author.

    4. Good points. While I was trying to distinguish the CBGM as the “engine under the hood” (bonnet?) providing data for the iterative process involving stemmata etc. from the work of the text critics/editors in making actual local decisions (in the feedback loop with the CBGM), my summary (“the CBGM producing the critical text”) definitely muddled things. I certainly didn’t want to present the CBGM as some sort of magical machine that does all the work and spits out an answer that then was printed and bound in prohibitively expensive book form.

      (I suppose, if anything, this perhaps reaffirms how challenging it is to explain the inner-workings *clearly,* *simply,* and *accurately* to those outside the confines of TC [which was my target audience]. Getting two of those right seems to come at the expense of one of them!)

    5. I think we would do well to refer to the CBGM as providing a set of evidence among other evidence. For example, we should think of coherence just like we might think of transcriptional probability. It's information we take into account but which needs to be assessed in any given case (see the toy sketch at the end of this thread).

    6. Good, that is something I suggested in my article on criteria in The Text of the NT in Contemporary Research volume.

    7. Dirk: "... individual decisions are made on the basis of considerations that are or are not or are partly informed by the CBGM." The individual decisions are integral to the CBGM, but we could say that the genealogical coherence informs the decisions, in particular in the final stage where decisions are made in difficult cases.
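
      To make the coherence-as-evidence point in this thread concrete, here is a minimal sketch in Python (with invented witness names, agreement figures, and attestation; this is not the INTF software). For each witness supporting a hypothetical reading, it simply asks whether that witness's closest relative also supports the reading; if not, the reading may have arisen more than once independently. That observation is one datum to weigh alongside internal evidence, not a verdict.

```python
# Toy sketch only: the witnesses, agreement rates, and attestation below are
# invented for illustration; real coherence checks use far richer data.

# Hypothetical pairwise agreement rates between five witnesses.
agreement = {
    ("A", "B"): 0.92, ("A", "C"): 0.71, ("A", "D"): 0.68, ("A", "E"): 0.65,
    ("B", "C"): 0.70, ("B", "D"): 0.69, ("B", "E"): 0.66,
    ("C", "D"): 0.90, ("C", "E"): 0.73, ("D", "E"): 0.74,
}
witnesses = {"A", "B", "C", "D", "E"}

def pct(x, y):
    """Agreement between two witnesses, whichever order the pair was stored in."""
    return agreement.get((x, y), agreement.get((y, x)))

def closest_relative(w):
    """The witness that agrees most often with w."""
    return max(witnesses - {w}, key=lambda other: pct(w, other))

def attestation_report(supporters):
    """Does each supporter's closest relative also support the reading?"""
    return {w: (closest_relative(w), closest_relative(w) in supporters)
            for w in supporters}

# Suppose some reading is attested only by A and C.
for w, (rel, coherent) in sorted(attestation_report({"A", "C"}).items()):
    note = "coherent" if coherent else "closest relative reads differently"
    print(f"{w}: closest relative is {rel} ({pct(w, rel):.0%} agreement) -> {note}")
```

      On this toy data both supporters turn out to have closest relatives that read differently, which would count against the coherence of the attestation without by itself settling the variant.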

  3. Thanks Peter, I read your JETS article today and found it very helpful and thought provoking. I think CBGM may well be a step forward in textual criticism, but my concern is that it dis-empowers readers from understanding why one reading has been preferred over another. Following standard textual criticism methods, one could assess all the arguments and texts used and decide whether the arguments are good or if something has been missed or not given enough weight. Now with CBGM it appears that we are told this is what the algorithm concluded, and there is no basis to interact with how the algorithm was set up or why it reached that conclusion. Furthermore, if I understood your article correctly, the connectivity may be different in one case to another which would impact the confidence in selecting one reading over another. I was wondering if CBGM could be extended to provide more information for the reader without access to the algorithms used. Could statistical analysis be brought in to provide a statistical confidence indicator in the various alternative readings? This information could then be used in addition to standard text critical methods to augment them rather than replace them?

    Replies
    1. Tim, ALL the data behind the CBGM and ALL the tools used by the editors to work with those data are available online at http://intf.uni-muenster.de/cbgm/index_en.html. So that's the place to start. But I do recognize your larger concern about dis-empowerment and I hope to help address the problem in future work.

    2. Thanks Peter, I'll be interested in your future work on the matter. I'm new to this, but I fail to understand how the algorithm can arrive at a reading not attested by any known Greek witnesses in 2 Pet 3:10? What is happening here? Is the algorithm proposing a conjectural reading that happens to be attested in some early translations, or is the algorithm judging that the text behind those translations has a better genealogy than all the other known readings? Either way seems an extraordinary result. Another question is whether the algorithm takes any account of the dating of a manuscript? It must be at least conceivable that a new improved algorithm will come on the scene with 'better' results causing us to change our thinking yet again on some of these variants.

    3. Tim, first off, there isn't really an algorithm per se. There are statistical comparisons and relationships based on these, but it's not like the computer does some crunching and then tells you the reading to adopt. Instead, the computer plows through a lot of data, much of it determined by the editors' own decisions. With some simple addition and division, the system can begin to tell you how witnesses are most likely related based partly on your own decisions. So it is more accurate to talk about the computer showing you the implications of your own decisions and where some of them may need revision in light of the totality of your decisions (the toy sketch at the end of this thread shows the kind of tallying involved).

      At 2 Pet 3.10, it would be better to say that the conjecture fit best with their decisions throughout the Catholic Epistles. Of course, there were all the typical text critical considerations in play as well (what makes most sense in context, whether the conjecture explains the other reading, what versions attest it, etc.). The computer played one part in the decision but it wasn't a decision devoid of all the standard text critical know-how.

      Whether it was a good decision or not is another matter!

    4. Thanks Peter, that's helpful. I don't think it would be wrong to describe it as an algorithm even with the caveats you describe. How is one to judge the editors' decisions about what they input to the computer programme? To what extent is the final decision based on the result of a computer algorithm with various inputs vs. the standard text-critical methods? Lots of questions arise which are not easy to explain or answer. This doesn't mean it is wrong to use it. It just means it will be unlikely to gain widespread acceptance outside a narrow academic community.

    5. To give you some perspective, this is information concerning the editorial work on ECM Acts (to be published this year), and I am citing the conclusion of my response to a paper presented by Klaus Wachtel at the SNTS meeting in Szeged (2014):

      In the first phase local stemmata were drawn up on the basis of pre-genealogical evidence – according to generally accepted text-critical principles and the dominant view of the history of the text, admittedly with a strong bias in favor of the traditional Nestle-Aland text (from NA26 and onwards), in variation-units generated by a pool of manuscripts deemed textually significant (7600+ variation-units).

      In the first phase of the process, the database showing quantitative relationships (objective measurements) between all witnesses to be included in the edition provides guidance for finding out which reading is derived from which (following the basic assumptions about the textual tradition). This phase gives a significant starting point and generates genealogical evidence for phase 2.

      Already in phase 1, preliminary decisions are taken about Ausgangstext in as many as appr. 95% of the passages in Acts(!), whereas the question is left open in ca. 5% of the passages. The passages are revisited in subsequent phases (2-3) on the basis of genealogical evidence.
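
      To give a concrete picture of the "simple addition and division" Peter mentions above and of how phase-1 local stemmata generate genealogical evidence for phase 2, here is a minimal sketch with invented witnesses, readings, and local stemmata (this is not the Münster software, only the basic arithmetic): pairwise agreement rates (pre-genealogical coherence), plus a tally of which witness more often attests the reading judged prior (a candidate "potential ancestor" relation).

```python
from itertools import combinations

# Toy data: four invented witnesses, five variation units, two readings each.
# None marks a lacuna. This is not the ECM apparatus, just its general shape.
readings = {
    "W1": {1: "a", 2: "a", 3: "b", 4: "a", 5: "a"},
    "W2": {1: "a", 2: "b", 3: "b", 4: "a", 5: None},
    "W3": {1: "b", 2: "b", 3: "a", 4: "b", 5: "b"},
    "W4": {1: "a", 2: "a", 3: "a", 4: "a", 5: "a"},
}

# Hypothetical phase-1 editorial decisions (local stemmata), reduced to a map
# from a reading to the reading it is judged to derive from at each unit.
local_stemmata = {
    1: {"b": "a"},
    2: {"b": "a"},
    3: {"a": "b"},   # at unit 3 the editors judge "a" secondary to "b"
    4: {"b": "a"},
    5: {"b": "a"},
}

def agreement(w1, w2):
    """Pre-genealogical coherence: share of units extant in both where they agree."""
    shared = [u for u in readings[w1] if readings[w1][u] and readings[w2][u]]
    same = sum(1 for u in shared if readings[w1][u] == readings[w2][u])
    return same / len(shared), len(shared)

def prior_counts(w1, w2):
    """At how many units does each witness have the reading judged prior to the other's?"""
    w1_prior = w2_prior = 0
    for unit, stemma in local_stemmata.items():
        r1, r2 = readings[w1].get(unit), readings[w2].get(unit)
        if not r1 or not r2 or r1 == r2:
            continue
        if stemma.get(r2) == r1:        # w2's reading derives from w1's
            w1_prior += 1
        elif stemma.get(r1) == r2:      # w1's reading derives from w2's
            w2_prior += 1
    return w1_prior, w2_prior

for a, b in combinations(sorted(readings), 2):
    pct, n = agreement(a, b)
    ab, ba = prior_counts(a, b)
    if ab > ba:
        rel = f"{a} is a potential ancestor of {b}"
    elif ba > ab:
        rel = f"{b} is a potential ancestor of {a}"
    else:
        rel = "no direction"
    print(f"{a}-{b}: agree at {pct:.0%} of {n} shared units; prior readings {ab}:{ba} ({rel})")
```

      The point is only that the directed relationships fall out of the editors' own local decisions; change the local stemmata and the potential-ancestor judgments change with them.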

  4. I like that rice cooker analogy, Dirk.

    Meanwhile, I was thinking about this post from a different angle.

    You mention how textual criticism is taught in class, introduced to new students. And of course these days, as news of it percolates down to the masses, all the fuss is about CBGM.

    Now, I'm on record as being a big fan of CBGM. I think it has a certain genius to it. At the same time, I do wonder if part of the reason that so many pastors are fussing about CBGM now isn't the way in which textual criticism was introduced to them - how they were taught in class.

    I remember back to my first introductory classes, and I've talked with others about their experience. Most of it seems to be the same standard fare. You learn about internal and external evidence. For the former, you learn a list of "canons" like "the harder reading." For the latter you learn about how many manuscripts we have, the names and dates of a select few, and which ones are the "good ones". That last feature might be couched in more or less subtle terms, but that's basically the truth of it.

    Next step is to be taught how to "balance" the two types of evidence. Students are then usually told that there are three options for this. On one side we're told about those strict stemmatics people who favour external evidence, following one manuscript or group of manuscripts over all other evidence. And by that, of course, we're not talking about Westcott & Hort with Vaticanus, but rather, those crazy kooks of the King James Only movement. Then we jump to the other extreme - those airy-fairy hippie leftists who follow something called "rigorous" or "thorough-going eclecticism" - which sounds great but really means they favour internal evidence over all else. "Gazooks!" the student says, "these two extremes are so extreme! Isn't there some fair and balanced via media?" Why, of course there is, the instructor comforts: welcome to reasoned eclecticism. Well, that already sounds better. I mean, who wouldn't want to be "reasoned"? And what does "reasoned eclecticism" mean? Why, of course, it means we use reason and common sense to balance both internal and external evidence fairly in each individual case. Well, gosh, that sounds so stable and balanced. So it's no wonder, really, that all the students go home from class that day declaring themselves to be reasoned eclectics.

    And there'd be nothing wrong with that, really, if that's really what it was. But it's not. In common practice - as it's practiced across North America by most people in my experience, anyway - the balancing act between internal and external is not nearly so fair or balanced. In practice, we see a serious imbalance in favour of external evidence. And you can't blame them, really. They're told that this is supposed to be a science, and so everyone wants to make sure they are looking all good and sciency. That means working with hard evidence, not soft subjective things like opinions or intuitions. Those are for airy-fairy hippies.

    External evidence has things like solid artifacts (i.e. the actual manuscripts), and other things that look like real solid objective evidence, like dates, handwriting classifications, and statistics about word counts and frequencies. Statistics are very sciency.

    Internal evidence, on the other hand, has all these loosey-goosey "canons" which are really just ideas, and people can't even agree on how they're applied. Which reading really is the harder one anyway? And come on, that grammar is sooo "awkward"! Internal evidence is replete with subjective opinions, like "that doesn't sound like Paul's style to me."

    And so in that way, internal evidence gets subtly and subconsciously classified as "subjective" while external is lovingly thought of as "objective."

  5. egads, another two-parter. Anyway...


    Now, of course, all of that is bunk. There's plenty of subjective involved with external - such as whether that handwriting style really is of this type or that type, not to mention that the entire status of a given manuscript as "good" or not is ultimately based on the accumulation of "good" readings it has, which themselves are individually subjective judgements - and plenty of objective to be found in internal evidence - such as whether Paul has or has not used that word before, or whether this letter-set is visually similar to that letter-set, or whether this or that reading is shorter. So it's bunk, but in my experience it's pervasive all the same. Subconsciously, the average pastor "trusts" the hard facts of external evidence more than the theoretical arguments of internal evidence.

    And that's why, in practice, I'd say that most common "reasoned eclectics" are actually employing a ratio of about 3:1. Namely, their conclusions are about 75% based on external evidence, and about 25% influenced by internal concerns (but only when the external alone is not clear!). In other words, we have generations of scholars trained to think that external evidence is most of the ball game.

    Well, given that, no wonder CBGM is causing such a fuss. Somewhere in all the fuss, they miss that CBGM really only concerns external evidence. As Dirk points out, it's a tool. A tool that is used for tracking and sorting external evidence. That's a big deal, for sure, but it shouldn't be any bigger of a deal than external evidence itself should be. Unfortunately, I don't think the classrooms (of North America anyway) are doing a good job of having a substantive conversation on just how big that deal should be. I mean, two parodied extremes with a token J. K. Elliott quote is not the conversation that, in my opinion, needs to be happening.

  6. I would actually argue that, at least in the circles where I spend my time now, these pastors you mention as making a fuss about CBGM do not even exist (since, even in the Reformed world where they should know better, many lose their Greek pretty quickly and in the early days simply rely on Metzger or a commentator...and then a few years after graduation they just resort to some sort of lightweight expository commentary [not that they are bad] that is not even dealing with the Greek anyhow). Even the educated pastors who at least try to work with the Greek at some level when they prepare a sermon are largely oblivious to what has been going on (and if they are over 40, they're still using a tattered UBS3). Moreover, even at the Bible college/seminary level, many instructors are not even aware there is such a thing as the ECM or CBGM (or new hand editions). And if they are teaching TC at all, it is a small side lecture (that assumes they are even teaching Greek, but many N. American institutions are not even doing that anymore, but simply doing a crash course in "Here is how to use Logos or Bibleworks to do fun word studies").

    So part of my goal (and Peter's, in JETS) was to try, however inadequately, to move the conversation out of the halls of SBL sessions, TC journal, and the ETC Blog into the real world where, for starters, many seminarians and pastors may not even know what the word stemma means (I didn't when I graduated from seminary).

    Replies
    1. Wait, there is a world beyond this blog?!

  7. As PeterG craftily understated, "seminarians are going to need a lot of help using the CBGM." Regarding the following comment, "But he asks other questions about method and theology which seminary teachers will want to think about as they teach their students about the CBGM" -- Greg Lanier's subsequent comment that "these pastors you mention as making a fuss about CBGM do not even exist" is far nearer the mark than anything else.

    Even at our seminary, the CBGM is basically a non-entity, being mentioned only in passing as one possible method within the discipline of NT textual criticism. Most of our emphasis remains on the various models of reasoned eclecticism, with other approaches such as that of Sturz, recognition of scribal habits, and even Byzantine priority taking a far larger role than anything related to the CBGM.

  8. well, as they say, the plural of anecdote isn't data, but my experiences appear to have had a higher CBGM content than yours.

    At the seminary I attended (and I wouldn't necessarily construe this as an endorsement of that school...) I would say that most of the students took or had Greek. At least when I was there, you could not be admitted into a thesis program without having the exegetical methods class, and you could not take the exegetical methods class without passing both your Greek and Hebrew. I'm pretty sure I first started learning about CBGM there.

    But forget seminary, I got an email last year from a Baptist layperson who did not even have an undergraduate degree, but was self-taught and was studying CBGM on his own. And he wasn't even a boring old fogey either, he was my age!

    I did say it was "percolating" down to the masses, and percolation takes time. It isn't a flood, but I do think it's happening.

  9. Judging by the emails I’ve received in the last week I would say there is real interest out there about the CBGM, both from pastors and teachers. But it is a big world so I’m sure there are plenty that are still unaware. Whether a good pastor needs to be aware of the CBGM or use it is another matter. Obviously, you can be a great pastor without this! But, I do think current seminarians need to know why the NT they’re using in class looks the way it does.

  10. PeterG: "I do think current seminarians need to know why the NT they’re using in class looks the way it does."

    Indeed, but the same applies also to NA26/27 versus NA25, then WH, Tregelles, Tischendorf, the various TR editions, and even the SBLGNT or the Tyndale House GNT; so the CBGM again is not exclusive or necessarily primary in that regard.

    Replies
    1. MAR: do you know seminarians who use these editions? My point was that those who use NA28 need to know what the CBGM is. Those that don't use NA28 probably should ;)

    2. Peter,

      I know seminarians who have used NA/UBS, SBLGNT, TR, or Byz as their main text for reading and exegesis; so also Tischendorf, Tregelles, ECM, TuT, and von Soden for apparatus-related purposes. Certainly, those using even one particular edition should be well aware of the principles underlying the establishment of the main text of not only that edition but also of the others.

      So yes, one should be aware of CBGM to the extent that it involves and has affected portions of NA28; whether one should then presume to comprehend the esoteric nature of the CBGM or to see it as a superior solution to other theories and methodologies remains a separate matter.

    3. Dr. R.,
      As a pastor-teacher who uses the tools you mention for exegesis, I say a hearty amen.
      I will admit to trying to understand the CBGM, and my thanks to Peter, Peter, Tommy, et al. in this regard. Yet, for all the explanations, I for one am not convinced that the CBGM is indeed a superior methodology! For all the denial, the purveyors of the CBGM do believe the end product is the 'final word'; check out the Muenster website.

      Tim

  11. I share Dirk J's concern. Mapping the MSS' texts and forming ideas about how they might be related to each other is one thing; deciding textual contests is something else.

  12. I just wanted to write a brief note of thanks to all of you, whether you have participated in the discussion of this particular article or not, who are attempting to explain what the CBGM is and how the text editors are making use of it. Thank you! I hope that you will continue your efforts despite the perils.

  13. No one is talking about the software itself. Only the methodology. Software by nature is buggy. As someone who worked in the software industry for decades, I recognize the tell-tale signs of a rogue cowboy project. The code is on GitHub, but there are no acquisition instructions, no design or functional specs, no test plan or tests, and no attempt to develop an open source community to dogfood the tools, enter and fix bugs, and contribute features, let alone allow Christians to check their work as good Bereans. Seems like many are just willing to bow down to the magisterium, the elites who are the gatekeepers of the holy books, and just take Gerd Mink's word for it. Until I see that, I will stick with the Bible I have.

  14. I worked on the Visual C++ Team for 20 years. My experience has produced a healthy skepticism when it comes to the claims of software. Frankly, the CBGM has tell-tale signs of being a "cowboy" project. All the talk here is about the validity of the methodology; no one is talking about the software itself. Software by nature is buggy. You can download the code from GitHub. Yes, even scholars are concerned that a new black art is too inaccessible to the uninitiated. The proponents of this method would quiet your concerns by assuring you that your suspicion is really not founded. Once you learn the principles, you will understand that there is really nothing new here; it is merely leveraging the power of the computer (and graph theory) that allows vast amounts of data to be processed; something that Westcott and Hort would have used if it had been at their disposal.

    While I do not doubt that CBGM “evangelists” will succeed in setting many at ease by making the methodology more accessible, the issue of the software development process is being totally ignored.

    The software is available on GitHub. It comes with absolutely no documentation on how to set it up, nothing that resembles a design spec, functional spec, test plan, nothing. Just “guides” that are nothing more than “readme” quality documentation.

    As someone who worked on the Visual C++ Team at Microsoft for 20 years, I can smell the tell-tale signs of a “cowboy” software project. I really get the sense that the software is being developed by a very small team (like 2-3 guys).

    I cannot sign off on this method at this time. I need to see real documentation, including design and functional specifications with exit criteria, test plans written by the engineers, along with acquisition and usage manuals and samples written by professional technical writers, not to mention an intentional effort to develop an open source community to dogfood the tools, run and develop tests, enter and fix bugs, contribute features, documentation, blog posts, books, etc. But that is not really happening.

    In the meanwhile, I am fine with the Bible we have. If you cannot make your methodologies accessible, then I will not blindly follow you. I don’t care how many letters you have after your name or how much academia and the world look to you as the magisterium of scholars, the gatekeepers of holy books.

    I am sure that some of you are among those who balk at the idea that, with all the tooling available, ministers really do not need to master Greek and Hebrew anymore; you just need to know how to run the tools. Well, you had better apply the same benchmark to yourself with this game-changing text-critical method. Take your own advice: go back to college and get computer science degrees so that you can defend the tools yourself.

    Oh, and by the way, software is by nature buggy. Again, just putting undocumented code on GitHub does not really make it open source. Until they develop a community of users who can dogfood the software, contribute to the source, run and contribute to tests, file and fix bugs, do code reviews, etc., it IS just a black box.

    Bottom line: I suspect that there are like 2-3 coding cowboys who know how the code works, and that if they got hit by a bus, the whole thing would be on the floor.

    Frankly, I really do not doubt that the software probably does what they say it does. But that is not the point. I personally refuse to totally delegate to a magisterium of elites until I can check their work like a good Berean. Just understanding the methodology at a high level does not cut it if you call yourself a minister of the Word. Just running queries to evaluate the results is analogous to just running Accordance, Bibleworks, or Logos rather than mastering Greek and Hebrew. So on that same principle, we need to understand how the software works and be able to defend each query on our own, not depend on a Metzger-like commentary that would not scale with this methodology anyway.
