Algorithmic Exegesis, Franken-Bibles, and Reading the Text

I just read a recent essay in Christianity Today (March 2014) entitled “The Bible in the Original Geek” (http://www.christianitytoday.com/ct/2014/march/bible-in-original-geek.html?paging=off; paywall present). The essay begins by looking in on BibleTech, highlighting the incredible advances smart people like Stephen Smith have made by applying development tools to the Biblical text, using computers to find the patterns and potentials in this relatively small corpus of texts that Biblical scholars are so interested in. The early part of the essay focuses on what Smith calls a “Franken-Bible maker.” Using the computer, Smith’s tool gives users a “choose your own adventure” experience of Biblical translation, offering multiple options per phrase of the Greek New Testament (presumably pulling from several major English translations). For each phrase, the user chooses his or her favorite, resulting in a translation tailored to the whims of those untrained in the ancient languages. The essay moves from BibleTech to focus primarily on Logos software, detailing its expansion and the exciting potential of software for translation of, access to, and interpretation of the Biblical text.

The essay is interesting, and I highly recommend it, particularly to those who may be a little surprised to know that computers are doing so much for exegetes. I write, though, because the essay reminds me of some of the concerns I have had for some time about the enthusiasm regarding the use of software on the Bible. Now, don’t get me wrong. The combination of software and ancient texts, particularly the Bible, is incredibly interesting to me. As a Biblical scholar originally trained as a software developer, nothing could be more interesting. Like many interviewed for the Christianity Today piece, I am enthusiastic about the potential of mining and visualization tools for the Bible. I think we’ve (well, they’ve) just begun to scratch the surface of the potential of using computers to mine ancient texts. Projects like Perseus and the TLG provide a glimpse of what is possible. Computers allow us to see a lot of things in large sets of texts that may not be apparent to the analog eye.

My discomfort, though, comes from the almost-utopian rhetoric often used to describe the near future in which computer tools help us read the Bible. I think Smith’s Franken-Bible is cool, and like many of the other tools Smith has created, it shows incredible ingenuity and technical skill. At a minimum it gives users a small sense of the many justifiable choices translators face, and the countless permutations possible for reading the text. However, I’m not sure what value a tool like this adds. That is, does this help us to read the Biblical text? To be fair, Smith himself agrees that the new reading experience created by his tool is not revolutionary: “I’m not saying that I think this development is a particularly great one for the church, and it’s definitely not good for existing Bible translations.” Yet the author of the essay speaks as though the Franken-Bible is just one part of a larger revolution in Biblical reading, the result of digital technologies’ freeing the text and its readers: “Networked code has made us all small-scale publishers, travel agents, critics, and a hundred other job titles once left to trained professionals. Now technology is promising–or threatening–to turn all of us into Bible translators and expositors, too.” The article extends this into a somewhat predictable slander of the established guild of Biblical scholars, seemingly hoping that new tools like Smith’s will cut out the scholarly middleman, creating for us all the experience of engaging the text without the need for formal training. The author extols the utopian potential of a text finally freed from the hands of “trained professionals” as the continuation of something started with Gutenberg, Luther, and others: “It takes the Protestant claim that we don’t need priests to interpret the Bible for us and says we don’t need academics and other experts to translate it for us, either.
It thereby significantly undermines the authority of scholars and their convening institutions (translation committees and publishers).” Smith and his tech-savvy colleagues have freed us, so the article suggests, from the need for all those silly seminary and classics courses. Algorithms have finally delivered the possibility of living out, to use the absurdly misunderstood Reformation slogan, sola scriptura.

Such a utopian vision of what new Bible software can do for us, I believe, reflects a significant misunderstanding of the process of reading and translation. As I see it (and it seems Smith does as well), tools like the Franken-Bible do very little to help us “read” the Biblical text. Let’s consider an example. If I am interested in understanding Paul’s (very opaque) phrase in Galatians 3:1, the Franken-Bible presents me with the option of reading Paul as saying that Jesus Christ was publicly portrayed as crucified, clearly portrayed as crucified, or vividly exhibited as crucified. Which is the “right” answer? There’s great value in seeing that all are options, something that BlueLetterBible will show. But how is an uninformed reader to choose among the three? Presumably the power of this tool is that the user can make his or her own choice and construct a custom translation. But is it based on what sounds good? On what the reader wants the text to say?
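The mechanism at issue is easy to sketch. The following is a minimal, hypothetical illustration of a Franken-Bible-style selector, not Smith’s actual tool: each phrase maps to a menu of published renderings (here, the three Galatians 3:1 options above), the reader picks one per phrase, and the number of possible “translations” multiplies with every phrase added. All names here are my own invention for illustration.

```python
# Hypothetical sketch of a "Franken-Bible"-style selector (not Smith's tool).
# Each verse/phrase maps to several published renderings; the reader picks
# one per phrase to assemble a custom translation.
OPTIONS = {
    "Gal 3:1 (proegraphe estauromenos)": [
        "publicly portrayed as crucified",
        "clearly portrayed as crucified",
        "vividly exhibited as crucified",
    ],
}

def assemble(choices):
    """Build a custom translation: choices maps a phrase to the index of the
    reader's preferred rendering (defaulting to the first option)."""
    return {ref: opts[choices.get(ref, 0)] for ref, opts in OPTIONS.items()}

def permutations(options):
    """Count every distinct 'Franken-Bible' the option set permits."""
    count = 1
    for opts in options.values():
        count *= len(opts)
    return count

custom = assemble({"Gal 3:1 (proegraphe estauromenos)": 2})
print(custom)              # the reader's hand-picked rendering
print(permutations(OPTIONS))  # grows multiplicatively with each phrase
```

Note what the sketch makes plain: nothing in the selection step encodes a reason for preferring one rendering over another, which is precisely the worry raised above.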

This power of the computer only masks for us the real, complex beauty of actually reading Paul’s phrase. There are several elements of reading that Smith’s tool can’t help with, and in fact might distract us from. If we were to ramp up Smith’s tool and allow the Franken-Bible to present a reader with every legitimate translation of the Greek phrase προεγράφη ἐσταυρωμένος, would we be any closer to understanding what Paul is trying to say about the Galatians’ past experience? No, because understanding this phrase comes not from cataloguing its possibilities but from reading it in context. Most obviously, the meaning of the phrase in Galatians depends on its context in Galatians. A word means, to use the phrase of a wise Greek teacher of mine, not by itself, not in a sentence, but at a minimum in a paragraph. The relevant data set for understanding προεγράφη ἐσταυρωμένος is not the set of all possible senses of the phrase in Greek literature (the direction Smith’s tool leads us in). Treating a word’s meaning as something recoverable from its lexical history alone is a version of what exegetes call the “root fallacy.” We don’t learn what a word means by reading all the other uses of that word across time; rather, we read the term in context. So, to understand προεγράφη ἐσταυρωμένος, we need to understand something about Paul’s understanding of Christ’s crucifixion, the Galatian community, visual experience, etc. That is, to understand what Paul is saying, we need to read more Paul! Smith’s tool, though, doesn’t help us do that. In fact, it suggests that the meaning can be captured by one of the options the Franken-Bible presents. Context is still king, and so I don’t hold the enthusiasm for a tool like Smith’s that the article clearly does: I’m not sure it helps us read what Paul means/meant, even if it helps us see what particular phrases in Paul’s letters may have meant.

The root fallacy, though, is not the primary reason I am less enthused about the potential of these tools than the author of this article is. The excitement for computers making the process of reading the Bible easier misunderstands both the process of reading and the real value of reading the text in its original language. For me, the beauty of reading the text in its original language is not that it endows the scholar with some special “authority,” as the anti-academe slant of the article would suggest. Rather, encountering the Bible in the original Hebrew or Greek is a reminder of how strange and foreign a text necessarily is for any reader. It is this “strangeness” of a text that is essential to the process of reading. In order for meaning to be created in the process of reading, the reader’s horizon, that is, his or her prior understanding and experience, must merge/melt with the horizon pressed forward by the text itself (HT: Gadamer); that is, the reader and the message of the text must be different. The construction of meaning is a conversation between the voice(s) of the text and the voice(s) of the reader. Without a conversation, the text will function merely as a mirror, reflecting what the reader wants or expects the meaning to be. Without a principled reason for choosing among the Franken-Bible’s options for a particular phrase, my concern is that the reader’s expectation and/or desire for meaning will determine the translation. This article is another example of the many voices that speak as if the computer is at long last going to allow us to see things in this text that our unfortunately analog forebears failed to see these last 1900 years. What is often missing in these conversations, though, is recognition of the messy and all-too-human process of reading. The computer can do a lot for us. It can present texts in new ways. It can find patterns in texts we may not see. It can parse all the nouns, verbs, and adjectives in the text.
What the computer cannot do, though, is read the text. Reading as encounter is a uniquely human process that the machines, no matter how fast or clever, will never duplicate. This is because the process of reading is a process of meaning construction, wherein the reader plays as central a role as the text (if not more central). If that balance between text and reader is not maintained, then we can invent all the tools we want, but we’re getting no closer to “reading” than was Jerome sitting alone in his study with his quite-analog manuscripts. Without an encounter with, challenge from, and conversation resulting from a text, we’re not reading a text; we’re simply scanning it.

So, I welcome all the textual innovation manifest at conferences like BibleTech. There is no reason to stop innovating. However, let us not equate fancy tools with the deceptively complex process of reading. No matter what form the text takes (digital, analog, Greek, English), if we do not let it encounter us then we function not as readers, but as scanners of a text. This, I suppose, is the great irony of praising digital tools like the Franken-Bible. If we see these tools as the great deliverers of meaning from the ivory tower of academe, then we reflect hermeneutical assumptions that are rather machine-like. That is, if we think reading is a search process, a journey for a message encoded in Greek and Hebrew, then we are functioning not as readers, but as machines. We function as decoders, programmed to search for a static meaning in the text. Indeed, if this is the understanding of “reading,” then technologies like Smith’s are really exciting, for they make that process much easier. I don’t see reading like that, though. I don’t think there’s a static “meaning” that I’m looking for in my imperfect, analog way, a way that can be improved by the efficiencies of digital technologies. I don’t think there’s something that resides in a text that I’m looking to find, that a machine can radically help me find. I see reading as an encounter. And so while the machine can help create this encounter, nothing it does can replace it. So I encourage development, but even more so, I encourage reading. Whether one is reading in Greek or Hebrew, in the NRSV, or even in the Franken-Bible, the danger is thinking the job is done when we’ve found the perfect algorithm. No matter what form the text comes in, if we don’t let our experience and its message fight one another, I don’t think we’ve read it yet.

I’ve much more to say about how technology can help us in our process of reading, but that will have to wait for future posts. For now, let me close by reiterating that I’m in awe of Smith’s technical prowess, but I don’t want us to think the computers are going to do all our work for us…