Shifting Paradigms of Knowledge, or a Defense of Wikipedia as a Pedagogical Tool

As librarians, it is almost second nature for us to devalue Wikipedia. Frequently we remind students that Wikipedia is not a tool for research. We harp on students not to cite Wikipedia in their academic work. We cite famous examples of Wikipedia’s inaccuracy. We prop up the straw man of an undergraduate turning in a paper full of Wikipedia citations and the sea of red ink from the professor that awaits. All of this is well-intentioned and important, and I agree that Wikipedia should not be used in formal academic work. In my work in an academic library, I often warn students against the danger of relying on Wikipedia, and I steadfastly refuse to allow them to cite it in academic papers.

However, I’m concerned that we librarians (and many academics in general) are missing the point in our all-out war against the crowdsourced encyclopedia. First, it should be recognized that we are likely overstating the case about the inaccuracy of Wikipedia. Numerous studies have examined the accuracy of Wikipedia, and for the most part, researchers have concluded that it’s far more accurate than our public decrial would suggest, with some suggesting that it’s just as accurate as other reference works. This is particularly the case for heavily edited articles, where one sees the value of many eyes on a small amount of content. That Wikipedia is not as inaccurate as we might suggest is even clearer when one considers the false standard to which we hold it. The rhetoric about Wikipedia as inferior to more traditional reference sources suggests that those traditional sources are themselves infallible, given that they are produced by credentialed experts. However, anyone who has worked closely with reference materials in a particular field will admit how uneven and at times inaccurate even these print resources can be.

But it’s not the under-appreciated accuracy of the tool that suggests to me that we may be doing a disservice to students by dissuading them from considering it a tool of the academy. Rather, in leading people away from Wikipedia, we are shielding them from seeing the shifting paradigm of knowledge that a tool like Wikipedia represents. If we never help students see just how different the philosophical underpinnings of a crowdsourced encyclopedia are from those of our traditional reference sources, we miss a real opportunity to teach them something about the construction of meaning.

I want to outline two reasons I believe Wikipedia can be an effective pedagogical tool. First, it crowdsources our knowledge base and allows the community to serve as a check on individuals’ arguments. The primary reason so many of us are against Wikipedia (and other tools like it) is that we are operating from a top-down understanding of data and information. That is, we understand information as something that is in the realm of the credentialed expert (there’s a bit of job security behind this idea, don’t you think?). In the traditional mind of the academy, reflected in the decrying of tools like Wikipedia, information is something that the rest of you must only passively accept, because you really don’t know what you’re talking about. Leave it to us, the people writing reference articles based on our long years of study, to report what you need to know. For example, if I want to know who fought in the Spanish-American War, I should turn to a reference work written by someone with the appropriate degrees in 19th-century American history and ask him or her (though traditionally it has been “him”) what happened in that war. But who is to say that this particular historian I chose knows what he is talking about? Or who is to say that he’s telling me what actually happened and not his idiosyncratic take on things? Where’s the check on his information? Perhaps it comes in reviews from other scholars, but those can be difficult to track down, difficult to follow, and sparse in their coverage. What if there were a place where the description of the war was under the scrutiny of everyone who knew something about that war? Wouldn’t we tend to trust the explanation of the war that had been vetted by millions, rather than one that had been vetted by one (or one and a team of editors), even if most of those millions didn’t have advanced degrees?
Surely we are not there yet with Wikipedia, but you can see how important the democratization-of-knowledge model it is built upon can be. Just as open-source software has the benefit of a powerful user and editorial community, so open-source information like that found on Wikipedia has the power of millions of checks upon it. We are in an incunabular period with this paradigm of knowledge, but it is certainly where we are heading. A roomful of opinions is going to be more accurate than a single opinion, so long as we can bring enough well-intended voices into the room to cancel out all the crazies. Wikipedia still has its famous inaccuracies, but I think we’re headed in the right direction. The fact that the more heavily read articles on Wikipedia tend to be the more accurate ones suggests that this crowdsourced information world is the way to go. Wikipedia is not perfect, but I am concerned that in decrying it we are missing the boat on the paradigm shift that it represents.

A second benefit, closely related, is far more interesting to me, and it is one that I’m only beginning to appreciate. A tool like Wikipedia, because it is crowdsourced, reveals as problematic the traditional hierarchy of data, information, knowledge, and wisdom. In the traditional view (and this is far more complex than I can represent here; check out the Wikipedia article on it!), there is objective data that can be gathered by the experts. Information is a set of inferences made based on that data, or the organization of that data into meaningful conclusions. Knowledge is the synthesis of information in a given context: the subjective interpretation of information in particular situations for particular purposes. Wisdom, then, is the future-looking application of knowledge, an even more subjective rendering of the data, filtered through several subjective levels. In a traditional view, reference sources like encyclopedias are understood to function as data organized into information. That is, reference works present a somewhat objective set of data, organized and described. In traditional encyclopedias, this information was presented as objective fact, verified by the credentialed individual who was asked (and often paid) for his/her expert analysis. Because there was very little check on the information (essentially an editorial check by someone who was not as specialized as the expert writing the article), the reference work presented its article as objective information with which a reader could do whatever he or she wanted. That is, knowledge and wisdom could be created from the information provided, given the context in which the reader lives and works or the particular purpose for which he/she reads. A crowdsourced encyclopedia, though, pulls back the curtain on this process and shows that there is no such distinction between the “objective” information and the subjective construction of knowledge and wisdom based on that information.
Because the information is constructed socially, and because the process of that construction is transparent (we can see the editorial history and read the dialogue surrounding that history), we recognize that there is no “objective” data, regardless of the credentials of an expert providing it. Rather, all data and information are subjective, the product of the perspective and context of the data collector. In fact, if we operate under the DIKW hierarchy, we might ask whether the first few layers of the hierarchy are even possible. If the postmodern turn has taught us anything, it has taught us to be suspicious of truth claims, particularly those made by individuals. The presence of a community, explicitly operating from particular (and different!) contexts, reveals this messy process, even when trying to report the most “objective” data. It is revealing, for example, that basic “facts” about the life and presidency of George W. Bush are so vigorously debated. This shows us that even trying to present “data” is a subjective act. So I welcome Wikipedia, because it does the hard work of showing us what many late 20th- and early 21st-century philosophers have been telling us: all is subjectivity.

So, am I going to start suggesting that people use Wikipedia in research? Well, no, but yes. That is, I stand by the informal librarians’ creed that Wikipedia is not a scholarly source. There is a distinction between the article on “St. Paul” on Wikipedia and the one in the Dictionary of Biblical Interpretation. I recognize that citing the Wikipedia article is going to be met with much red ink. So, don’t do it. However, I do think using Wikipedia as a pedagogical tool is important, and I’m concerned that our refrain of “don’t use Wikipedia” is costing our students a valuable opportunity to catch a clear glimpse of the shifting paradigms of knowledge that are reflected in such a tool. There are tangible benefits to using Wikipedia in the classroom. First, it gives students a forum to present their work and make real contributions. The ideals of Web 2.0 are noble ones; we are all creators and we all have something to say. This is difficult for students to recognize, as they spend much of their time working on artificial assignments that will rarely be seen by a set of eyes other than their professor’s. Using Wikipedia in the classroom can give students an opportunity to contribute, to be part of the world’s construction of knowledge. Second, it shows students how meaning is created. It cautions against the far-too-common assumption that a) there is objective data and b) only the experts really know it. Though Wikipedia is not perfect in any way, it moves in the right direction. It is built on the postmodern understanding that all meaning is constructed. It allows us all, therefore, to serve as the arbiters of that meaning. I don’t feel qualified to be a sole arbiter on much beyond my narrow field of training (and even then I’d be a bit cautious). However, I do believe that alongside millions of others, I can come to a pretty fair consensus on what happened, what it meant, and why it’s important. So I encourage teachers to use Wikipedia.
Don’t use it as a singular source of information (though again, it’s not a bad one). Rather, use it as a publishing platform and a teaching platform. Encourage students to edit and create entries, hold an edit-a-thon, make a student assignment the assessment of the history of a Wikipedia article, or do anything else to get students to see how this paradigm is shifting. For goodness’ sake, at least read this prescient article by Roy Rosenzweig (from 2006!) about the benefits of this tool.

The millions in the room will never agree on even the most mundane piece of data, but that’s the beauty of it. We can see the conversation as it happens and see before our eyes the truth of ideas like those of Paul Ricoeur, who reminds us that no event has meaning until it is put into context with other events, and that that process is a subjective one, dependent upon the whims and desires of the one doing the contextualizing. (Not sure who Ricoeur was? Look him up on Wikipedia! [But check some of the sources in refereed publications!])

Schweitzer, Strauss, and Finding the Courage to be Intellectually Consistent in Faith

Because of an ongoing conversation with a colleague, I’ve been revisiting Albert Schweitzer’s The Quest of the Historical Jesus (a poor English translation of the original German title: Von Reimarus zu Wrede: Eine Geschichte der Leben-Jesu-Forschung [something like “From Reimarus to Wrede: A History of ‘Life of Jesus’ Research”]). As is the case with anything I read from Schweitzer, there is much to talk about here. The man not only has keen insight into the (in his view troubled) search for a historical Jesus, but he has keen insight into the human mindset, particularly the mindset of humans (like him) not content to accept the answers provided to them as faith claims. He is always wanting to know “why?” or “says who?” (in addition, he writes beautifully). He also recognizes, though, that this desire to know the truth is often dulled by the voices of tradition or authority.

I didn’t have to read far in the opening chapter to find something that I think is worth spending a lot of time considering (and, not surprisingly, it has very little to do with a quest for the historical Jesus). In outlining what he plans to do in this seminal work, Schweitzer is quick to introduce the ground-breaking work of David Friedrich Strauss, who is, in Schweitzer’s mind, one of the few great heroes of the so-called “quests” (in fact, in the chapter detailing Strauss’ work, Schweitzer will refer to Strauss as a prophetic figure, pre-figuring Schweitzer’s own work on Jesus). What Schweitzer so admires about Strauss, more so than his shocking conclusions about the historicity of the events recorded in the New Testament gospels, is Strauss’ courage to carry through his intellectual program, to fight against the spirit of his time that was urging him to find a Jesus who looked a lot like the Jesus the church had always found. Unlike so many before and after him, Strauss maintains intellectual consistency in reporting what he finds when he looks carefully and critically at the gospels.

Some context for Strauss’ work and career may be necessary to invite appreciation for the depth of Schweitzer’s admiration. Strauss’ work on Jesus, The Life of Jesus, Critically Examined (Das Leben Jesu, kritisch bearbeitet, 1835, ET 1846), was a bombshell, not only for Biblical scholars but for Christians around the world. Strauss walks slowly and systematically through the gospel accounts of Jesus’ life, showing that at every turn (in his mind) it is more likely that the gospel narrative is the invention of a later church, written as apologetic history to give legs to the burgeoning church. For example, Strauss reads Matthew’s narrative of Jesus’ birth in Bethlehem, flight to Egypt, and return to Nazareth alongside Luke’s account of Jesus’ birth in Bethlehem, dedication in the temple, and then return to Nazareth, concluding that it is not plausible that both gospels are recording events “as they happened.” Strauss shows in great detail how the chronologies simply don’t work, given what we know about geography, travel times, etc. He finds, in most places in the gospels, a clear motivation by a later church to tell stories about Jesus to fit a prophetic hope. The gospels are, for him, fantastic retellings/inventions, rather than “history” in the sense his positivist contemporaries (and most of ours, for that matter) were thinking. Strauss’ book is an impressive work of historical method, but it was met with great resistance and outcry. In many churches today, of course, such rigid historical analysis would be “welcomed” as blasphemy. So much more was the case, though, in Strauss’ time and context. The book made him infamous across the continent, and it cost him his job and in many ways his entire academic career. Seminarians today worry about their professor “taking my Jesus from me.” Well, the reaction to Strauss was far more severe. He lost his job, and he struggled for most of his life to find another.

Now, there’s much more to say about Strauss, but let me return to Schweitzer, my reason for writing. Schweitzer recognizes the brilliance of Strauss’ account, and he praises him throughout the book as a good reader of history. However, his reason for praise so early in the book is Strauss’ intellectual honesty, consistency, and, above all, courage. What makes Schweitzer most angry in his summary of the many quests is not bad history (though he doesn’t like that, certainly), but timid history. That is, Schweitzer’s famous conclusion that the 19th-century “questers” are guilty of writing themselves into the Jesus they “find” (“There is no historical task which so reveals a man’s true self as the writing of a Life of Jesus”) is really an indictment of the fear that keeps his predecessors from writing about the Jesus they really found. Instead, he accuses them of being influenced by the “spirit of the times” and shaping their Jesus to be more warmly received. For this reason, Strauss stands alone (well, he stands above a few others like him, notably Reimarus, who did much the same but allowed his name to be put on his work only after his death). Strauss did not pull any punches; he wrote about the Jesus he discovered when he read the gospels as carefully and critically as he would any other work of history. For his courage, he garners praise from Schweitzer, all the more so because he garnered so much criticism from his contemporaries.

Let me share a quote from Strauss that Schweitzer includes in his introduction. The quote is from an older Strauss, as he looks back at his decision to write that book and his assessment of the notoriety (infamy?) that resulted from his decision to publish his work on Jesus (at age 27!):

“I might well bear a grudge against my book, for it has done me much evil (‘And rightly so!’ the pious will exclaim). It has excluded me from public teaching in which I took pleasure and for which I had perhaps some talent; it has torn me from natural relationships and driven me into unnatural ones; it has made my life a lonely one. And yet when I consider what it would have meant if I had refused to utter the word which lay upon my soul, if I had suppressed the doubts which were at work in my mind – then I bless the book which has doubtless done me grievous harm outwardly, but which preserved the inward health of my mind and heart, and, I doubt not, has done the same for many others also.”

Schweitzer is amazed by the courage this man showed, as am I. Regardless of how one feels about the Jesus that Strauss uncovers in his work (and trust me, there’s much to criticize about his method and his conclusions), I, like Schweitzer, stand in awe at the boldness of his publication. Far too often in the church (and even the academy), readers feel pressure to temper “what I really think” with “what I’m supposed to think,” and I think we all suffer because of it. It is a shame that so many people who are critical and creative thinkers Monday through Saturday feel the need to turn that mode off on Sunday. The result is that we deny “the inward health of [our] mind and heart” for the sake of church or academic tradition. We may feel like the narratives and explanations we’ve been fed wouldn’t stand up to the type of logical or critical analysis we apply to other parts of our life, but we back away from applying such analyses, fearful of how the traditions we’ve been raised on might fare in light of careful reading and study. A good and thoughtful pastor reiterated this for me this morning when he decried our collective unwillingness to let the open-endedness, contradictions, and confusions in the text lead us where they might. I hope for the type of boldness that Strauss showed, and I hope others will exhibit it as well. Jesus encourages a love for God with heart, soul, strength, and, yes, mind (Lk 10:27). I think we often forget that last part. Let us remember that if the heart is in the right place, the mind engaged in the text is free to wander, to (gasp!) play. We might begin a bit scared about where that will lead, but if we let that fear dominate, leaving us only with a blind, uncritical acceptance of what we’ve been told, I think the consequences are far more dangerous.

So, I take my cue from one many might consider an arch-heretic. Thank you, David Friedrich Strauss, and thank you, Albert Schweitzer (certainly no heretic in the popular perception, though reading him carefully suggests he perhaps should be considered one!), for prompting us to intellectual consistency and boldness. I think Strauss would probably encourage us in the words of his German compatriot Martin Luther: pecca fortiter (sin boldly).

Algorithmic Exegesis, Franken-Bibles, and Reading the Text

I just read a recent essay in Christianity Today (March 2014) entitled “The Bible in the Original Geek” (http://www.christianitytoday.com/ct/2014/march/bible-in-original-geek.html?paging=off; paywall present). The essay begins by looking in on BibleTech, highlighting the incredible advances smart people like Stephen Smith have made by applying development tools to the Biblical text, using computers to find the patterns and potentials in this relatively small corpus of texts that Biblical scholars are so interested in. The early part of the essay focuses on what Smith calls a “Franken-Bible maker.” Using the computer, Smith’s tool gives users a “choose your own adventure” experience of Biblical translation, offering multiple options per phrase of the Greek New Testament (presumably pulling from several major English translations). For each phrase, the user chooses his or her favorite, resulting in a translation tailored to the whims of those untrained in the ancient languages. The essay moves from BibleTech to focus primarily on Logos software, detailing its expansion and the exciting potential of software for translation of, access to, and interpretation of the Biblical text.

The essay is interesting, and I highly recommend it, particularly to those who may be a little surprised to know that computers are doing so much for exegetes. I write, though, because the essay reminds me of some of the concerns I have had for some time about the enthusiasm regarding the use of software on the Bible. Now, don’t get me wrong. The combination of software and ancient texts, particularly the Bible, is incredibly interesting to me. As a Biblical scholar originally trained as a software developer, nothing could be more interesting. Like many interviewed for the Christianity Today piece, I am enthusiastic about the potential of mining and visualization tools for the Bible. I think we’ve (well, they’ve) just begun to scratch the surface of the potential of using computers to mine ancient texts. Projects like Perseus and the TLG provide a glimpse of what is possible. Computers allow us to see a lot of things in large sets of texts that may not be apparent to the analog eye.

My discomfort, though, comes from the almost-utopian rhetoric often used to describe the near future envisioned with computer tools helping us read the Bible. I think Smith’s Franken-Bible is cool, and like many of the other tools Smith has created, it shows incredible ingenuity and technical skill. At a minimum it gives users a small sense of the many justifiable-choices translators face, and the countless permutations possible for reading the text. However, I’m not sure what value a tool like this adds. That is, does this help us to read the Biblical text? To be fair, Smith himself agrees that the new reading experience created by his tool is not revolutionary: “I’m not saying that I think this development is a particularly great one for the church, and it’s definitely not good for existing Bible translations.” However, the author of the essay speaks as though the Franken-Bible is just one part of a larger revolution in Biblical reading, the result of digital technologies’ freeing the text and its readers: “Networked code has made us all small-scale publishers, travel agents, critics, and a hundred other job titles once left to trained professionals. Now technology is promising–or threatening–to turn all of us into Bible translators and expositors, too.” The article extends this into a somewhat-predictable slander of the established guild of Biblical scholars, seemingly hoping that new tools like Smith’s will cut out the scholarly middle man, creating for us all the experience of engaging the text without the need for formal training. The author extols the utopian potential of a text finally freed from the hands of “trained professionals” as the continuation of something started with Gutenberg, Luther, and others: “It takes the Protestant claim that we don’t need priests to interpret the Bible for us and says we don’t need academics and other experts to translate it for us, either. 
It thereby significantly undermines the authority of scholars and their convening institutions (translation committees and publishers).” Smith and his tech-savvy colleagues have freed us, so the article suggests, from the need for all those silly seminary and classics courses. Algorithms have finally delivered the potential of living, to use the absurdly misunderstood Reformation slogan, sola scriptura.

Such a utopian vision of what new Bible software can do for us, I believe, reflects a significant misunderstanding of the process of reading and translation. As I see it (and it seems Smith does as well), tools like the Franken-Bible do very little to help us “read” the Biblical text. Let’s consider an example. If I am interested in understanding Paul’s (very opaque) phrase in Galatians 3:1, the Franken-Bible presents me with the option of reading Paul as saying that Jesus Christ was publicly portrayed as crucified, clearly portrayed as crucified, or vividly exhibited as crucified. Which is the “right” answer? There’s great value in seeing that all are options, something that BlueLetterBible will show. But how is the uninformed reader to choose between the three options? Presumably the power of this tool is that the user can make his or her own choice and construct a custom translation. But is the choice based on what sounds good? On what the reader wants the text to say?
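For readers who, like me, were trained as developers, the mechanics of such a tool are easy to sketch. Here is a toy illustration (in Python) of the kind of phrase-by-phrase assembly described above. The data and function names are my own invention, not Smith’s implementation; the three renderings of the final phrase are the ones mentioned in this example, and the others are illustrative renderings in the style of common English translations:

```python
# Toy sketch of a "Franken-Bible"-style picker: for each phrase of a verse,
# the user chooses among candidate renderings drawn from existing
# translations, and the tool stitches the choices into a custom verse.
# All data here is illustrative, not taken from Smith's actual tool.

options = {
    "Gal 3:1a": ["You foolish Galatians!", "O foolish Galatians!"],
    "Gal 3:1b": ["Who has bewitched you?", "Who has cast a spell on you?"],
    "Gal 3:1c": [
        "Before your eyes Jesus Christ was publicly portrayed as crucified.",
        "Before your eyes Jesus Christ was clearly portrayed as crucified.",
        "Before your eyes Jesus Christ was vividly exhibited as crucified.",
    ],
}

def assemble(choices):
    """Build a custom verse from the user's per-phrase selections,
    joining the chosen renderings in canonical phrase order."""
    return " ".join(options[ref][i] for ref, i in sorted(choices.items()))

# A reader picks renderings 0, 0, and 2 for the three phrases.
custom = assemble({"Gal 3:1a": 0, "Gal 3:1b": 0, "Gal 3:1c": 2})
print(custom)
```

The point of the sketch is that every one of the twelve possible permutations here is a grammatical English verse; nothing in the mechanism itself tells the reader which choice best fits Paul’s argument in context. That judgment is exactly the part the tool cannot supply.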

This power of the computer only masks for us the real, complex beauty of actually reading Paul’s phrase. There are several elements of reading that Smith’s tool can’t help with, and in fact might distract us from. If we were to ramp up Smith’s tool and allow the Franken-Bible to present a reader with every legitimate translation of the Greek phrase προεγράφη ἐσταυρωμένος, would we be any closer to understanding what Paul is trying to say about the Galatians’ past experience? No, and the reason is that understanding this phrase comes not from cataloguing the possibilities for the phrase. Most obviously, the meaning of the phrase in Galatians depends on its context in Galatians. A word means, to use the phrase of a wise Greek teacher of mine, not by itself, not in a sentence, but at a minimum in a paragraph. The relevant data set for understanding προεγράφη ἐσταυρωμένος is not all the potentials for the phrase in Greek literature (the direction Smith’s tool leads us in). A purely lexicographical search for the meaning of a term or phrase is often referred to as the “root fallacy.” We don’t learn what a word means by reading all the other uses of that word across time. Rather, we read the term in context. So, to understand προεγράφη ἐσταυρωμένος, we need to understand something about Paul’s understanding of Christ’s crucifixion, the Galatian community, visual experience, etc. That is, to understand what Paul is saying, we need to read more Paul! Smith’s tool, though, doesn’t help us do that. In fact, it suggests that the meaning can be captured by one of the options the Franken-Bible presents. Context is still king, and so I don’t share the enthusiasm for a tool like Smith’s that the article clearly holds: it might help us survey what particular phrases in Paul’s letters may have meant, but I’m not sure it helps us read what Paul means/meant.

This avoidance of the root fallacy, though, is not the primary reason I am less enthused about the potential of these tools than the author of this article. The excitement for computers making easier the process of reading the Bible misunderstands the process of reading and the real value of reading the text in its original language. For me, the beauty of reading the text in its original language is not that it endows the scholar with some special “authority,” as the anti-academe slant of the article would suggest. Rather, encountering the Bible in the original Hebrew or Greek is a reminder of how strange and foreign a text necessarily is for any reader. It is the “strangeness” of a text that is essential for the process of reading. In order for meaning to be created in the process of reading, the reader’s horizon, that is, his/her prior understanding and experience, must merge/melt with the horizon pressed forward by the text itself (HT: Gadamer); that is, the reader and the message of the text must be different. The construction of meaning is a conversation between the voice(s) of the text and the voice(s) of the reader. Without a conversation, the text will function merely as a mirror, reflecting what the reader wants or expects the meaning to be. Devoid of a reason for choosing between the Franken-Bible options for a particular phrase, my concern is that the reader’s expectation and/or desire for meaning will determine translation. This article is another example of the many voices that speak as if the computer is at long last going to allow us to see things in this text that our unfortunately-analog forebears failed to see these last 1900 years. What is often missing in these conversations, though, is the recognition of the messy and all-too-human process of reading. The computer can do a lot for us. It can present texts in new ways. It can find patterns in texts we may not see. It can parse all the nouns, verbs, and adjectives in the text.
What the computer cannot do, though, is read the text. Reading as encounter is a uniquely human process that the machines, no matter how fast or clever, will never duplicate. This is because the process of reading is a process of meaning construction, wherein the reader plays a role as central as (if not more central than) the text. If that balance between text and reader is not maintained, then we can invent all the tools we want, but we’re getting no closer to “reading” than was Jerome sitting alone in his study with his quite-analog manuscripts. Without an encounter with, challenge from, and conversation resulting from a text, we’re not reading a text; we’re simply scanning it.

So, I welcome all the textual innovation manifest at conferences like BibleTech. There is no reason to stop innovating. However, let us not equate fancy tools with the deceptively complex process of reading. No matter what form the text takes (digital, analog, Greek, English), if we do not let it encounter us, then we function not as readers but as scanners of a text. This, I suppose, is the great irony of praising digital tools like the Franken-Bible. If we see these tools as the great deliverers of meaning from the ivory tower of academe, then we reflect hermeneutical assumptions that are rather machine-like. That is, if we think reading is a search process, a journey for a message encoded in Greek and Hebrew, then we are functioning not as readers, but as machines. We function as decoders, programmed to search for a static meaning in the text. Indeed, if this is the understanding of “reading,” then technologies like Smith’s are really exciting, for they make that process much easier. I don’t see reading like that, though. I don’t think there’s a static “meaning” that I’m looking for in my imperfect, analog way, a way that can be improved by the efficiencies of digital technologies. I don’t think there’s something that resides in a text that I’m looking to find, that a machine can radically help me find. I see reading as an encounter. And so while the machine can help create this encounter, nothing it does can replace it. So I encourage development, but even more so, I encourage reading. Whether one is reading in Greek or Hebrew, in the NRSV, or even in the Franken-Bible, the danger is thinking the job is done when we’ve found the perfect algorithm. No matter what form the text comes in, if we don’t let our experience and its message fight one another, I don’t think we’ve read it yet.

I’ve much more to say about how technology can help us in our process of reading, but that will have to wait for future posts. For now, let me close by reiterating that I’m in awe of the technical prowess of Smith, but I don’t want us to think the computers are going to do all our work for us…