Linguistics and the Bible: Retrospects and Prospects

About this ebook

In 2016, the Centre for Biblical Linguistics, Translation, and Exegesis (CBLTE), a research center located at McMaster Divinity College, hosted the annual Bingham Colloquium. Scholars from around North America were invited to participate in a collegial and collaborative dialogue on what is currently happening (or could happen) at the intersection of linguistics and biblical studies, particularly in regard to the linguistic study of biblical languages, their translation, and the way that linguistic methods can contribute to the interpretation of the biblical texts. This volume of essays publishes many of the presentations that took place at the Colloquium.
Language: English
Release date: Jul 12, 2019
ISBN: 9781532659126

    Book preview

    Linguistics and the Bible - Stanley E. Porter

    Introduction

    A Retrospect and Some Prospects

    Stanley E. Porter, Christopher D. Land, and Francis G. H. Pang

    Human beings have been thinking about language for a long time, at least as far back as Plato and Aristotle in the Western intellectual tradition, and language continues to be an object of detailed study within the diverse and ever-changing field of linguistics. Similarly, people have been describing, translating, and interpreting the biblical languages and biblical texts for a very long time, more recently with help from linguistically informed biblical scholars who apply various linguistic approaches to the text of the Bible and its languages.

    The Centre for Biblical Linguistics, Translation, and Exegesis operates at McMaster Divinity College in Hamilton, Ontario, Canada, in order to support the linguistic exploration of ancient languages and texts, in particular the Greek of the New Testament. By various means, the Centre supports individuals and projects that apply linguistic methods to the Bible for the purposes of linguistic analysis, translation, and exegesis. It also hosts events, with the goal of fostering collegial, collaborative dialogue regarding the biblical languages, their translation, and the ways that linguistic methods can contribute to the interpretation of biblical texts.

    On June 17, 2016, the Centre hosted the annual Bingham Colloquium at McMaster Divinity College in Hamilton, Ontario. This event, which was preceded the previous afternoon by more informal discussion about the activities of the Centre and the possibilities for linguistics in biblical studies, allowed faculty and students from McMaster Divinity College to interact with one another and with interested scholars from other institutions. It included some plenary papers, some parallel papers, and a great many casual conversations during coffee breaks and mealtimes. It allowed new relationships to form and led to productive reflection on some of the ways in which linguistics has already entered into the discourse of biblical studies and some of the ways in which it might play a greater role in the future. This volume includes a number of the papers presented at that conference. We were unfortunately not able to publish them all, nor can we publish the stimulating conversations that were held between papers. It is our hope that the CBLTE will continue to host the Bingham Colloquium periodically, so that these unpublished conversations can not only continue but also include more voices. The prospects surrounding linguistics and biblical studies are promising, but the past has shown that the road to progress is slow and that more workers are needed. It has also shown that progress happens more quickly through collaborative exploration. At the Centre, we do not seek to advocate only one way of describing the biblical languages, but to encourage the generation of new descriptions that can move our understanding of Hebrew, Aramaic, and Greek out of the past and present and into the future.

    This volume is divided into two major sections: Linguistics, and Translation and Exegesis. The first major section of this volume contains essays that discuss the application of linguistics in biblical studies from a variety of perspectives. Randall Tan opens this section with an essay detailing the views he has developed over the course of his career about the ways modern linguistics should be integrated into the wider enterprise of biblical studies. His essay focuses on four main topics: (1) the reason why biblical studies needs the insights of modern linguistics, (2) the most productive aspects of modern linguistics that biblical scholars can utilize, (3) the kinds of corpora, tools, and communities needed to facilitate the incorporation of modern linguistics into biblical studies, and (4) the kinds of freedom for reuse necessary to facilitate this work. Many of the points made in Tan’s essay are then exemplified in the next essay, where Christopher Land and Francis Pang narrate the story of OpenText.org, a freely accessible, web-based initiative to develop annotated Greek texts and tools for linguistic analysis. In their essay, they chronicle the development of the collaborative OpenText.org project, beginning with the initial efforts of Stanley E. Porter, Matthew Brook O’Donnell, and Jeffrey T. Reed over fifteen years ago and continuing up until the present. They then discuss recent developments and future plans for the project, plans that are even now being brought to fruition. Following this essay, Chris Stevens also addresses the OpenText.org project, but with a focus on its usability for linguistic analysis of the New Testament. In his essay, Stevens evaluates the current abilities and future applications of OpenText.org for biblical studies by applying a method that analyzes the clause structuring and transitivity patterns in Phil 2. Stevens employs the search capabilities of OpenText.org as they have been integrated into Logos Bible Software to collect data for his analysis, but finds that many improvements are required to yield accurate results.

    Another area in which linguistics can make a contribution to biblical studies is the use of quantitative approaches to lexicography, which are now facilitated by computational tools that can manage large amounts of data—much larger than the relatively small corpus of the New Testament. The fourth essay of this volume, written by Ryder Wishart, addresses this topic from the perspective of neostructuralist lexical semantics. Wishart argues that approaches to New Testament lexicography should be quantitative in orientation, following a framework of distributional corpus analysis within which both componential analysis and relational semantics can be remodeled in a way that upholds the motivating values of qualitative structuralist semantics. In the final essay of this section, David Fuller engages the relationship between linguistics and hermeneutics, a relationship he sees as an underexplored area in biblical studies because linguistics has traditionally been oriented towards the goals of historical criticism. Fuller relates the hermeneutical critiques raised against the traditional subject/object distinction to the current state of linguistics in biblical studies, engaging in dialogue along the way with Gadamer, Heidegger, and Ricoeur. Fuller’s thesis is that, if linguistic analysis is to make advances in biblical studies, the hermeneutical critiques from continental philosophy need to be heeded.

    The second major section of this volume, also comprising five essays, addresses the two other components of the colloquium: Translation and Exegesis. In the first essay of this section, Scott Berthiaume investigates the notion of key terms in translation, where key terms are defined according to the topical foci that a speech community uses to signify cultural themes or messages. He bases his work on several years of observation as a Bible translator of the Northern Pame language, an Otomanguean language spoken in the eastern part of San Luis Potosí, Mexico. His essay is concerned primarily with the question of how and why some terms used in translation gain traction in the wider use of the target language while others do not, finding that terms already used in a wider language context have the advantage of being well known, but are more susceptible to semantic shift or loss of meaning, and that prescribed terms, while having the advantage of being more precise, may or may not be successful because their incorporation is ultimately dependent upon the community’s use of them over many years. In the second essay of this section, Cynthia Long Westfall also addresses matters of Bible translation, namely the decision made by the editors of the Common English Bible (CEB) to render the idiom ὁ υἱὸς τοῦ ἀνθρώπου as either “The Human One” or “the human being” (rather than the traditional “son of man”). To assess this decision, around which lies some controversy, Westfall evaluates several interpretive variables, including the semantics of the idiom in the New Testament in relation to its corresponding idioms in Hebrew and Aramaic in the Old Testament, the semantic distinctions between the CEB’s translation and the formal equivalent phrase “The Son of Man,” and the CEB’s translation in light of its own goals for translation and modern translation theory.

    The final essays of the volume are oriented to matters of exegesis. In the third essay of this section, Stanley Porter explores the contents of a number of recent or well-known books on exegesis to assess the role they give to the Greek language in the exegetical process. Based on his evaluation, Porter then examines the contrast between traditional grammar, which is found in most of the exegesis books, and modern linguistic approaches to the Greek language, in particular Cognitive-Functional Linguistics and Systemic Functional Linguistics. In the next essay, Mark Proctor challenges the common interpretation of Mark 4:12 that takes the ἵνα clause as expressing purpose, arguing instead that the ἵνα clause should be understood as epexegetical—that is, elaborating on the unfortunate situation of those outside (ἐκείνοις τοῖς ἔξω) God’s kingdom. This entails that Jesus’s parabolic instruction should be understood as a pedagogical concession where he explains to outsiders the nature of God’s rule. In support, Proctor provides a detailed account of the ἵνα-clause functionality throughout the Gospel of Mark to show that its use in Mark 4:12 would have been comfortably understood by Mark’s original readers as epexegetical. In the final essay, Esther Cen makes use of discourse analysis modeled through a Systemic Functional Linguistic perspective to address the subject matter of 1 Cor 5—that is, what the discourse is about. Cen concludes that the focus of this part of Paul’s letter is not on the immoral man, but rather on what the church has failed to do, how the church should view this issue, and what the church should do in moving forward.

    The range of essays found within this volume attests to the progress that has been made in the linguistically informed study of the Bible, in particular the Greek of the New Testament. However, if these essays show anything, they also show that there is much further opportunity for creative engagement between the study of Greek and the implementation of various linguistic models of the kind attested and promised here.

    Part 1

    Linguistics

    1

    Linguistics and Biblical Studies

    An Ongoing Journey

    Randall K. J. Tan
    Introduction

    Around fifteen years ago, early in my doctoral studies, I encountered an article called “Studying Ancient Languages from a Modern Linguistic Perspective,” written by Stanley Porter.¹ After reading that article and Porter’s other writings, I became deeply troubled that much of New Testament studies continued to rely on traditional grammar and pre-modern linguistic work. I became convinced that the scientific study of language is critical if we want to better understand the language and texts of the New Testament. This realization started me on a journey that would radically change the direction of my PhD studies, a journey I have continued on for my entire scholarly career.

    Over the course of my career I have developed a number of views on how modern linguistics should factor into the wider enterprise of biblical studies. In this essay, I will focus on four areas in particular. First, I will briefly examine why the wider field of biblical studies needs the insights of modern linguistics. Second, I will make some modest suggestions about which aspects of modern linguistics give the most bang for their buck. Third, I will provide some recommendations concerning what kinds of corpora, tools, and communities are needed to facilitate smoother incorporation of modern linguistics into biblical studies. Finally, I will discuss what kinds of freedom for reuse are required to facilitate this type of work.

    Explaining the Need for Modern Linguistics

    The first topic of discussion is the need for applying modern linguistics to the wider field of biblical studies. For a well-rounded consideration it is necessary both to count the costs and to demonstrate the benefits of modern linguistics. On the cost side of the ledger, modern linguistics uses vocabulary and concepts quite different from those found in traditional Greek grammars, and these are foreign to many biblical scholars.² To make matters worse, because advocates of integrating modern linguistics into biblical studies draw from different theories, the same terms are often used in multiple, sometimes contrasting, ways.³ To use a well-known example, once someone learns a term like “verbal aspect,” they are immediately confronted by prolonged and complicated debates about how aspect is to be understood. And it can be difficult to find objective criteria to determine who is right. Confronted with this steep learning curve and uncertainty, many scholars are tempted to retreat to their comfort zone, the traditional grammars they were trained to use.

    Despite the difficulties, these scholars should be prompted in the other direction. Traditional biblical scholars recognize that the so-called traditional grammar that continues to take pride of place in some circles was not handed down from the time of the apostles to the present fully formed and unchanged.⁴ They understand some of the weaknesses in traditional lexicons and grammars. They may even acknowledge that the great grammarians and scholars of the nineteenth century would not ignore a century of advances in theory and tools. However, many scholars still neglect the insights of modern linguistics because they are not yet confident that those insights can be relied on.⁵ Part of the challenge is that biblical studies is still in the early phases of understanding biblical languages through these newer paradigms, and it is simply hard to be certain about drawing conclusions using either traditional or newer paradigms.

    Fortunately, scholars generally understand the inevitability of at least some uncertainty in scholarship. The task thus becomes making a convincing case for the strengths of using modern linguistic theory in biblical studies while also proactively admitting the unresolved ambiguities and uncertainties that remain. Scholars also generally understand the importance of evidence. Often, however, traditional scholars do not understand what new kinds of evidence need to be considered, why the insights of modern linguistics improve understanding of language, or how these insights can be shown to be true. Traditional grammars often observed the same phenomena that some scholars are now looking at through the lens of modern linguistics. Thus, those advocating for the use of modern linguistic models need to explore where the evidence used in traditional grammars and commentaries was more or less accurately judged according to modern linguistic criteria. To do this well, the similarities and distinctions between modern and classical paradigms need to be thoroughly mapped, respecting the wisdom of many who have gone before while still scrutinizing the work of past generations.

    Admittedly, such a task has several daunting challenges before it, but the fact that all biblical scholars share a reliance on the biblical texts ensures some common ground in the efforts for progress. While no native speaker of Hellenistic Greek is alive today, we still have the texts that native speakers have written. These are the only real evidence available for researching the texts and language of the New Testament.⁷ Porter makes this point clear in the following quotation:

    The study of the New Testament is essentially a language-based discipline. That is, the primary body of data for examination is a text, or, better yet, a collection of many texts written in the Hellenistic variety of the Greek language of the first century CE. Whatever else may be involved in the study of the New Testament . . . to remain a study of the New Testament it must always remain textually based, since the only direct access that we have into the world of the New Testament is through the text of the Greek New Testament.

    Since the study of the New Testament is essentially a language-based discipline, it is necessary to highlight next how modern linguistics contributes to a fundamentally empirical approach.⁹ This is particularly true in modern computer-assisted corpus-based linguistics, where any theorizing is dependent on the facts and patterns that are empirically discovered in the text. For example, each judgment found in traditional grammars and lexicons can be examined by querying the corpus to see whether it matches the available data. Thus, I will now turn attention to what I believe to be the fundamental benefits of using modern linguistics as an empirical science—namely, that it is based on systematic observation of the data, that the data and analyses are made publicly available, and that they are open to testing and are potentially replicable or falsifiable.
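
    To make the idea of checking a grammatical judgment against the corpus concrete, the following is a minimal sketch of such a query. It is not the author’s own tooling; the file name, column layout, and tag values are hypothetical stand-ins for any morphologically annotated Greek New Testament.

```python
# A minimal sketch (hypothetical data, not an actual project file): checking one
# traditional claim against a morphologically tagged corpus. Assumes a
# tab-separated file "gnt_morph.tsv" with columns:
# book, chapter, verse, token, lemma, part_of_speech, tense_form.
import csv
from collections import Counter

def tense_forms_for_lemma(path: str, lemma: str) -> Counter:
    """Count the tense-forms attested for a given verb lemma across the corpus."""
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            if row["lemma"] == lemma and row["part_of_speech"] == "verb":
                counts[row["tense_form"]] += 1
    return counts

# A grammar's judgment (e.g., "this verb is rare in the perfect") can then be
# weighed against the observed counts rather than simply asserted.
print(tense_forms_for_lemma("gnt_morph.tsv", "γινώσκω"))
```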

    The Most Bang for the Buck

    Since learning new linguistic theories and their associated esoteric language demands a heavy investment, it is important to communicate to the larger biblical studies guild the aspects of modern linguistics that give the most bang for the buck. The most important benefit of modern linguistics is perhaps its ability to help us systematically and comprehensively describe what the texts and language say—no more, no less. Considering the complexity of language and texts, three things stand out to me. First, language is multi-faceted and requires a multi-angled, multi-disciplinary approach. For example, language can at least be viewed as a social fact, as a psychological state, as a set of structures, or as a collection of outputs.¹⁰ Because language has both formal and functional dimensions, functional linguistic theories like Halliday’s Systemic Functional Grammar are likely to provide more satisfactory and holistic descriptions than formal theories of grammar, like the various versions of Chomskyan generative grammar.¹¹ Nevertheless, no theory comprehensively describes every aspect of language and so different theories potentially contribute valuable insights from different perspectives.¹² If individual scholars practice principled eclecticism and communities of researchers collaborate in multi-perspectival research, then vastness and diversity can be an asset rather than a weakness.¹³ It is important then for scholars to not work in isolation; instead, there needs to be an effort to seek out dialogue, invite multi-disciplinary research collaboration, and build long-term, sustainable communities with colleagues who come from diverse disciplines and perspectives.

    Second, in principle, different levels of language can be analyzed separately. However, because the primary body of data we analyze consists of corpus instances belonging to ancient texts for which there are no living native speakers, we need to be careful not to make claims about meaning that are more sweeping, or distinctions that are finer, than the available evidence clearly supports. Neatly dividing up the respective contributions of syntax and semantics in particular can be problematic.¹⁴ Consequently, caution and humble restraint need to be exercised concerning the degree of comprehensiveness and certainty claimed in individual studies. Researchers will then also be better situated to help colleagues exercise more caution and restraint in their claims.

    Third, out of context, linguistic forms are under-specified. They are bustling with a range of potential meanings. However, in context, they become much more specific, with bundles of meaning selected by lexical and grammatical combination patterns.¹⁵ The whole linguistic utterance in context thus means both more and less than the sum of its parts. Half a century ago, James Barr famously debunked several kinds of linguistic mistakes that result from failure to adequately recognize how function and context narrow down meaning potential and specify meaning in actual usage in text.¹⁶ Similar problems persist in contemporary biblical studies.¹⁷ This is why better empirical methods and tools need to be developed that can clearly demonstrate why interpreters should not try to make a word, phrase, or clause mean every possible meaning it might have in isolation. More systematic attention needs to be paid to the phraseology, the syntax, the rest of the surrounding co-text, and any other contextual information to disambiguate meaning as clearly and accurately as the evidence will allow.

    A Common Infrastructure for Studying Texts and Language

    In order to better understand the language and texts of the New Testament, there needs to be an infrastructure that supports (a) systematic observation of the data, (b) complete and intelligible accounting for all data and analysis, (c) testing of research hypotheses, and (d) reuse of prior work to create new work. Once established, this infrastructure will make linguistic research much easier to explain to the wider guild of New Testament studies and will also make it easier to involve other scholars in new collaborative work.

    Biblical scholars are helped by the fact that biblical studies revolves around a relatively small, specialized corpus of biblical texts as its primary source data.¹⁸ So, the logical first target is to build an open corpus of these texts. The OpenText.org project, started by Stanley Porter, Jeffrey Reed, and Matthew Brook O’Donnell, was a pioneering project in this arena. It was a web-based initiative to collaborate with and serve the scholarly community by developing annotated Greek texts and tools for their analysis.¹⁹ It built an infrastructure for systematic observation of the syntax of the Greek New Testament. It offered a complete and intelligible accounting of the Greek text at the clause and word group levels. I had the privilege of playing a part in its early development, which planted a seed of open scholarship that continues to flourish in me. My appreciation for the need to use cutting-edge technology was likewise sparked by watching the technological marvels that my then close collaborator on the OpenText.org project, Matthew Brook O’Donnell, came up with. It is a great joy to me to know that Christopher Land and Francis Pang are now spearheading OpenText.org 2.0. The OpenText.org project’s contributions as a trailblazer should not be overlooked. I expect OpenText.org 2.0 to further expand its contributions.

    To support the work of groups like OpenText.org 2.0, Jonathan Robie and I co-founded a group called biblicalhumanities.org, based on the idea of facilitation and enabling. We have started to bring together a community of computer scientists, biblical scholars, and digital humanists who are already working on creating open digital resources for biblical studies. These are people and groups doing similar things to OpenText.org 2.0 in different areas of biblical studies. Our aim is to facilitate collaboration and the building of an enabling infrastructure to use those resources more fully and effectively.

    We think the first step is to build a machine-readable annotated corpus of the Greek New Testament that empowers flexible expansion and is freely licensed. By freely licensed, I mean that anyone may reuse, remix, build on, or otherwise work with the corpus without asking permission and without restriction, except to give credit where credit is due.²⁰ As a starting point, the corpus could consist of a base Greek New Testament text, textual variants, morphology, syntax treebanks, and a lexicon. Because users can reuse, remix, and build freely, we expect to generate a virtuous circle in which greater use of the materials made available will lead to their improvement and augmentation, as well as to the creation of new resources as additional modules. Most of these basic ingredients are already available and can be viewed on biblicalhumanities.org’s dashboard.²¹ For Greek syntax alone, besides OpenText.org’s annotations, there are multiple versions of the Global Bible Initiative’s Greek syntax trees and PROIEL’s dependency trees. Through the efforts of scholars working in the Classics, especially various groups working within the Perseus orbit, there is no shortage of digital Greek texts from the Hellenistic and other periods that can eventually be incorporated as well—that is, if the volume of work can be handled. On biblicalhumanities.org’s dashboard, various references and links are provided to resources, such as OpenGreekandLatin’s machine-corrected versions of Swete’s Septuagint, Migne’s Patrologia Graeca, Cramer’s Catenae, and LACE’s conversions of a massive number of other Greek texts from scanned images into machine-encoded texts.²²
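
    To illustrate what such modular, machine-readable annotation might look like in practice, here is a minimal sketch. The identifiers, field names, and values are hypothetical and do not reproduce the actual OpenText.org or biblicalhumanities.org schemas; the point is only that independent modules (base text, morphology, lexicon, and so on) can key off shared token identifiers and be recombined freely.

```python
# A minimal sketch of modular corpus annotation (hypothetical format):
# separate modules keyed by shared token identifiers, layered over one base text.
base_text = {
    "JHN.1.1.1": "Ἐν",
    "JHN.1.1.2": "ἀρχῇ",
    "JHN.1.1.3": "ἦν",
}

morphology = {  # one module: morphological analysis per token
    "JHN.1.1.3": {"lemma": "εἰμί", "pos": "verb", "tense_form": "imperfect"},
}

lexicon = {  # another module: lexical senses keyed by lemma
    "εἰμί": ["be", "exist"],
}

def annotated_token(token_id: str) -> dict:
    """Merge whichever modules cover this token into one record."""
    record = {"id": token_id, "form": base_text.get(token_id)}
    if token_id in morphology:
        record.update(morphology[token_id])
        record["senses"] = lexicon.get(record.get("lemma", ""), [])
    return record

print(annotated_token("JHN.1.1.3"))
```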

    A second vital step is still largely missing. So, we are talking with our existing and potential partners, including OpenText.org 2.0, about building user-friendly open-source software tools and interfaces. The idea is that open data still needs well-designed open-source software to enable any individual or team of scholars to add to, delete, replace, or reorganize any of the existing modules of annotation data. The tools should give users maximum flexibility to define what they want to annotate, using their own framework and labels. For users without the ability to take the open-source code and modify it, the tools would be designed to be as extensible as possible to meet most foreseeable basic needs. For users able to build on the open-source code, we likewise expect to start a virtuous circle where more use of the tools will lead to their improvement and augmentation, as well as the creation of new user-friendly software tools and interfaces. By setting up this infrastructure, we will be able to study the texts and language of the Greek New Testament on firmer empirical grounding, whatever the method or perspective.
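
    As a sketch of the kind of flexibility described here, the following hypothetical interface (not an actual OpenText.org 2.0 or biblicalhumanities.org tool) shows how user-defined annotation layers might be added, replaced, or removed without disturbing the base text or the other modules.

```python
# A minimal sketch (hypothetical API): an annotation store in which any module
# can be added, replaced, or removed independently, with user-chosen layer names
# and labels sitting alongside the shared base text.
from typing import Dict

class AnnotationStore:
    """Holds independent annotation modules keyed by layer name and token id."""

    def __init__(self) -> None:
        self.layers: Dict[str, Dict[str, dict]] = {}

    def add_layer(self, name: str, annotations: Dict[str, dict]) -> None:
        # New layers sit alongside existing ones rather than overwriting them.
        if name in self.layers:
            raise ValueError(f"layer {name!r} already exists; use replace_layer")
        self.layers[name] = annotations

    def replace_layer(self, name: str, annotations: Dict[str, dict]) -> None:
        self.layers[name] = annotations

    def remove_layer(self, name: str) -> None:
        self.layers.pop(name, None)

# A user-defined discourse layer, with the user's own labels:
store = AnnotationStore()
store.add_layer("discourse", {"JHN.1.1.3": {"unit": "clause", "role": "process"}})
print(store.layers["discourse"])
```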

    A simultaneously needed third step is to cultivate a worldwide community of scholars who use this annotated corpus not only to conduct and publish their own research, but also to collaborate with one another to improve and build on the corpus.²³ As participants and resources increase, the corpus for New Testament studies can be extended to include other Hellenistic texts, and other levels and kinds of linguistic annotation can be added as modules. The growing community will also serve as a natural source of both collaboration and peer review for any research based on the corpus. Open scholarship with open data on open-source software augments both the ability to collaborate and the ability to conduct thorough peer review, because both the data and the analyses are fully furnished. Together the community can help its members to validate their theories by testing and evaluating (a) whether proposed theories come up with compatible and convincing answers to many or all questions, (b) whether they apply to related sorts of data (i.e., range of coverage), and (c) how tightly they account for the linguistic details. The strength and diversity of support (or lack thereof) from members of the community would validate or cast doubt on research hypotheses.²⁴

    Why Is a Triple Open Approach Optimal?

    Even if one were to agree mostly with the approach I have suggested so far, one may still have reservations about allowing reuse of one’s research. Some may need more convincing that the optimal approach is to practice corpus-based linguistic study within the framework of open scholarship with open data on open-source software (which I have nicknamed a “triple open” approach). In what follows I will unpack the thought process that led me to this conclusion.

    Writing is a technology.²⁵ It was a tremendous technological achievement that enabled humans to keep records and communicate over distance and time.²⁶ Gutenberg’s mechanical movable type printing press vastly extended the reach of writing and introduced the era of mass communication. Run-of-the-mill use of digital technology to serve up digitized books and articles further extends the reach of writing and print. These are the technologies that empower scholarship and spread knowledge today. However, they appear ill-equipped to continue to support the information explosion all fields are experiencing.

    What are the typical challenges of doing research? The challenges that come to my mind first are data gathering (for both primary and secondary sources), developing and documenting methods, and making sure my results are replicable or falsifiable. When publication is taken into consideration, I think about spatial limitations (e.g., the word or page limit), temporal limitations (e.g., the length of the publication process and shelf life of the publication), and format limitations (e.g., the type of data and visualizations that can be fitted into standard print formats).

    Because most data sources are still in print form or digital formats that imitate print, data gathering still involves a lot of legwork: a trip to the library or bookstore, ordering books through interlibrary loan or even purchasing them, locating and accessing individual digital documents, etc. Because the sources are scattered, integrating the information gained from them is time consuming. Even with digital documents, one often has to read and search them separately and toggle between documents to extract what is needed.

    Moreover, the same kinds of limitations that one faces when publishing research are faced by the authors of the research one is consuming. Spatial, temporal, and format limitations usually mean that not all the relevant data or analyses can be published. Even on the rare occasions when they are made fully available, they are usually presented in print-dictated forms that are not very reusable, e.g., not stored in relational databases or annotated with machine-readable markup languages. Furthermore, copyright restrictions typically limit the extent to which the data and analyses can be reused in further research. To make matters worse, space limitations often hinder authors from fully documenting and explaining their thought process and methods with clarity.

    Last, access to scholarship, even if one could overcome this communication gap, is hindered by the Closed Access model, in which ownership resides largely outside the academy in the hands of commercial companies. Scholarly research is quarantined behind access barriers (whether print or digital in nature) and held for what sometimes feels like a king’s ransom. Access barriers in turn help perpetuate the situation where information is scattered, fragmented, and difficult to effectively gather and integrate. Unless a researcher is privileged enough to have easy access to a well-heeled research library or is independently wealthy, enormous financial and intellectual expense is demanded even to attempt to understand other people’s research, let alone successfully decipher it.²⁷ With so many accessibility problems, much research does not achieve any kind of wide circulation, much less enjoy sufficient attention to warrant replication or falsification. The overall result is ever-worsening knowledge fragmentation and an ever-widening access gap.²⁸

    Scholars in the wider academic world have begun to address similar kinds of problems with a worsening access gap and knowledge fragmentation. The fields of technology and the sciences have led the way.²⁹ These efforts are already starting to address most obstacles and concerns effectively and to tilt the global political, legal, and cultural environment in favor of freedom of access. On the one hand, they provide abundant inspiration, impetus, and models for biblical studies to press ahead. On the other hand, they are demonstrating that the concerns and objections to open scholarship, which mainly center on economic issues and permissive reuse rights, are not insuperable, and that, in any case, the benefits outweigh the costs.³⁰ Given that my main concern is to show how open access, open data, and open scholarship augment one another, I will focus primarily on reasons why biblical scholars should join the larger global scholarly world in solving knowledge fragmentation and access barrier problems.³¹

    A necessary, though not sufficient, ingredient to resolve accessibility problems is open access (OA). As Suber points out,

    OA benefits literally everyone, for the same reasons that research itself benefits literally everyone. OA performs this service by facilitating research and making the results more widely available and useful. It benefits researchers as readers by helping them find and retrieve the information they need, and it benefits researchers as authors by helping them reach readers who can apply, cite, and build on their work. OA benefits nonresearchers by accelerating research and all the goods that depend on research, such as new medicines, useful technologies, solved problems, informed decisions, improved policies, and beautiful understanding.³²

    Beyond open access, open data is also vitally important for the integrity of biblical studies as an empirical scientific discipline. This is because data is as much a product of scholarship as publications.³³ First of all, scholars can look at the data others used to reach a conclusion and analyze it for themselves. Moreover, if the data is relevant for further research, either along the same lines or along different lines, the data can be reused and built on without having to go through the process of collecting the data. Further, with the triple open approach advocated here, the data is tied directly to the common infrastructure and open corpus of the Greek New Testament—either built directly on the annotations provided by others or from one’s own revisions to those annotations or one’s own additional contributions. The
