Tuesday, Oct 22
12 to 4 ET

12:00 to 12:10 - Welcome

12:10 to 12:50

Opening Keynote by Dr. Leo Lo, Dean of the College of University Libraries & Learning Sciences at the University of New Mexico.

Generative AI and Libraries

12:50 to 1:00 - BREAK

  • Presenter: Alicia Lillich, NIH

    This lightning talk will focus on the development and delivery of a library instructional session designed to help researchers and teams build a practical, tailored framework for integrating generative AI tools into their work. The class guides participants through a process that evaluates how AI tools fit into their specific research needs, helps them develop a custom strategy for AI implementation, and emphasizes the importance of documenting their approach to maintain research integrity and reproducibility.

  • Presenter could not join

  • Presenter: Hilary Murusmith, Princeton University Library 

    When working with a book or serial published in a foreign language, catalogers may sometimes be the first to create a record in their language of cataloging, even though an excellent MARC record in another language may already exist from which a new record can be derived. Deriving a new record, however, is a tedious task that requires little further judgment or expertise.

    As a member of an internal interest group on generative AI in cataloging, I’ve been exploring whether Copilot can help with this particular task. In my talk, I will go over how I developed an effective prompt and the steps I followed to export records from OCLC and include them in my prompt. Then, I will show my experimentation with five non-English-language MARC records and how I refined my prompt based on the results I received. I will conclude with an evaluation of the results and considerations such as how to establish a workflow that would result in time savings. While this talk covers a niche application of generative AI in cataloging, I hope my experience will encourage catalogers to try using generative AI to address specific problem areas in their work.
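
    To give a flavor of this approach (a hypothetical sketch only, not the presenter’s actual prompt, records, or workflow), a derivation prompt might wrap an exported MARC record in instructions along these lines:

        # Hypothetical prompt template for deriving an English-language record
        # from a MARC record whose language of cataloging is French.
        exported_marc = """=245 10 $aLe livre imaginaire /$cAuteur Exemple.
        =300    $a200 pages ;$c24 cm"""  # placeholder record, not a real OCLC export

        prompt = (
            "You are assisting a cataloger. Below is a MARC bibliographic record "
            "whose language of cataloging is French. Derive a new record with "
            "English as the language of cataloging: translate cataloger-supplied "
            "notes and physical description terms, keep transcribed fields such "
            "as 245 unchanged, and list any fields you were unsure about.\n\n"
            + exported_marc
        )
        print(prompt)  # paste into Copilot, then review the output field by field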

1:40 to 1:50 - BREAK

  • Presenter(s): Ashley Champagne, Patrick Rashleigh, and Khanh Vo, Brown University

    Artificial intelligence (AI) offers incredible potential for digital humanities (DH) work: ChatGPT and other AI tools can write code for users who have only reading knowledge of a programming language. However, using ChatGPT and other AI tools is itself a skill; users need to learn how to ask questions, how to review the answers that ChatGPT offers, and how to ask ChatGPT to improve its answers. The use of AI, particularly large language models (LLMs), holds significant promise in digital humanities because DH’s highly interdisciplinary nature necessarily involves a blend of diverse technical and subject-specific abilities. This interdisciplinarity can block progress: for example, a graduate student in History might conceive of an innovative digital humanities project but lack the programming skills to even prototype it.

    While AI cannot replace programming expertise, it can lower the barrier to entry for DH work by offering automated code generation to get projects started. In addition, its ability to explain and answer questions about the code it generates provides excellent scaffolding for researchers to develop their coding skills. By understanding how to ask good questions of AI, students can also learn how to improve the answers (and the code) that ChatGPT creates, a powerful skill for many students interested in building their digital humanities skills. The Center for Digital Scholarship (CDS) at Brown University has changed its programming workflows to use AI to write Python code relevant to many digital humanities projects. This presentation will share the work CDS has created related to AI and digital humanities pedagogy, as well as engage the audience in an interactive mini-workshop to practice computer programming with AI.

  • Presenter: Lisa Gayhart, New York University

    Using AI-generated images to facilitate co-design activities: AI-generated images can help people working in libraries imagine, brainstorm, and prototype new ideas beyond the constraints of physical materials, budgets, and technical or hands-on skills. The talk will review one example of using AI-generated images in design workshops for a library space renovation.

  • Presenter: Greta Heng, San Diego State

    For rare books and special collections with classical Chinese materials, information interpretation and extraction for resource description is challenging due to the need for specialized knowledge of Chinese history and literature as well as the time required. The authors faced this challenge while working on a Linked Data (LD) project focused on describing Chinese historical places. The complexity of classical Chinese texts makes them difficult for humans to interpret, and even more challenging to convert into structured LD formats (subject-predicate-object triples) in English for enhanced data description, discovery, and sharing. These challenges motivated the authors to explore the potential of generative AI in processing classical Chinese texts and facilitating this conversion. Various generative AI tools available on the market were tested, and the one that works best with the selected classical Chinese materials was identified. This presentation will share the insights gained from using generative AI, the experiences in refining prompts for improved outcomes, and the challenges encountered throughout the process.
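
    As a rough illustration of the conversion described above (a hypothetical sketch, not code from the project; the namespace, predicate, and place names are invented), an AI-extracted statement about a Chinese historical place can be expressed as subject-predicate-object triples with the rdflib Python library:

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDFS

        # Hypothetical local namespace for the Linked Data project
        EX = Namespace("https://example.org/places/")

        g = Graph()
        place = EX["changan"]  # hypothetical identifier for 長安 (Chang'an)

        # Triples derived (hypothetically) from AI-assisted reading of a classical text
        g.add((place, RDFS.label, Literal("Chang'an", lang="en")))
        g.add((place, RDFS.label, Literal("長安", lang="zh")))
        g.add((place, EX.locatedIn, EX["guanzhong"]))  # invented predicate and object

        print(g.serialize(format="turtle"))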

  • Presenter: Melissa Robohn, McDermott Library AF Academy

    In Autumn 2024 we will use AI-assisted document and data analysis to engage in some self-reflection. We are starting a project to study how McDermott Library resources and services are cited in published and presented output by faculty and students. We already track statistics indicating which collections and services are used at the beginning of the research cycle, but analysis of the bibliographies, citation lists, and acknowledgments included in research output can provide perspective on which resources and services are utilized in academic output. We want to see how this end-game perspective differs from the standard beginning-of-the-research-process metrics currently available, such as COUNTER reports and reference interview tracking. Through this study, we can validate the cost of services and products that directly contribute to successful academic performance and treat the gaps where the library is completely absent as opportunities for targeted outreach.

    The views expressed in this presentation are those of the author and do not necessarily reflect the official policy or position of the United States Air Force Academy, the Air Force, the Department of Defense, or the U.S. Government.

  • Presenter: Shelby Watson & Abbie Norris-Davidson, University of Mississippi

    Automated Speech Recognition (ASR) represents a promising avenue for libraries and archives to improve the findability and accessibility of audiovisual collections, both for the benefit of Deaf/hard-of-hearing communities and for researchers who want to study audiovisual materials. In recent years, the number of ASR tools promising quick and accurate transcriptions has increased exponentially. However, these tools have varying accuracy which can hinder the quality of transcription, particularly if the output is not edited by a human. Since 2020, faculty at the University of Mississippi Libraries (UML) have been investigating and assessing the performance of leading ASR technologies with a focus on archival audiovisual materials.

    Our most recent project compared four popular ASR tools: Whisper, Otter.ai, Rev.ai, and Panopto. Project leads chose collections from the University’s digital archives representative of the speakers and accents commonly found in UML’s collections. Transcripts were generated using each of the tools while researchers noted the time required, ease of use, and the degree of editing required for each tool. Accuracy of the machine-generated files was determined by calculating the word error rate against high-quality human-generated transcripts. We will discuss the workflow and the benefits, drawbacks, and potential biases inherent in this approach.
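
    For context on the evaluation method mentioned above (a minimal sketch, not the project’s code), word error rate is the word-level edit distance between a reference transcript and a machine-generated hypothesis, divided by the number of reference words:

        def word_error_rate(reference: str, hypothesis: str) -> float:
            """WER = (substitutions + deletions + insertions) / reference word count."""
            ref, hyp = reference.split(), hypothesis.split()
            # Word-level Levenshtein distance via dynamic programming
            dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
            for i in range(len(ref) + 1):
                dist[i][0] = i
            for j in range(len(hyp) + 1):
                dist[0][j] = j
            for i in range(1, len(ref) + 1):
                for j in range(1, len(hyp) + 1):
                    cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                    dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                                     dist[i][j - 1] + 1,        # insertion
                                     dist[i - 1][j - 1] + cost) # substitution
            return dist[len(ref)][len(hyp)] / len(ref)

        # Example: one substituted word in a five-word reference -> WER of 0.2
        print(word_error_rate("the quick brown fox jumps", "the quick brown fox leaps"))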

  • Presenter: Sai Deng, University of Central Florida

    In recent years, artificial intelligence (AI) has significantly transformed metadata workflows in libraries. While chatbots and related services have made AI more accessible, fully integrating AI into workflows remains challenging. This presentation highlights the use of AI at the University of Central Florida (UCF) Libraries to assist in assigning subject headings, including Faceted Application of Subject Terminology (FAST) terms and keywords, to both digital and traditional collections.

    UCF Libraries adopted the OpenAI API to generate subject headings and conducted evaluations and comparative studies of AI-generated and contributor-supplied terms, revealing the potential benefits of AI in metadata creation. This approach is particularly valuable for libraries with limited staff resources, enabling efficient metadata creation for diverse collections, including unique local materials. Furthermore, AI-generated subjects provide meaningful references for cataloging specialized collections.

    While the project’s outcomes are promising, the extent to which AI can be integrated into workflows varies depending on local circumstances. Incorporating AI into workflows introduces new challenges and opportunities for librarians, and collaborating with external partners has equipped librarians with the skills needed to navigate this evolving landscape, broadening the scope of metadata and cataloging practices.
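
    As a rough sketch of the kind of API call described above (the model name, prompt wording, and sample item are illustrative assumptions, not UCF’s actual implementation), generating candidate FAST terms and keywords with the OpenAI Python client might look like this:

        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        # Hypothetical item from a digital collection
        title = "Central Florida Citrus Labels Collection"
        description = "Scanned crate labels from Florida citrus growers, ca. 1920-1950."

        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "You are a metadata assistant who suggests FAST subject "
                            "terms and keywords for library collections."},
                {"role": "user",
                 "content": f"Title: {title}\nDescription: {description}\n"
                            "Suggest five FAST subject headings and five keywords."},
            ],
        )
        print(response.choices[0].message.content)  # reviewed by a cataloger before use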

3:10 to 3:20 - BREAK 

3:20 to 4:00

Closing Day Keynote by Michael J. Paulus, Jr., University Librarian at Creighton University.

Information, Automation, and Virtue: A Framework for Designing New Information Processes and Practices

Wednesday, Oct 23
12 to 4 ET

12:00 to 12:10 - Welcome

12:10 to 12:50

Opening Keynote by Mr. Russell Michalak, Library Director at Goldey-Beacom College.

Ethical AI Integration in Academic Libraries: Navigating Institutional Resistance, Privacy Concerns, and Equity in Information Literacy

12:50 to 1:00 - BREAK

  • Presenter: Mingyan Li, Kavita Mundle, and Ian Collins, University of Illinois Chicago; Eric H. C. Chow

    Libraries worldwide are exploring the integration of large language models (LLMs), generative AI, and machine learning to enhance metadata workflows. This research was conducted by librarians from the University of Illinois Chicago and Hong Kong Baptist University and focuses on using Google Gemini and OpenAI to generate descriptions and subjects for videos of silent films. Several Google Gemini prompts have been tested to guide the AI tools toward the most accurate results. In the next step of this research project, different implementations of the AI-assisted metadata workflow will be tested and compared to identify the best timing for human intervention. This research will provide valuable insight into AI's potential and challenges in providing and enhancing resource description and discovery.
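
    As a hedged illustration of this kind of prompt testing (not the project’s code; the model name, prompts, and file handling are assumptions), the google-generativeai Python library can be used roughly as follows:

        import time
        import google.generativeai as genai

        genai.configure(api_key="YOUR_API_KEY")

        # Upload a silent-film clip and wait for processing (hypothetical file name)
        video = genai.upload_file("silent_film_clip.mp4")
        while video.state.name == "PROCESSING":
            time.sleep(5)
            video = genai.get_file(video.name)

        model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model choice

        candidate_prompts = [
            "Describe this silent film clip in two sentences for a library catalog.",
            "List five subject terms for the people, places, and activities shown.",
        ]
        for prompt in candidate_prompts:
            response = model.generate_content([video, prompt])
            print(prompt, "->", response.text)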

  • Presenter: Sonia Yaco, Rutgers University

    This session presents the results of a multi-year research project that identified AI tools that can improve access and enhance discovery capabilities in archival collections, allowing comparisons across disparate materials. The project is a collaboration among cultural heritage professionals, humanities scholars, and computer science experts at Rutgers and Durham University (UK). Thirty software tools were applied to untranscribed handwritten and typewritten documents and photographs in an English-language Japanese cultural heritage collection at Rutgers University Libraries. The findings are presented as three use cases, showing that AI can mine text, find insights in collections, and link text to photographs. Scenario One discusses accessibility through transcription, translation, and narrative extraction. Scenario Two introduces tools for data pattern exploration to provide a deeper understanding of collection content. Scenario Three shows ways to improve discoverability and integration of multimedia collections. The purpose of the session is to provide an understanding of practical ways that AI can be used with distinctive collections. Barriers to the use of AI will be discussed, including ethical considerations, the need for infrastructure support, and policy challenges. The authors’ experiences with AI tools and implementation will provide a useful guide to others looking to explore AI for their archival collections.

1:30 to 1:40 - BREAK

  • Presenter: Aaron Pahl, University of Alabama Birmingham

    AI has the potential to transform how we approach archiving, research, and daily library operations.

    I will demonstrate how AI tools can be utilized to enhance efficiency and creativity in the field, including how AI can assist with generating scripts and organizing ideas. The emergence of AI chatbots has opened possibilities for cultural heritage practitioners, who often wear many hats and must find innovative solutions to problems outside their areas of expertise. We will explore how to generate scripts from text prompts using AI.

    At a basic level librarians can leverage AI chatbots as a starting point to organize jumbled thoughts into a coherent draft before ultimately rewriting the material. This is especially beneficial when staring at a blank page looking for that first sentence.

    I will also highlight the experimental use of AI in creating virtual 3D exhibits, which has the potential to create engaging and interactive displays that bring archival materials to life, offering new ways to interact with collections.

    By showcasing practical examples and case studies, I will provide attendees with actionable insights and tools to implement AI in their own workflows. The goal is to inspire librarians to harness AI in their professional roles.

  • Presenter: Nancy Richey, Western Kentucky University

    Special collections libraries and archives are increasingly asked about the use of AI in genealogical research, which remains one of the top reasons patrons continue to come into the library for traditional research. How is AI being used in family history research, and what are the problems and promises for both the patron and the library?

  • Presenter(s): Max Prud’homme, Jeevithesh Cattamanchi Venu, and Hiranya Pappu, Oklahoma State University 

    The Oklahoma State University Digital Archives are in a new phase of improving the discovery and preservation of historical materials by leveraging advanced digital methods, such as facial recognition and machine learning technologies. Facing challenges like sparse and inconsistent descriptive metadata, decreasing resources, and a fading institutional memory, the Archives seek to enhance archival asset discovery and optimize computational processes to improve access to and preservation of cultural heritage materials. To achieve this, the Digital Archives are integrating AI technologies, applying a deep learning framework designed for face analysis and recognition. With scalability and sustainability in mind, the team has developed a universal model to enhance descriptive metadata, search functionality, and discovery. The development team coordinates efforts with the metadata team that organizes, creates, and edits thousands of archival records. The objective is to augment the value and improve the searchability and visibility of digital materials in order to better contextualize the university's whole collection of faculty, alumni, buildings, and more since its founding in 1890. The presenters propose to showcase the project flow, context, planning, design, and architecture, focusing on scalability, sustainability, and the ethical issues associated with face recognition technology.
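
    As an illustrative sketch only (the framework used by the OSU team is not specified here; the face_recognition Python library below is a stand-in, and the file names are hypothetical), matching faces in an unlabeled archival photograph against a known portrait might look like this:

        import face_recognition

        # Encode a known portrait (e.g., a labeled faculty photograph)
        known_image = face_recognition.load_image_file("known_faculty_portrait.jpg")
        known_encoding = face_recognition.face_encodings(known_image)[0]

        # Detect and encode faces in an unlabeled archival photograph
        archival_image = face_recognition.load_image_file("unlabeled_archival_photo.jpg")
        unknown_encodings = face_recognition.face_encodings(archival_image)

        # Compare each detected face to the known portrait; matches become
        # candidate descriptive metadata for a human reviewer to confirm.
        for i, encoding in enumerate(unknown_encodings):
            match = face_recognition.compare_faces([known_encoding], encoding)[0]
            print(f"Face {i}: {'possible match' if match else 'no match'}")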

  • Presenter: Amy James, Baylor University

    In today’s rapidly evolving digital landscape, AI tools are becoming commonplace for student researchers. However, the convenience of generative AI often comes with significant challenges, particularly in the realm of information accuracy and ethics. As a librarian at Baylor University, I identified a growing concern among students in our EdD Learning and Organizational Change (EdD LOC) program: students were using generative AI, but stopping with the initial response(s) that it provided, i.e. they were not critically evaluating the AI-generated content. This not only compromised the quality of their work but also posed ethical dilemmas, as they were citing hallucinated articles in their dissertations.

    In response, I developed a targeted lesson aimed at enhancing AI literacy among our EdD LOC students. The session focused on going beyond the initial responses provided by generative AI in research, emphasizing the importance of verifying AI-provided sources. I also introduced practical strategies for integrating AI responsibly into their academic work.

    This lightning talk will highlight the key components of the lesson, share insights into the challenges and successes encountered, and provide actionable steps for other librarians looking to implement similar initiatives. By fostering a deeper understanding of how students are using generative AI in academia, we can better prepare our students to navigate the complexities of AI in research.

2:40 to 2:50 - BREAK

  • Presenter: Tomeka Jackson and Jen M. Jones, Clemson University

    AI can assist with creating more specific subject headings and can be used to generate summaries for films. The use of AI, in this instance, allows for more accurate subject heading assignments. However, human catalogers should still review AI-generated suggestions. Ultimately, using AI in conjunction with LCSH can improve the discoverability of moving images.

    Clemson University's Metadata Services Team has experimented with AI tools to assist with cataloging and metadata workflows. Recently, we used Microsoft Copilot to evaluate the use of Library of Congress Subject Headings and genre/form terms. In this presentation, we will show the process of this experiment, using screenshots and examples to discuss the advantages and disadvantages these tools create in cataloging various materials and how AI tools can assist with effective workflows.

  • Presenter: Jeremy Nelson, Stanford University Libraries

    A major benefit of the open-source FOLIO Library Services Platform is the community's ability to experiment with new technologies and features. This presentation will demonstrate a proof-of-concept back-end module called "edge-ai" that uses Large Language Models (LLMs) to automate the creation of FOLIO JSON records from textual prompts and uploaded images of books. As part of the demonstration, we will address privacy concerns, hallucinations, and other challenges associated with using fast-changing LLMs in the edge-ai module. Additionally, we'll introduce the iterative development process of the edge-ai module as we explore support for other generative AI use cases in circulation, finance, and various FOLIO functions for libraries. Presentation website is available at: https://jermnelson.github.io/ai4libraries-2024/
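
    As a rough sketch of the general pattern the edge-ai module explores (a hypothetical example, not the module’s code; the model choice and field names are illustrative), an LLM can be asked to return structured JSON that is validated before any mapping into FOLIO:

        import json

        from openai import OpenAI

        client = OpenAI()

        prompt = (
            "From the following description of a book, return only a JSON object "
            "with the keys 'title', 'contributors' (a list of names), and "
            "'publicationYear':\n"
            "A 2019 hardcover of 'Example Histories' by Jane Doe, published in Chicago."
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
            response_format={"type": "json_object"},
        )

        record = json.loads(response.choices[0].message.content)
        # Check for required fields before any mapping to a FOLIO Inventory record
        for field in ("title", "contributors", "publicationYear"):
            if field not in record:
                raise ValueError(f"LLM output is missing required field: {field}")
        print(record)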

  • Presenter: Meg McMahon, Chris Hyde, Marc McGee, and Betts Coup, Harvard Library

    This presentation will explore how Harvard Library structured a project to examine applications of conversational AI to support work and better serve their community. Fourteen library staff representing different departments and library work participated in this project using ChatGPT+ to conduct various structured experiments relevant to library work. Within the presentation, the team will explain how they conceptualized and built a community of practice engagement model for this work, along with the deliverables of a library-specific use case library and prompting guidance.

    After discussing the project setup, three library staff members will showcase library-specific use cases related to technical services and archival work. First, there will be a demo of using ChatGPT+ to help create usable finding aid content from dealer lists of archival collections. Second, there will be a demo of using ChatGPT+ to create Python scripts for pulling files out of downloaded objects from a preservation system. Finally, there will be a demo of using ChatGPT+ to help make Python scripts for unique ID associations within geojson files.

    By sharing the group’s experiences, we aim to inspire conference attendees to think of new ways to structure staff engagement and experimentation with generative AI tools.
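
    To give a flavor of the third demo’s task (a minimal sketch under assumed file names and an assumed ID scheme, not Harvard’s actual script), assigning unique IDs to features in a GeoJSON file might look like this:

        import json
        import uuid

        # Hypothetical input and output file names
        with open("features.geojson", "r", encoding="utf-8") as f:
            data = json.load(f)

        # Give every feature a stable unique identifier in its properties
        for feature in data.get("features", []):
            props = feature.get("properties") or {}
            props["unique_id"] = str(uuid.uuid4())
            feature["properties"] = props

        with open("features_with_ids.geojson", "w", encoding="utf-8") as f:
            json.dump(data, f, ensure_ascii=False, indent=2)

        print(f"Assigned IDs to {len(data.get('features', []))} features")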


  • Presenter(s): Richard Song and Qin Lippert, Epsilla

    Join us for a practical, hands-on session where you'll explore the transformative power of AI and Retrieval-Augmented Generation (RAG) to enhance your library's content and services. We’ll start with a high-level overview of AI technologies, followed by demonstrations of practical AI applications in libraries, including AI-powered research guides, FAQs, and special collections management. You'll then have the opportunity to create your own AI-driven applications, such as chatbots or smart search tools, tailored specifically to your library's needs. Whether you're aiming to enhance user experience or improve content discovery, this session will equip you with the knowledge and tools to implement AI solutions in your library. Come prepared to get hands-on and leave with actionable skills to elevate your library's digital offerings.
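
    For attendees who want a preview of the core RAG pattern before the session (a minimal, hedged sketch; it does not use Epsilla’s API, and the model names and sample documents are placeholders), retrieval-augmented generation boils down to embedding content, retrieving the most similar pieces for a question, and handing them to a model as context:

        import numpy as np
        from openai import OpenAI

        client = OpenAI()

        # Tiny stand-in "collection" of library content (placeholder text)
        docs = [
            "The special collections reading room is open weekdays from 9 to 5.",
            "Interlibrary loan requests are usually filled within five business days.",
        ]

        def embed(texts):
            resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
            return np.array([d.embedding for d in resp.data])

        doc_vectors = embed(docs)

        question = "When can I visit special collections?"
        q_vec = embed([question])[0]

        # Retrieve the most similar document by cosine similarity
        scores = doc_vectors @ q_vec / (
            np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vec)
        )
        context = docs[int(scores.argmax())]

        # Generate an answer grounded in the retrieved context
        answer = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user",
                       "content": f"Answer using only this context:\n{context}\n\n"
                                  f"Question: {question}"}],
        )
        print(answer.choices[0].message.content)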