In the UK, Digging into Data phase 3 is funded by AHRC, ESRC and Jisc. Over the next few months each funder will be writing a blog post relevant to Digging into Data. Last October, Christie Walker from AHRC attended the Big Humanities Data Workshop in the USA and she has written the following post about the workshop.
The second Big Humanities Data Workshop took place on 27 October 2014 at the IEEE International Conference on Big Data in Washington D.C. The workshop was attended by a number of academics and funders, including the AHRC from the UK; the National Endowment for the Humanities (NEH) and the Institute of Museum and Library Services (IMLS) from the US; and the Social Sciences and Humanities Research Council (SSHRC) from Canada.
The workshop began with an engaging keynote from Michael Levy (Director of Digital Collections) and Michael Haley Goldman (Director of Global Classroom and Evaluation) of the United States Holocaust Memorial Museum. Levy and Haley Goldman spoke about the opportunities that big humanities data, along with new techniques and tools, can provide in Holocaust research and education.
The workshop papers covered several themes:
- Complexity / Scale / Historical Analysis
- News / Film
- Frameworks / Infrastructure
- Geospatial / Mobile
- Digging into Data
A total of 16 papers were presented at the workshop, and Digging into Data had a strong presence, with seven papers selected. The Digging into Data presentations represented a variety of methods, data types and challenges for the arts, humanities and social sciences:
- ‘Mining Microdata: Economic Opportunity and Spatial Mobility in Britain and the United States, 1850-1881’, presented by Evan Roberts – University of Minnesota (DiD round 2)
- ‘Understanding the Role of Medical Experts during a Public Health Crisis: Digital Tools and Library Resources for Research on the 1918 Spanish Influenza’, presented by Tom Ewing – Virginia Tech (An Epidemiology of Information: Data Mining the 1918 Influenza Pandemic, DiD round 2)
- ‘Scaled Entity Search: A Method for Media Historiography and Response to Critiques of Big Humanities Data Research’, presented by Kit Hughes – University of Wisconsin (Project Arclight: Analytics for the Study of 20th Century Media, DiD round 3)
- ‘A Computational Pipeline for Crowdsourced Transcriptions of Ancient Greek Papyrus Fragments’, presented by James Brusuelas – University of Oxford (Resurrecting Early Christian Lives: Digging in Papyri in a Digital Age, DiD round 3)
- ‘Scientific Findings as Big Data for Research Synthesis: The metaBUS Project’, presented by Frank Bosco – Virginia Commonwealth University (Field Mapping: An Archival Protocol for Social Science Research Findings, DiD round 3)
- ‘Metadata Infrastructure for the Analysis of Parliamentary Proceedings’, presented by Richard Gartner – King’s College London (Digging into Linked Parliamentary Data, DiD round 3)
- ‘Integrating Data Mining and Data Management Technologies for Scholarly Inquiry’, presented by Richard Marciano – University of Maryland (DiD round 2)
The workshop concluded with a funders’ panel and discussion chaired by Professor Andrew Prescott (University of Glasgow). Brett Bobley (NEH), Bob Horton (IMLS), Crystal Sissons (SSHRC) and Christie Walker (AHRC) discussed their organisations’ approaches to big data and to funding more generally.
The Big Humanities workshop is unique in that it takes place against the backdrop of a highly technical big data conference. This setting highlights, both to workshop participants and to the wider IEEE Big Data conference, that the arts, humanities and social sciences have a great deal to bring to the conversation about big data, and that these disciplines bring their own big data challenges to the table. The workshop generated lively discussion, both during the sessions and beyond.