The Princeton Ethiopian, Eritrean, and Egyptian Miracles of Mary digital humanities project (PEMM) is a comprehensive resource for the miracle stories about the Virgin Mary composed in Ethiopia, Eritrea, and Egypt and preserved in Gəˁəz parchment manuscripts from 1300 to the present. Directed by Prof. Wendy Laura Belcher and then managed by Evgeniia Lambrinaki, PEMM was launched in March 2018, building on the miracle story identifications that William F. Macomber made in the 1980s.

Dataset. PEMM 2.0 includes the data the project collected in Google Sheets from its inception to July 4, 2023. This date marked the end of our use of Google Sheets as our database and the end of Jeremy Brown's full-time involvement with the project (he moved on to become the cataloger of Ethiopic manuscripts at HMML). The data includes:

- 1,002 identified stories (940 separate stories), called Canonical Stories;
- 549 stories translated into English (288 translated by the PEMM team; 223 translated and published by others), plus another 200 stories summarized;
- 676 fully cataloged manuscripts, in Gəˁəz and a few in Arabic (with another 334 identified but awaiting digitization), called Manuscripts;
- 51,690 stories documented in those manuscripts, called Story Instances;
- 21,403 typed Gəˁəz incipits (unique first lines) for those stories; and
- 2,547 paintings with 4,205 scenes in 262 manuscripts, called Paintings.

The manuscripts come from 92 repositories and libraries around the world (called Collections), and the stories were composed in Ethiopia, Eritrea, and Egypt (and probably Nubia, though this is unconfirmed), as well as Europe and the Levant (called Story Origins).

Database. The PEMM Project began by using Google Sheets as a lightweight relational database. To learn about this innovative digital humanities approach at Princeton's Center for Digital Humanities (CDH), read "Is a Spreadsheet a Database?" (February 21, 2021) by PEMM lead developer Rebecca Sutton Koeser.
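The entities named above (Canonical Stories, Manuscripts, Story Instances, Collections) form a relational structure, whether stored in linked spreadsheet columns or in database tables. The following minimal sqlite3 sketch uses hypothetical table and column names, not the project's actual schema, to show how the entities link: a Story Instance joins one Canonical Story to one Manuscript, and each Manuscript belongs to a Collection.

```python
import sqlite3

# Minimal sketch with hypothetical table/column names (illustration only;
# the project's actual PostgreSQL/Directus schema is not described here).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE canonical_stories (
    id INTEGER PRIMARY KEY,
    title TEXT                      -- one row per identified story
);
CREATE TABLE collections (
    id INTEGER PRIMARY KEY,
    name TEXT                       -- repository or library holding manuscripts
);
CREATE TABLE manuscripts (
    id INTEGER PRIMARY KEY,
    shelfmark TEXT,
    collection_id INTEGER REFERENCES collections(id)
);
-- A story instance is one occurrence of a canonical story in a manuscript,
-- optionally with its typed incipit (first line).
CREATE TABLE story_instances (
    id INTEGER PRIMARY KEY,
    canonical_story_id INTEGER REFERENCES canonical_stories(id),
    manuscript_id INTEGER REFERENCES manuscripts(id),
    incipit TEXT
);
""")

# Invented example rows, for illustration only.
cur.execute("INSERT INTO collections VALUES (1, 'Example Repository')")
cur.execute("INSERT INTO manuscripts VALUES (1, 'MS 001', 1)")
cur.execute("INSERT INTO canonical_stories VALUES (1, 'Example Miracle')")
cur.execute("INSERT INTO story_instances VALUES (1, 1, 1, NULL)")

# Count how many manuscripts attest each canonical story.
cur.execute("""
    SELECT cs.title, COUNT(DISTINCT si.manuscript_id)
    FROM canonical_stories cs
    JOIN story_instances si ON si.canonical_story_id = cs.id
    GROUP BY cs.id
""")
rows = cur.fetchall()
print(rows)  # [('Example Miracle', 1)]
```

In a spreadsheet, the same links are maintained by lookup formulas across sheets; in a relational database they become foreign keys and joins, which is what made the later migration natural.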
Due to the size of our dataset (seven Google Sheets in one workbook, each with at least 40 columns, one with 50,000 rows, and dozens of complex formulas linking fields across the sheets), Google Sheets would repeatedly hang. So, in July 2023 we migrated all our data to an Aurora PostgreSQL database, which we access through a content management system called Directus. However, this Zenodo dataset represents the data as it last appeared in Google Sheets.

Website. The c...