#!/usr/bin/env python
# coding: utf-8

# # Step 1: Legacy of Slavery Certificates of Freedom Collection - Context Based Data Exploration and Cleaning
# ### Applying Computational Thinking and a Historical, Cultural Context Based Approach to an Archival Dataset Collection
# * **Student Contributors:** Rajesh GNANASEKARAN, Alexis HILL, Phillip NICHOLAS, Lori PERINE
# * **Faculty Mentor:** Richard MARCIANO
# * **Community Mentors:** Maya DAVIS, Christopher HALEY (Maryland State Archives), Lyneise WILLIAMS (VERA Collaborative), Mark CONRAD (NARA)
# * **Source Available:** https://github.com/cases-umd/Legacy-of-Slavery
# * **License:** [Creative Commons - Attribution 4.0 Intl](https://creativecommons.org/licenses/by/4.0/)
# * [Lesson Plan for Instructors](./lesson-plan.ipynb)
# * **Related Publications:**
#   * **IEEE Big Data 2020 CAS Workshop:** [Computational Treatments to Recover Erased Heritage: A Legacy of Slavery Case Study (CT-LoS)](https://ai-collaboratory.net/wp-content/uploads/2020/11/Perine.pdf)
# * **More Information:**
#   * **SAA Outlook March/April 2021:** (Coming Soon)
#
# ## Introduction
# This module is based on a case study involving [The "Legacy of Slavery Project"](http://slavery.msa.maryland.gov/) archival records from the Maryland State Archives. The Legacy of Slavery in Maryland is a major initiative of the Maryland State Archives. The program seeks to preserve and promote the vast universe of experiences that have shaped the lives of Maryland's African American population. Over the last 18 years, some 420,000 individuals have been identified and their data assembled into 16 major databases. The [DCIC](http://dcic.umd.edu) has now partnered with the Maryland State Archives to help interpret this data and reveal hidden stories.
#
# As a team of students participating in a two-day [Datathon 2019 at the Maryland State Archives](https://ai-collaboratory.net/projects/ct-los/student-led-datathon-at-the-maryland-state-archives/), we worked with the "Certificates of Freedom" collection from the Maryland State Archives compiled database.
#
# We organized the data exploration and cleaning around [David Weintrop's model of computational thinking](https://link.springer.com/content/pdf/10.1007%2Fs10956-015-9581-5.pdf) and documented each step of our process with a [questionnaire](TNA_Questionnaire.ipynb) developed by The National Archives, London, UK.
#
# ![CT-STEM taxonomy](taxonomy.png "David W.'s CT Taxonomy")
#
# ### **C**omputational Thinking Practices
# * Data Practices
#   * Collecting Data
#   * Creating Data
#
# ### **E**thics and Values Considerations
# * Historical and Cultural Context Based Exploration and Cleaning
# * Understanding the sensitivity of the data
#
# ### **A**rchival Practices
# * Digital Records and Access Systems
#
# ### Learning Goals
# A step-by-step understanding of applying computational thinking practices to a digitally archived Maryland State Archives Legacy of Slavery dataset collection.
#
# ## Step 1: Context Based Data Exploration and Cleaning Process
#
# We followed a case study methodology for this project to achieve the objective of exploring, analyzing, and visualizing the dataset collections downloaded from the Maryland State Archives database. As the collections were available as downloadable CSV files, the technical task addressed by our group was to identify the right tools for consuming the CSV files for exploratory analysis, cleaning, and visualization. Below are the steps of the data exploration and cleaning process, using the Python programming language on the Certificates of Freedom dataset.
#
# # Acquiring or Accessing the Data
# The data for this project was originally crawled from the Maryland State Archives **Legacy of Slavery** collections.
# The data source is included in this module as a comma-separated values file. The link below will take you to a view of the data file:
# * [LoS_CoF.csv](Datasets/LoS_CoF.csv)
#
# To process a CSV file in Python, one of the first steps is to import a library called 'pandas', which helps the program convert the CSV file into a dataframe, commonly called a table format. We import the libraries into the program as below:

# In[133]:

# Importing libraries - pandas is used for data science/data analysis and machine learning tasks,
# and numpy provides support for multi-dimensional arrays
import pandas as pd
import numpy as np

# Using the pandas library, create a new dataframe named 'df' using the read_csv function as shown below. After creating the dataframe, use the print() function together with head() to display the top 10 rows loaded in the dataframe.

# In[134]:

# Creating a data frame, a table-like data structure that can hold data read from CSV files, flat files, and other delimited data.
# Converting input data into a data frame is a key starting point for big data analytics with the Python programming language.
# The command below reads in the Certificates of Freedom dataset, which should already be available in a folder called 'Datasets' as LoS_CoF.csv.
# A forward slash is used in the path so that it works across operating systems.
df = pd.read_csv("Datasets/LoS_CoF.csv")
# The command below prints the first 10 records after the data is copied from the csv file
print(df.head(10))

# We anticipated errors and misinterpretations of names, numbers, etc., since this database was mostly transcribed by hand from the physical or scanned copies of the Certificates of Freedom. Our approach was to explore and clean the data column by column, mostly using the text and numerical operation functions of the Python programming language. We looked at the dataset holistically at first, identifying features that allowed us to generate meaningful stories or visualizations. Upon confirming the feature list, we analyzed each feature in detail to document bad data and, where possible, eliminate it, modify data types, or exclude invalid values from the final visualizations.
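# As part of that first holistic look, a quick structural summary can surface column names, data types, and missing-value counts before any column-by-column cleaning begins. Below is a minimal sketch of such a pass (an illustrative addition, assuming 'df' has been loaded as above; it was not part of the original datathon workflow):

# In[ ]:

# Print column names, dtypes, and non-null counts for the whole dataframe
df.info()
# Count missing values per column and show the ten most affected columns
print(df.isna().sum().sort_values(ascending=False).head(10))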
# ## Context Based Data Exploration and Cleaning
# As the team members came from diverse technology, history, and archives backgrounds, we could have worked individually throughout or in groups throughout, but we decided on a hybrid setup: analyzing alone and reporting the results back to the group for discussion. The decisions behind the analysis were driven by the data or by historical facts. For instance, to address the 'PriorStatus' feature in the CoF dataset, research was conducted to determine the prior status of those who were categorized as a "Descendant of a white female woman", one of the unique categories shown below (source: Wikipedia - History of slavery in Maryland). This research was beneficial in identifying which group certain observations belong to.

# In[135]:

# df is the data frame variable which stores the entire dataset in a table form.
# The command below converts the specific column or feature 'PriorStatus' to the Categorical type instead of String for manipulation
df["PriorStatus"] = df["PriorStatus"].astype('category')

# In[136]:

# After conversion, let's print the set of distinct categories available for that particular feature of the dataset
print(set(df["PriorStatus"]))

# ![PriorStatus](Pics/CoF_Data_Clean_Prior_Status.PNG "CoF Prior Status")
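# Before consolidating these variants, it can also help to see how often each raw spelling occurs, so that rare one-off transcriptions stand out. A small sketch (an illustrative addition to the original workflow):

# In[ ]:

# Count how many records carry each raw PriorStatus value;
# dropna=False makes missing entries show up as their own row
print(df["PriorStatus"].value_counts(dropna=False))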
# In[137]:

# As can be seen above, there are various spellings of Prior Status that are similar in nature. The value 'nan' in Python means a record has no value.
# The set of commands below forms a component in Python called a function. Functions are a block of commands that can be reused to perform the same action every time they are called.
# The function below converts the input parameter to the right Prior Status category based on conditional statements.
def fix_prior_status(status):
    # initiate variables to hold the literal values to search for
    free = "free"
    born = "born"
    slave = "ave"      # matches both "Slave" and "slave"
    descend = "Descend"
    # Missing values cannot be searched as text, so treat them as "Unknown";
    # this guard is needed because the column contains 'nan' entries
    if pd.isna(status):
        return "Unknown"
    # The conditional statements below use the built-in 'find' function to check whether the status passed in contains the literal checked;
    # if so, the status is consolidated to the value named in the 'return' statement.
    # Note that indentation is a key requirement in Python: each 'return' is indented under its 'if'.
    if status.find(born) != -1:
        return "Born Free"
    # 'elif' chains let Python check each remaining condition in turn
    elif status.find(slave) != -1:
        return "Slave"
    elif status.find(descend) != -1:
        return "Born Free"
    elif status.find(free) != -1:
        return "Free"
    else:
        return "Unknown"

# The command below starts at the left margin, indicating a new set of commands outside of the function, even though it is in the same cell block as shown here.
# The 'apply' function applies the function defined above to each record's Prior Status value in the data frame.
df["PriorStatus"] = df["PriorStatus"].apply(fix_prior_status)
# The built-in 'unique' function prints the distinct values of the transformed or modified Prior Status column
print(df["PriorStatus"].unique())
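# The same consolidation logic can also be expressed in a vectorized form, without a row-by-row function. The sketch below is an illustrative alternative (not part of the original workflow) using numpy's select; the sample spellings are made up for demonstration, since the real column has already been cleaned above:

# In[ ]:

# Vectorized sketch of the same mapping with numpy.select: conditions are
# evaluated in order, mirroring the if/elif chain in fix_prior_status
raw = pd.Series(["Free born", "Slave", "free", "Descendant of a white female woman", None])
conditions = [
    raw.str.contains("born", na=False),
    raw.str.contains("ave", na=False),      # matches "Slave" and "slave"
    raw.str.contains("Descend", na=False),
    raw.str.contains("free", na=False),
]
choices = ["Born Free", "Slave", "Born Free", "Free"]
# Rows matching no condition (including missing values) fall back to "Unknown"
print(np.select(conditions, choices, default="Unknown"))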
# Through researching the literature, conversations with historians and experts in the field, and discussions with archivists from the Maryland State Archives, the team members followed a set of steps in which unique characteristics of a particular feature were identified and shared with the entire group for input before finalizing the results.
# Other examples include identifying issues with columns like the date the CoF was issued and the county, as explained below.
#
# ## Issues with Date Issued for the CoF
# Through healthy discussions of what-if scenarios, with each of us bringing our own expertise into the conversation about this historical data, several insights were gleaned for specific columns that were vital to this project. There were also discussions on how the data should be presented, collected, and analyzed without impacting the sensitivity of the people involved, especially since this collection is unique.
#
# One of these columns is the date, which indicates when the certificate of freedom was prepared and signed. There were different formats of date captured in the transcribed collection, and a number of issues with this field in the original dataset: around 600 records had NULL values, a handful had just a YYYYMM format, and most were in the YYYY-MM-DD or YYYYMMDD format.

# In[138]:

# The command below prints descriptive details of the column 'Date'
df["Date"].describe()

# In[139]:

# The command below counts the number of null or na values in the 'Date' column of the data frame
df["Date"].isna().sum()

# In[140]:

# The command below displays an array of unique date values in the 'Date' column
df["Date"].unique()

# In[141]:

# The command below displays a specific record that was identified as erroneously entered. The inner command 'df[]' first converts the 'Date' feature to the 'String' data type,
# then uses a built-in function to filter the records that match the supplied criteria; the outer 'df[]' displays the filtered records from the inner dataframe.
df[df['Date'].astype(str).str.strip() == "184006"]

# As can be seen above, there are different formats for the date column, some with a missing day, etc. Some of these were manually verified for accuracy by checking the scanned documents from the MSA database, as shown below.
# In two of the instances, seen below, the day of issue was not legible or visible, so the MSA transcriber may not have been able to record it. For c290 page 224 - Jeremiah Brown, only the month and year were captured on the original CoF itself.
#
# ![DateIssue1](Pics/CoF_Data_Clean_Empty_Date1.PNG "CoF Date Issue 1")
# ![DateIssue2](Pics/CoF_Data_Clean_Empty_Date1.PNG "CoF Date Issue 2")
#
# Another instance of a data entry error was c290 page 185, Charles W Jones, shown below, with the date captured as 1840516 instead of 18400516.
#
# ![DateIssue3](Pics/CoF_Data_Clean_Incorrect_Date.PNG "CoF Date Issue 3")

# In[142]:

# The command below replaces all null or nan values with the literal 'None' for ease of manipulation later in the process
df["Date"] = df["Date"].fillna('None')
df["Date"].unique()

# In[143]:

# The command below creates a new column 'DateFormatted' on the fly (one of the conveniences of pandas) and fills it with the results of transforming the 'Date' column
# with the 'to_datetime()' function; passing the parameter errors='coerce' converts all unparseable date values into the missing-date marker 'NaT'
df['DateFormatted'] = pd.to_datetime(df["Date"], errors="coerce")

# In[144]:

# The command below prints the unique converted date values from the newly created column, displaying 'NaT' for erroneous date values
df["DateFormatted"].unique()

# In[145]:

# The command below prints a sample of the 'Date' and 'DateFormatted' columns side by side to show how the original field values were transformed to a proper date
# format, with the bad values given a 'NaT'
df[['Date', 'DateFormatted']]

# In[146]:

x = 0
bad_date = []
# The loop below processes each value of the new column 'DateFormatted' to check for the invalid-value marker 'NaT'; if found, it picks up the original
# value from the 'Date' column and appends it to a list. Once all the records are checked, it prints the unique values of this list using the 'set' function,
# the total number of bad records, and the number of unique bad values.
for i in range(len(df['DateFormatted'])):
    if pd.isna(df['DateFormatted'][i]):
        bad_date.append(df['Date'][i])
        x += 1
print(set(bad_date))
print("Number of Bad date records", x)
print("Number of unique items in the Bad date", len(set(bad_date)))
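# The same audit can be expressed without an explicit loop by using a boolean mask: the rows where parsing failed are exactly those where 'DateFormatted' is NaT. A sketch of this equivalent, more idiomatic pandas form (an illustrative addition, not part of the original workflow):

# In[ ]:

# Boolean mask of rows whose dates failed to parse
mask = df['DateFormatted'].isna()
# Original 'Date' strings for those rows
bad = df.loc[mask, 'Date']
print(set(bad))
print("Number of Bad date records", mask.sum())
print("Number of unique items in the Bad date", bad.nunique())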
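# Since some of the bad values are six-digit YYYYMM strings with a legible month and year (like the Jeremiah Brown certificate above), one possible refinement is a second parsing pass that interprets those as the first day of the month. Mapping a partial date to day 1 is an illustrative assumption about desired behavior, not part of the original workflow, so the sketch below does not overwrite 'DateFormatted':

# In[ ]:

# Second-pass parsing sketch: rows whose 'Date' is exactly six digits are
# re-parsed with an explicit YYYYMM format, landing on day 1 of that month
six_digit = df['Date'].astype(str).str.fullmatch(r"\d{6}")
partial = pd.to_datetime(df.loc[six_digit, 'Date'], format="%Y%m", errors="coerce")
# Shown without assigning back to df, so the original workflow below is unchanged
print(partial.head())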
# One important limitation of working with Excel is that dates older than 01/01/1900 are not calculated and translated correctly. Hence, proper formatting of the dates in Python was crucial to this analysis.
#
# ## Issues with the County column where the CoF was issued
# Looking at the scanned copy of the CoF for c290 page 130 for Joseph Caldwell, the county is found to be Talbot in the original document, but it was entered as Baltimore County in the CoF data; the census county, however, was captured correctly as Talbot.
#
# ![CountyIssue](Pics/CoF_Data_Clean_County_Error.PNG "CoF County Error")

# In[147]:

# The command below filters the 'Freed_LastName' and 'Freed_FirstName' columns of the dataset to match the names from the erroneous record
# and prints the transcribed record corresponding to the scanned document above
df[(df["Freed_LastName"] == "Caldwell") & (df["Freed_FirstName"] == "Joseph")]

# In[148]:

# The command below uses the 'loc' function to find the record matching the above criteria and displays its County column
df.loc[((df["Freed_LastName"] == "Caldwell") & (df["Freed_FirstName"] == "Joseph")), 'County']

# In[149]:

# The command below updates the County value to 'TA' based on what was found in the scanned document
df.loc[((df["Freed_LastName"] == "Caldwell") & (df["Freed_FirstName"] == "Joseph")), 'County'] = 'TA'

# In[150]:

# The command below uses the 'loc' function again to display the updated County column for the same record
df.loc[((df["Freed_LastName"] == "Caldwell") & (df["Freed_FirstName"] == "Joseph")), 'County']

# In[151]:

# The command below saves the modified dataframe to a new output csv file, which is used in further processing steps in the next notebook modules
df.to_csv('Datasets/LoS_Clean_Output.csv', index=False)

# # Notebooks
#
# This module is organized into a sequential set of Python notebooks that allow readers to interact with the Legacy of Slavery's Certificates of Freedom collection by exploring, cleaning, preparing, visualizing, and analyzing it from a historical-context perspective.
#
# 2. [Certificates Of Freedom: Context Based Data Preparation](LoS_CoF_Data_Preparation.ipynb)
# 3. [Certificates Of Freedom: Context Based Data Visualization and Analysis](LoS_CoF_Data_Viz.ipynb)