This module is based on a case study involving the "Legacy of Slavery Project" archival records from the Maryland State Archives. The Legacy of Slavery in Maryland is a major initiative of the Maryland State Archives that seeks to preserve and promote the vast universe of experiences that have shaped the lives of Maryland's African American population. Over the last 18 years, some 420,000 individuals have been identified, and the data has been assembled into 16 major databases. The DCIC has now partnered with the Maryland State Archives to help interpret this data and reveal hidden stories.
As a team of students participating in a two-day Datathon held at the Maryland State Archives in 2019, we worked with the historical dataset collection "Certificates of Freedom" from the Maryland State Archives' compiled database.
We organized the data exploration and cleaning around David Weintrop's model of computational thinking and worked from a [questionnaire](TNA_Questionnaire.ipynb) developed by The National Archives, London, UK, to document each step of our process.
A step-by-step understanding of applying computational thinking practices to a digitally archived Maryland State Archives Legacy of Slavery dataset collection
We followed a case study methodology for this project, with the objective of exploring, analyzing, and visualizing the dataset collections downloaded from the Maryland State Archives database. As the collections were available as downloadable csv files, our group's technical task was to identify the right tools for consuming the csv files for exploratory analysis, cleaning, and visualization. Below are the steps of the data exploration and cleaning process, using the Python programming language on the Certificates of Freedom dataset.
The data for this project was originally crawled from the Maryland State Archives Legacy of Slavery collections. The data source is included in this module as a comma-separated values file. The link below will take you to a view of the data file:
To process a csv file in Python, one of the first steps is to import a Python library called 'pandas', which helps the program convert the csv file into a dataframe, commonly called a table format. We import the library into the program as below:
# Importing libraries - pandas is used for data science/data analysis and machine learning tasks, and numpy provides support for multi-dimensional arrays
import pandas as pd
import numpy as np
Using the pandas library, create a new dataframe named 'df' using the read_csv function as shown below. After creating the dataframe, use the print() function to display the first 10 rows loaded into the dataframe.
# creating a data frame, which is a table-like data structure that can hold data read from csv files, flat files, and other delimited data.
# Converting input data into a data frame is a key starting point for big data analytics with the Python programming language
# Below command reads in the Certificates of Freedom dataset, which should already be available in a folder called 'Datasets' as LoS_CoF.csv
df = pd.read_csv("Datasets/LoS_CoF.csv")
# Below command prints the first 10 records after the data is copied from the csv file
print(df.head(10))
   DataID  DataItem County Owner_FirstName Owner_LastName  \
0  AR7-46         1     AA             Ann      Ailsworth
1  AR7-46         2     AA             Ann      Ailsworth
2  AR7-46         3     AA             Ann      Ailsworth
3  AR7-46         4     AA         William      Alexander
4  AR7-46         5     AA          Thomas          Allen
5  AR7-46         6     AA          Thomas          Allen
6  AR7-46         7     AA           James        Alleson
7  AR7-46         8     AA            Mary         Alwell
8  AR7-46         9     AA            Mary        Armiger
9  AR7-46        10     AA            Mary         Atcock

               Witness        Date Freed_FirstName Freed_LastName Alias ...  \
0                  NaN         NaN          Keziah       Cromwell   NaN ...
1     Zachariah Duvall  1811-06-24          Resiah       Cromwell   NaN ...
2       Jenifer Duvall  1811-06-24          Kesiah       Cromwell   NaN ...
3                  NaN  1815-03-28           Handy       McCeomey   NaN ...
4                  NaN  1837-07-10           Nancy          Ennis   NaN ...
5                  NaN  1837-08-03             Jim         Sharpe   NaN ...
6                  NaN  1826-10-28           Belly            NaN   NaN ...
7                  NaN  1844-11-08          Howard          Davis   NaN ...
8                  NaN  1819-01-27         Abigail            NaN   NaN ...
9  Jacob Franklin, Jr.  1812-12-30             Ned            NaN   NaN ...

  Folder Document     Page Entry DatasetName  \
0    NaN      NaN  42686.0  12.0          FF
1    NaN      NaN     24.0   3.0          FF
2   55.0      NaN      NaN   NaN          FF
3    NaN      NaN     50.0   2.0          FF
4    NaN      NaN    257.0   1.0          FF
5    NaN      NaN    257.0   2.0          FF
6    NaN      NaN    242.0   1.0          FF
7    NaN      NaN    372.0   1.0          FF
8    NaN      NaN    126.0   2.0          FF
9    NaN      NaN     31.0   3.0          FF

                                               Notes isWorking isError  \
0                                                NaN         0       0
1                                                NaN         0       0
2               Freed by will of Mrs. Ann Ailsworth.         0       0
3  Freed by manumission, dated 27 March 1815. Rai...         0       0
4  Freed by petition to Anne Arundel County Court...         0       0
5  Freed by petition to Anne Arundel County Court...         0       0
6  Freed by manumission, dated 28 Oct 1826. Raise...         0       0
7  son of Nelly. Freed by manumission, dated 12 A...         0       0
8  along with Richard G. Stetton. Freed by manumi...         0       0
9                                                NaN         0       0

  ChangeDate CreateDate
0    39:20.3    39:20.3
1    39:20.3    39:20.3
2    39:20.3    39:20.3
3    39:20.3    39:20.3
4    39:20.3    39:20.3
5    39:20.3    39:20.3
6    39:20.3    39:20.3
7    39:20.3    39:20.3
8    39:20.3    39:20.3
9    39:20.3    39:20.3

[10 rows x 28 columns]
We anticipated errors and misinterpretations of names, numbers, etc., since this database was mostly transcribed by hand from the physical or scanned copies of the Certificates of Freedom. Our approach was to explore and clean the data column by column, mostly using the text and numerical operation functions of the Python programming language. We first looked at the dataset holistically, identifying features that would allow us to generate meaningful stories or visualizations. Once the feature list was confirmed, we analyzed each feature in detail to document bad data and, where possible, eliminate it, modify data types, or exclude invalid values from the final visualizations.
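A quick way to take that first holistic look is to survey the columns, their data types, and their missing values. The sketch below is our illustration rather than part of the original analysis; it assumes the dataframe 'df' has been loaded as shown above, and the exact counts will depend on the dataset version.

# Survey the columns, their data types, and their non-null counts
df.info()
# Rank the columns by their number of missing values to spot sparse features
print(df.isna().sum().sort_values(ascending=False))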
As the team members came from diverse technology, history, and archivist backgrounds, we could have worked individually throughout or in groups throughout, but we decided on a hybrid setup of analyzing alone and reporting the results back to the group for discussion. Decisions about the analysis performed on the dataset were driven by the data or by historical facts. For instance, to address the Prior Status column of the CoF dataset, research was conducted to determine the prior status of those who were categorized as a "Descendant of a white female woman," as shown below in the set of unique categories (source: Wikipedia, History of slavery in Maryland). This research was beneficial in identifying which group certain observations belonged to.
# df is the data frame variable which stores the entire dataset in table form. Below command converts the specific column or feature 'PriorStatus' to the Categorical type instead of String, for easier manipulation
df["PriorStatus"]=df["PriorStatus"].astype('category')
# After conversion, let's print the unique categories available for that particular feature from the dataset
print(set(df["PriorStatus"]))
{nan, 'slave', 'Free', 'Slave ', 'Unknown; Free Born', 'Free; Slave', 'Free born', 'Slave; Slave', 'Enslaved', 'born free', 'Freeborn', 'Free ', 'Born Free', '?', 'BornFree', 'free born', 'Unknown', 'Descendant of a white female woman', 'Slave', 'Unknown; Slave', 'Free born ', 'John', 'Free Born', 'S;ave', 'Born free'}
# As can be seen above, there are various Prior Status values that are similar in nature. The value 'nan' in Python means the field has no value.
# The set of commands below forms a Python function. Functions are blocks of commands that can be used to perform the same action every time they are called.
# The function below converts the input parameter to the right Prior Status category based on a series of conditional statements.
def fix_prior_status(status):
    # initiate variables to hold the literal values to search for
    free = "free"
    born = "born"
    slave = "ave"  # also matches the transcription typo 'S;ave'
    descend = "descend"
    # missing values (nan) are passed through unchanged
    if pd.isna(status):
        return status
    # lowercase the value so the checks below are case-insensitive, since the raw
    # data mixes spellings such as 'Born Free', 'born free', and 'BornFree'
    status = str(status).lower()
    # conditional statements use the in-built 'find' function to check if the prior
    # status contains the literal, and if so the status is replaced by the value
    # in the 'return' statement
    if status.find(born) != -1:
        # it should also be noted that indentation is a key requirement with Python:
        # the 'return' statement must be indented under the 'if'
        return "Born Free"
    else:
        # nested ifs are possible in Python to conditionally control the else logic
        if status.find(slave) != -1:
            return "Slave"
        else:
            if status.find(descend) != -1:
                return "Born Free"
            else:
                if status.find(free) != -1:
                    return "Free"
                else:
                    return "Unknown"
# Below command starts at the beginning indentation level, indicating a new set of commands outside of the function, even though it is in the same cell block as shown here.
# The 'apply' function applies the function defined above to each record's Prior Status field value in the data frame.
df["PriorStatus"] = df["PriorStatus"].apply(fix_prior_status)
# The 'unique' in-built function prints out the distinct values of the transformed or modified prior status of the data frame
print(df["PriorStatus"].unique())
['Slave' 'Born Free' nan 'Unknown' 'Free']
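As a quick check on the cleanup, the in-built value_counts function can tally how many records fall into each of the remaining categories; a minimal sketch (the counts themselves depend on the dataset version, so we do not reproduce them here):

# Tally the records per cleaned category, keeping missing values visible
print(df["PriorStatus"].value_counts(dropna=False))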
Through researching the literature, conversations with historians and experts in the field, and discussions with archivists from the Maryland State Archives, the team followed a set of steps in which unique characteristics of a particular feature were identified and shared with the entire group for input before the results were finalized.
Other examples include identifying issues with columns like the date the CoF was issued and the county, as explained below:
Through healthy discussions of what-if scenarios, since most of the data was historical and each of us brought our own expertise into the conversations, several insights were gleaned for specific columns that were vital to this project. There were also discussions on how the data should be presented, collected, and analyzed without impacting the sensitivity of the people involved, especially since this collection was unique.
One of them was the date: different formats of dates were captured in the transcribed collection. This field indicates the date when the certificate of freedom was prepared and signed, and there were a number of issues with it in the original dataset. Among the different date formats, around 600 records had a NULL value, a handful had just a YYYYMM format, and most were in YYYY-MM-DD or YYYYMMDD format.
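To quantify how many records fall into each of these formats, one could classify the raw strings with regular expressions, as in the sketch below. The patterns and the helper name classify_date_format are our assumptions based on the formats described above, not part of the original analysis.

import re

def classify_date_format(value):
    # Classify a raw 'Date' string into one of the observed formats
    if pd.isna(value):
        return "NULL"
    value = str(value).strip()
    if re.fullmatch(r"\d{4}-\d{2}-\d{2}", value):
        return "YYYY-MM-DD"
    if re.fullmatch(r"\d{8}", value):
        return "YYYYMMDD"
    if re.fullmatch(r"\d{6}", value):
        return "YYYYMM"
    return "other"

# Count how many records fall into each format
print(df["Date"].apply(classify_date_format).value_counts())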
# Below command prints out the descriptive details of the column 'Date'
df["Date"].describe()
count          23057
unique          9956
top       1832-05-28
freq             296
Name: Date, dtype: object
# Below command lists the number of null or na values in the 'Date' column of the data frame
df["Date"].isna().sum()
598
# Below command displays an array of unique date values in the 'Date' column
df["Date"].unique()
array([nan, '1811-06-24', '1815-03-28', ..., '18430912', '18430913', '18430916'], dtype=object)
# Below command displays the specific records that were identified as erroneously entered. The inner command 'df[]' first converts the 'Date' feature to a 'String' data type, and then uses another
# in-built function to filter the records that match the supplied criteria, and the outer 'df[]' displays the results of the filtered records from the inner dataframe.
df[df['Date'].astype(str).str.strip()=="184006"]
 | DataID | DataItem | County | Owner_FirstName | Owner_LastName | Witness | Date | Freed_FirstName | Freed_LastName | Alias | ... | Folder | Document | Page | Entry | DatasetName | Notes | isWorking | isError | ChangeDate | CreateDate
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
23307 | AR7-46 | 23310 | BA | Geo | Gillingham | NaN | 184006 | Jeremiah W. | Brown | Jerry | ... | NaN | NaN | 224.0 | 5.0 | FF | Freed by manumission, dated 15 June 1824, reco... | 0 | 0 | 37:45.8 | 03:44.1 |
23308 | AR7-46 | 23311 | BA | NaN | NaN | NaN | 184006 | Rachael | Brown | NaN | ... | NaN | NaN | 224.0 | 6.0 | FF | NaN | 0 | 0 | 37:45.8 | 05:54.2 |
2 rows × 28 columns
As can be seen above, there are different formats in the date column, some with the day missing, etc. Some of these were manually verified for accuracy by checking the scanned documents from the MSA database, as shown below:
In two of the instances, as seen below, the day of issue was not legible or visible, so the MSA transcriber may not have been able to record it. Only the month and year, with no day, were captured on the original CoF itself for c290 page 224 - Jeremiah Brown.
Another instance of a data entry error was for c290 page 185, Charles W. Jones, as shown below, with the date captured as 1840516 instead of 18400516.
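Once such a one-off transcription error has been verified against the scanned document, it can be patched in place with a targeted assignment; a minimal sketch, assuming the verified value is 18400516:

# Replace the verified mistranscription '1840516' with the correct value
# '18400516' read from the scanned certificate
df.loc[df["Date"].astype(str).str.strip() == "1840516", "Date"] = "18400516"

If applied, this correction should run before the to_datetime() conversion below, so that the record parses to a valid date rather than NaT.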
# Below command replaces all Null or nan values with the literal string 'None' for ease of manipulation later in the process
df["Date"]=df["Date"].fillna('None')
df["Date"].unique()
array(['None', '1811-06-24', '1815-03-28', ..., '18430912', '18430913', '18430916'], dtype=object)
# Below command creates a new column 'DateFormatted' on-the-fly (one of the cool things I like about Python) and copies into it the results of transforming the 'Date' column with the
# function 'to_datetime()', passing in the parameter errors='coerce', which converts all erroneous date values into the missing-value marker 'NaT' (Not a Time)
df['DateFormatted'] = pd.to_datetime(df["Date"], errors="coerce")
# Below command prints the unique converted date values from the newly created column and also displays 'NaT' for erroneous date values.
df["DateFormatted"].unique()
array([ 'NaT', '1811-06-24T00:00:00.000000000', '1815-03-28T00:00:00.000000000', ..., '1843-08-22T00:00:00.000000000', '1843-08-24T00:00:00.000000000', '1843-09-12T00:00:00.000000000'], dtype='datetime64[ns]')
# Below command prints a sample of the 'Date' and new 'DateFormatted' columns side-by-side to show how the original field values were transformed to a proper date
# format and the bad values are given a 'NaT'
df[['Date','DateFormatted']]
 | Date | DateFormatted
---|---|---
0 | None | NaT |
1 | 1811-06-24 | 1811-06-24 |
2 | 1811-06-24 | 1811-06-24 |
3 | 1815-03-28 | 1815-03-28 |
4 | 1837-07-10 | 1837-07-10 |
... | ... | ... |
23650 | 18430826 | 1843-08-26 |
23651 | 18430905 | 1843-09-05 |
23652 | 18430912 | 1843-09-12 |
23653 | 18430913 | 1843-09-13 |
23654 | 18430916 | 1843-09-16 |
23655 rows × 2 columns
x = 0
bad_date=[]
# The loop below processes each value of the new column 'DateFormatted' to check for the invalid value marker 'NaT'; if it is found, the loop picks up the original
# value from the 'Date' column and appends it to a list. Once all the records are checked, the unique values of this list are printed using the 'set' function, along with the total number of
# bad records
for i in range(len(df['DateFormatted'])):
    if pd.isna(df['DateFormatted'][i]):
        bad_date.append(df['Date'][i])
        x += 1
print(set(bad_date))
print("Number of Bad date records", x)
print("Number of unique items in the Bad date", len(set(bad_date)))
{'1859-00-00', '1830-04-31', '1846-05-00', '1842-00-00', '1802-30-30', '1819-03-00', '1836-09-31', '1855-01-00', '1819-07-00', 'None', '1840516', '1816-19-06', '1854-00-00', '1819-15-04', '1856-01-00', '1819-15-10', '1845-19-05', '1858-09-31', '1860-09-00', 'Kamer', '1829-04-31', '1855-00-00', '1858-08-00', '18404024', '1823-00-00', '189390417', '1831-04-31', '1853-00-00', '1815-16-20', '1807-07-33', '1854-08-00', '1819-04-00', '1816-17-23', '1847-00-00', '1821-00-00', '1856-02-30', '184006', '1822-00-00', '1819-00-00', '1819-12-00', '184004', '1858-12-00'}
Number of Bad date records 657
Number of unique items in the Bad date 42
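The same result can also be obtained without an explicit loop, using pandas' boolean indexing; a sketch of a vectorized equivalent:

# Select the original 'Date' strings wherever the parsed value is NaT
bad = df.loc[df["DateFormatted"].isna(), "Date"]
print(set(bad))
print("Number of Bad date records", len(bad))
print("Number of unique items in the Bad date", bad.nunique())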
One important limitation of working with Excel is that dates older than 01/01/1900 are not calculated and translated correctly. Hence, proper formatting of the dates was crucial to this analysis.
By looking at the scanned copy of the CoF for c290 page 130 for Joseph Caldwell, the county is found to be Talbot from the original record, but it was entered as Baltimore County in the CoF data, while the census correctly captured Talbot County.
# Below command filters on the 'Freed_LastName' and 'Freed_FirstName' columns of the dataset to match the names from the erroneous record
# and prints the transcribed record corresponding to the scanned document above
df[(df["Freed_LastName"]=="Caldwell") & (df["Freed_FirstName"]=="Joseph")]
 | DataID | DataItem | County | Owner_FirstName | Owner_LastName | Witness | Date | Freed_FirstName | Freed_LastName | Alias | ... | Document | Page | Entry | DatasetName | Notes | isWorking | isError | ChangeDate | CreateDate | DateFormatted
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
22633 | AR7-46 | 22635 | BA | NaN | NaN | Oakley Haddoway | 189390417 | Joseph | Caldwell | NaN | ... | NaN | 130.0 | 3.0 | FF | Raised in Talbot County. Thos kell, clerk | 0 | 0 | 37:47.1 | 38:31.2 | NaT |
1 rows × 29 columns
# Below command uses the 'loc' function to find the index of the record belonging to the above criteria and displays the county column
df.loc[((df["Freed_LastName"]=="Caldwell") & (df["Freed_FirstName"]=="Joseph")),'County']
22633    BA
Name: County, dtype: object
# Below command updates the County value to 'TA' based on what was found from the document
df.loc[((df["Freed_LastName"]=="Caldwell") & (df["Freed_FirstName"]=="Joseph")),'County'] ='TA'
# Below command uses the 'loc' function to find the index of the record belonging to the above criteria and displays the updated county column
df.loc[((df["Freed_LastName"]=="Caldwell") & (df["Freed_FirstName"]=="Joseph")),'County']
22633    TA
Name: County, dtype: object
# Below commands save the modified dataframe into a new output csv file, which can be used in further steps of processing in the next notebook modules.
dfo = pd.DataFrame(df)
dfo.to_csv('Datasets/LoS_Clean_Output.csv', index=False)
The module below is organized as a sequential set of Python notebooks that allow the reader to interact with the Legacy of Slavery's Certificates of Freedom collection by exploring, cleaning, preparing, visualizing, and analyzing it from a historical context perspective.