Privacy in Digitally-Driven Projects in Forensic Anthropology

Today, the majority of research and daily practice in Forensic Anthropology has a digital component. When reviewing grant proposals for forensic research, funding institutions such as the National Institute of Justice or the National Science Foundation generally favor projects whose deliverables involve large-scale data collection and sharing through digital platforms. In daily practice, forensic anthropologists identify individuals primarily by using software that compares a target individual to large reference datasets.

Forensic Anthropology is unique among the subdisciplines of Anthropology in that our data require individualizing information to answer many of our research questions. These data are typically shared among practitioners and institutions around the world, either for collaborative research or for application in casework. As a result, the data are frequently hosted on open-source platforms or shared and stored digitally, bringing several privacy issues to the forefront. These issues are of utmost concern in forensic contexts given the sensitivity of the data, specifically the personal information associated with each entry, such as age, birth year, sex, ancestry, and residential history. This information could be compiled in a database as simple line entries or could include associated images of skeletal material and/or radiographs.

The first privacy issue is the risk of exposing personal data. Forensic anthropologists have experience dealing with these concerns and take security measures to protect the individuals in their datasets (e.g., anonymizing the individuals, following clinical procedures for handling sensitive patient information, and completing Institutional Review Board (IRB) training and certification). Additionally, the subjects in many of these studies are deceased and cannot consent to, or object to, how or why their information is being shared. The rules governing who may be included as a subject, and how, vary depending on where you are working, but the privacy concerns persist.

For example, I am part of a research team working on a project that uses dental radiographs to estimate age in juveniles from dental development scores. The goal of this project is to create a large, diverse reference dataset to improve juvenile age estimation. This requires digitally sharing large radiographic collections, with associated ancestry and age information, among contributors and between the two universities housing the research team. To protect sensitive information, the data are anonymized prior to contribution, IRB approval was secured, and computer scientists assist with securely storing the radiographs.

A second concern is accessibility of the data. Here I am speaking primarily of the reference datasets used to establish biological profiles of unidentified individuals (i.e., age, sex, ancestry, and stature estimations). Most of our current standard practices rely on software that compares our unknowns to large reference datasets and establishes biological profiles on a statistical basis. But who has access to these data if they are freely available? And how might different users apply the data, or twist them beyond their intended purpose? Examples include someone in a forensic science discipline using the data without proper training, or ancestry data being used for racial discrimination, among many other possibilities. Museum curators are no strangers to this matter: museums are frequently confronted by members of the public who claim to have accessed museum information and constructed their own narrative for the material, urging curators to change the official narrative to reflect that perspective.

As I stated in a previous post, I work on a project collecting and sharing large reference datasets of cranial morphological trait data for the estimation of ancestry. This is another project in which large data exchanges, including ancestry, age, sex, and birth year, occur across digital space on a regular basis. When providing data to researchers, we remove catalogue numbers prior to release so individuals cannot be identified or traced; however, we still receive that data digitally. I believe ancestry data should be released only by request and approval, rather than made publicly available, given the potential sociopolitical damage if the data were manipulated by outside users with ill intentions. This project does require formal requests, with data released only upon approval. That said, this restriction, along with anonymizing the data prior to release, may also limit scientific advancement for this project in some ways.
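The de-identification step described above, removing catalogue numbers and other direct identifiers before a dataset is released, can be sketched in a few lines. This is only a minimal illustration using the Python standard library; the field names (`catalogue_number`, `age`, `sex`, `birth_year`) are hypothetical stand-ins, not the project's actual schema, and a real release workflow would involve more than column removal.

```python
import csv
import io

# Hypothetical identifying fields to strip before release; the real
# dataset's schema and identifier list would differ.
IDENTIFYING_FIELDS = {"catalogue_number"}

def anonymize_rows(rows):
    """Return copies of each record with identifying columns removed."""
    return [
        {key: value for key, value in row.items() if key not in IDENTIFYING_FIELDS}
        for row in rows
    ]

# An in-memory CSV standing in for a contributed dataset.
raw = "catalogue_number,age,sex,birth_year\nA-1042,34,F,1971\n"
rows = list(csv.DictReader(io.StringIO(raw)))
clean = anonymize_rows(rows)
print(clean)  # catalogue numbers are gone; age, sex, and birth year remain
```

Note that dropping direct identifiers is only a first step: combinations of indirect fields such as birth year, sex, and ancestry can sometimes still single out an individual, which is one reason request-and-approval release makes sense alongside anonymization.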

This post only begins to scratch the surface of privacy issues in digitally driven forensic anthropology projects. Ultimately, I argue that these privacy considerations need to be integrated into future research proposals, given that the majority of forensic anthropology research now has large digital components. This could take the form of funding institutions requiring a code-of-ethics section that explicitly states how the research team intends to handle the privacy concerns specific to their project, or requiring computer scientists to be part of the research team when there is a large digital component. Such requirements could also encourage researchers to consider the advantages and disadvantages of various methods of securing privacy, what the tradeoffs are, and whom the project impacts on a larger scale, rather than focusing only on the benefits to their discipline. Asking these questions as part of the research design will help ensure that forensic anthropologists (and other disciplines) generate methods and datasets from an ethical standpoint.

