AI can reconstruct ancient ruins, virtually bringing the past into focus
Modeling the evolution of urbanization over the course of human history unlocks key aspects of human civilization. Understanding how ancient urban settings were designed and built can inform decisions about future urban development. Using 3D images captured by drones, an artificial intelligence-based framework will fill in the missing pieces of urban models from the remnants of ancient ruins.
As part of Purdue’s ROSETTA Initiative, interdisciplinary researchers at Purdue received a $1 million National Science Foundation award to develop artificial intelligence (AI) for reconstructing archaeological sites. Under the ROSETTA Initiative, Purdue faculty led by principal investigator (PI) Daniel Aliaga, Associate Professor in the Department of Computer Science, are joined by Ian Lindsay, Associate Professor in the Department of Anthropology, and Dr. Rajesh Kalyanam of Purdue Research Computing. The team also includes Steven Wernke, Associate Professor of Anthropology at Vanderbilt University; Parker VanValkenburgh, Assistant Professor of Anthropology at Brown University; and ROSETTA funding director Sorin Adam Matei, Professor and Associate Dean of Research and Graduate Education at Purdue University.
The project will use satellite and drone technology to survey ancient urban sites, saving time and minimizing archaeological digs, which can destroy parts of a site in the process of revealing its structure. Because only sparse remnants of ancient structures survive erosion and destruction over time, the scanned structures become the sole data points used to reconstruct 3D virtual models of the urban developments. The funded project plans to recreate multiple ancient sites from the late Pre-Hispanic and Colonial-period Andes and from the Bronze and Iron Age South Caucasus.
Once the drone images are recorded, the data is used to perform deep-learning-driven segmentation, classification, and completion, allowing researchers to see how ancient urban areas once appeared.
At this level of computing, inference approaches show significant promise, but they struggle with relatively sparse data and obscured structures. Aliaga says the team gets around this problem by exploiting anthropogenic (human-derived) design.
“Urban environments typically exhibit human-made geometric features such as right angles, straight edges, parallelism, and/or symmetries," Aliaga said. "We can use these predictable designs to develop the atomic parts and elemental rules in the initial build.” For example, the following sites clearly show human-made features: a) Kuala Lumpur, Malaysia; b) Sousse, Tunisia; c) Mohenjo-daro, Pakistan; d) Mawchu Llacta, Peru; e) Kuelap, Peru; f) Aparani Berd, Armenia. Urban structures of today (e.g., a-b) have clear structure, but so do ancient cities (e.g., c-f).
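To illustrate the idea of anthropogenic geometric regularity, the sketch below scores a 2D building footprint by how many of its corners are close to right angles. This is a minimal, hypothetical example, not the project's actual code: the function names, the polygon representation, and the scoring heuristic are all assumptions made for illustration.

```python
import math

def edge_angles(poly):
    """Interior corner angles (degrees) of a closed 2D polygon,
    given as a list of (x, y) vertices in order."""
    n = len(poly)
    angles = []
    for i in range(n):
        ax, ay = poly[i - 1]          # previous vertex
        bx, by = poly[i]              # corner being measured
        cx, cy = poly[(i + 1) % n]    # next vertex
        v1 = (ax - bx, ay - by)
        v2 = (cx - bx, cy - by)
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        # Clamp to guard against floating-point drift outside [-1, 1]
        angles.append(math.degrees(math.acos(max(-1.0, min(1.0, dot / norm)))))
    return angles

def regularity_score(poly, tol=10.0):
    """Fraction of corners within `tol` degrees of a right angle --
    a crude proxy for human-made design."""
    angles = edge_angles(poly)
    return sum(1 for a in angles if abs(a - 90.0) <= tol) / len(angles)

# A rectangular footprint scores 1.0; an irregular shape scores lower.
square = [(0, 0), (4, 0), (4, 3), (0, 3)]
print(regularity_score(square))  # 1.0
```

A real pipeline would detect such regularities in noisy 3D scan data rather than clean polygons, but the same prior (right angles, parallelism, symmetry) guides the reconstruction.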
This project seeks to develop AI technologies that infer the original structure from sparse remnants and recreate the past. The project will focus on sparsely preserved structures, such as those in Peru and Armenia.
Aliaga adds, “With these approaches, we will generate sufficient synthetic data to perform deep-learning-driven segmentation, classification, and completion. Essentially, we will be able to predict what the structure looked like using only minimal remains of the building structures.”
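The synthetic-data idea can be sketched as follows: procedurally generate complete structures, then artificially "erode" them to mimic sparse remnants, yielding (partial, complete) training pairs for a completion network. This is a toy sketch under stated assumptions; the grid representation and every function name here are hypothetical illustrations, not the team's actual method.

```python
import random

def make_footprint(w=16, h=16):
    """Synthetic 'complete' structure: a filled rectangle on a binary grid."""
    grid = [[0] * w for _ in range(h)]
    x0, y0 = random.randint(0, w // 2), random.randint(0, h // 2)
    x1, y1 = x0 + random.randint(3, w // 2), y0 + random.randint(3, h // 2)
    for y in range(y0, min(y1, h)):
        for x in range(x0, min(x1, w)):
            grid[y][x] = 1
    return grid

def erode(grid, keep=0.3, seed=None):
    """Simulate erosion/destruction: randomly drop occupied cells,
    keeping roughly a `keep` fraction of the original structure."""
    rng = random.Random(seed)
    return [[c if c and rng.random() < keep else 0 for c in row] for row in grid]

# Each (partial, complete) pair becomes one training example for a
# completion model; the complete grid is the ground-truth label.
complete = make_footprint()
partial = erode(complete, keep=0.3)
```

In practice the structures would be full 3D models generated by procedural rules rather than 2D grids, but the pairing of degraded input with known ground truth is what makes supervised completion possible.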
As foundational work, the researchers previously developed procedural inference technology, using deep visual computing, that can recreate an entire building from a single aerial view, including sides that are not seen and façade fragments that are occluded.
This is a novel software approach to reconstructing ancient urban structures from minimal information. As a first domain application, the project can help computational archaeologists produce potential 3D reconstructions of historical urban sites, greatly assisting the analysis of ancient urbanism. The aim is also to produce a taxonomy and temporal sequencing of the observed structures.
“We anticipate the computational methodology can be re-tooled to assist in other domains that are likewise limited to sparse data points of an underlying structured geometry," said Aliaga.
Within the profession of anthropology, this could mean studying civilizations outside the current study's parameters. This type of development could also aid multiple professions, including urban planning, environmental modeling, and more.
Professor Daniel Aliaga’s research is primarily in the area of 3D computer graphics but overlaps with computer vision and visualization, with strong multidisciplinary collaborations outside of computer science. His research activities are divided into three groups: a) his pioneering work in the multidisciplinary area of inverse modeling and design; b) his first-of-its-kind work in codifying information into images and surfaces; and c) his compelling work on a visual computing framework, including high-quality 3D acquisition methods. His inverse modeling and design work is particularly focused on digital city planning applications that provide innovative "what-if" design tools, enabling urban stakeholders from cities worldwide to automatically integrate, process, analyze, and visualize the complex interdependencies between urban form, function, and the natural environment.
NSF Funding: https://nsf.gov/awardsearch/showAward?AWD_ID=2107096&HistoricalAwards=false
Writer: Emily Kinsell, email@example.com
Source: Daniel Aliaga, firstname.lastname@example.org