It has been almost exactly three months since I wrote my last blog post, but that doesn’t mean I haven’t been working on the project or learning about the foundational steps we need to take for it to succeed.
So here’s a quick update about my work on the project:
During Spring Term, Dave Pfaff taught me the basic processes of photogrammetry. I created my first two point-cloud models in PhotoScan, both successful.
While you can never have too many photos, you can have too few. And while I have never had a model fail on me, it would be a useful learning experience to know more about the “line” that separates a working model from a failed one.
There is a tradeoff when using large datasets. While they produce denser, more “complete” point clouds, the added data dramatically lengthens point-cloud processing times. My first model came from a small tabletop photogrammetry session of some 60 photos and took 15 minutes to process into a dense point cloud. My second model, based on 220 photos of a mausoleum, took 14 hours. So, to be terribly brief: photo processing requires patience (and monopolization of the computer lab when creating multiple models).
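To put a rough number on how steep that tradeoff is, here is a back-of-the-envelope sketch using the two processing times above. The power-law fit is purely my assumption for illustration, not anything PhotoScan documents about its internals:

```python
import math

# Two observed data points from the sessions described above:
#   60 photos  -> 15 minutes for a dense point cloud
#   220 photos -> 14 hours = 840 minutes
photos = (60, 220)
minutes = (15, 840)

# Assume (hypothetically) that processing time scales as a power law,
# t ~ n^k, and solve for the exponent k from the two points.
k = math.log(minutes[1] / minutes[0]) / math.log(photos[1] / photos[0])

print(f"photo count grew {photos[1] / photos[0]:.1f}x, "
      f"time grew {minutes[1] / minutes[0]:.0f}x")
print(f"implied scaling exponent k = {k:.1f}")
```

In other words, roughly quadrupling the photo count multiplied the processing time by more than fifty, which is why we batch the heavy processing rather than running it between photography sessions.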
Jumping ahead, I’m currently working in Florence alongside Aidan Valente.
We have divided our work schedules into site surveys, photography, library research, and uploading images to a cloud database.
So far I would say the project is going well (unexpected hiccups included). As of now, I have photographed the Bigallo, Orsanmichele’s original niche statues (period relevant), and some architectural features on display in the Convent of San Marco. Lighting is always an issue, but for the most part the pictures are coming out well.
The question is whether or not the pictures will produce effective point clouds.
And since the Bigallo is the focal point of our summer’s work, I have halted ongoing photography to upload our complete Bigallo dataset. That way Dave can batch-process it to determine whether we have a working point-cloud model and whether (and where) we need to take more photos, while we shift our focus toward the resources available in the Kunsthistorisches Institut library. In this manner, we can better evaluate our workflow and systematize it for the upcoming school year.
The greatest feat of our work this summer is capturing enough important information to flesh out and better define the scope of our project. While we will never be at a loss for viable work, the sheer wealth of available resources can be inimical to the project’s completion. So much of the beginning of this project is about finding our footing. What really matters? How do we avoid being overwhelmed?
To start, everyone working on this project needs to be on the same page.
As far as I’m concerned, the 2D map is a dumping ground for the 3D map - a sort of footnote to the digital conservation of cultural heritage - albeit a heavily contextualized and curated one.
While a powerful tool for creating highly detailed models, photogrammetry introduces its own set of problems. How can photographs of a building taken in 2017, with all its accumulated historical changes, be used to simulate Florence as it was in 1492? Do we use photogrammetry at all, then? I think the answer is yes.
I don’t see this project as concluding in a one-to-one recreation of 1492 Florence based upon architectural paintings, research, and best guessing. We are not video game creators.
I see the import of this project as the creation of a Florence with art in situ, where architectural changes are contextualized by didactic information. Our simulacrum stimulates thought about how objects and spaces interact(ed), but concedes its own anachronisms. There are always additions, but there are also frescoes, paintings, floorplans, etc. (separate from art in situ) to frame these architectural changes.
To progress effectively, I suggest translations (and broader research) be handled as follows: sight-read large amounts of material, take notes, and determine the usefulness of the catalogued information. From this catalogue, we can decide whether to produce full translations, partial translations, or simple summaries. We can then integrate our translations with modern sources to produce thorough histories of objects, buildings, and Florence as a whole. We can store and map any information we deem relevant in the 2D map. And finally, we can incorporate this 2D-mapped information into our main focus, the 3D map of Florence - a completed, highly detailed, and fully interactive VR space.