
I'm writing recommendations for how we at the Minneapolis Institute of Arts will archive and access RTIs, both inside and outside our DAMS; we use TMS for collections management and Virage MediaBin for asset management. I don't know how MediaBin will treat groups of images plus XML, LP, and other supporting files, but I'm going to be looking for a way to keep them all associated with each other. My first step will be to establish which of the files RTIBuilder creates are needed by RTIViewer, both for archiving and for dissemination of RTIs. For example, does RTIViewer need the original jpeg-exports folder, or just the cropped-files folder?

I'm eagerly awaiting the arrival of CHI's Digital Lab Notebook (http://culturalheritageimaging.org/Technologies/Digital_Lab_Notebook/index.html) but we need an interim solution and in-house documentation for best practices for archiving RTIs, and that's why I'm posting here. I'll keep the group updated with my progress.


Charles,

 

Great questions. There is a lot in this topic, and I just want to put a couple of things out there to start with.

 

RTIViewer only loads the .ptm or .rti file and does not need any additional files or information.

 

However, the question of what to archive has to include the notion that you may want to reprocess the data, or use the upcoming algorithmic rendering tools with an already captured data set. It also has to allow that folks may want to know something about the data that went into creating the finished file. We think it's critical to save the original captures, and we strongly recommend doing that as dngs.

 

As for the digital lab notebook, I would say that today we meet the requirement of keeping track of what was done with the images; the part that's missing is managing that information so it is more accessible, more useful, and all in one place. If you follow our methodology as used in our training sessions and described in the capture guide and highlight image processing guide, here are the key bits of information about the collected data and how it was processed:

- The shooting log contains information about the object and some aspects of how it was captured.

- Each dng file stores, in its xmp data structure, a record of how you converted the images and any modifications you made, such as white balance or exposure compensation.

- RTIBuilder produces a log file that keeps track of everything done during processing, including a complete record of the calculated lp file (light position file).

I would say those are the key elements to save. I think saving the "blend.jpg" produced by RTIBuilder is extremely useful too, as a visual record of how many files and what light spread you had for a particular RTI. All the other files are ancillary and can be reproduced from the original data (and here I mean the converted dng files), plus what the files above tell you about it.
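For what it's worth, pulling that xmp record back out of a dng is straightforward with common tools. Here is a minimal Python sketch, assuming exiftool is installed on the system; "capture-session" is a hypothetical folder name, not part of our standard layout:

import subprocess
from pathlib import Path

def dump_xmp(dng_path):
    """Return the raw XMP packet embedded in a DNG, via exiftool."""
    result = subprocess.run(
        ["exiftool", "-XMP", "-b", str(dng_path)],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# "capture-session" is a placeholder for your actual session folder.
for dng in sorted(Path("capture-session").glob("*.dng")):
    xmp = dump_xmp(dng)
    print(f"{dng.name}: {len(xmp)} bytes of XMP processing history")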

 

I hope this helps as a conversation starter. There is more to think about here in terms of what people might want to access, and whether you want to regenerate jpegs when needed vs. store them, etc. There is also a reasonable likelihood that you will have multiple RTI files from the same data set, such as both a ptm and an rti file, specific crops, or different sizes.

 

Carla


Thanks for the additional information. I'll do more experimentation with the builder and viewer before we come to any semi-final archiving strategy, but for now I'm recommending we archive, at a minimum:

- the DNGs from the capture session, with our object metadata embedded, and with the presets zeroed out. In our workflow, we process from CR2 to DNG as a very last step, even after making JPEGs, so every change we made to the CR2 is included in the final DNG.

- the entire assembly-files folder: in the example I'm looking at it's only 111k, and it contains both the blend and the LP file

- the .ptm file

- the Shooting Log spreadsheet

And here's what we can leave out of the archive:

- the jpeg-exports folder: these can be easily recreated from the DNGs (but as ACR changes, interpretations of DNGs may change, so maybe keeping a processed JPEG is worthwhile)

- the cropped-files folder: each of these JPEG layers, I think, is included pixel-for-pixel inside the PTM

And here's what I don't know:

- is the XML file that's at the same level as the assembly-files folder (it's not inside the assembly-files folder) the log file?

- how can I wrap all these files up into a neat little package that we'll be able to find and reinterpret in a few decades?
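One idea I'm toying with for that last question is a plain ZIP with an embedded checksum manifest, along the lines of the Library of Congress BagIt spec. Here's a minimal Python sketch; the folder and file names are placeholders, not our actual layout:

import hashlib
import zipfile
from pathlib import Path

session = Path("rti-capture-session")  # placeholder session folder
# Placeholder names for the pieces we've decided to keep.
keep = ["dngs", "assembly-files", "final.ptm", "shooting-log.xls"]

manifest = []
with zipfile.ZipFile("rti-archive.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for name in keep:
        item = session / name
        files = sorted(p for p in item.rglob("*") if p.is_file()) if item.is_dir() else [item]
        for f in files:
            arcname = str(f.relative_to(session))
            zf.write(f, arcname)  # directory structure is preserved in the ZIP
            manifest.append(f"{hashlib.sha256(f.read_bytes()).hexdigest()}  {arcname}")
    # A fixity manifest lets someone decades from now verify the contents.
    zf.writestr("manifest-sha256.txt", "\n".join(manifest) + "\n")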


Charles,

 

You have a misunderstanding about how ptm and rti files are created and what is included in them.

"- the cropped-files folder: each of these JPEG layers, I think, is included pixel-for-pixel inside the PTM"

 

The finished ptm/rti does NOT include the jpegs in any way. RTIs do not interpolate or "layer" the images. Instead, the ptm or hsh algorithm (polynomial texture mapping or hemispherical harmonics, encoded in programs called "fitters") calculates a new type of file using the data found in the light position file and in each jpeg image in the input set. The finished rti or ptm file contains color information per pixel - the calculated "albedo" color, which is the illumination-independent color without shadows or highlights - and a mathematical description of the surface normal per pixel. The input images are used to calculate this information, but are not saved in the file. The viewers then use this information to virtually relight the image and to apply the mathematical enhancements (aka rendering modes).
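To make that concrete, here is a toy Python sketch of what a PTM fitter does for a single pixel, using the biquadratic model from Malzbender et al. (2001). The numbers are made up; a real fitter does this for every pixel and stores the six coefficients (plus albedo), not the input jpegs:

import numpy as np

# Projected light directions (lu, lv) for six hypothetical input images,
# taken from the light position file, and that pixel's luminance in each.
lu = np.array([0.5, -0.3, 0.0, 0.7, -0.6, 0.2])
lv = np.array([0.1, 0.6, -0.8, 0.2, -0.1, 0.9])
luminance = np.array([0.80, 0.65, 0.40, 0.90, 0.35, 0.70])

# PTM models L = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5,
# so fitting is a per-pixel linear least-squares problem.
A = np.column_stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)])
coeffs, *_ = np.linalg.lstsq(A, luminance, rcond=None)
print("six PTM coefficients for this pixel:", coeffs)

# The viewer "relights" by evaluating the polynomial for a new direction.
nlu, nlv = 0.3, -0.4
row = np.array([nlu**2, nlv**2, nlu * nlv, nlu, nlv, 1.0])
print("relit luminance:", row @ coeffs)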

For more on surface normals there is a description here:

 

To answer your final two points: yes, the xml file saved in the capture set folder, at the same level as folders such as assembly-files and finished-files, is the log file. If you create more than one rti from the data set, it will have a record of all of them. It also contains a copy of the lp file, so you don't have to keep one as a separate file - though as you note, it is a tiny text file, so why not.
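If you ever need to read the lp file back, it is just plain text. Here is a minimal Python sketch, assuming the common layout (an image count on the first line, then one "filename lx ly lz" line per image); the path is a placeholder:

from pathlib import Path

def read_lp(path):
    """Parse a light position file into (filename, lx, ly, lz) tuples."""
    lines = Path(path).read_text().splitlines()
    count = int(lines[0].split()[0])  # first line holds the image count
    entries = []
    for line in lines[1 : count + 1]:
        name, lx, ly, lz = line.split()
        entries.append((name, float(lx), float(ly), float(lz)))
    return entries

for name, lx, ly, lz in read_lp("assembly-files/example.lp"):
    print(f"{name}: light direction ({lx:+.3f}, {ly:+.3f}, {lz:+.3f})")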

 

We do not have a standard way of bundling these things up at this time. We have been saving the full directory structure without the cropped jpegs. You can also remove the exported jpegs, since they can be recreated from the dngs. I would note that dng is a standard format with the image data stored as a 16-bit tiff, and with modifications such as white balance and exposure compensation stored in the xmp metadata standard. There are many tools that can work with dng files, so I'm not too concerned about being able to produce jpegs if needed.
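As one example (not the only route), the open-source rawpy and imageio libraries can batch-regenerate jpegs from dngs. One caveat: rawpy renders from the raw image data using the camera white balance; it does not apply Adobe-style xmp edits such as exposure compensation, so for those you would render through Adobe Camera Raw or similar. The folder name is a placeholder:

from pathlib import Path

import imageio.v3 as iio
import rawpy

for dng in sorted(Path("capture-session").glob("*.dng")):
    with rawpy.imread(str(dng)) as raw:
        # Demosaic to 8-bit RGB using the white balance recorded by the camera.
        rgb = raw.postprocess(use_camera_wb=True, output_bps=8)
    iio.imwrite(dng.with_suffix(".jpg"), rgb)
    print("wrote", dng.with_suffix(".jpg").name)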

 

Carla

This exchange about archiving RTI data highlights for me what is surely one of the most intriguing and strategic projects at CHI: the Digital Lab Notebook (DLN).

 

The DLN is designed to serve the same function as the written lab notebooks scientists kept before the digital age. In its simplest form, the DLN is a record of the means used to digitally capture information about something -- let's say, a cultural object -- including the history of events associated with the object's capture and subsequent processing. The result is a completed digital representation of that object that is accessible, transparent, and empirically verifiable.

 

CHI is now developing its next generation of software tools that will generate digital lab notebooks containing Linked Open Data with advanced knowledge management features. CHI's methodologies, capture, and processing tools are designed to collect all of the information necessary for a scientific lab notebook.

 

I suggest that anyone interested in the vision for the DLN take a look at CHI's description of it here:

http://culturalherit...k/more_dln.html


  • 1 month later...

I've written up recommendations for how we should archive our RTIs and all their supporting files, and I've attached a PDF here. If I make major changes to the document I'll post a new one. One of the things I'll discuss with our database guru is whether we can put all the files into a single ZIP file or something like it, and whether it will enter and exit our digital asset management system unscathed. With a ZIP we wouldn't be able to preview or download individual files, but it may retain the directory structure, which we don't get with other assets imported into the system.
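To test whether the ZIP really does come back unscathed, I may compare its central directory before ingest and after export. A minimal Python sketch, with placeholder file names:

import zipfile

def zip_fingerprint(path):
    """Return {(archived path, CRC-32)} for every member of a ZIP."""
    with zipfile.ZipFile(path) as zf:
        return {(info.filename, info.CRC) for info in zf.infolist()}

before = zip_fingerprint("rti-archive.zip")            # before DAMS ingest
after = zip_fingerprint("rti-archive-exported.zip")    # after DAMS export
print("unscathed" if before == after else f"differences: {before ^ after}")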

Thanks, Carla, for clarifying what RTIBuilder does with the cropped jpegs - I was imagining that the viewer was reading layers assembled from the jpegs, but now I have a better understanding of RTIs.

20121114_MIA_RTIarchiving.pdf


Thanks for this discussion of a critical topic. As a small example, there was once a glass plate negative image of a painting that has been in our family since it was brought from England to the Institute of Arts in 1925. Our only print of the image was lost by accident in 2007. Due to a reorganization of museum archives, the glass negative was also lost, and with it a portion of the record of the painting's provenance that could shed light on its history and condition upon arrival. It was our family's lapse not to keep a copy of the records in a safe location. Electronic records can be even more ephemeral than a physical object, and I really appreciate the focus on this issue.

 

I'd also recommend the short video titled "Digital Preservation Workflows for Museums," although it's a couple of years old.


  • 1 month later...

I'm looking for a Content Management System (CMS) for archiving and sharing RTIs, and found the Medici open-source system developed by the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign:

https://opensource.ncsa.illinois.edu/confluence/display/MMDB/Home

 

It appears to have some useful tools and features, such as automatically extracting metadata from image files, and it's described as scalable. I'm particularly interested in features that could be conducive to distributed research and scholarship, and compatible with the Digital Lab Notebook philosophy. Has anyone tried using it outside of an institutional environment? Is it difficult to manage, or worth the effort for an independent user to learn? Any opinions on the usefulness of this system would be appreciated.

