RTImeline

Hi all,
 
I am giving a talk in York, UK on Saturday 6 July 2013 at Digital Heritage 2013: Interfaces with the Past.
 
http://www.york.ac.uk/digital-heritage/events/cdh-2013/
 
As part of that I thought I might create an MIT SIMILE "RTImeline" charting the development of RTI, and its components. (I don't think this exists already - of course if it does please let me know.)
 
I realise this might quickly dissolve into disagreement (!) but do people have some ideas of key events in the development of RTI that should go on the timeline? If so respond to this with:
 
- title
- one or more URLs if possible
- a date
- an end date if you want to specify a range
 
I realise this is cheeky as I am the one giving the presentation but of course all contributions will be very gladly acknowledged!
 
Cheers,
 
Graeme
 
>>>
 
Some uncontroversial examples (I think - please let me know if I am wrong!):
 
Publication of "Enhancement of Shape Perception by Surface Reflectance Transformation" by Tom Malzbender, Dan Gelb, Hans Wolters and Bruce Zuckerman
 
http://www.hpl.hp.com/techreports/2000/HPL-2000-38R1.pdf
 
March 2000
 
N/A
 
>>
 
Publication of "Polynomial Texture Maps" by Tom Malzbender, Dan Gelb and Hans Wolters
 
http://www.hpl.hp.com/research/ptm/papers/ptm.pdf
http://www.hpl.hp.com/research/ptm/papers/PtmSig6Talk.pdf
 
August 2001
 
N/A
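The PTM model introduced in this paper is compact enough to sketch: each pixel stores six coefficients of a biquadratic in the projected light direction (lu, lv), and relighting is just evaluating that polynomial. A minimal evaluation in Python (function name and array layout are my own, not from the paper):

```python
import numpy as np

def eval_ptm(coeffs, lu, lv):
    """Evaluate the PTM biquadratic luminance model for one light direction.

    coeffs: array of shape (..., 6) holding the per-pixel coefficients
            a0..a5, as produced by a PTM fitter.
    lu, lv: projection of the unit light vector onto the image plane.
    Returns the modelled luminance, per pixel.
    """
    a0, a1, a2, a3, a4, a5 = np.moveaxis(coeffs, -1, 0)
    # L(u,v; lu,lv) = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5
    return (a0 * lu * lu + a1 * lv * lv + a2 * lu * lv
            + a3 * lu + a4 * lv + a5)
```

With a whole-image array of shape (H, W, 6), the same call relights every pixel at once for an interactively chosen light direction, which is essentially what a PTM viewer does per frame.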

 

>>

 

Publication of first on-line PTM viewer by Clifford Lyon.

 

http://materialobjects.com/ptm/

 

December 2004

 

N/A

 
>>
 
Publication of "Surface enhancement using real-time photometric stereo and reflectance transformation" by Tom Malzbender, Bennett Wilburn, Dan Gelb and Bill Ambrisco
 
https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnx0b21tYWx6YmVuZGVyfGd4OjI4MzRiMTZkN2RiNjkwYzQ
 
June 2006
 
N/A
 
>>
 
Publication of "New Reflection Transformation Imaging Methods for Rock Art and Multiple-Viewpoint Display" by Mark Mudge, Tom Malzbender, Carla Schroer and Marlin Lum. First use of the term Reflectance Transformation Imaging. Introduction of Highlight RTI and PTM Object Movies.
 
http://culturalheritageimaging.org/What_We_Do/Publications/vast2006/
 
November 2006
 
N/A


>>

 
Publication of "Image-Based Empirical Information Acquisition, Scientific Reliability, and Long-Term Digital Preservation for the Natural Sciences and Cultural Heritage" by Mark Mudge, Tom Malzbender, Alan Chalmers, Roberto Scopigno, James Davis, Oliver Wang, Prabath Gunawardane, Michael Ashley, Martin Doerr, Alberto Proenca and João Barbosa. Introduction of Empirical Provenance in the context of RTI
 
http://culturalheritageimaging.org/What_We_Do/Publications/eurographics2008/index.html

 

April 2008

 

N/A

 


Graeme - this is a great start!  I guess what should be included is partly dependent on what level of detail you want and if you want to include milestones that aren't necessarily covered by publications.

 

Clearly the 2000 and 2001 papers by Malzbender et al. are seminal.  You can go earlier to BRDF and other reflectance work if you want to - it depends on the starting point you choose.

 

Tom Malzbender tends to reference these two papers:

for BRDF (bidirectional reflectance distribution function) - Nicodemus 1977

for reflectance functions and reflectance fields - Debevec 2000
 

Other milestones of note - 

Adoption of RTI technology - I think this was really opened up after the development of the Highlight method.  As stated above, this was written up in a paper for VAST 2006 by CHI and Tom Malzbender.  We first attempted this technique in the field at the Foz Coa Paleolithic Petroglyph site in Portugal in June 2006, followed by other rock art material in Wyoming, USA in August 2006.

 

In the early days it required an excruciating process of determining the pixel at the center of the highlight on the sphere and constructing a highlight file by hand.  So the development of the LpTracker software at the University of Minho (the precursor to RTIBuilder) was critical for people to actually use the technique.  I think we had that in 2007.  There were various cobbled-together scripts for building RTIs once you had the .lp file from LpTracker. This was radically improved by RTIBuilder, a collaboration between the University of Minho in Portugal and CHI.  I don't remember when we put out the first release of that, but I'm thinking it was late 2008 or early 2009. RTIBuilder added a log file, the ability to manage the process, do different crops, and create different-size PTMs from the same capture set.
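The geometry behind the highlight method is simple enough to sketch: the sphere's surface normal at the highlight pixel bisects the view and light directions, so reflecting the view vector about that normal recovers the light direction. A minimal version in Python (assuming an orthographic camera looking along -z and image y pointing up; the function name is mine):

```python
import math

def light_from_highlight(cx, cy, r, hx, hy):
    """Estimate the light direction from a specular highlight on a
    reflective sphere, the core idea of Highlight RTI.

    cx, cy, r: sphere center and radius in image coordinates.
    hx, hy:    highlight position in image coordinates.
    Returns a unit light direction (lx, ly, lz).
    """
    # Surface normal of the sphere at the highlight pixel
    nx = (hx - cx) / r
    ny = (hy - cy) / r
    nz = math.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))
    # Mirror reflection of the view vector v = (0, 0, 1):
    # L = 2 (n . v) n - v
    ndotv = nz
    return (2 * ndotv * nx, 2 * ndotv * ny, 2 * ndotv * nz - 1.0)
```

Doing this by hand for fifty-odd images per capture is exactly the "excruciating process" described above, which is why automating the highlight detection and writing the light positions out to a file was such a practical turning point.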

 

The next release of RTIBuilder added the ability to use the HSH - Hemispherical Harmonics - algorithm for building RTIs.  It also added the ability to reopen an existing "project" and so build the same data set using both algorithms quite easily.  There were log file improvements, and other usability improvements as well.

 

Actually, while I'm on the topic, the development of the HSH algorithm was an important milestone.  It is included in the 2008 Eurographics Tutorial that we organized and Graeme references in the above post.  The work was done at UC Santa Cruz under James Davis, with then PhD students Oliver Wang and Prabath Gunawardane.  They published other papers on this work as well:

 

Material Classification using BRDF Slices - http://users.soe.ucsc.edu/~prabath/wango_brdfseg.pdf

Optimized Image Sampling for View and Light Interpolation http://users.soe.ucsc.edu/~prabath/prabath_viewlight.pdf

 

I'll note that for this project, the technical team included CHI, Tom Malzbender, the UC Santa Cruz team, and folks from the Visual Computing Lab in Pisa.  The work on multi-view RTI, the new RTI file format, the HSH algorithm, improvements in the "digital lab notebook" (including an exploration of the use of the CRM for this type of data), the RTIViewer and more all came out of this project: http://culturalheritageimaging.org/What_We_Do/Projects/imls_2006/index.html  It was funded by the Institute of Museum and Library Services in the US.  A lot of important work came out of this collaboration.

 

As mentioned, the RTIViewer came out of this project.  It was the first viewer to include support for the new HSH-built .rti files.  It also added several new rendering options for PTMs.  There is a journal article on the enhancements: http://culturalheritageimaging.org/What_We_Do/Publications/acmdl2010/index.html

 

That viewer also began experimentation with a tiling approach to allow larger PTM files to be distributed over the web, and it supports the ".mview" or multi-view format, which also came out of the above-mentioned project.  I'll say that I think a streaming approach makes more sense for the web, but these early experiments were important.  RTIViewer was a big improvement over ptmviewer in terms of its user interface, which included a number of often-requested features from users; the original ptmviewer from HP was really a research viewer with a limited interface that relied on keyboard shortcuts (not all of them documented) to perform many of the tasks.  RTIViewer has a real user's guide with screenshots, etc.  (I still use ptmviewer for some things - but that's another thread.)

 

Also related to adoption, I would say that the development of formal training by CHI moved things forward.  We have delivered our 4-day RTI training more than 20 times to more than 300 participants.  The participants come from ~60 different institutions, many of them museums.  Grant funding helped this on its way, but there has also been general support from a number of institutions to pay for training.  The grant-funded project is listed here: http://culturalheritageimaging.org/What_We_Do/Projects/imls/index.html

 

I want to make it clear that lots of folks are doing RTI successfully without taking the training class, and folks from the training have also taught many of their colleagues.  My point is that I think the training classes did move adoption forward and helped many institutions, particularly museum conservation departments, to successfully adopt the technology. Our grant-funded project included delivering the training at all 6 masters programs in art conservation in North America, and we believe we will continue to see RTI projects in museum conservation and that RTI use will grow in that field.

 

I think the project that Graeme's team did in 2010 and 2011 with Oxford also helped adoption in the UK and Europe and created lots of great examples, and made many more folks aware of the technology and what it can do. The partnership with the Archaeological Data Service within that project is a critical step in figuring out digital preservation requirements.  So kudos to Graeme and team for that successful project.

 

The awarding of the grant to Southampton and Oxford to develop a web based viewer is also critically important, even though it just started, so we don't have the result yet.

 

I'd love to see others chime in on this topic.  I think it's an interesting one, and thanks to Graeme for working to pull the pieces together.

 

Carla

 

 

 
 

This is great Carla! Other things to add would be underwater RTI (Dave, George and others), the Dellepiane et al. method for large objects (http://vcg.isti.cnr.it/Publications/2006/DCCS06/) and the first WebGL viewer (SpiderGL). I think that paved the way for more recent attempts to create native browser RTI tools.

 

Also I don't want to get into semantics re: RTI and other approaches, but the Leuven minidome http://www.minidome.be/v01/home.php and their new WebGL viewer should be in there too:

http://www.arts.kuleuven.be/info/ONO/Meso/cuneiformcollection

 

Cheers,

 

G


If you want to get into related stuff, it's hard to know where to draw the boundaries. (That's a good thing!) I would suggest including the Algorithmic Rendering work out of Princeton under Szymon Rusinkiewicz, initially published as non-photorealistic rendering for images with normals.  Here's the key paper (there has been more work since then, but this seems important)

 

We also have a general page about Algorithmic Rendering, and more will be coming out over the next year as part of the CARE tool project.

 

A number of folks have been doing multi-spectral imaging with RTI, including CHI starting in March 2011. We have shown some of this work in presentations, as have others, but I don't know what's been published at this point.

 

There are a lot of folks who have worked with photometric stereo and other approaches - you just have to decide where you want to draw the line for your current presentation.  

 

What's exciting about all this is that there is a lot of interest and activity in the area of images with normals, or RGBN data - which is fabulous!


OK. So here is my updated version. Please do all add in your thoughts. I know I have left out some key points but I would rather they were added by the people concerned - don't be shy!

I have simplified the timeline data so it only contains a single date rather than a range. I have also added in an extra item which is an image link. Hembo and I will use this in generating the interactive timeline from these data.
 
Cheers,

 
G
 
&&&

Publication of "Enhancement of Shape Perception by Surface Reflectance Transformation" by Tom Malzbender, Dan Gelb, Hans Wolters and Bruce Zuckerman. This work relates to the definition of BRDF by Nicodemus, Richmond and Hsia 1977, and work by Paul Debevec, Tim Hawkins, Chris Tchou, Haarm-Pieter Duiker, Westley Sarokin and Mark Sagar in 2000.
 
http://www.hpl.hp.com/techreports/2000/HPL-2000-38R1.pdf
 
March 2000
 
http://www.hpl.hp.com/research/ptm/images/4tabfig-530.jpg

>> 
 
Publication of "Polynomial Texture Maps" by Tom Malzbender, Dan Gelb and Hans Wolters
 
http://www.hpl.hp.com/research/ptm/papers/ptm.pdf and http://www.hpl.hp.com/research/ptm/papers/PtmSig6Talk.pdf
 
August 2001
 
https://sites.google.com/site/tommalzbender/_/rsrc/1362967457128/home/dome_oblique150.tif
 
>> 
 
Publication of first on-line PTM viewer by Clifford Lyon.
 
http://materialobjects.com/ptm/
 

December 2004
 
http://materialobjects.com/ptm/sd1_s.jpg
 
>> 
 
Publication of "Surface enhancement using real-time photometric stereo and reflectance transformation" by Tom Malzbender, Bennett Wilburn, Dan Gelb and Bill Ambrisco
 
https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnx0b21tYWx6YmVuZGVyfGd4OjI4MzRiMTZkN2RiNjkwYzQ
 
June 2006
 
https://sites.google.com/site/tommalzbender/_/rsrc/1362971573194/home/RealTimeRT150.tif
 
>> 
 
First attempt of the Highlight method in the field at the Foz Coa Paleolithic Petroglyph site in Portugal in June 2006. This work by CHI was followed up by other rock art material in Wyoming, USA in August 2006. The highlight method opened the floodgates for capture of RTI.
 
http://culturalheritageimaging.org/What_We_Do/Publications/vast2006/

 June 2006

>> 
 
Publication of "New Reflection Transformation Imaging Methods for Rock Art and Multiple-Viewpoint Display" by Mark Mudge, Tom Malzbender, Carla Schroer and Marlin Lum. First use of the term Reflectance Transformation Imaging. Introduction of Highlight RTI and PTM Object Movies. Release of PTM Builder.

 
http://culturalheritageimaging.org/What_We_Do/Publications/vast2006/ and http://www.hpl.hp.com/research/ptm/HighlightBasedPtms/

 
November 2006

 
https://sites.google.com/site/tommalzbender/_/rsrc/1362971713871/home/Highlight150.jpg

 
>> 
 
Publication of "High Quality PTM Acquisition: Reflection Transformation Imaging for Large Objects" by Matteo Dellepiane, Massimiliano Corsini, Marco Callieri and Roberto Scopigno. New method for orientating lights to capture large objects.

http://vcg.isti.cnr.it/Publications/2006/DCCS06/
 
November 2006
 
http://vcg.isti.cnr.it/Publications/2006/DCCS06/system.png
 
>> 
 
Publication of “Illustration of Complex Real-World Objects using Images with Normals” by Corey Toler-Franklin, Adam Finkelstein and Szymon Rusinkiewicz. This work is part of a long-standing collaboration between Szymon Rusinkiewicz and CHI around what has come to be known as Algorithmic Rendering – the use of non-photorealistic rendering (NPR) approaches to make clearer the information contained in Red, Green, Blue, Normal (RGBN) data. CHI subsequently employed this approach in a range of cultural heritage contexts.

http://gfx.cs.princeton.edu/pubs/Toler-Franklin_2007_IOC/index.php
and http://culturalheritageimaging.org/Technologies/Algorithmic_Rendering/
 
August 2007
 
http://gfx.cs.princeton.edu/pubs/Toler-Franklin_2007_IOC/pinecone.jpg
 
>> 
 
David Potts created a side-by-side PTM viewer proof of concept based on the materialobjects PTM viewer. This allowed viewing in stereo of PTMs captured with a suitable eye separation. He also produced a version of the materialobjects browser that could be embedded in a webpage. Subsequently Hembo Pagi produced a WordPress plugin to wrap up the same code. Note: code is no longer online but link below provides the original context, and information about the WP plugin.

www.pinan.co.uk/ and http://acrg.soton.ac.uk/blog/467/
 
January 2008
 
>> 
 
Publication of "Image-Based Empirical Information Acquisition, Scientific Reliability, and Long-Term Digital Preservation for the Natural Sciences and Cultural Heritage" by Mark Mudge, Tom Malzbender, Alan Chalmers, Roberto Scopigno, James Davis, Oliver Wang, Prabath Gunawardane, Michael Ashley, Martin Doerr, Alberto Proenca and João Barbosa. Introduction of Empirical Provenance in the context of RTI, and also of HSH.

http://culturalheritageimaging.org/What_We_Do/Publications/eurographics2008/index.html
 
April 2008
 
http://culturalheritageimaging.org/IMAGES/goldcoin_stripes.jpg
 
>> 
 
Release of LpTracker software on SourceForge. This allowed automatic identification of highlights and was the predecessor of RTIBuilder.

 
http://sourceforge.net/projects/lptracker/

 
August 2008

 
>> 

 
Release of RTIBuilder by University of Minho and CHI. RTIBuilder added a log file, the ability to manage the process and do different crops, and create different size PTMs from the same capture set.

 
http://culturalheritageimaging.org/What_We_Offer/Downloads/Process/

 
January 2009

 
https://fbcdn-sphotos-d-a.akamaihd.net/hphotos-ak-prn1/p480x480/560018_537181906321156_617560788_n.jpg

 
>> 

CHI awarded grant from the National Center for Preservation Technology and Training (NCPTT). Working with rock art experts, the grant funded a comprehensive 2-day workshop for 3D digital rock art documentation and preservation, and the first RTI web-based training materials.

http://culturalheritageimaging.wordpress.com/2009/03/19/ncptt-grant-award/
 
March 2009
 
http://culturalheritageimaging.org/IMAGES/nyu_training.jpg
 
>> 
 
Publication of "Material Classification using BRDF Slices" by Oliver Wang, Prabath Gunawardane, Steve Scher and James Davis. This introduced the use of HSH for image segmentation.

 
http://users.soe.ucsc.edu/~prabath/wango_brdfseg.pdf

 
June 2009

 
http://zurich.disneyresearch.com/~owang/pub/images/brdfseg.jpg

 
>> 

 
Publication of "Optimized Image Sampling for View and Light Interpolation" by Prabath Gunawardane, Oliver Wang, Steven Scher, Ian Rickards, James Davis and Tom Malzbender. The paper compares Polynomial Texture Maps with 6 coefficients against spherical harmonics with 9 coefficients and shows that more terms increase the perception of shininess and reduce error.

 
http://users.soe.ucsc.edu/~prabath/prabath_viewlight.pdf

 
September 2009

 
http://zurich.disneyresearch.com/~owang/pub/images/viewlight.jpg
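The comparison in this paper rests on linear least squares: each pixel's coefficients, whether the 6 PTM terms or 9 harmonic terms, are fitted to that pixel's stack of per-light intensities. A sketch of the shared fitting step, showing only the 6-term PTM basis (function names are mine, not from the paper):

```python
import numpy as np

def ptm_basis(light):
    """6-term biquadratic basis from Malzbender et al. 2001,
    evaluated at one unit light direction (lu, lv, lz)."""
    lu, lv, _ = light
    return [lu * lu, lv * lv, lu * lv, lu, lv, 1.0]

def fit_basis(lights, intensities, basis):
    """Least-squares fit of one pixel's coefficients: solve B c = I,
    where each row of B is the basis evaluated at one light direction
    and I holds the pixel's intensity under each light."""
    B = np.array([basis(l) for l in lights])   # (n_lights, n_terms)
    c, *_ = np.linalg.lstsq(B, intensities, rcond=None)
    return c
```

Swapping `ptm_basis` for a richer basis (e.g. 9 hemispherical harmonic terms) changes only the row construction; with more terms the residual of the fit drops, which is the "more terms reduces error" result the paper quantifies.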

 
>> 

 
Creation of the Leuven minidome.

 
http://www.3d-coform.eu/index.php/tools/minidome and http://www.3d-coform.eu/downloads/3DC_D_4_1_WP4_YR1_FINAL.pdf

 
November 2009

 
http://www.3d-coform.eu/images/stories/minidome.jpg

 
>> 

 
Release of new version of RTIBuilder. This added the ability to use the HSH - Hemispherical Harmonics - algorithm for building RTIs. It also added the ability to reopen an existing "project" and so build the same data set using both algorithms quite easily. There were log file improvements, and other usability improvements as well.

 
http://culturalheritageimaging.org/What_We_Offer/Downloads/Process/

 
January 2010

 
>>

 
Blog post by Tom Goskar about Virtual RTI and LiDAR data. First attempt to use PTM-fitted data as a means to interact with LiDAR landscape datasets.

 
http://www.wessexarch.co.uk/blogs/computing/2010/08/26/interactive-landscape-relighting

 
August 2010

 
http://www.wessexarch.co.uk/files/imagepicker/a/admin/thumbs/virtual-ptm-dome-stonehenge-whs-lidar.jpg

 
>> 

 
Publication of “Polynomial Texture Mapping and 3D representations” by Lindsay MacDonald and Stuart Robson. This paper compared the results of PTM, photometric stereo and laser scanning. It concluded that photometric stereo produced the best normals. This is further developed in detail by MacDonald 2011. He also produced a PTM fitter (and photometric stereo fitter) in MATLAB that, crucially, is able to fit arbitrarily large input images by tiling.

 
http://www.isprs.org/proceedings/XXXVIII/part5/papers/152.pdf
and http://ewic.bcs.org/upload/pdf/ewic_ev11_s8paper4.pdf

 
June 2010

 
>> 

 
Publication of "Polynomial texture mapping and related imaging technologies for the recording, analysis and presentation of archaeological materials" by Earl, Beale, Martinez and Pagi. This published the virtual RTI method for using the PTM fitter and viewer as a means to interact with 3D data. It also provided an example of locating multiple PTMs in capture space.

 
 

 
http://eprints.soton.ac.uk/153235/

 
June 2010

 
http://www.jcms-journal.com/article/viewFile/56/67/560

 
>> 

 
Publication of "SpiderGL: A JavaScript 3D Graphics Library for Next-Generation WWW" by Marco Di Benedetto, Federico Ponchio, Fabio Ganovelli and Roberto Scopigno. This WebGL viewer included a PTM shader.

 
http://vcg.isti.cnr.it/Publications/2010/DPGS10/spidergl.pdf

 
July 2010

 
http://spidergl.org/img/teaser.jpg

 
>> 

 
Publication of "The Stirling Castle wood recording project" by Karten and Earl. This compared PTM and laser scan data and examined the degradation of surfaces during the conservation process, and as originals, casts and moulds.

 
http://eprints.soton.ac.uk/342682/1/065_2010WEB.pdf

 
July 2010

 
http://eprints.soton.ac.uk/342682/1.haspreviewThumbnailVersion/065_2010WEB.pdf

 
>> 

 
CHI and Szymon Rusinkiewicz received funding from the National Science Foundation to develop the Automated Documentation and Illustration of Material Culture through the Collaborative Algorithmic Rendering Engine (CARE) tool.

 
http://culturalheritageimaging.org/What_We_Do/Projects/nsf/index.html

 
October 2010

 
http://culturalheritageimaging.org/IMAGES/soldiers_nsf_ar.jpg

 
>> 

 
Publication of “Archaeological applications of polynomial texture mapping: analysis, conservation and representation” by Earl, Martinez and Malzbender. This paper provided an evaluation of PTM across a range of archaeological applications. It demonstrates the value of the Malzbender and Ordentlich 2005 approach to maximum entropy lighting and the potential for batch processing of a PTM archive to identify the best views. We are building on this in our latest work on mining our RTI archive.

 
http://eprints.soton.ac.uk/156253/1/EarlMartinezMalzbender2010.pdf

 
December 2010

 
http://www.jcms-journal.com/article/viewFile/56/67/558

 
>> 

 
Creation via the AHRC RTISAD project of an annotation framework for RTI data, based on the addition of bookmarks to the RTI Viewer, enabling viewer controls to be set according to the parameters saved in an XML bookmarks file. These files could be shared, allowing for collaborative annotation.

 
http://acrg.soton.ac.uk/tag/rtisad/

 
January 2011

 
>> 

 
Martin Hunt described his method for capturing underwater RTI via the RTISAD wiki. He captured a small set of PTMs in freshwater. He also examined a Cornish marine site, but at this stage no clear marine PTMs were gathered. He made a fibreglass dome with apertures for placing a light in consistent positions. Data were then processed via a standard LP file. The work was first made public, to my knowledge, at the AHRC RTISAD workshop in Oxford.

 
http://acrg.soton.ac.uk/blog/1528/

 
January 2011

 
>> 

 
First multispectral imaging undertaken by CHI. Subsequent work in multispectral includes ongoing research by Eleni Kotoula. CHI work was presented at CAA2012 amongst other places.

 
https://www.ocs.soton.ac.uk/index.php/CAA/2012/paper/view/633
and http://acrg.soton.ac.uk/blog/1569/

 
March 2011

 
http://acrg.soton.ac.uk/files/2012/11/figure3left.jpg

 
>> 

 
Poster presentation at CAAUK of “RTI in conservation examination, analysis and documentation” by Kotoula and Earl. This introduces work studying the application of RTI, including microscope RTI, to conservation.

 
http://academia.edu/1175819/RTI_in_conservation_examination_analysis_and_documentation

 
March 2011

 
http://culturalheritageimaging.files.wordpress.com/2012/04/microdome_panels4.jpg

 
>> 

 
Four day training programme by CHI in RTI at the NYU Institute of Fine Arts Conservation Center. CHI have so far delivered their 4 day RTI training more than 20 times to more than 300 participants. The participants come from ~60 different institutions, many of them museums.

 
March 2011

 
>> 

 
Creation of a specification via the AHRC RTISAD project for implementing an RTI repository.

 
http://acrg.soton.ac.uk/tag/rtisad/

 
June 2011
 
>> 
 
Publication of “Reflectance transformation imaging systems for ancient documentary artefacts” by Earl, Basford, Bischoff, Bowman, Crowther, Dahl, Hodgson, Martinez, Isaksen, Pagi, Piquette and Kotoula. This paper introduced the RTISAD project and also included the use of RTI data in computer graphic rendering (at Catalhoyuk), microscope capture, RTI annotation and a minidome mounted on and controlled by a commodity camera. It also introduced the use of MentalRay for contour shading, enhancement and non-photorealistic rendering of RGBN data derived from RTI.
 
http://eprints.soton.ac.uk/204531/1/ewic_ev11_s8paper3.pdf
 
July 2011
 
>> 
 
Presentation by Lindsay MacDonald of integration of PTM, photometric stereo and laser scan data to analyse the original and cast of the Hunters Palette, an early Egyptian (c. 3100 BCE) stone slab in the British Museum.
 
http://www.cosch.info/documents/10179/30087/Abstract_WG2_Lindsay+MacDonald.pdf/e21bf8cf-004c-4ce6-86eb-1ad597c6ec6c;jsessionid=52E1F1AB6A31E3E001A9AC7282D8BCC2?version=1.0
 
March 2012
 
>> 
 
Publication of “Printing Reflectance Functions” by Tom Malzbender, Ramin Samadani, Steven Scher, Adam Crume, Douglas Dunn and James Davis. Whilst not directly related to RTI, it gives a teasing glimpse of a printed page, with specular micro-geometry, that reacts to light orientation, using data that could be acquired via RTI.
 
https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnx0b21tYWx6YmVuZGVyfGd4OjcwMGQwNDVkMDM0Y2FhMWI

 
May 2012
 
http://graphics.soe.ucsc.edu/prf/Teaser.jpg
 
>> 

Hembo Pagi wrote a blog post about a simple way of switching between screenshots from RTI Viewer and original input photographs. I find this an extremely useful way of sharing key information. The repository developments that continue at Southampton are designed to support this preview method, and we already make use of it extensively on the ACRG website.
 
http://forums.culturalheritageimaging.org/index.php?/topic/202-example-of-2d-way-of-showing-rti-results-in-web/
and http://acrg.soton.ac.uk/tag/rti-example/
 
October 2012
 
http://www.arheovisioon.ee/wp-content/uploads/2012/10/ir-spec.jpg
 
>> 
 
Eleni Kotoula introduces False Colour Imaging RTI and Transmitted RTI.
 
http://acrg.soton.ac.uk/blog/2786/
and http://acrg.soton.ac.uk/blog/2796/
 
February 2013
 
http://acrg.soton.ac.uk/files/2013/02/Picture11.jpg
 
>> 

Launch of project by David Selmo to capture underwater RTI data and to examine the hardware and software implications, and the issues imposed by the underwater environment. The first attempted underwater capture of PTM that I am aware of took place in 2010 and used a frame with holes cut to provide a means to place lights in a consistent dome orientation. Turbidity proved a significant issue, hence the focus of Selmo’s work. Selmo produced the first underwater highlight RTI capture I know of.
 
http://forums.culturalheritageimaging.org/index.php?/topic/230-rti-underwater-a-research-project-university-of-southampton
and http://cma.soton.ac.uk/blog/2013/05/underwater-reflectance-transformation-imaging-a-success/
 
March 2013
 
http://cma.soton.ac.uk/files/2013/05/Dave-Notes-change-400x300.jpg

>> 

Publication of "Multi-light Imaging for Heritage Applications" by Sarah Duffy. This provided an overview of RTI and included a range of examples. It demonstrates the move of RTI into the mainstream archaeological community.
 
http://www.english-heritage.org.uk/publications/multi-light-imaging-heritage-applications/Multi-light_Imaging_FINAL_low-res.pdf
 
June 2013
 
>> 
 
Launch of the Leuven minidome online viewer for their data format.
 
http://www.arts.kuleuven.be/info/ONO/Meso/cuneiformcollection
 
July 2013

http://www2.arts.kuleuven.be/info/bestanden-div/images/MB D11 a1.preview.jpg

Graeme - Nice job with this!  I do have a couple of small quibbles:

 

 

On the section about Szymon Rusinkiewicz's work, I want to clarify a couple of things.

 

Publication of “Illustration of Complex Real-World Objects using Images with Normals” by Corey Toler-Franklin, Adam Finkelstein and Szymon Rusinkiewicz. This work is part of a long-standing collaboration between Szymon Rusinkiewicz and CHI around what has come to be known as Algorithmic Rendering – the use of non-photorealistic rendering (NPR) approaches to make clearer the information contained in Red, Green, Blue, Normal (RGBN) data. CHI subsequently employed this approach in a range of cultural heritage contexts.

 

While it is true that we knew Szymon at this point, and we provided a couple of data sets that were used in this early work, and one result appeared in this paper, I wouldn't say that the work came out of our collaboration.  It was the result of this work out of Princeton that led us to develop the joint proposal with Szymon for the CARE tool which was later funded by the NSF.  

 

I think a key thing left off the list was the award of a National Leadership Grant from the US Institute of Museum and Library Services that funded the development of HSH, the new viewer, the work on multi-view, etc.  While you have some of the pieces listed here, the grant was the key thing that funded that whole set of projects, including our early work on the digital lab notebook and the adoption of the CIDOC CRM. That was awarded in late 2006.

 

 

Carla


I took a quick look at a couple of the KU Leuven files using their on-line viewer.  The viewer seems to work pretty well; and I like the features of exaggerated shading and line drawings, which appear similar to some of the CARE tools that are being developed in collaboration with CHI and Princeton.  The accessibility of some metadata in the viewer is a nice feature. 

 

I noticed that KU Leuven is using unique file formats, either .cun or .zun.  It would seem that there's a potential that the proliferation of different file formats and viewers could lead to some confusion with regard to dissemination and adoption by the widest possible community.  Is there a possibility that viewers being developed like Graeme's AHRC Project will be able to handle all of these different data formats, or that KU Leuven's viewer will be able to incorporate .rti and .ptm file formats?  Through following some of these links, I've only just realized that the V&A Museum and Cornell appear to have adopted the KU Leuven file formats.  More collaboration among institutions would be beneficial to standardize file formats and avoid duplication of effort and money spent.


Taylor raises a good point.  I think it is a balancing act though, when you have new research and new approaches being invented.  For example, when the HSH fitter was created the results could not work in the ptm format (there are some real limitations to that format), so we came up with a new one.  We did it in collaboration with Tom Malzbender and other researchers, and we designed it so that you could put both PTM-produced data and HSH-produced data into it. We tried to imagine other modifications and how they might be handled.  However, seeing into the future is a difficult thing to undertake.

 

In addition to the folks at KU Leuven, there is other active research going on that is likely to come up with even more different kinds of output formats.

 

That said, I think there are some ways to mitigate this:

 

1.  All the file formats should be documented and the specs made available.  (For the .rti format we have emailed it to people who asked for it - there is probably a better solution than that.)

The file formats should be "open."  By this I mean that anyone can read from them and write to them, and that they are documented.  This is a licensing issue, not a technical one.  PDF is an example of an open format; Flash is an example of a closed one.  I think the market will make clear this preference for open formats.  Please note that open formats do not necessarily imply open software (though we think that is a good idea too).  Open formats enable lots of programs to support a format, and that is a good thing.

 

2.  Save your original images.  As we have always recommended, you should save your original images (we recommend converting to DNG as your archive format).  This allows you and others in the future to take advantage of new algorithms and new software approaches.

 

3.  Keep a record of what you shot and how you shot it.  We do this now through a spreadsheet shooting log, but soon there will be a capture tool that will make it easy to gather a significant amount of information about your equipment, setup, who was there, and why you were doing the imaging (as much or as little of this information as you wish - with reusable templates).  This will be mapped to the CIDOC CRM-digital and saved as RDF.  Other tools in the RTI and AR pipeline are being developed or updated to take advantage of this, and to add information about the processing steps employed.  While you don't have to use this exact approach, you need information about what you shot and how you shot it, to make your images continue to be useful into the future, and to take advantage of new approaches.
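A minimal shooting-log record might look like the JSON sketch below. The field names are hypothetical placeholders of my own, not the schema of the forthcoming capture tool or of CIDOC CRM-digital; the point is simply that equipment, people, and purpose are recorded in a structured, machine-readable form.

```python
# Illustrative shooting-log entry; field names are invented placeholders.
import json

log_entry = {
    "project": "Example RTI capture",
    "date": "2013-09-14",
    "operators": ["A. Photographer"],
    "camera": {"body": "example DSLR", "lens": "100mm macro"},
    "lighting": "highlight RTI, handheld flash",
    "subject": "coin, obverse",
    "purpose": "condition documentation",
}
print(json.dumps(log_entry, indent=2))
```

A spreadsheet row carries the same information; the advantage of a structured record is that later tools can map it onto richer models such as RDF without retyping.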

 

4.  Where possible, we have leveraged existing tools, work, and file formats, like the "light position" or .lp file, which is used by both the ptmfitter and the hshfitter.  We are planning to use the same format for the Algorithmic Rendering work under development, called the CARE tool.  This makes it easier for folks to reuse their data, and also their knowledge of existing tools like RTIBuilder, to generate the needed light position file for a range of approaches and outputs.
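For readers who haven't opened one, an .lp file is plain text: the first line gives the number of images, and each subsequent line pairs an image filename with the x, y, z direction of its light. A small Python reader, assuming that common layout (check your tool's documentation for variations):

```python
# Minimal .lp ("light position") reader, assuming the common layout:
# line 1 = image count, then "filename x y z" per image.

def read_lp(text):
    """Parse .lp content into a list of (filename, (x, y, z)) tuples."""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    count = int(lines[0])
    entries = []
    for ln in lines[1 : count + 1]:
        name, x, y, z = ln.split()
        entries.append((name, (float(x), float(y), float(z))))
    return entries

sample = """2
img_001.jpg 0.1736 0.0000 0.9848
img_002.jpg -0.1736 0.0000 0.9848
"""
print(read_lp(sample))
```

Because the format is so simple, many tools in the pipeline can share one .lp file, which is exactly the kind of reuse described above.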

 

5.  Discussions like this need to keep happening.  Users need to ask the researchers they are working with, or whose technologies they are considering adopting, to fit within the existing tool chain, file formats, etc. as much as possible.  If groups deviate from this, they should have a good reason (and there are good reasons).  We believe that making the tools available as open source also makes it easier for others to leverage and reuse them.  All the software projects CHI is directly involved in have their code and binaries available under the GNU General Public License version 3.  This will be true for all the work coming out of the above-mentioned CARE project as well.  While open file formats are an absolute requirement to gain any leverage and reuse, open-source software is not, strictly speaking, required - but it certainly helps matters substantially.

 

This is a huge topic, and there is more to say about it.  Things will continue to evolve, and that's a good thing, demonstrating that this is an active area of research and development.  We need to keep the conversation going.

 

Carla


Just a couple of other links to add to Graeme's timeline.

 

>>

 

September 2010

 

Publication of "Dynamic Shading Enhancement for Reflectance Transformation Imaging" by Gianpaolo Palma, Massimiliano Corsini, Paolo Cignoni, Roberto Scopigno, Mark Mudge. New methods to improve the visualization of RTI data through shading enhancement.

 

http://vcg.isti.cnr.it/Publications/2010/PCCSM10/jocch2010.pdf

 

 

>>

 

March 2012

 

CAA2012 "Telling The Story Of Ancient Coins By Means Of Interactive RTI Images Visualization" by Gianpaolo Palma, Eliana Siotto, Marc Proesmans, Monica Baldassarri, Clara Baracchini, Sabrina Batino, Roberto Scopigno. An interactive web system for presenting a coin collection using storytelling criteria and RTI images. The first experiment in exposing RTI manipulation to a different public (museum visitors).

 

http://dare.uva.nl/document/516092 (pp 177-185)

 

 

>> 

 

September 2013

 

Opening of the kiosk for the interactive presentation of the coin collection of Palazzo Blu in Pisa. The kiosk is installed inside the museum and is also available online at the following link:

 

http://vcg.isti.cnr.it/PalazzoBlu/ (it requires WebGL)

 

 

>>

 

December 2013

 

Release of the tool to publish and visualize high-resolution RTI images on the web using WebGL (tested with 24-megapixel images).

 

http://vcg.isti.cnr.it/~palma/dokuwiki/doku.php?id=research

 

 

>>

 

Regards

Gianpaolo

