


Most Liked Content

#804 Geeky details about the "Normals Visualization" available in the new...

Posted by Carla Schroer on 15 January 2014 - 09:56 PM

RTIViewer 1.1 has the ability to display a normals visualization.  There is some basic information about this in the User Guide on page 23. Additionally, the concept of surface normals, and how they are calculated from known light positions, is described on the RTI page of the CHI website.  However, we still receive questions about exactly what the normals visualization represents and how the data is stored.  Here is a geekier explanation than appears in either of the above places, for those who want to look under the covers.


Background - The surface normal is a vector that is perpendicular to the tangent plane for any point along a 3D surface.  In RTI files, the surface normal is calculated per pixel based on the image capture set and on knowing the light position for each image in the capture set.  Surface normal accuracy can be affected by: shadows and highlights on the subject; sample size and spread of the lights in the image capture set; accuracy of the light position calculation; and which algorithm is used to calculate the normals. Other factors include whether the images are aligned, and whether the images are in focus for the areas of the subject from which you want to calculate normals.


Surface normal calculation and representation - The surface normal calculation is part of the fitting algorithm, though the data in a PTM or RTI file are not stored directly as surface normals.  When a PTM or RTI file is loaded into an RTI viewer, the viewer calculates the surface normals for use in the viewing environment.  The surface normal is represented by x, y, and z coordinates, which are calculated as floating point numbers, normalized to be between -1 and 1 (in addition, x*x + y*y + z*z = 1; in other words, the length of a normal vector has to be 1).  The x, y, and z coordinates correspond to a point where the origin (0,0,0) is on the surface at that pixel, and the x,y,z coordinates describe a point in space away from the origin.  The normal vector starts at the surface and passes through that point.
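To make the representation concrete, here is a minimal Python sketch (a generic illustration, not code from any RTI tool) of normalizing a raw per-pixel vector so its length is exactly 1, as described above:

```python
import math

def normalize(x, y, z):
    """Scale a surface-normal vector so its length is exactly 1."""
    length = math.sqrt(x * x + y * y + z * z)
    return (x / length, y / length, z / length)

# A raw, unnormalized normal estimate for one pixel (values made up):
nx, ny, nz = normalize(0.3, 0.4, 1.2)

# Each component now lies in [-1, 1], and x*x + y*y + z*z = 1:
print(nx * nx + ny * ny + nz * nz)  # 1.0 (within floating point tolerance)
```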


Representation of normal fields - It is common to represent normal fields through a false color visualization, where the x, y, and z coordinates are mapped to RGB: red, green, and blue, respectively.  The normal visualizations are useful in their own right, as well as carrying the coordinates that describe each normal per pixel (as described above). This means that the numerical data for the normals can be compared for a variety of purposes, such as tracking changes to the surface of the same object, or comparing similar materials, for example in a study of tool marks. There are some alignment issues to resolve; once resolved, the data can be compared numerically.  This opens up a variety of additional studies beyond just visual inspection in a viewing environment.  (Note there are solutions for alignment, but that topic goes beyond the scope of this post.)
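The mapping described above can be sketched in a few lines of Python (an illustration of the common convention, not the viewer's actual code): each component is remapped linearly from [-1, 1] to an 8-bit channel.

```python
def normal_to_rgb(nx, ny, nz):
    """Map unit-normal components from [-1, 1] to 8-bit RGB channels [0, 255]."""
    to_byte = lambda c: int(round((c + 1.0) / 2.0 * 255))
    return (to_byte(nx), to_byte(ny), to_byte(nz))

# A normal pointing straight at the camera, (0, 0, 1), is blue-dominant:
print(normal_to_rgb(0.0, 0.0, 1.0))  # (128, 128, 255)
# One tilted fully toward +x is red-dominant:
print(normal_to_rgb(1.0, 0.0, 0.0))  # (255, 128, 128)
```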


The attached image shows the surface normals visualization applied to a hemisphere. (Attached file: 18-normal.jpg)


I want to stress that, because various factors can affect normal accuracy (as described above), using normal comparisons for study requires capturing high-quality RTI data and applying the HSH algorithm (or newer algorithms as they become available).


I would like to thank Sema Berkiten, PhD student at Princeton, for walking me through the details of this.

  • Carla Schroer, caseycameron, Dennis and 4 others like this

#1487 RTI Dome Build Instructions

Posted by leszekp on 21 June 2016 - 05:56 AM

I've started a Hackaday page on how to build an RTI dome like the ones I've used for this work on lithic artifacts. It can drive 3W/1A LEDs, so it should be able to light up a dome at least a meter in diameter (maybe even bigger, depending on how long an exposure time you're willing to tolerate). Still in the early stages, but I've got a fairly complete parts list up, along with some background on the design; instructions should start going up shortly. Comments and suggestions welcome. The hardware is under an Open Hardware license, the instructions under Creative Commons, and the software, when released, will be under GPL v2.0.

  • Carla Schroer, caseycameron, Taylor Bennett and 3 others like this

#1221 RTI For Lithic Artifacts

Posted by leszekp on 06 May 2015 - 07:39 AM

About 3 weeks ago, I did a presentation at the Society For American Archaeology conference in San Francisco, titled "Documentation Of Lithic Artifacts Using An Inexpensive Reflectance Transformation Imaging System". I've just put an extended video version of that talk online at a website I've put together. It covers not just lithic artifact imaging with RTI, but also a field-ready portable RTI system I designed and constructed, and some experiments with converting RTI data into 3D models. The site also contains links to both RTI data files and 3D ply files for lithic artifacts, to view and download. Hope you find it interesting.

  • Carla Schroer, Taylor Bennett, Jon and 2 others like this

#868 Spheres in corner of frame, distortion, poor ptm

Posted by marlin on 11 March 2014 - 05:22 PM

Hello mgts24!


Thanks for the message. I will attempt to reply to your post with some useful information.


You have experienced lens distortion. The 24-105mm L is a really nice lens, but it may not be the best lens for collecting RTI data sets.


As we know, lens distortion is in every lens -- that's just optics.  Some have more, some have less, a lot less. We steer professionals towards prime lenses.



The 50mm macro prime is one of the daily drivers in our camera bag.



The other worker bee is the 100mm L (or its previous version).




Both of the above lenses have very little lens distortion -- or rather, the most acceptable amount of lens distortion.


Lens Resource

Websites like http://www.dpreview.com/ (go there and look up your lens) can offer valuable insight into the distortion inherent in the lenses in your camera bag.


For example, check out this interactive chart on sharpness for the 100mm Macro L. There's other good stuff in there too. (Note the comments about the 100mm being exceptional in the distortion category.)




A word about zoom lenses (lenses which are adjustable, e.g. 100-400mm). It's not bad to collect RTI data with a zoom lens, but it's a lot better to use a prime lens --- we know this. Another consideration is that zoom lenses -- lenses with a turnable ring to adjust your field of view (FoV) -- will oftentimes *shift*, especially if the camera is pointed down towards the earth. Your very first image might be tack sharp and in focus, but the last image in the RTI data set might be soft and out of focus. I've seen this numerous times: the lens simply shifted during the capture sequence. If you have to use a lens that moves to adjust the FoV, then use gaffer's tape to physically tape the lens in place, preventing movement during the capture. (And don't forget that you must also be in manual focus "MF" mode.)


Spheres in the corner.

Spheres in the corner of the frame will appear egg-like with lenses that have a lot of distortion. Even the best prime lenses might have a little bit of 'wobble' to them. Moving the spheres towards the middle of the frame, even just a tiny bit, can help with the distortion. In some setups you can control the sphere placement; if you can, you might as well. Other times, because of the subject, the sphere may end up in the corner. Remember that RTIBuilder is pretty flexible and, more often than not, will still offer a usable and acceptable result ---- even if the sphere is egg-like. (Hopefully enough to reveal that hidden text you are looking for.)


Lens Distortion Correction.

A quick word on lens distortion correction. If you want to correct for lens distortion, there are a few ways to do it. It's too much to discuss in this post, but if you do correct your images, you would want to correct them at the DNG level or at the JPEG-export phase, and then continue to process your data from there.


Hope this helps.

Happy F-stop.



  • Dennis, Taylor Bennett, mgts24 and 1 other like this

#847 DIY LED RTI array

Posted by Charles Walbridge on 26 February 2014 - 04:52 PM

I've put together a cheap and quick lighting array for making RTIs of daguerreotypes we have in the collection of the Minneapolis Institute of Arts. I used lighting equipment we have in the photo studio, LED spots we use in the galleries, and about $30 worth of lamp wire and cord switches. 

The light stand I've used has wheels, so now two of us can shoot the 48 source images for an RTI in about 10 minutes. Because the four lights on the array are the same distance from the object, I can position the array at 12 o'clock, measure the distance to the object with my fancy RTI string, and then turn the lights on and off from the cord switches on the individual lamps. Then I'll roll the stand to the one o'clock position and repeat the process.

I've put up pictures of the array on a Google Plus page here:


and I can share them with the RTI community on Facebook too.

Let me know what you think --




  • Dennis, Taylor Bennett, mgts24 and 1 other like this

#780 RTIViewer 1.1 release now available

Posted by Carla Schroer on 05 December 2013 - 07:11 AM

We are thrilled to announce the release of RTIViewer 1.1.  The new features in this release are the ones most requested by our users.


You can read about the new features and download the release (windows and mac versions), user guide and examples on the RTIViewer page.


This software is available as free open source software.


Cultural Heritage Imaging is a small independent nonprofit organization and we need donations from people like you who use and value these tools in order to keep doing the releases.  Please help us by making a donation to support this work.


We want to gratefully acknowledge the efforts of Ron Bourret, Gianpaolo Palma, and Leif Isaksen for the development work that went into this release.  Much of the work was performed by volunteers.  We also want to thank Judy Bogart for doing the User Guide updates and descriptions of the new features. We had some beta testers for the release, and we thank them for their time and effort in trying things out and reporting issues. And finally, we thank all the members of the CHI team for the work to oversee the release, including setting the requirements for the release, testing it and providing feedback in every phase of development, running the beta testing program, identifying and reviewing the user guide updates, and all the other tasks that go with getting something like this done and out.  Much of that work was done as volunteer labor as well.


If you have comments or questions, or want to say nice things about the new release, please post them in the "All Viewers" forum.





(Attached file: rti-viewer-interface-sm.jpg)

  • caseycameron, Dennis, Taylor Bennett and 1 other like this

#728 Minimum reflective sphere diameter and HSH precision

Posted by Mark Mudge on 20 August 2013 - 05:41 AM

Here is the reasoning behind CHI's recommendation that there be at least 250 pixels along the diameter of the reflective spheres in an RTI image. The RTIBuilder software finds the pixel in the exact center of the highlight produced by the illumination source. The more pixels there are across a reflective sphere, the more the incident illumination direction can be refined.

If you look across a reflective black sphere (or any shiny sphere), the middle part of the sphere reflects the hemisphere of light in the direction of the camera. The outer part of the sphere reflects the hemisphere behind the reflective black sphere. This is how a sphere, often called a light probe, can capture the illumination information of an entire environment.

When building an RTI we are only concerned with the central part of the reflective black sphere, because the light positions used to illuminate the subject exist in the hemisphere facing the camera. The number of pixels in the diameter of this central region of the reflective black sphere determines the angular resolution of the incident light direction. If there are 180 pixels across this central region, the angular resolution will be 1°. If there are 90 pixels across the central region, the angular resolution will be 2°, and so forth. CHI recommends 250 pixels across the entire reflective black sphere because that ensures there will be at least 180 pixels across the central region that is used in calculating RTI incident light positions (which are stored as x,y,z coordinates in a light position file calculated from the highlight data). For an explanation of how the light positions are calculated from the highlight data, see our paper with Tom Malzbender from VAST 2006.
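As a rough illustration of the arithmetic above, here is a tiny Python sketch. The 180-out-of-250 central-region ratio is taken from the numbers in this post; it is a rule of thumb, not an exact optical constant.

```python
def angular_resolution_deg(sphere_diameter_px):
    """Approximate angular resolution of the incident-light direction.

    Assumes the central region (which reflects the camera-facing
    hemisphere) spans about 180/250 of the sphere's pixel diameter,
    per the rule of thumb in this post.
    """
    central_px = sphere_diameter_px * 180.0 / 250.0
    return 180.0 / central_px

print(angular_resolution_deg(250))  # 1.0 (degree)
print(angular_resolution_deg(125))  # 2.0 (degrees)
```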

If your purpose is to use your RTI for visual interpretive purposes, there will likely be no perceptible difference between an angular resolution of 1° and 2°. If, however, your purpose is to refine or generate a three-dimensional surface, the angular resolution of the incident light source, and the resulting surface normals, contribute significantly to the accuracy of the 3D surface. As we cannot foresee how the documentation we produce today will be re-purposed by others in the future, we suggest capturing the highest quality data when practical.



  • Carla Schroer, Dennis, Taylor Bennett and 1 other like this

#337 Introduction to the project

Posted by Graeme Earl on 19 February 2013 - 12:46 AM

The UK Arts and Humanities Research Council (AHRC) has provided follow-on funding to support the development of an online, open-source RTI viewer that works on all platforms. We would welcome all input to the project, and we will be sharing all of our progress as the project develops. More details to follow ASAP.

There is a round-up of the activities from the previous project here: http://acrg.soton.ac.uk/tag/rtisad/

You can follow the project via @AHRCRTI and my input via @GraemeEarl


Note concerning privacy:


Any content posted on this forum is public and will get indexed by search engines. You have to make an account to post here and to see the list of members. There is information about individual members available to other members, but only if you choose to fill in the profile information for your account. Only administrators on the system can see members' email addresses.

  • Carla Schroer, marlin, LENA and 1 other like this

#1164 HSH or PTM - How to choose the best fitter

Posted by James Davis on 20 February 2015 - 07:04 PM

You can think of this as a sparse data interpolation problem. We measure N lighting directions, and we have to fit M polynomial coefficients. The data is noisy, not properly bandpass filtered, has outliers, etc. If you think back to your freshman calculus or other numerical class, in an ideal world you need at least M knowns to estimate M unknowns, so we need N > M. If this isn't true, then we have an overfitting problem. Of course the data isn't perfect, so as a rule of thumb, let's say we need (N/2) > M.


It was asked above what overfitting is. See the image below for a visual/mathematical way to think about this problem. In practice, it means that you will see "noise" appear when you put the lighting direction at any direction other than the ones you sampled. This is the extra wiggling in the plot on the right that isn't real data. It happens when you have too many polynomial terms.
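For readers who want to see overfitting for themselves, here is a small Python/NumPy sketch on synthetic (made-up) data, not RTI data: the same noisy samples fit with a modest polynomial and with one that has as many coefficients as samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# N noisy samples of a smooth, reflectance-like curve
N = 12
x = np.linspace(0.0, 1.0, N)
y = np.cos(2.0 * x) + rng.normal(0.0, 0.05, N)

# M coefficients: a modest fit vs. an over-parameterized one
modest = np.polynomial.Polynomial.fit(x, y, deg=3)    # M = 4,  (N/2) > M holds
overfit = np.polynomial.Polynomial.fit(x, y, deg=11)  # M = 12, M == N: interpolates the noise

# Evaluate *between* the sample points, where no data was captured
mid = np.linspace(0.02, 0.98, 200)
err_modest = np.max(np.abs(modest(mid) - np.cos(2.0 * mid)))
err_overfit = np.max(np.abs(overfit(mid) - np.cos(2.0 * mid)))
print(err_modest < err_overfit)  # the high-degree fit wiggles away from the true curve
```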




Now on the question of PTM vs HSH: what are we changing? The choice of polynomial. They are both polynomials, but maybe one has a term for xy and the other has a term for x^2. The original PTM paper defined 6 terms. The plot given above from the Zhang et al paper has a definition that allows a variable number of terms. This is the first time I've seen PTM defined with a variable number of terms, and I think all existing fitters and viewers use the 6-term definition. HSH are Hemispherical Harmonics. They are also polynomials, but have a historical mathematical definition which includes 4 terms, 9 terms, 16 terms, etc. The question is which polynomial terms are best? Well, that depends on your data. We did some experiments around 2007 of just trying random polynomial terms to see if we could do better than PTM or HSH, and indeed we could, but it was dependent on the images we tested, and we abandoned that research before finding a new set of terms which was always better. The plot above says that for whichever set of images was tested, the extended definition of PTM was better than HSH.


In terms of real choices of tools you can use, you can have PTM-6, HSH-4, HSH-9, or HSH-16; there aren't widely available tools for anything else. Since we don't really have compression built into any of the tools, you can expect the file sizes to scale roughly with the number of terms. You can also expect to fit the data better with more terms. Since matte surfaces are more flat and specular surfaces have a bump at the highlight, then thinking about the plot above, we can see that more terms let us represent the bump of the specular highlight better.
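A back-of-envelope sketch of that file-size scaling. This assumes one byte-quantized coefficient per term, per color channel, per pixel, which is a simplification of the actual PTM/RTI file layouts (e.g. LRGB variants store color differently), so treat the numbers as rough order-of-magnitude estimates.

```python
def approx_file_size_mb(width, height, terms, bytes_per_coeff=1):
    """Rough uncompressed size: one coefficient per term, per channel, per pixel.

    bytes_per_coeff=1 assumes byte-quantized coefficients; this is a
    back-of-envelope estimate, not a file-format specification.
    """
    return width * height * 3 * terms * bytes_per_coeff / 1e6

# A 6-megapixel capture, for the term counts mentioned above:
for name, terms in [("PTM-6", 6), ("HSH-9", 9), ("HSH-16", 16)]:
    print(name, approx_file_size_mb(3000, 2000, terms), "MB")
```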


The last point is about how to evaluate error. In papers we like our nice plots. We generally use some metric that comes down to a number. It might be RMSE or some perceptually driven metric, but it always comes down to a "quality number". This is a gross simplification. In practice none of these methods knows what the object *really* looks like between light directions, because we didn't capture an image there. So we are making up what it looks like. It's the space between the samples in the plot above: is it straight? Or curved? Or does it have a wiggle? We just don't know. It might be true that the "quality number" thinks the wiggle has the lowest error, but viewer A likes the straight fit and viewer B likes the curved fit. This is why I said earlier in this thread that you would have to look and see what you like. In image processing papers, people often use Structural Similarity (SSIM) when they want a human perceptual number, but it's still just a "quality number" which still grossly simplifies the situation. In my experience it's primarily a feel-good for researchers to claim they are doing the right thing, but it's not substantially different from RMSE.
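For concreteness, here is what a bare-bones "quality number" like RMSE looks like in Python (a generic illustration, not the exact metric any particular paper used): it collapses every per-pixel difference into one number and tells you nothing about where, or in what way, the render is wrong.

```python
import math

def rmse(img_a, img_b):
    """Root-mean-square error between two equal-sized sequences of pixel values."""
    assert len(img_a) == len(img_b)
    total = sum((a - b) ** 2 for a, b in zip(img_a, img_b))
    return math.sqrt(total / len(img_a))

# Two renders of the same row of pixels (made-up values):
print(rmse([10, 20, 30, 40], [12, 18, 33, 39]))  # one number, hiding where the error is
```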



  • Carla Schroer, Taylor Bennett, leszekp and 1 other like this

#827 Scientific Method HSH vs. PTM

Posted by James Davis on 13 February 2014 - 06:21 PM

You have the basic idea right.


In a normal image, each pixel stores the RGB color value for that pixel.


If we want a relightable image, we could just store the RGB color value for each of 50 lighting directions in each pixel, and then look up the right one when we want to draw the picture. But this wouldn't let us interpolate the color in between the lighting directions we actually took pictures of. So we fit a polynomial to the 50 RGB values instead. This polynomial has 6 terms and thus 6 coefficients in PTMs, and either 9 or 16 coefficients in the most common RTIs. Spherical harmonics are just a specific set of polynomials. So really PTM and RTI are just doing exactly what you would do in Excel if you wanted to make a plot from some scattered data points and told Excel to fit a curve for you. When you use the 'normal' PTM or RTI render, this is what is happening.
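Here is a minimal Python/NumPy sketch of that per-pixel fit, using the standard 6-term PTM biquadratic in the projected light direction (lu, lv). The light directions, the "reflectance" function, and the helper names are all made up for illustration; real fitters do this for every pixel and channel.

```python
import numpy as np

def fit_ptm_pixel(lu, lv, values):
    """Least-squares fit of the 6-term PTM biquadratic for one pixel.

    lu, lv: projected light-direction components for each captured image;
    values: the observed intensity of this pixel in each image.
    """
    lu, lv = np.asarray(lu, float), np.asarray(lv, float)
    A = np.column_stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(values, float), rcond=None)
    return coeffs

def eval_ptm_pixel(coeffs, lu, lv):
    """Relight the pixel for a new, possibly un-sampled, light direction."""
    a0, a1, a2, a3, a4, a5 = coeffs
    return a0 * lu**2 + a1 * lv**2 + a2 * lu * lv + a3 * lu + a4 * lv + a5

# 50 synthetic light directions and intensities for one pixel:
rng = np.random.default_rng(1)
lu = rng.uniform(-0.9, 0.9, 50)
lv = rng.uniform(-0.9, 0.9, 50)
values = 0.5 + 0.3 * lu - 0.2 * lv + 0.1 * lu * lv  # a made-up reflectance

coeffs = fit_ptm_pixel(lu, lv, values)
# Interpolated intensity at a direction we never captured:
print(eval_ptm_pixel(coeffs, 0.1, 0.2))  # ~0.492
```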


One additional thing that is often used with PTM and RTI is to calculate the surface normal. The surface normal (the local orientation of the surface) can be used to calculate synthetic lighting that many people find useful for visualizing small scratches and features on objects. When you use this mode the picture is rendered via computer graphics and the PTM coefficients aren't used at all.
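A minimal sketch of that idea, using simple Lambertian (cosine) shading; actual viewer rendering modes are more sophisticated, but the core is the dot product between the surface normal and a synthetic light direction.

```python
import math

def lambertian_shade(normal, light_dir):
    """Brightness from a surface normal: clamped cosine between unit vectors."""
    nx, ny, nz = normal
    lx, ly, lz = light_dir
    n_len = math.sqrt(nx * nx + ny * ny + nz * nz)
    l_len = math.sqrt(lx * lx + ly * ly + lz * lz)
    return max(0.0, (nx * lx + ny * ly + nz * lz) / (n_len * l_len))

# A surface facing the camera, lit head-on, is fully bright...
print(lambertian_shade((0, 0, 1), (0, 0, 1)))  # 1.0
# ...and goes dark when lit edge-on, which is what makes small scratches pop out:
print(lambertian_shade((0, 0, 1), (1, 0, 0)))  # 0.0
```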


There are a variety of rendering modes, and some might combine both sets of data. So part of your confusion is that there is a set of techniques often used together, and the exact method depends on the rendering mode you chose.

  • caseycameron, Dennis and marlin like this

#736 RTI Underwater: a research project. University of Southampton

Posted by David Selmo on 09 October 2013 - 02:09 PM

October 10, 2013


The URTI research project was a tremendous success, in part due to some of the fantastic input I received from individuals from this blog.  I was able to generate PTM files from an 18th century wooden wreck in the cold turbid current of the Solent and PTM files off a 1st Century BC Roman shipwreck in the Western Mediterranean.  Both sets of field PTMs offered diagnostic resolution in archaeological wood.  In one particular PTM, I was able to isolate individual ‘tool carving planes’ in a mark on a floor timber.  This diagnosis was of particular value to the Spanish maritime archaeologists studying the shipwreck.  It showed them… that…well…we ‘know that we know that we know that somebody PUT these marks on this particular timber with a bladed tool.’  


Controlled laboratory turbidity experiments in this masters research proved to be extremely challenging for a variety of reasons (discussed in detail in the dissertation).  I had to attempt the experiment three complete times to pull it off. Each time required many hours of laboratory preparation at the National Oceanography Centre Sediment Analysis Laboratory.  I almost gave up on it, having already achieved more than enough for high scores in a masters dissertation with the shipwreck PTMs. However, I was 'strongly encouraged' by my advisors that, with regard to the turbidity objectives of the dissertation, 'failure…is not an option' ...haha.  In the end, I was able to shoot 16 pixel-registered PTMs of a piece of Samian ware (a Roman pottery shard) underwater in our test tank, using the fully automatic and fully submersible fixed lighting dome I built for the research. Between PTMs, I increased the turbidity by adding one gram of powdered Bentonite clay.


The results were fascinating.  I was able to mathematically demonstrate that the amount of progressive 'noise' in the source JPEG images was anywhere from 1.5 to 2.5 times higher than the amount of noise generated in the PTM normal renders for each of the associated 16 PTMs.  In other words, the PTMs proved to be far more robust in their ability to accurately render the image under progressively turbid conditions than the very source images used to generate them.  (I believe there is some 'averaging' going on in the bi-quadratic equation generating the PTMs that produces the clearer results.)   This is an empirical result and one we will be publishing shortly.


I am a firm believer in sharing both the data and dissertation with all interested parties and will make it accessible here on this CHI blog with links as soon as the URTI publications are in the pipeline.  We are working on the publications right now.  I would anticipate being able to provide the link to this body of research material as early as November.

Again, thank you all for your interest and input.



Dave Selmo

  • caseycameron, Dennis and Taylor Bennett like this

#70 The feared "Unknown Error" message in RTI builder

Posted by Carla Schroer on 20 July 2012 - 04:00 PM

I have followed up with Dale and know that this turned out to be that his files were on the desktop of a Windows machine. You *can't have spaces in the filenames or anywhere in the full path to the files you process with RTIBuilder*. This issue accounts for ~95% of all problems with RTIBuilder. On Windows the desktop is in a folder called "documents and settings" (note the spaces), so you can't work there on Windows machines. You need to put your files in a folder on a local hard drive where there are no spaces in the path.
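A tiny, hypothetical helper (not part of RTIBuilder) that catches the problem before you start processing; the function name and example paths are made up for illustration.

```python
def check_rtibuilder_path(path):
    """Hypothetical pre-flight check: refuse paths with spaces, which break RTIBuilder."""
    if " " in path:
        raise ValueError(f"space in path (RTIBuilder will fail): {path!r}")
    return path

print(check_rtibuilder_path("C:/rti_projects/coin_01"))      # fine: no spaces
# check_rtibuilder_path("C:/Documents and Settings/dale")    # would raise ValueError
```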

We are working on some bug fixes to RTiBuilder this summer, and we intend to catch this error better and report something more useful to the user, so that users know when this problem occurs.

  • caseycameron, marlin and mgts24 like this

#627 RTI Underwater: a research project. University of Southampton

Posted by David Selmo on 07 June 2013 - 10:57 AM

I have completed the prototype of the first 'Underwater RTI Light Dome'


A HUGE thank you to Professor Mark Jones,  Master carpenter Mr. Dennis Cook,  Simon, and my new friends at the Mary Rose Museum in Portsmouth, UK for allowing me to build this in their workshop!


Pictures of it in the Google Drive folder:




It was made entirely of acrylic cut with a laser.  It essentially has a focal length of 400 mm (inside radius), and a total base ring OD of 880 mm.  Each leg features positions for four LEDs.  It will hold both my $600 'point and shoot' auto-focusing digital camera and the department's $8000 worth of high-end underwater Nikon SLR equipment. I designed it to be completely modular, although making it firm enough would have required additional bracketing.  I ran out of time at the workshop to make these additional brackets.  So unfortunately, it will be completely fused together with acrylic solvent/glue prior to wiring with LEDs.


(@George: I didn't fully understand what you meant by 'surface normal visualization.'  Somebody showed me the RTIBuilder controls to display it, though, and we took a look. One of my PhD student advisors seemed to think it was due to shadows/reflectivity from the tiled pool surface.  Maybe?)


@Marlin: the CHI RTI kit arrived!  Thank you.  Now to start modifying the reflector balls so I can place them on target in the open sea.  I'm thinking of just gluing on a nut to receive a piece of threaded rod for each.  Then I can push the rod down into the sand/silt at the same relative height as the object to be photographed.

  • Dennis, marlin and Taylor Bennett like this

#380 Photogrammetry

Posted by Carla Schroer on 25 March 2013 - 03:18 AM

If there is enough interest in discussing photogrammetry here, we can set up a new forum for it, and move this content over to it.  I'll wait and see the responses and interest in this topic over the next couple of weeks, and we can make the call.  We are happy to support that if folks think it is useful.



  • caseycameron, marlin and Taylor Bennett like this

#369 RTI Underwater: a research project. University of Southampton

Posted by David Selmo on 21 March 2013 - 10:00 PM

I am a Tennessee resident and Florida technical cave diver, a PADI and BSAC open water dive instructor, and am currently doing a Masters in Maritime Archaeology at the University of Southampton, UK (graduating Sept 17, 2013). At Southampton I was introduced to RTI (and CHI) by Hembo Pagi, who helped me utilize it on a 'deadeye' rigging piece from a wooden sailing vessel. I became completely intrigued with RTI. I started to contemplate how this technology could be utilized for submerged cave archaeology, especially in countries like Mexico and the United States where the archaeology must be left in situ in the cave. However, I soon realized RTI could be used to benefit the recording of in situ archaeology across the entire maritime spectrum of submerged site types.


After enquiring around a bit, I was fascinated to discover there is no published literature on the use of RTI underwater.  Surely there must be somebody out there who has experimented with this underwater?  If there is, I'd like to network with them, because I have chosen this research for my masters dissertation.  I have between now and September 17, 2013 to determine the feasibility of RTI underwater, establish the best underwater RTI methodological approach, assess the environmental impact on the quality of underwater RTI data capture (in some quantifiable way), and generate 20,000 publishable words summarizing 5 months of underwater RTI research.


There appears to be some genuine enthusiasm for this research from interested third parties, and I am so pleased that a number of amazing people are coming forward to 'speak into' its success.  I will do my very best to keep up-to-date postings here on the CHI forum, and I encourage thoughts and input from anyone who has something to contribute.

Currently I am drafting the Project Design.  When it is completed (next couple of weeks) it will be posted here.  In the meantime, I have two generic questions to present to the RTI community:

1. Does anybody know anybody who knows anybody who has dabbled with RTI underwater? I would love some contacts.

2. What might be some cool applications of RTI underwater?  In my own field, I know, for example, that I can detect ‘butchery marks’ on submerged Pleistocene bones and I can detect ‘tap marks’ in submerged petroglyph carvings…and whatnot.  But I would be interested in hearing thoughts on non-archaeological related RTI underwater applications.

I look forward to sharing this research journey with you!


Dave Selmo

Masters in Maritime Archaeology student

University of Southampton, UK


Email:  mos11b1p@hotmail.com




  • caseycameron, Taylor Bennett and David Selmo like this

#338 RAW v. JPEG

Posted by Carla Schroer on 19 February 2013 - 03:41 PM

This is actually a more complicated question than it might appear.  I'll break this down into a couple of parts, first describing why, even when processing JPEGs, shooting RAW has huge advantages. Then I'll give a high-level explanation of some of the difficulties with creating a 16 bit pipeline for RTI (and some hopeful news about working with 16 bit files).


1. Why shoot RAW if you are going to process JPEGS?

Shooting RAW gives you the highest quality image you can get from your camera, and it also affords you complete control and record keeping over how that image is processed.  There's lots of information online about RAW vs JPEG, so I won't get into that too much here, other than why you still want to shoot RAW for an RTI JPEG pipeline.

First and foremost, the RAW files you shoot should become your archive originals for your RTI capture set.  This means that when software does support a 16 bit workflow, you can take advantage of it with images you have already shot.  It also means that for any other use of those images in the future (say, for algorithmic rendering), you will have the highest quality images available.

Also important is that by shooting RAW, you have complete control over the JPEGs that get produced, so you will end up with JPEGs that are most likely quite different from what would come straight out of your camera.  For example, the sensors of most modern digital SLRs collect 14 bits per pixel per color channel of data (stored as a 16 bit file); JPEGs are 8 bits per pixel per color channel, so clearly you are losing a lot of information with a JPEG.  In the camera, the data from the sensor is turned into a JPEG and you have no control over, and no record of, what was done.  Generally the camera's processor will apply brightness, saturation, and sharpening.  If you follow CHI's recommended workflow, you shoot RAW, convert to DNG, and make sure that you have not applied any transformations to the data except white balance, and possibly exposure compensation.  It is important to stay away from sharpening.  In addition, you have a complete record of the transformations in the XMP metadata contained in the RAW file.  You can also create a new set of JPEGs with more or less exposure using exposure compensation and get a much better result than trying to manipulate JPEGs.  Your white balance based on a gray card will also be more correct than trying to white balance a JPEG.

So even though you are creating an 8 bit file, you get to decide how to process the RAW for that 8 bit file, giving you a better result.
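A quick sketch of why the bit-depth difference matters (ignoring gamma and demosaicing for simplicity; the sample values are made up):

```python
# A 14-bit sensor records 2**14 = 16384 levels per channel; an 8-bit JPEG keeps 256.
raw_levels_per_jpeg_level = 2 ** 14 // 2 ** 8
print(raw_levels_per_jpeg_level)  # 64 raw levels collapse into each JPEG level

def quantize_to_8bit(value_14bit):
    """Map a 14-bit sample (0..16383) to 8 bits (0..255), ignoring gamma for simplicity."""
    return value_14bit * 255 // 16383

# Two clearly different sensor readings become indistinguishable after conversion:
a, b = quantize_to_8bit(8000), quantize_to_8bit(8030)
print(a, b, a == b)  # 124 124 True
```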


2.  Why not just support a dng or tiff workflow all the way through the tools?


We would love to do this, but it isn't as straightforward as you might think.  First, the RTI tools are built with small amounts of money from grants, and also with some volunteer efforts.  We have requirements to make the software run on both Mac and Windows, and also we need to maintain it. We have also chosen the GNU General Public License version 3 (GPL v3) as our license for the software.  In order to make coding much easier and more maintainable all the code is written to libraries that allow the code to be recompiled for different platforms and we don't have to change the code.  Further, the libraries give us all the support for lower level operations.  The libraries we choose have to be compatible with us shipping the resulting software with the GPL v3.  (they don't have to be GPL, they have to be compatible with that license - a completely separate and large topic)  For RTIViewer for example, we use the QT libraries, and they don't support 16 bit (or didn't at the time we had the money to build RTiViewer.  In addition, there are some limitations in support for dng.  While Adobe makes available libraries to support dng under a range of liberal license terms, at the time we were building RTIBuilder, they only made these available for C and C++ code and that tool is written in Java. And finally, the actual building of the .ptm or .rti file is done by command line tools called "fitters."  RTIBuilder manages all your files fr you, finding the light positions from the highlights on your sphere, doing record keeping, managing cropping, ets. but it doesn't actually build the finished file.  Instead it calls these external programs to do so.  the ptmfitter is from HP Labs and the executable is free for non-commercial use.  The source code is not publicly available, and HP is not working on this software.  It takes as input jpegs.  
The HSHFitter is available as open source, and someone could modify it to support TIFF or DNG as input, but there is no funding for such an endeavor at this point.  Even if HSHFitter were updated to support the 16-bit file formats, RTIBuilder would have to be modified to support those file types for most people to be able to take advantage of it, and as stated above that is a bigger project than you would want it to be.
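The hand-off described above - a GUI front end delegating the actual .ptm/.rti construction to an external fitter executable - can be sketched as follows. This is an assumption-laden illustration of the pattern, not RTIBuilder's actual code; the flag names (`-i`, `-l`, `-o`) are hypothetical, since the real ptmfitter and HSHFitter define their own command-line syntax.

```python
import subprocess

def run_fitter(fitter_path, input_dir, light_positions_file, output_file):
    """Invoke an external fitter executable, the way a GUI front end
    like RTIBuilder hands off building the finished .ptm or .rti file.

    The flags below are illustrative only; consult the actual fitter's
    documentation for its real command-line interface.
    """
    cmd = [
        fitter_path,
        "-i", input_dir,             # directory of capture-set JPEGs
        "-l", light_positions_file,  # light positions from the sphere highlights
        "-o", output_file,           # finished .ptm or .rti file
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"fitter failed: {result.stderr}")
    return output_file
```

Because the fitter is a separate process, the GUI's language (Java, in RTIBuilder's case) doesn't have to match the fitter's (C/C++), which is exactly why the pipeline can mix tools this way.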


Some hopeful news: none of this means that the RTI pipeline won't support a 16-bit alternative in the future, just that it isn't in place now.  Further, we are discussing the possibility (no promises at this point) of supporting a full 16-bit pipeline for the CARE tool now under construction.  We would like to do that, but whether it gets implemented is a question of time and budget.


I hope that this information helps folks understand why they should still shoot RAW, and why the current pipeline is a JPEG one.




#328 CHI Forums now clean and clear

Posted by Carla Schroer on 12 February 2013 - 09:17 PM

We are pleased to announce that the malicious hack on our forum site has been cleared, and the forum is now safe for our users! We have taken steps to prevent this happening in the future.


Please note that our primary web site was never affected, and our database of forum users and data was not compromised by the hack -- no worries there.


A couple of important facts:


  • This forum site was never distributing malware.
  • The hack was designed to redirect users to a site that is known to serve malware.
  • If a user did not follow the redirect (most browsers will detect this and prompt the user), the user was not affected.


The hack was nasty and took much longer to clear than we had hoped. There were a series of issues that needed to be taken care of to get things clean.  We will be posting more information about what happened for those that are curious, or who might want to learn from our experience.


The hack came in originally through an old copy of WordPress installed on a staging site on our domain.  It was self-replicating and difficult to remove. One step we have taken to keep our site up and running is moving the forum to a hosted site run by the company that creates the forum software we are using.


Thank you to all of you who stuck with us through this time.  There is a lot of great information here, and we hope that continues to grow.  Stay tuned for announcements about updated software and user guides coming soon...




#1619 New geodesic domes

Posted by Kirk Martinez on 13 February 2017 - 10:09 AM

Hi all,

we just completed over a year of development on a geodesic dome system and installed the first one at the University of York. We're still writing up the details, but there are some images etc. on our website. Headlines are:

  • 65 1300 lumen LEDs
  • relatively portable - as it unbolts into a box of struts and some aluminium poles
  • quite fast - about 65s per capture
  • takes different cameras as long as they have an external wired shutter release
  • doesn't need a laptop to drive it - as long as you can focus and set camera exposure, it's standalone




#1554 A Portable, Low-Cost, Open-Design Rig for Reflectance Transformation Imaging

Posted by stporter on 21 September 2016 - 09:35 AM

Hi everyone,


I wanted to share with you an RTI capture system I've devised that uses 3D printed and laser cut parts in addition to some easily acquired bits of hardware. The rig was designed as a sort of alternative to dome-style automatic capture systems for researchers who are not comfortable doing their own wiring or programming, have really low budgets, or work in especially adverse field conditions. It consists of a flashlight attached to a wooden arm, which is connected to a lazy susan bearing. This allows the arm / light to rotate around a stable platform. 




I recently presented this project as a poster at the annual meeting of the European Society for the Study of Human Evolution in Madrid, Spain. You can download files for 3D printing / laser cutting, assembly instructions, and my poster on the Data Repository for the University of Minnesota.


Please let me know if you have any comments or questions! Hopefully some of you may find this useful.


-Samantha Porter


#1193 Next Photogrammetry training: May 18-21

Posted by Carla Schroer on 24 March 2015 - 04:54 PM

Hi folks,


We are pleased to announce that our next 4-day photogrammetry training class will take place at our studio in San Francisco on May 18-21.  Learn more, and find a registration form here: http://culturalherit...ning/index.html


Note that this class has been rescheduled from April 20-23 to May 18-21, for those of you that might have seen an earlier announcement.


