RTI Underwater: a research project. University of Southampton


rtiunderwater

I am a Tennessee resident and Florida technical cave diver, a PADI and BSAC open water dive instructor, and I am currently doing a Masters in Maritime Archaeology at the University of Southampton, UK (graduating Sept 17, 2013). At Southampton I was introduced to RTI (and CHI) by Hembo Pagi, who helped me use it on a 'deadeye' rigging piece from a wooden sailing vessel. I was completely intrigued by RTI. I started to contemplate how this technology could be used for submerged cave archaeology, especially in countries like Mexico and the United States where the archaeology must be left in situ in the cave. However, I soon realized RTI could benefit the recording of in situ archaeology across the entire maritime spectrum of submerged site-types.

 

After enquiring around a bit, I was fascinated to discover that there is no published literature on the use of RTI underwater. Surely there must be somebody out there who has experimented with this? If there is, I'd like to network with them, because I have chosen this research for my masters dissertation. I have between now and September 17, 2013 to determine the feasibility of RTI underwater, establish the best underwater RTI methodology, assess the environmental impact on the quality of underwater RTI data capture (in some quantifiable way), and generate 20,000 publishable words summarizing five months of RTI underwater research.

 

There appears to be some genuine enthusiasm for this research from interested third parties, and I am so pleased that a number of amazing people are coming forward to 'speak into' its success. I will do my very best to keep up-to-date postings here on the CHI forum, and I encourage thoughts and input from anyone who has something to contribute.


Currently I am drafting the Project Design. When it is completed (in the next couple of weeks) it will be posted here. In the meantime, I have two generic questions to present to the RTI community:


1. Does anybody know anybody who knows anybody who has dabbled with RTI underwater? I would love some contacts.


2. What might be some cool applications of RTI underwater?  In my own field, I know, for example, that I can detect ‘butchery marks’ on submerged Pleistocene bones and I can detect ‘tap marks’ in submerged petroglyph carvings…and whatnot.  But I would be interested in hearing thoughts on non-archaeological related RTI underwater applications.


I look forward to sharing this research journey with you!


Sincerely,


Dave Selmo

Masters in Maritime Archaeology student

University of Southampton, UK

http://cma.soton.ac.uk/who-we-are/people/  

Email:  mos11b1p@hotmail.com



 

 
 

 



Dear Dave,


 


Thanks for the post to this forum. Your project is ambitious and VERY interesting, and numerous issues come to mind when I read your entry. Wow.


 


I don't know of anyone who has taken this on. George Bevan has spent some time underwater (in a dry suit) shooting, swimming, and looking around with camera(s). I have a feeling that he'll chime in on this thread when he sees it. As I recall he was capturing photogrammetry data sets. He might be a good person to start with. (You might also want to shoot some photogrammetry while you're down there.)


 


Things like water clarity, currents, focus on your subject, proper exposure, and the ability to move around the subject without bumping anything will be really important hurdles to clear.


 


I suggest creating a 'mock-up' environment, maybe a swimming pool or just an area of shallow water, so you can work out the details before you actually dive to the location with all your gear.


 


Plan for details like:


-how far your camera will be from the subject (working space)


-how will you support the black spheres next to your subject (at least they sink)


-how will you trigger your light source?


-how will you manage the 'string'


-dial in your exposure with light tests


-optimize your f-stop / DoF (depth of field) on 'dry land' as a preset so that when you submerge you won't have to spin the focus ring too much


-bring down a grease pencil and a board so you can shoot images of notes, which could also be used to delineate your data from your 'test' images
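On the f-stop/DoF item in the list above: the usable depth of field at a given working distance can be estimated before the dive with the standard hyperfocal approximations. A minimal sketch, with all numbers assumed purely for illustration (a 35 mm lens at f/8, a 0.03 mm circle of confusion, and a 0.6 m subject distance):

```python
def depth_of_field(f, N, c, s):
    """Near/far limits of acceptable focus (standard thin-lens approximations).
    f: focal length (m), N: f-number, c: circle of confusion (m),
    s: subject distance (m)."""
    H = f * f / (N * c) + f                    # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    far = s * (H - f) / (H - s) if s < H else float("inf")
    return near, far

# Assumed values: 35 mm lens at f/8, c = 0.03 mm, subject at 0.6 m.
near, far = depth_of_field(0.035, 8, 0.00003, 0.6)
```

With these assumed numbers the zone of acceptable focus is only about 13 cm deep, which is why presetting focus on dry land matters.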


 


I'll think of more ...


 


There are many water housings for DSLR cameras and flashes (or continuous light sources), and it would be essential to be familiar with them. Some units offer dials so that you can change camera settings while submerged (bonus!). Most camera housings are lens-specific: if you know what lens you need, you'll know what housing to rent.


 


Water clarity might be an issue: mud and silt floating between you and the subject. I don't know how you could avoid this.


 


This topic is really interesting. I'll be back soon with more thoughts! Thanks again; really interesting project.


 


Marlin.


 


 


 


 


 


 


 



I agree this is a really interesting subject, and I'm sure there will be many potential uses for RTI and other computational photography methods underwater (examining underwater structures for evidence of corrosion or other defects, for example).  Keeping the camera still must be a challenge, but tools for image alignment are becoming available.  Another consideration that comes to mind is that you might find quite a bit of refraction of light through the lens port of the camera housing, possibly affecting the accuracy of calculations of normals. 
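On the refraction point: a flat lens port bends rays according to Snell's law (n_water ≈ 1.33 vs n_air = 1.0), so an angle measured on the air side of the port differs from the true in-water angle, which would skew naive light-direction estimates. A small sketch of the conversion; this is a simplification that ignores the thickness of the port glass:

```python
import math

N_WATER = 1.33  # approximate refractive index of fresh water
N_AIR = 1.0

def angle_in_air(theta_water_deg):
    """Angle from the port normal on the air side, given the in-water angle.
    Snell's law: n_water * sin(theta_water) = n_air * sin(theta_air)."""
    s = N_WATER / N_AIR * math.sin(math.radians(theta_water_deg))
    if s > 1.0:
        return None  # total internal reflection; no transmitted ray
    return math.degrees(math.asin(s))

# A ray arriving 30 degrees off the port normal in water exits at a
# noticeably steeper angle on the air side.
theta_air = angle_in_air(30.0)
```

Beyond roughly 49 degrees in water the ray is totally internally reflected at a flat port, which also limits the field of view.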

 

Mark Levoy and his students at Stanford have done some work on using synthetic aperture confocal imaging to view objects through turbid water, and you might look at his work:  http://graphics.stanford.edu/papers/confocal/

Also, photometric stereo imaging might be easier to accomplish, although I'm guessing the results would probably be less robust than RTI.  Here's some more research from an alumnus of Dr. Levoy's group in this area:  http://graphics.stanford.edu/~wilburn/

 

I'll follow this thread for more updates!

Edited by Taylor Bennett

Interesting project, David. I have thought of doing RTI underwater, but there are formidable technical challenges in deploying the highlight method underwater, not least stirring up sediment. I'd suggest that a fixed dome+camera system, possibly mounted on an ROV, would be ideal. The movement as the ROV hovers could be compensated for by post-processed alignment. Using photometric stereo rather than RTI to process the data set may also be advantageous, as you'll need many fewer images than for PTM/HSH.
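For context on why photometric stereo needs so few images: under a Lambertian model, a per-pixel normal and albedo can be recovered from as few as three known light directions by least squares. A minimal numpy sketch on synthetic data; the light directions and albedo here are invented for illustration, not values from any real rig:

```python
import numpy as np

# Light directions (rows), normalized; illustrative values only.
L = np.array([[0.0, 0.0, 1.0],
              [0.6, 0.0, 0.8],
              [0.0, 0.6, 0.8],
              [-0.5, -0.5, 0.7071]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)

def photometric_stereo(L, I):
    """Solve I = L @ (albedo * n) for one pixel by least squares."""
    g, *_ = np.linalg.lstsq(L, I, rcond=None)
    albedo = np.linalg.norm(g)
    return albedo, g / albedo

# Synthetic pixel: albedo 0.8, true normal tilted toward +x.
n_true = np.array([0.3, 0.0, 1.0])
n_true = n_true / np.linalg.norm(n_true)
I = 0.8 * np.clip(L @ n_true, 0.0, None)   # Lambertian shading
albedo, n_est = photometric_stereo(L, I)
```

Four well-spread lights already give an overdetermined, noise-tolerant solve, versus the dozens of captures typical for PTM/HSH fitting.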

 

Alternatively, I think the best way to do what you want is to shoot the photogrammetry (preferably using ADAMTech) and then create a "Virtual RTI" in Blender or another rendering package. The reason I suggest ADAMTech is that it will generate vastly more points than any other package...the calibration of the camera needs to be very precise to get the good stereo alignment to generate points.

 

One significant application we've seen for this kind of surface metrology underwater is understanding machining or woodworking marks of vessels (provided there isn't a huge amount of corrosion).

 

I work extensively with the underwater archaeology unit at Parks Canada...they may be interested as well in some of the applications you have in mind. 


Post 2 CHI Blog 032613


Marlin, Taylor, and George:


I've been out of town on some much needed R & R to Cork, Ireland. (From London, any European destination appears to be about $100 US so I’m taking advantage while studying abroad.) Cork = Beautiful history & gloomy weather.  Just got back in and back to the grind. 


I am taking every piece of insight into direct consideration. No link will go unclicked, no suggested PDF unread. (And nobody will insult me by spelling out acronyms or breaking things down. :) )


@Marlin: I've been running these things through my head. I am referring to them as the 'logistical considerations'...the 'mechanics' if you will. Your post encourages me that I'm thinking along the right lines. I have inspiration (and an initial drawing) for a submersible stainless tripod jig with a 'boom arm' to deploy the photography hardware. Low profile, heavy, solid, providing 360 degrees of open space in the horizontal and vertical planes. The boom arm would be oriented parallel with the direction of tidal/current/cave flow to reduce resonance against the jig. Maybe something lift-bag deployable, or perhaps swimmable with traditional diver lead added once in position.


@Taylor: Really appreciate these links. I read the second one, then went to add it to my EndNote library only to see that somehow I had already found it while randomly researching on Google Scholar. So instead of 'adding' it, I 'checked it off' as read. Fortunately, your putting it there inspired me to read it, and it proved to be valuable insight for a Skype meeting today with Tom Malzbender, the 'father of RTI', because it contained the very thing he wanted to talk about. So thank you. (I still have to read the synthetic aperture article. Dr. Graeme Earl (Southampton) is keen on my doing the water turbidity portion of this research, so I must look closely at these suggestions.) 'Computational photography methods': I like that. Must follow up on that phrase. I am wondering if there wouldn't be an application in detecting steel fractures in submerged structures?


@George: You were already on a short list of names to immediately contact, so I will check you off and consider the dialogue open and running here. The 'fixed dome+camera system' was absolutely the primary line of thinking for the research. Both you and Dr. Fraser Sturt (Southampton) have independently advised the same thing. Additionally, I have a notion of attempting data capture with high-speed video and an HID dive light focused narrow. (My idea is to hit 'rec', manually move the light around the subject, a 20-30 second process at most, then hit 'stop.' I'd then pull the frames, correct for radial distortion and color matching in post, and process them.) I would like to research both 'high end' and 'low cost' data capture. Thanks for the idea about machining and woodworking marks; I'm looking for 'justifications' to put in the Project Design write-up and that fits nicely. I would like to talk to you more about the other suggestions when I get closer.


SKYPE MEETING with Tom Malzbender, 03/26/13.

 

Tom would like me to pursue a submersible prototype of the Surface Enhancement Using Real-Time Photometric Stereo and Reflectance Transformation unit featured in this publication: http://graphics.stanford.edu/~wilburn/ Oh, and he said "Forget the dome… go with the arms." This is the approach for which I am currently pursuing university resources.


One approach enables an individual archaeologist to independently swim away with useful data about an in situ bone or bifacial lithic; the other allows for real-time analysis from multiple ROVs in an array across hundreds of feet for deep-sea image enhancement of entire shipwrecks. There's an article in Hot Rod magazine (1967, Volume 20, Page 54) that is the first recorded statement of the old adage: "Speed costs money. How fast do you want to go?" Somehow, I think a loose paraphrase of that applies to RTI underwater, too. :)


Glad you find the links useful!  Regarding the second approach you describe, "real-time analysis from multiple ROVs in an array across hundreds of feet for deep sea image enhancement of entire ship wrecks," Tom Malzbender mentioned a similar approach in a Google Tech Talk a couple of years ago (in response to a question at 57:50), near the end of this video:

It occurred to me also that a plenoptic camera might be able to capture RTIs in a single exposure, by virtue of its ability to record the directions as well as the intensity of light rays at the sensor. It appears that using a camera array with a fixed light source is something like the inverse of capturing RTIs using a fixed camera while varying the direction of the light source; however, in the video above, TM says the results of using a camera array are not as robust (apologies to TM and to you if I've misinterpreted his response in the video or its relevance to your second approach). If you want to cover a large area at significant depths (hundreds of feet), getting enough light on the subject would be a significant challenge, and you'd potentially also have more light scattering due to turbidity, and refraction due to stratification of temperature and density, which would further complicate the image analysis. This is not intended to discourage your ideas, however, and I could be corrected on all these points!
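The "getting enough light on the subject" problem can be roughed out numerically: attenuation in water is approximately exponential (Beer-Lambert), so the fraction of light surviving a round trip from source to subject to camera drops fast with standoff distance. A toy estimate; the attenuation coefficient here is an assumed value, and real ocean water varies enormously:

```python
import math

def surviving_fraction(path_m, c_per_m=0.2):
    """Beer-Lambert attenuation: I/I0 = exp(-c * d).
    c = 0.2 per metre is an assumed figure for moderately clear
    coastal water; turbid or deep water can be far worse."""
    return math.exp(-c_per_m * path_m)

# Light travels source -> subject -> camera: ~2 m total at 1 m standoff.
frac = surviving_fraction(2.0)
```

Even under this optimistic assumption about a third of the light is lost over a 2 m round trip, before accounting for scattering back into the lens.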


It seems to me you'll definitely want to have a very specific goal in mind to go to the time and expense of attempting RTI underwater. Besides photogrammetry, which can get accuracies of <1mm underwater, there are a bunch of other remote sensing tools that are either recently on the market or about to come on. To name a few: 

BlueView BV5000 (http://www.blueview.com/Bv-5000.html)

2G Robotics ULS-500 (http://www.2grobotics.com/products/underwater-laser-scanner-uls-500/)

3D at Depth DP1 underwater LiDAR (http://www.3datdepth.com/)

Enea REVUE (http://www.enea.it/it/produzione-scientifica/energia-ambiente-e-innovazione-1/anno-2012/knowledge-diagnostics-and-preservation-of-cultural-heritage/terrestrial-and-subsea-3d-laser-scanners-for-cultural-heritage-applications)

Currently there's a lot of buzz about the DP1 in the offshore industry, while NOAA seems very keen on the BlueView. Each has its advantages. You've probably already encountered a few of these.

 

Taylor, would a light-field camera give you surface normals? I've looked at the Raytrix systems for an industrial metrology project but it seems all you get is pure 3D information, much of which is, at present, of an inferior quality to what can be obtained by lasers or structured light.


George, I'm not sure that a single exposure from a plenoptic camera would give enough data to generate the coefficients for the polynomials and calculate normals, although I've seen suggestions to that effect.  I'd think you'd at least need some more information about the source light direction (e.g., from reflective spheres or other means), and I'm guessing that you'd still need at least a few exposures with varying light directions to calculate normals.  I'm certainly speculating about this and should perhaps restrain my enthusiasms until I know better.  Also, the pixel count of plenoptic sensors developed so far is not very high, which limits their resolution.  Recently (more speculative enthusiasm!), I've been interested in synthetic aperture metamaterials:

http://cmip.pratt.duke.edu/pubs/metamaterial-apertures-computational-imaging


You've probably seen this already, but the Raytrix R29 seems to be the best commercial light-field camera available. They have CUDA-driven software for reconstruction: http://www.raytrix.de/index.php/Optical_inspection.html

 

I'm going to try to get over to Germany in a few months to try it with a test object. I could certainly imagine applications for it in the underwater environment. 


This is diverging from cultural heritage a bit, but I'd think the particle velocimetry application of light-field cameras could be used to study and quantify seeps, groundwater-surface water interactions, hydrothermal vents, currents, oil leaks, methane generation, pollutant discharges and transport, sedimentation, dredging projects, and such, some of which could potentially affect underwater archeological sites.


Thank you everyone for the continued input. I am processing these links one by one, literally. The masters project proposal demands a cited bibliography demonstrating understanding of current trends, so these links are invaluable to me. Currently, all meetings and even emails are on hold during Easter break; in England, Easter break is a full month long, slotted to end April 14th. (More than a few people will have flooded inboxes when they come back to work, haha.) George, the very specific goal I must nail down is the 'research question.' It is paragraph one of the project design, on which the entire project pivots. The input for that varies with whom I speak, because everybody seems to want to see something different. It has become apparent in a very short time that the scope of this research (conducted thoroughly and with modern high-tech methodology) is beyond a masters project. If there proves to be funding interest, I intend to generate a PhD proposal, which in turn may open the floodgates to the big picture but end up dictating a limited/reduced research question for this masters. There is much to flesh out in the weeks ahead before a draft of the project design hits this forum for your collective review.


As an archaeological conservator, I find much of the discussion has flown way over my head.
 

I see RTI as a technology well-suited for the fast developing field of deepwater archaeology where the cost of conventional excavation is prohibitive. All work is done robotically with tethered or autonomous underwater vehicles. In-situ study, preservation and monitoring will be called for. 

 
It is remarkable how difficult it is to interpret fine detail on underwater surfaces when they are viewed on shipboard monitors even using HD video from 3-chip cameras. There is no light in deepwater and one's lighting dissipates rapidly as one moves away from a surface. 
 
RTI still captures will overcome some of this by allowing us to use what I think of as the 'RTI jitters': mousing back and forth over the image to mentally reconstruct topography.
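The relighting behind those 'RTI jitters' is cheap to compute: a PTM stores six biquadratic coefficients per pixel (Malzbender et al. 2001) and evaluates luminance for any projected light direction (lu, lv), which is why a viewer can relight interactively. A sketch of that evaluation with made-up coefficients:

```python
import numpy as np

def ptm_relight(coeffs, lu, lv):
    """Evaluate PTM luminance for projected light direction (lu, lv).
    coeffs: (..., 6) array (a0..a5) per pixel; biquadratic model:
    L = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5."""
    a0, a1, a2, a3, a4, a5 = np.moveaxis(coeffs, -1, 0)
    return a0*lu*lu + a1*lv*lv + a2*lu*lv + a3*lu + a4*lv + a5

# Made-up 2x2-pixel coefficient array for illustration.
coeffs = np.zeros((2, 2, 6))
coeffs[..., 5] = 0.4   # base luminance everywhere
coeffs[0, 0, 3] = 0.5  # this pixel brightens as the light moves toward +u
lum = ptm_relight(coeffs, lu=0.6, lv=0.0)
```

Dragging the virtual light just sweeps (lu, lv) and re-evaluates this polynomial per pixel.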
 
Besides the aid they will give to shared study of deepwater artifacts or assemblages, RTIs will make great museum display exhibits at nearby coastal state museums. These museums are being considered as ways to monitor sites to discourage looting. Linked 'live' by video cables to the deepwater wrecksite using remotely controlled bird-on-a-wire cameras, the displays will allow constant monitoring. But let's face it: after a while of viewing a static video, everyone starts to focus on the shrimp on top of the amphoras rather than the artifacts. So RTI displays of artifacts will be great interactive supplements to redirect the focus.
 
Besides the many practical problems of working underwater, I'm wondering how we will deal with 'marine snow'. When viewing underwater scenes we mentally edit out the floating debris between ourselves and the surface. I'm guessing multiple images at each light position, combined with processing out anything that moves?
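That "multiple images at each light position" idea maps directly onto a per-pixel temporal median: anything that only occupies a pixel in a minority of the burst frames (snow, fish, silt) gets rejected. A numpy sketch on synthetic frames:

```python
import numpy as np

def remove_moving_debris(frames):
    """Per-pixel median across a burst of frames taken at ONE light
    position; transient occluders are rejected as long as they affect
    fewer than half the frames at any given pixel."""
    return np.median(np.stack(frames), axis=0)

# Synthetic burst: a constant scene, with 'snow' in one frame only.
scene = np.full((4, 4), 0.5)
snowy = scene.copy()
snowy[1, 2] = 1.0          # a bright floating particle in one frame
clean = remove_moving_debris([scene, scene, snowy, scene, scene])
```

The catch for RTI is that the frames in each burst must share the same light position, so the capture time multiplies by the burst length.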
 
Talking about deepwater applications, though a great goal, sounds like talking about doing RTI on the moon; the recommendation to begin in a clear pool sounds like wise advice.

These are some great points, Denis. I've been amazed at the relatively low quality imagery produced by most commercial ROVs. The video cameras are fine for underwater repair tasks, but are really sub-par compared to terrestrial archaeological photography. 

 

I still think that photogrammetry is among the most promising tools for mapping deepwater sites, not least because it gathers all of the indispensable colour information as well as 3D information.


  • 4 weeks later...

(Three photos attached.) Very short notice pool session this Thursday, May 2nd, 2013. I dropped what I was doing and spent an entire day at the University of Southampton Centre for Maritime Archaeology 'dive shed' out back. The effort has produced THIS. Keep in mind, not a single tool was employed in the construction (minus my trusty pocket knife). Why? Because the CMA doesn't have any. The entire thing has been assembled from stuff lying around in the shed: archaeology planning-frame iron, a piece of aluminum ladder, a red iron welded 'box thingy', and a 30' Stilson pipe wrench that weighs about 20 pounds, all held together with duct tape and string. An assessment of all my available underwater lights left me with an Intova 250-lumen wide-angle LED (backup cave diving light) as the light of choice. (Tom Malzbender personally scrutinized my half a dozen underwater lights last night on Skype; I had to aim the laptop cam at the ceiling and put on a light show for him, haha. :) The beam is bright enough for photography within a meter, and has neither hotspots nor dead spots in its center (unlike my HIDs and various other focused high-intensity LEDs). The focal plane is approx. 60 cm from the camera. (Maybe a bit far? I may lift the object up off the pool floor to bring it closer.) The department's underwater SLR camera may or may not be available this Thursday for the pool session, so the camera I'm using is my trusty 12-megapixel Fujifilm FinePix F200EXR, which likes to attempt to auto-refocus between every shot. However, when set on continuous mode (firing one pic every 2 seconds for as long as the battery and disk will go) and testing batches of approx. 150 pics at a time, 90% remain in focus when I DON'T change the lighting (ha ha).

This percentage will surely go down when I move the light to 32 different positions, but I believe I will be able to weed the blurred shots out of the capture set before importing the batch for processing. I am aiming to capture a photo from 4 positions (starting at 45 degrees, then 3 positions lower in approximately 10-degree drops) in 8 arcs equidistant around the object on the pool floor. The object is a small piece of 18th-century 'deadeye' wooden ship rigging (about the size of your hand) that I found on a survey project I'm working on. I have a nice surface RTI .ptm file of it for control/comparison. A heavy iron cylinder will rest on the pool floor below the camera (seen in the pic below) with the object sitting on it (I will cover it with a small piece of white cloth for contrast) and the snooker ball off to the side in the same focal plane. A cave-line lanyard loop can spin freely 360 degrees underneath the object, so I can pretty much manually control the torch and keep it at the right distance. This is NOT how I wanted to start this project out!!! But hey... adapt, improvise, overcome. (Incidentally, the project proposal for the ACTUAL project is still not fully drafted. Due date is May 16th. I will post a draft prior to submission. Maybe you guys can shoot holes in it, and then I will edit and hopefully get a better grade. :)
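The capture pattern described (8 azimuth arcs, elevations starting at 45 degrees and dropping roughly 10 degrees three times) works out to 32 light positions on a hemisphere. A quick sketch to enumerate them as unit direction vectors, useful for building a light-position file later:

```python
import math

def light_positions(azimuth_count=8, elevations_deg=(45, 35, 25, 15)):
    """Unit light-direction vectors: azimuths equally spaced around the
    object, elevations as in the pool-session plan described above."""
    positions = []
    for i in range(azimuth_count):
        az = 2 * math.pi * i / azimuth_count
        for el_deg in elevations_deg:
            el = math.radians(el_deg)
            positions.append((math.cos(el) * math.cos(az),
                              math.cos(el) * math.sin(az),
                              math.sin(el)))
    return positions

pts = light_positions()   # 8 azimuths x 4 elevations = 32 directions
```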

 


I love the creative use of materials, especially the pipe wrench!  You might find that the bungee transmits vibrations that could show up as misaligned images, but the setup looks very solid, so this might not be an issue.  Try to turn off the autofocus and image stabilization (if any).  You could also include some higher angles for light directions, up to around 60 degrees if there's room for it.

 

There was a paper on Outdoor Photometric Stereo (Yu, Yeung, Tai, Terzopoulos, and Chan, 2013) at ICCP that might also be of interest.  It uses a mirror ball and ambient light to construct a normal map.  It starts on p. 149 of the proceedings; here's a link:

https://dl.dropboxusercontent.com/u/10998108/ICCP13-Proceedings-2.pdf


oh. wow. shoot holes in this? Jedi, why would we do such a thing. Can't wait to see results and hear about your experience. My only suggestion is to really document the process and innovative 'gizmo' that is doing the documentation. really sound logic with your 'on the fly' and creative solutions!!!


Thank you for the words of encouragement.  @George: embarrassed to say, but I'm not 100% sure what you are asking :)  I'm guessing you would like me to take the camera out of the contraption and shoot a photogrammetry capture set as well.  Will do.  @Taylor: 60 degrees, not a problem; more than enough room to do so.  I had said 45 only because in our team meeting the professors suggested 45, since they found the higher position apparently didn't add that much to the surface studio results.  Truth is, I'll be winging it anyway.  It is just another 5-second movement one position higher, so why not.  There is a question as to whether the black snooker ball will reflect sufficiently in the pool environment, and there was talk in the meeting about using a 'mirrored' ball.  I am also bringing a 1000-lumen HID cave diving light.  I will try them all.  Finally, I REALLY appreciate the articles you guys keep sending.  On the one hand, this is 'research' and stands on its own as such; on the other hand, it is a 'graded dissertation' for which a thorough review of past and present literature and work is critical to the grade.  So thank you. @Marlin: I have to generate 20,000 words on this when all is said and done, so I am documenting EVERYTHING!  Today the GoPro will be on my forehead... haha.
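For readers following the ball question: in highlight RTI, the light direction is recovered from where the specular highlight sits on the reflective sphere. With the camera looking straight down the z axis (orthographic approximation), the sphere normal at the highlight bisects the view and light directions, so the light is the view vector reflected about that normal. A simplified sketch of the standard geometry:

```python
import math

def light_from_highlight(dx, dy, radius):
    """Light direction from the highlight position on a reflective sphere.
    dx, dy: highlight offset from the sphere centre in pixels;
    radius: sphere radius in pixels. Assumes an orthographic camera,
    so the view vector is v = (0, 0, 1)."""
    nx, ny = dx / radius, dy / radius
    nz = math.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))  # sphere normal
    ndotv = nz                                          # n . (0, 0, 1)
    # Reflect the view vector about the normal: L = 2(n.v)n - v
    return (2 * ndotv * nx, 2 * ndotv * ny, 2 * ndotv * nz - 1.0)

# A highlight at the exact centre puts the light on the camera axis.
L = light_from_highlight(0.0, 0.0, 100.0)
```

This is also why the ball's finish matters: a matte black ball may not produce a clean enough specular peak to locate, which is the argument for the mirrored ball.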


UPDATE:


The Mary Rose Museum Conservation Department at the naval dockyards in Portsmouth, UK, has opened up their wood shop and acrylics fabrication shop (with laser cutter) to me in support of the project.  This is very good news.  With the help of their master carpenter, my intention is to build an 8-arm underwater umbrella out of acrylic.  Each arm will have four LEDs mounted (32 in total).  The device will sit inside a water tank with the LEDs powered from the surface.  The goal is to create datasets that determine the effects of turbidity on the quality/usability of RTI output.  I remain committed to two paths: the indoor laboratory setting and the outdoor field data-gathering setting.  If the pool session works today, I will learn all I need to know to pull this off on an 18th-century British wreck site off the coast of Spain this June.  In the field, instead of the acrylic umbrella being the light source, it will serve simply as a template/guide with which to 'freehand' move my handheld torch (i.e., dive down on the wreck, position the umbrella over the object, drop the camera in its holder, adjust settings, then begin continuous capture mode while moving the light, using the arms only for positional reference).  That way, at this early stage of research, I do not need to build a fully submersible electronic device, but will hopefully be able to duplicate 'laboratory control' of light positioning in the field.


Two hours to pool time. 
Cheers.



 


This is the LED supplier I've been in contact with.

http://www.leds.co.uk/

 

The attachment is the LED they suggest I use for the project.  They are 490 lumens each, twice the output of the torch I am using today in the pool.  However, the torch is focussed through a lens; these will not be.  I'm guessing they will be perfect and should survive freshwater submersion.

Test2.pdf


David, 

 

Have you seen the Triggerfish by Hedwig Dieraert? http://wetpixel.com/articles/review-triggerfish-remote-slave-trigger We were considering using them to trigger two Ikelite DS160s off camera to give us a fill light to improve photogrammetric matching. I wonder if they could also be used for your project to trigger an Ikelite strobe off camera. You'd probably need a small master flash on the camera, but given that the camera isn't moving it shouldn't alter your RTI data much (it would just be like doing an RTI with the overhead lights on). I haven't seen another good off-camera underwater flash trigger. Has anyone else seen anything?

 

Another thing to consider is a hand-held sonar unit connected to your off-camera flash. Something like this: http://www.mantasonar.com/diveray.htm

They've really come down in price. If you could set an acoustic alarm to tell you when you're at the right distance, it would replace the string used in terrestrial RTI. This may unduly complicate things, though.

 

Another thing to consider is something like a Lastolite white balance card or a DSC Labs Splash Underwater EFP Chart. We haven't been using gray cards or colour cards underwater (we just do a correction in Photoshop or Lightroom), but I definitely think they should be routine if terrestrial technical photography standards are to be matched underwater.
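A gray card makes that correction deterministic rather than eyeballed: scale each channel so the card patch averages to neutral. A numpy sketch of the per-channel scaling; the patch coordinates and the blue-green cast are invented for illustration:

```python
import numpy as np

def gray_card_balance(img, card_box):
    """Scale RGB channels so the gray-card region becomes neutral.
    img: float array (H, W, 3) in [0, 1];
    card_box: (y0, y1, x0, x1) bounding the card in the frame."""
    y0, y1, x0, x1 = card_box
    patch_mean = img[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
    target = patch_mean.mean()            # neutral gray target level
    return np.clip(img * (target / patch_mean), 0.0, 1.0)

# Synthetic frame with a blue-green cast typical of underwater light.
img = np.full((8, 8, 3), [0.2, 0.45, 0.55])
balanced = gray_card_balance(img, (2, 6, 2, 6))
```

For RTI the card only needs to appear in one reference frame per light setup, since the cast is a property of the water path, not the capture sequence.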

 

One final thought...LED lights have very poor throw through the water column (at least the ones I've seen). They don't seem to me as good a solution for RTI as the sort of HID you have.  


(Photo attached.) George nailed it.  The 250-lumen little handheld didn't cut it in the water.  Took 1500 pics today: seven complete sets around the object.  At least ONE of the sets WORKED!  When I threw 1000 lumens on it with my 15-watt HID, I got results as nice as under the dome in the lab.  So after HOURS of sorting through all these pics, breaking them down into their seven folders, and sorting through the 200+ in 'set #4' (the HID set), I've pulled 80 pics: 13 positions around the object with 5-7 pics in the horizontal per position.  I SO wanted to process these tonight and post a few JPEGs of the underwater results next to the lab dome results... but I can't figure out how to make the damn RTI Builder software work.  It keeps saying "the jpeg-exports folder is missing in your project directory", so I gave up.  Made a Facebook appointment for Monday to process them back in the lab. :(


Mission accomplished.  The .ptm file of the underwater data set is as nice as if it had been done under a dome in the studio.  There were a couple of spots where I apparently introduced slight motion in either the object or the camera during light-position transitions, although the vast majority of pics stacked flawlessly.  Still, the few motion errors were enough to blur the image from those angles in RTIViewer.  So I took the pics and ran them through a Photoshop feature that aligned them at the pixel level (with some help, haha).  It took about ten minutes.  A second attempt at a .ptm file then produced excellent results.  Any suggestions on how I can continue to share the results?  I think I only have 38k of upload space left on this forum, and that won't even be enough to post the project proposal.
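For anyone curious what that alignment step does under the hood: pure translations between frames can be recovered automatically with FFT phase correlation, which is the core of many image-alignment tools. A numpy sketch; it handles circular shifts only, and rotation or scale would need more machinery:

```python
import numpy as np

def phase_correlate(ref, moved):
    """Recover the (dy, dx) translation of `moved` relative to `ref`
    via the cross-power spectrum of their 2-D Fourier transforms."""
    F = np.fft.fft2(ref)
    G = np.fft.fft2(moved)
    R = np.conj(F) * G
    R /= np.abs(R) + 1e-12          # normalized cross-power spectrum
    corr = np.abs(np.fft.ifft2(R))  # sharp peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map large positive shifts back to negative ones (wrap-around).
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return dy, dx

# Synthetic check: shift a random frame by (3, 5) and recover it.
rng = np.random.default_rng(0)
frame = rng.random((32, 32))
shifted = np.roll(np.roll(frame, 3, axis=0), 5, axis=1)
shift = phase_correlate(frame, shifted)
```

Because RTI changes the lighting between frames, alignment in practice works best on features that stay bright in all frames (or on a luminance-normalized copy of each image).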


 


David, I'd love to see your results! Perhaps your upload limit can be increased here by an admin. If not, I suggest you open a free Google Docs/Google Drive account, upload the file there with open sharing privileges (i.e., anyone with the file address can view/download), and then post a link to that address here. Last I checked, Google Docs allows free accounts up to 5 gigabytes of total storage. I'm not sure what the maximum size limit for an individual file is, but I've uploaded files larger than 150 MB.

