
cdschroer

Everything posted by cdschroer

  1. For photogrammetry software to create the best possible result (high precision with low-uncertainty data) you need to move the camera between photos. The recommended amount is 1/3 of the field of view (creating a 2/3 overlap). The Kula system might be fine for doing individual "stereo" photos, and it looks like it can do stereo video as well. These could be viewed in various VR systems. However, it would absolutely not be recommended for a proper photogrammetric capture sequence. It would most likely make your data worse. We have some basic guidance for photogrammetric capture on our website here: http://culturalheritageimaging.org/Technologies/Photogrammetry/#how_to We are working on a "Guide to Photogrammetric Image Capture," which will have a lot more detail, along with graphics and photos. We expect to have it out before the end of the year. It will be a free guide available under a Creative Commons license. Carla
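A minimal sketch of the "move the camera by 1/3 of the field of view" guideline above, assuming a simple thin-lens/pinhole approximation; the sensor width, focal length, and working distance are example values only, not from the original post.

```python
# Minimal sketch: camera shift for roughly 2/3 overlap between photogrammetry photos.
# Thin-lens / pinhole approximation; all numbers are illustrative examples.

def footprint_width_m(sensor_width_mm, focal_length_mm, distance_m):
    """Approximate width of the subject area covered by one photo."""
    return sensor_width_mm / focal_length_mm * distance_m

def shift_for_overlap(footprint_m, overlap=2 / 3):
    """Camera movement between exposures that leaves the requested overlap."""
    return footprint_m * (1 - overlap)

footprint = footprint_width_m(sensor_width_mm=36.0, focal_length_mm=50.0, distance_m=2.0)
print(f"Footprint: {footprint:.2f} m; move the camera about "
      f"{shift_for_overlap(footprint):.2f} m between photos")
```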
  2. Thank you for posting this! I think it's very helpful. We have some known issues with RTIBuilder and various firewall software. Your detailed information gives folks more to go on when facing this kind of issue. Carla
  3. Go to the help page on java.com: http://java.com/en/download/help/ for information on installing, removing, and testing Java versions. Carla
  4. Olaf - One other thing you can try is to make snapshots from RTIViewer. The snapshots are the full resolution of the underlying RTI (rather than a screenshot, which is screen resolution). You can create a JPG or PNG from the snapshot menu item. You can line these up to be identical in PowerPoint and switch between different views. I find that to be really helpful for showing RTI results. I've attached an example.
  5. Hi Tom, Right now RTIBuilder only uses one sphere to calculate the light positions. Please note that neither the HSHfitter nor the PTMfitter calculates the light positions; they need a light position (.lp) file in order to run. So, RTIBuilder calculates that for you based on the sphere. RTIBuilder can generate a highlight file for each sphere, but we haven't had a chance to implement calculating the light position from 2 spheres into the software. There is some research software that calculates the light position from multiple spheres, and it is more accurate than using a single sphere. So, we recommend using 2 spheres when possible, so that your data has the ability to take advantage of this in the future. Carla
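For readers curious how a light position can be derived from a sphere at all, here is a minimal sketch of the standard geometric idea (this is not RTIBuilder's actual code): take the sphere's surface normal at the detected highlight and reflect an assumed orthographic view direction about it. The sphere center, radius, and highlight coordinates are made-up example values.

```python
import math

def light_direction(cx, cy, r, hx, hy):
    """Estimate a light direction from a specular highlight on a reflective sphere.

    (cx, cy), r  - sphere center and radius in image pixels
    (hx, hy)     - highlight position in image pixels
    Assumes an orthographic view direction V = (0, 0, 1); image-axis sign
    conventions vary between tools.
    """
    nx = (hx - cx) / r
    ny = (hy - cy) / r
    nz = math.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))  # normal component toward the camera
    n_dot_v = nz
    # Reflect V about the normal N:  L = 2 (N . V) N - V
    return (2 * n_dot_v * nx, 2 * n_dot_v * ny, 2 * n_dot_v * nz - 1.0)

print(light_direction(cx=512, cy=512, r=100, hx=552, hy=482))
```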
  6. I'm not totally clear about what you are asking. You build RTIs using RTIBuilder. I think you are asking if it is possible to view an RTI within PowerPoint. The answer is no. PowerPoint does not support the RTI format. What we often do is make a video using screen capture software, and then put the video in the PowerPoint. Alternatively, if you have RTIViewer running separately on your computer, you can use Cmd-Tab (Alt-Tab on a PC) to switch to it, then use the same shortcut to get back to your presentation. A bit clunky - but workable. Carla
  7. Dale - I don't think there is a prohibition against using the server in this way, per se - but there could be some permission issue that's preventing it from working. This is especially true with the HSHFitter, because it creates and uses a hidden cache file. Sometimes giving the user admin privileges makes the problem go away. Not sure how you are logging onto the server, and whether that is the issue. Carla
  8. Dear Phillip - On this forum we generally deal with more general photogrammetry issues, including advice on equipment and capture. We also discuss some of the metrics and other good practice ideas. We try not to deep dive into processing, because the different available software systems have their own forums. Some comparisons of software have taken place here. I'll note that the bulk of the folks participating on the photogrammetry forum here have worked with or taken a class at Cultural Heritage Imaging, and at the moment we are using and recommending PhotoScan Pro. So, I don't think you are likely to get much response to a specific PhotoModeler question in this group. Good luck on your project!
  9. The question about the sphere recognition is not related to this topic. It should be posted as a new topic in the "Processing RTI" forum. Also, please describe the problem in detail. Exactly what happens? What steps lead to the problem? Is there an error message? Does it not work as you expect? Include a screen shot if that is appropriate. Carla PS: I can move a new topic someone posts to the proper forum, but I can't move individual posts somewhere else. Please start a new topic about this issue.
  10. I think there are a few things getting conflated here - so I'll weigh in on RAW vs JPEG and 8 bit vs 16 bit in PhotoScan. I'll say again that I think there is enormous value in shooting RAW and controlling, and having a record of, the processing for your JPEGs (if you are using them). JPEGs from the camera are outside of your control, and you have no record of how they were processed because the camera is a black box. Further, most of our work and the people we work with are creating documentation and want to use scientific imaging practices. One goal for scientific imaging is future reuse, and also the ability for others to assess the data. Collecting and keeping data that can be much more accurately corrected for color, exposure, etc., along with a clear record of its processing, seems like a no-brainer to me. Pretty much every museum and library imaging staffer we talk to says the same thing. They may not all use DNG for archiving, but they pretty much all shoot RAW. As for JPEG vs TIFF inside PhotoScan: it is proprietary software, so we don't know for sure. However, it's my understanding that the choice of JPEG vs TIFF will not affect the alignment, optimization, and geometry produced by PhotoScan. The higher bit depth TIFF images only give an advantage when building texture maps or orthophotos. We generally use JPEGs for our photogrammetry projects, unless we have a specific need for better, richer texture maps. However, the fact that we have shot RAW, controlled the processing, and saved the DNGs means that we (or anyone else who might want to use the data) can get back to 16 bit data for any changes in the image processing they might want to make. Also note that other software, and future improvements and modifications to the processing, might give greater advantages for the 16 bit images. We want to "future proof" our data as much as possible, so that the image sets we collect have the maximum reuse and payback over time. An example of this is new research software that uses a technique called "unstructured lumigraphs" to make a real-time renderer with much better rendering of the textured surface for specular materials and other complex surfaces. This work is being done at the University of Minnesota and the University of Wisconsin-Stout. We expect to release an open source version of this software in collaboration with them before the end of the year. It relies on software like PhotoScan to create the model and align the photos, but then it can take the results and give the user a much better experience of the surface. Unlike approaches that do this all with algorithms in software (like making a surface appear metallic), this tool uses the original captured photos for the real-time rendering. There is definitely a difference between using 16 bit TIFFs in this case when compared to JPEGs. More things like this will be coming. If you want maximum reuse for your data, you should collect RAW and archive DNGs. You can then choose a lower resolution path, like JPEGs, for your current processing needs, knowing that you can recreate images that take advantage of new breakthroughs in the future. Carla
  11. What platform are you using? Can you successfully build PTMs? I have seen some issues with HSH (though not these exact messages) when there are permission problems and it can't write files - including a hidden cache file it uses while building the RTI. You could also check in the "assembly-files" folder and see if an lp file was created for you. Sometimes there is a better clue in the log file for RTIBuilder, found in your project folder with an .xml extension. Carla
  12. There was some research done with multi-view RTI several years ago. The RTIViewer 1.1 does support it. There is a simple version and a more complex version. The simple version essentially acts like an object movie of RTIs. You create a special file that tells the viewer what RTIs are part of your multi-view RTI, and what their relationship is to each other (the number of degrees apart). The more complex version did some interpolation between views after other tools were run using an optical flow approach. That approach turned out to have some strong limitations. If you want to try this out, I'm attaching the multi-view spec and an example .mview file. The example is renamed to .txt so you can look at it and I can attach it. It has to be .mview to use with RTIViewer. This one is for a single row of 12 RTIs shot 30 degrees apart, making a complete circuit. Carla multiview_rti_format_draft.doc cunei-cone-row-2_row2.mview.txt
  13. Aysel, The lens choice should be based on the size of the material you will shoot, and the distance the camera will be from it. Also, do you have the ability to move the camera to different positions in your dome environment, or is it fixed? The domes we have built put the camera on a slider that allows a lot of different positions (including inside the dome), which allows for a broader range of options with lenses. We prefer prime (single focal length) lenses. We generally use a 50mm macro or a 100mm macro when performing RTI. This is on a full frame sensor Canon camera. You can also look up reviews of lenses online at dpreview.com, which might give you more insight into specific lenses and their characteristics. Carla
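A minimal sketch of how subject size and working distance drive lens choice, assuming a full-frame (36 x 24 mm) sensor, a simple pinhole approximation, and an example working distance; it ignores the magnification corrections that matter at true macro distances, and the numbers are not from the original post.

```python
# Approximate frame coverage for a full-frame (36 x 24 mm) sensor.
# Pinhole approximation; working distance and focal lengths are example values.

SENSOR_W_MM, SENSOR_H_MM = 36.0, 24.0

def frame_coverage_mm(focal_length_mm, distance_mm):
    """Approximate subject area (width, height) covered by one frame."""
    scale = distance_mm / focal_length_mm
    return SENSOR_W_MM * scale, SENSOR_H_MM * scale

for focal in (50, 100):
    w, h = frame_coverage_mm(focal, distance_mm=600)  # 60 cm working distance
    print(f"{focal} mm lens at 0.6 m covers roughly {w:.0f} x {h:.0f} mm")
```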
  14. Can you provide the exact error message you receive, and exactly when in the processing it appears? Carla
  15. Iain, There are a couple of issues here that could explain what you are seeing. First, it is correct that the RAW file carries more data than a JPEG. For most DSLR sensors it is 14 bits per pixel per color channel. A JPEG is 8 bits per pixel per color channel. But more importantly, the RAW data is unprocessed by the camera. A JPEG produced by the camera is processed and likely has sharpening, contrast, and saturation applied. Even if you plan to use JPEGs, if you make the JPEGs from the RAW, you are in control of, and have a record of, the processing. What the camera does to produce its JPEGs depends on the camera and its settings. I suspect what you are seeing is the result of sharpening. Note that sharpening is a really bad idea for all computational photography techniques, if your goal is high precision, low uncertainty, and reproducibility. Sharpening changes the pixels. A good photogrammetry workflow and optimization should yield RMSE errors in the tenths of pixels. And if you are modifying the pixels by sharpening, then all bets are off about the metrics on your results. Go take a look at the RAW file, the camera-produced JPEG, and a JPEG you produce from your RAW with no processing applied except a white balance. Can you see any difference at 100%? (Note that to not process the RAW, you may need to "zero out" the default values in programs like Adobe Camera Raw and Lightroom, because they want to process your images by default as well.) Our strong recommendation is to shoot RAW, save it as DNG for archiving, and create controlled TIFFs or JPEGs for processing into photogrammetry. In general we don't process the images other than a white balance and exposure compensation. If your lens is in the database you can remove chromatic aberration as well. We strongly recommend that you NEVER apply sharpening, tone curves, or distortion correction if you are doing scientific imaging. If you are making games and entertainment, then do whatever makes it look better. Carla
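A tiny sketch of the bit-depth arithmetic behind the 14-bit RAW vs 8-bit JPEG comparison above; the labels are only illustrative.

```python
# Tonal levels per color channel for the bit depths mentioned above.
for name, bits in (("8-bit JPEG", 8), ("14-bit RAW", 14), ("16-bit TIFF", 16)):
    print(f"{name}: {2 ** bits:,} levels per channel")
```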
  16. Most likely the problem is that you haven't installed the PTMfitter from HP, or haven't told RTIBuilder where you put it. The full description of this is in the User Guide ("Guide to Highlight Image Processing") on page 10 and page 20: http://culturalheritageimaging.org/What_We_Offer/Downloads/Process/index.html
  17. I was going to recommend the same page as Leszekp. It only talks about memory requirements though. Our experience is that RAM, graphics card, and CPU speed all come into play, so you want to maximize all 3 (within your budget). There are also services you can use for processing large data sets; you might do your own alignment and optimization, then send the project out for building the high or ultra-high dense clouds and the associated mesh. Carla
  18. I like the idea of sharing the settings. The best way to do it is going to vary depending on the audience for the image. I think there are some good ideas here, but I also realize that we use this kind of information in a lot of different ways, and putting this in the caption isn't always appropriate. I think it does make sense to always have information about the settings available, and that's why RTIViewer produces an xml file with this data for every snapshot. At CHI, we use snapshots for lots of things, including print marketing materials, web pages, and technical and academic papers. The last case is, I think, where this suggestion makes the most sense. It really doesn't make sense for a postcard that has information about a training class (for example). Just my $.02. Carla
  19. Drop a line to info at c-h-i.org if you want to try the Beta version of the image alignment tool. Carla
  20. This forum is for questions about working with Reflectance Transformation Imaging (RTI) data and results. The "all viewers" refers to RTI viewers. This question isn't really appropriate to this forum. Carla
  21. Hi Rob, It looks like you have some kind of movement in the dataset. If the pixels don't line up exactly, you will see this kind of blurring - often only in some light directions. Is it possible that the paper moved slightly during imaging? Can you secure it with lead snakes or some other safe means? Alternatively, we have dusted off the image alignment tool from Princeton, and are working to get it to the point where it can be released. It's been sitting dormant for a while. I have a Beta version on Mac (sorry, no Windows build at the moment) - let me know if you want to try it out on your data set and I'll get it to you via Dropbox (offline). Carla
  22. Thanks George - and I agree. I'll add that an RTI isn't just a normal field. It's a whole different way of collecting and processing data when compared to photogrammetry. The RTI file format is a different thing than either a single image or a 3D model. We (and others) employ both techniques, and we find them to be complementary. As George notes, there are other discussions of photogrammetry and RTI on this forum. Here's a topic with access to the file format information: http://forums.culturalheritageimaging.org/index.php?/topic/389-where-can-i-find-the-file-format-specifications-for-rti-and-ptm/?hl=%2Bfile+%2Bformat
  23. Dear Rick, It is expected that only a few spheres are checked for the sphere detection part of the process. The software assumes that all the spheres are in the same place across the image set. The sphere detection is to determine the exact location of the sphere in the image and the center of the sphere, so it doesn't need to use all the images. It does a bit of image analysis to figure out which images to use. When you get to the detect highlights part, then all the spheres need to be checked, because all the spheres have highlights in different positions, and each of those needs to be detected. When the sphere detection stage is finished, the image in the window changes, and the "next" button goes from grayed out to being possible to click. Does this happen for you? I also want to double-check that you are using the shipping version of RTIBuilder, 2.0.2. There was a beta of a 3.0 version that a few people had, and it has a bug like this in it on Windows environments. It is unlikely that you have this test version, but I do want to check. Carla
  24. Fingerprints can be imaged using RTI. The effectiveness is dependent on the substrate for the fingerprint. It wouldn't work on glass, for example. There is an example on our website here: http://culturalheritageimaging.org/What_We_Do/Fields/forensics/index.html I know there are archaeologists who have used RTI on fingerprints found in clay. Carla
  25. There is no 64 bit version of RTIBuilder, and it is not needed. The software is written in Java and requires only that you have Java installed on your system. We have successfully run it on Windows 7 Pro, Windows 8, and Windows 10. What happens when you try to run it? Do you get any error messages? Some people have had problems with the software being blocked by antivirus software. You can read more about that here: http://forums.culturalheritageimaging.org/index.php?/topic/438-rti-builder-problem-with-windows-8/?hl=%2Brtibuilder+%2Bvirus&do=findComment&comment=1310 The person in that topic was also having difficulty getting RTIBuilder to work; it is not something we have seen anywhere else. Carla