All Activity

  1. Yesterday
  2. Thanks Krl, I was worried when you said you were 'cropping' them - I thought you were cropping them out of the image before processing! Those boxes just define the search area. Secondly, looking at your original image, I have a few observations, of which only the first couple may relate to the problem you have experienced. 1. The background the spheres are sitting on (or that is behind them) looks very close in colour to the spheres, because you're using a photograph where the grey card or background is in shadow - you're not doing the circle-edge detection any favours; it's trying to detect black on near-black! 2. The illumination in the shot you kindly supplied looks rather low - the recommended light positions are no lower than 15 degrees to the plane of the object, and no higher than 65 degrees. 3. The highlight-detection method of RTI generation depends on accurately detecting the highlight on the sphere as each frame is processed, and using that to deduce the direction from which each frame was illuminated. The light should be reflected as a 'spot', as near to a point as possible - but the light looks more like patches on your spheres (maybe over-illuminated, or a flood-light used rather than something as close as you can manage to a point source). 'Wide' sources won't affect the sphere detection, but they will give less-distinct shadows and hence degrade the final result. 4. The whole process depends on each frame having, as close as you can get, only one direction of illumination. Looking at your sample image, it looks as if there are other light sources, or something out of shot, throwing illumination on the spheres from multiple directions - that shouldn't affect the feature detection, but it will degrade the final result. 
I'd highly recommend that you go through the RTI Capture Guide which CHI publish at http://culturalheritageimaging.org/What_We_Offer/Downloads/RTI_Hlt_Capture_Guide_v2_0.pdf - have a look at how the spheres and illumination appear in successful use. CHI also have tutorial data sets on their web site which are known to work; as well as illustrating good general practice, they let you see what a successful capture looks like. cheers/Dave
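To make observation 3 concrete: the highlight-based method derives each frame's light direction by mirror reflection at the highlight on the sphere. A minimal sketch of that geometry, assuming an orthographic camera looking straight down the z-axis and treating image y as "up" (real pipelines also flip the image y-axis; all names here are illustrative, not RTIbuilder's own code):

```python
import math

def light_direction(hx, hy, cx, cy, r):
    """Estimate the light direction from a specular highlight on a
    reflective sphere, viewed orthographically along the z-axis.

    (hx, hy): highlight position in image pixels
    (cx, cy): sphere centre in image pixels
    r:        sphere radius in pixels
    """
    # Surface normal at the highlight, from the sphere geometry.
    nx = (hx - cx) / r
    ny = (hy - cy) / r
    nz = math.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))
    # Mirror reflection: L = 2(N.V)N - V, with view vector V = (0, 0, 1).
    lx = 2.0 * nz * nx
    ly = 2.0 * nz * ny
    lz = 2.0 * nz * nz - 1.0
    return (lx, ly, lz)

# A highlight at the sphere centre implies the light is straight overhead.
print(light_direction(100, 100, 100, 100, 50))  # -> (0.0, 0.0, 1.0)
```

This is why a diffuse "patch" instead of a point highlight hurts: the centroid of a smeared patch gives a biased (hx, hy), and every frame's recovered light direction inherits that error.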
  3. https://imgur.com/a/cYAXSdh https://imgur.com/a/ticNYo2 It looks like there's a 0.5 MB cap on these posts, but here are links to the images you requested. Thanks a lot for your help!
  4. Krl, Can you post one of your source images with the spheres, and then a screenshot of where you say you "crop out the spheres"? Dave
  5. Last week
  6. Hello, I am relatively new to RTI, but have successfully used the software a few times. I keep getting stuck on the sphere detection step of my latest project, which uses the highlight-based PTM fitter. I crop out the spheres and hit sphere detection, but once it's done the next button is still not clickable. I have deleted the project and restarted several times to no avail. I was able to build a highlight-based HSH from the same piece. What am I doing wrong?
  7. KrisLockyear

    Trojan Warning

    I'm having a similar problem. Did you find any help? Thanks, Kris.
  8. Earlier
  9. Dear Mark, Thank you for your well-thought-out, valuable reply. I found out you are actually CHI's president and co-founder, so a genuine answer indeed! Your kind offer to discuss collaborations has been passed along here, and we do hope that the proposal for the expanded DLN distributed enterprise features will be funded. Thierry F.
  10. Hi there, Just wanting to find out if the issues with the latest macOS have been resolved? I'm still using Mojave 10.14 and I need to update so that other software works, but my fear is that my RTI software then won't. I would be updating to macOS Big Sur. Any advice very much appreciated!! Aaron.
  11. Hello, I am a newbie in this field, and as a beginner I am having problems finding the interior orientation parameters of my cell phone (with which I am taking pictures). I have also been given a task to find the ground distance between two points using the formula f/H = x/X, where f = focal length of the phone camera, H = total height, x = image distance between the points, and X = ground distance between the points. The problem is how to measure the image distance: if my display is small, my points will appear closer together, whereas if the display is large, they will appear farther apart. Is there an app to calculate this image distance, or will a traditional ruler work (accuracy issues)? The attached image is viewed on a Canon 60D DSLR, but was shot with a cell-phone camera (Samsung Note 5). Thank you.
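On the measurement question above: the similar-triangles relation rearranges to X = x·H/f, and the key point is that x must be measured on the camera sensor (pixel count between the points × the sensor's pixel pitch), not on whatever display you view the image on - which is why screen size makes no difference. A minimal sketch with illustrative numbers (the pixel pitch and focal length below are placeholders, not the Note 5's actual specifications):

```python
def ground_distance(pixel_dist, pixel_pitch_mm, focal_mm, height_m):
    """X = x * H / f, with x measured on the sensor, not the screen.

    pixel_dist:     distance between the two points in image pixels
    pixel_pitch_mm: physical size of one sensor pixel (mm)
    focal_mm:       lens focal length (mm)
    height_m:       camera height above the ground (m)
    """
    x_mm = pixel_dist * pixel_pitch_mm   # image distance on the sensor
    return x_mm * height_m / focal_mm    # ground distance in metres

# Illustrative values only: 500 px apart, 1 micron pixels,
# 4 mm lens, camera 30 m above the ground.
print(ground_distance(500, 0.001, 4.0, 30.0))  # -> 3.75
```

The pixel pitch can be derived from the sensor's physical width divided by its horizontal pixel count, both of which are usually listed in the phone's camera specifications.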
  13. TonyF

    Trojan Warning

Hello, Every time I try to start this program, my anti-virus detects a Trojan and refuses to let it open. I am trying to learn RTI for archaeological purposes, and this is the only program I have found.
  14. Hi Thierry, Thanks for your interest in the DLN. We have made many changes to the DLN since the 2018 Beta release. Many of these changes were based on community suggestions. We are on track to release a new Beta of the expanded tool in the Spring. The full 1.0 DLN will be released by Summer this year. We would be happy to speak with you about your project ideas and send you the new Beta software when it becomes available. The current DLN release uses a standalone strategy. There are two reasons: The first is that this decision allowed us to concentrate on solving other fundamental issues; principally the solution of a broad range of technical problems and the simplification of the interface. The second reason is to encourage widespread DLN adoption by keeping the operational requirements simple, for individual scholars, small organizations, cultural communities, under-served people, and those operating in areas without robust bandwidth. CHI wants to encourage the decentralization of cultural heritage documentation. The DLN allows people all over the world to prove the quality of their documentary work (or lack thereof). This is key to the decentralization of the sources of cultural knowledge. The Postgres database allows some communication between databases on different computers using Backup/Restore functions. We understand this is awkward. The new DLN version supports XML import and export. This is convenient for exchanging database metadata. For example, if CHI joins another organization working on a project and we share equipment on a capture project, we can export our equipment kit in XML and the other organization can import it into their database for documenting the capture setups. This XML feature adds some additional ability to exchange information for small operations. There has always been an intention to produce a networked version, which is on the CHI list for future work. 
This “enterprise” tool would be redesigned for larger organizations with many distributed digitization operations running at the same time. We have submitted a grant proposal to the National Endowment for the Humanities that includes planning funding for this enterprise version. The fate of the proposal will be known in August. We would like to hear ideas from larger organizations, like the Royal Library of Belgium, to make this effort responsive to the requirements of people working in our CH community. As always, we are happy to discuss collaborations. Here is a little more information about the new DLN: Alongside a new interface and many new features, the DLN now permits the metadata description of the processing operations applied to computational photography images that produce advanced work products, along with the description of the work products themselves. The new DLN also contains functions to automatically produce Submission Information Packets (SIPs) for archiving original images and / or work products (3D models, RTIs, multispectral and documentary image sets, orthomosaics…). The SIPs come in METS and BagIt wrapper formats. These formats contain manifests of the SIPs' contents. The SIPs contain the archival images, work products derived from the images, the DLN metadata, relevant documents, etc. The DLN metadata comes in the form of XML and CIDOC/CRM-mapped RDF. Other formats are under consideration. In addition, if all the OAIS-compliant SIPs are sent to a Fedora 5 style repository, where RDF functions are available, significant self-organization of the contents will be enabled using the RDF metadata. The idea is to make the scientific imaging process as simple as possible, from the beginning all the way through the pipeline to the archival deposit. Now that this is possible, all of these functions can be adapted for a large-scale, networked organization. 
All that is needed are the financial resources and perhaps an expansion of our existing professional development team. But of course, this is true for all of our collective dreams for the future of cultural heritage. ––Mark
  15. Domip

    Turning camera

    Still waiting for Marlin's return from holidays... I still want to know how such a device can be made (or bought) for turning the camera to 90° and 270°. Thanks Dominique
  16. Dear Developers, The DIGIT department of the Royal Library of Belgium (www.kbr.be/en) is interested in making sustainable use of the software DLN:CaptureContext v.1 Beta. Nevertheless, the DIGIT dept. has two requirements for the metadata stored in the PostgreSQL DB cpt_db: • Does DLN:CC support working with the cpt_db database hosted on a PostgreSQL server, rather than locally? If so, how (a DB client/server relationship on a local network)? • Does DLN:CC support reading from and writing to a common DB shared among several DLN:CC workstations (multiple clients with one shared DB hosted on a PostgreSQL server)? Does Cultural Heritage Imaging intend to offer those features in a forthcoming stable release? Thanks and regards for your dedicated work, Thierry F.
  17. In the face of recent developments where most museums are closed, what steps could museums in Nigeria take to satisfy their customers? Aina.
  18. Hi all, Recently we released to the public our photogrammetry & RTI project that we have been working on for over 2 years. We were inspired by some of your posts here with the dome approach, and we decided we should share our results with you. Our aim was to create a cheap, affordable photogrammetric modelling and RTI imaging device that anyone can build and that does not cost thousands of euros. For that, we needed multiple cameras, LED lights and a stepper motor, which allowed us to rotate an object placed on the turntable and take photos around it. At the heart of our device is a Raspberry Pi 4 with 4 ArduCAM cameras. We were aiming at small objects, artifacts no bigger than 100-120 mm. In the first iteration of our device we used a small (32 cm diameter) dome (an aluminium bowl from IKEA….) with 40 LED lights and 4 cameras. This version was successfully tested in Cyprus about 2 years ago, where we gathered extremely valuable experience. In the second iteration we decided to leave the bowl approach and create a spider-like shape, with 10 arms for the LED strips (12 LEDs on each) and 1 separate arm for the cameras. It is much more mobile than the previous version, as we can disassemble it, which makes it easier to work with. Because in the first iteration we noticed that depth of field was an issue, we decided to go with higher-resolution cameras that have motorized focus, so we can use focus stacking to increase our depth of field. For the LED strips we used flexible PCBs, which we connected together with FPC cables. We control them through our shield that goes on top of the Raspberry Pi 4. Below are 2 links to my LinkedIn posts, as I cannot upload images here. 
https://www.linkedin.com/posts/marcin-k%C5%82%C4%99bowski-458bb5146_photogrammetry-rti-textured-activity-6766022368184860672-n2MF https://www.linkedin.com/posts/marcin-k%C5%82%C4%99bowski-458bb5146_photogrammetry-rti-ptm-activity-6766744323838013440-z9yn Thanks for the inspiration, guys, and let us know what you think about our device. I am sure we will share more results as we start testing and creating more RTI images of objects. Marcin
  19. Hi, yes, I have had this today on my new laptop.
  20. Hello, On Friday I was trying to download the Windows version of the processing software, and it triggered my PC's antivirus software, claiming there was a virus threat. Has anyone else had this problem? Thanks!
  21. Dear experts, Tech Creative Commons is a platform for scientific review, dedicated to the applied-technologies sector, which has contributed to the dissemination of many successful papers, especially in the fields of optics and astronomy. We are looking for a professional photogrammetry expert as a possible reviewer of the paper "Tracking Apollo XVI Footage", which we have acquired and which is under evaluation for possible publication. We would be very grateful if anyone interested in a collaboration would contact us at info@tech-cc.eu. Thanks a lot for your cooperation. Jacques Debois Administrator Creative Commons Papers International Association
  22. Hello, I would like to integrate RTIbuilder into a script. Is it possible to call RTIbuilder from the command line, passing the image folder and LP file as arguments? Thank you Nicolas
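I'm not aware of a documented command-line interface for RTIbuilder itself (it's a Java GUI; if you try invoking the underlying fitters directly, check their own usage output rather than relying on guessed flags). One piece that is straightforward to script is the .lp light-position file: in the format I've seen, the first line is the image count, then one "filename lx ly lz" entry per line. A minimal parser sketch under that assumption:

```python
def read_lp(path):
    """Parse a .lp light-position file.

    Assumed layout: first line is the number of images, then one
    'filename lx ly lz' entry per line, whitespace-separated.
    """
    entries = []
    with open(path) as f:
        count = int(f.readline().split()[0])
        for _ in range(count):
            name, lx, ly, lz = f.readline().split()
            entries.append((name, float(lx), float(ly), float(lz)))
    return entries
```

Reading and rewriting LP files this way lets a script, for example, remap image filenames for a new capture folder while reusing a dome's fixed light positions.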
  23. Phil, I have captured room-sized interior spaces, but haven't imaged or scanned a factory as such; so these are a few generic thoughts, and much depends on the context, and what your deliverables are. If it is a factory occupied by equipment, pipes, ducting, conveyors, etc. then it is difficult for both laser scanners and photogrammetry as there will be numerous areas which are occluded (hidden) behind/above/below obstructions, so you'll only be able to capture the nearest face. Photogrammetry can struggle more because the processing software needs to 'see' and 'recognise' something in multiple images. With a laser scanner (at sufficient point density) you can pick up, say, a number of points on the nearest faces of say a duct, sufficient to estimate its diameter or section. Furthermore, you only need space for a single laser shot to get 'between' two items to capture at least a point behind - whereas with photogrammetry to derive that point you need it in at least two images, ideally more, and if you move viewpoint to take the next frame, you may lose sight (and even if you can see it, is there sufficient to triangulate its location?). If it is an empty space, then that is probably more achievable with photogrammetry subject to two factors - illumination and detectable tie points. In small rooms or ships' compartments for example, a ring-flash can be a useful illuminator; but for a cavernous space you will be dependent on existing lighting so you may be advised to use a tripod (so as to have as long an exposure as you can) and shoot from camera stations. Also in a cavernous space, especially if uniform, you may have difficulty when processing in establishing sufficient number and quality of tie points between your images so use of targets (ideally coded) would be worth considering - you could also use them as scale-markers. 
Although there may be some in the Cultural Heritage world who have imaged/scanned factories or comparable spaces, I would suggest that it might be productive for you to pose your query in more generic fora such as Agisoft's own forum, or Facebook groups such as the one dedicated to Metashape or the generic 'LiDAR and Photogrammetry Review'. Dave
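The point about needing each point in at least two images can be made concrete: a single view only constrains a point to a ray, while two calibrated views pin it down via standard linear (DLT) triangulation. A minimal sketch with made-up cameras (numpy, unit focal length, illustrative coordinates - not how Metashape is implemented internally):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two images.

    P1, P2: 3x4 camera projection matrices
    x1, x2: the point's (u, v) image coordinates in each view
    Each view contributes two linear constraints; the 3D point is
    the null vector of the stacked 4x4 system.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # homogeneous solution
    return X[:3] / X[3]        # dehomogenize

# Two unit-focal cameras one unit apart, both seeing the point (1, 2, 10).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
print(triangulate(P1, P2, (0.1, 0.2), (0.0, 0.2)))  # ~ [1, 2, 10]
```

This also shows why Dave's caveat matters: if the two viewpoints are nearly identical, the four rows of A become close to linearly dependent and the recovered depth is unstable.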
  24. Hello, I'm a drafter and I work freelance (for 1 week now). Some of my competitors have scanners to create point clouds for revamping workshops/factories. As I have little money yet, I can't buy a scanner (25-30k€), so I thought photogrammetry could be a solution. Last year during the lockdown I did a few tests on a parking area, outdoors, and it worked not so badly (I used Metashape)... those were my first shots. This week I tried to do the same exercise indoors, and the result is definitely poorer! Well, my camera is an old one (Nikon D80), but it's not a bad reflex with quite a good lens... I'm not sure I can get a usable result, with enough precision to work with for my job. If anyone has any feedback about photogrammetry for that kind of use, you're welcome!! :) Phil
  25. Hello all, My Mac, unbeknownst to me, was on auto-update and I'm unwillingly on macOS Big Sur. Most apps work, but RTIBuilder comes up as 'damaged'. I tried the usual fix, allowing apps to open from 'anywhere', but that hasn't worked as of yet. Has anyone else run into this? If so, is there a fix? Many thanks to all,
  26. I'm having a similar problem with macOS. I'm running macOS Big Sur 11.1 and now I can't open RTIViewer at all. When I try, I get an error that says: '"RTIViewer.app" cannot be opened because the developer cannot be verified. macOS cannot verify that this app is free from malware.' The dialog then only gives me the option of moving the app to the trash or cancelling my attempt to open it. As a follow-up, I did enough looking around to know that I can control-click the app and select Open, after which I get an option to open the application. I still thought I should report the problem here.
  27. Interesting open-access article on ways to extract features from photogrammetry models: https://www.cambridge.org/core/journals/antiquity/article/3d-contour-detection-a-nonphotorealistic-rendering-method-for-the-analysis-of-egyptian-reliefs/3DF1102C5016098C8D14D203D9D41C7C
  28. It has taken me a long time to try out the beta DLN-CC and DLN-Inspector, but they look to be really promising tools to help organize projects and manage metadata. Some initial observations:
    • The RTI Image Set Details tab titled "General" is currently set up for highlight-RTIs. It would be helpful to have some fields in the "RTI Properties" section (or on the "Setup" tab, below) for dome-based RTI parameters, such as the radius of the dome (I used string length as a substitute, but it's not exactly the same geometry), the number of lights, etc.
    • The "Setup" tab allows a single photo of the setup to be added, but it might include another section for the geometry of the capture setup (currently in the "General" tab under "RTI Properties"). For our RTI dome geometry, such measurements could include the height of the stage on which objects are placed, the vertical distance between the stage and the plane of the dome's equator, the distance between the camera sensor and the plane of the dome's equator, the number and type of lights, etc.
    • For dome-RTI capture setups, a way to point to LP files and other calibration files (e.g., flat-fields, color checker profiles) that are used for processing multiple image sets would be helpful.
    • An easier way to define custom directory structures for image sets, without manually entering details into JSON files, would be useful, especially for legacy projects with existing directory structures that don't fit the examples in the user guides.
    These are just some suggestions that I'd find useful. I realize there's a balance between tracking all the details and keeping the DLN tools simple and easy to use for the majority of people using RTI and photogrammetry. Thanks for all your hard work on these DLN tools! Best wishes, Taylor