Dave Martin

Profile Information

  • Location
    Isle of Man

  1. Luke, Sorry, no immediate suggestions, just a few questions:
     Are you saying that none of your previously successful projects will work now?
     If you download the standard known-to-work fish fossil image set from the CHI website, can you get that to process?
     What sort of drive is L:? Local? Network?
     Is there any difference if you try to process a project stored on C:? (It doesn't need to be big; half a dozen images would be enough to test.)
     In cases like this it can also be useful to attach the project .xml file, as that can sometimes reveal subtle issues with file names etc. Dave
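As an illustration of that .xml sanity check, here is a minimal Python sketch. The filename pattern and the assumption that images sit next to the project file are mine, not taken from any particular RTI tool - adjust the regex to match how your project file actually stores image names:

```python
import re
from pathlib import Path

def check_project_images(xml_path):
    """Scan a project .xml for image-like filenames and report any that
    are missing on disk next to the project file.  The filename pattern
    is an assumption; adapt it to your project file's layout."""
    xml_path = Path(xml_path)
    text = xml_path.read_text(errors="ignore")
    # Grab anything that looks like an image filename (assumed pattern).
    names = set(re.findall(r'[\w\-. ]+\.(?:jpg|jpeg|tif|tiff|png)',
                           text, re.IGNORECASE))
    folder = xml_path.parent
    return [n for n in sorted(names) if not (folder / n).exists()]
```

Running it over a project folder lists the referenced images that cannot be found, which is exactly the kind of subtle filename issue mentioned above.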
  2. Simon, Thanks for coming back with the update. Dave
  3. Hi Simon, Just a quick thought/suggestion. I have always used images with '.jpg', i.e. lower case. To test whether that is the problem, you only need a handful of the images: start a new project, copy say half a dozen images into it, and rename their suffix. If it all works OK, you can then try with the full set (I personally tend to use IrfanView even just for renaming like this). cheers/Dave (from the 'other' Mona!)
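The renaming step above can be sketched in a few lines of Python (folder layout assumed; IrfanView's batch rename does the same job):

```python
from pathlib import Path

def lowercase_suffixes(folder):
    """Rename files whose extension is upper or mixed case
    (e.g. IMG_001.JPG -> IMG_001.jpg) and return the new names."""
    renamed = []
    for p in sorted(Path(folder).iterdir()):
        if p.is_file() and p.suffix and p.suffix != p.suffix.lower():
            target = p.with_suffix(p.suffix.lower())
            p.rename(target)           # stem is untouched, only the suffix changes
            renamed.append(target.name)
    return renamed
```

Note this only lowercases the extension; the rest of the filename is left alone, so the project's expected names still match.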
  4. Thanks Krl, The main thing is that you're working, but it might help someone else in the future! cheers/Dave
  5. Thanks Krl, Especially to help anyone who might have similar problems in future, could you share what you did to make it work? cheers/Dave
  6. Thanks Krl, I was worried when you said you were 'cropping' them - I thought you were cropping them out of the image before processing! Those boxes are just to define the search area. Secondly, looking at your original image, I have a few observations, of which only the first couple may relate to the problem you have experienced.
     1. The background on which the spheres are sitting (or which is behind them) looks very close in colour to the spheres, because you're using a photograph where the grey card or background is in shadow - you're not doing the circle-edge detection any favours; it's trying to detect black on near-black!
     2. The illumination in the shot you kindly supplied looks rather low - the recommended light positions are no lower than 15 degrees to the plane of the object, and no higher than 65 degrees.
     3. The highlight-detection method of RTI generation depends on accurately detecting the highlight on the sphere as each frame is processed, and using that to deduce the direction from which that frame was illuminated. The light should be reflected as a 'spot', as near to a point as possible - but the light looks more like patches on your spheres (maybe over-illuminated, or a flood-light used rather than as close as you can manage to a point source). A 'wide' source won't affect the sphere detection, but it will give less-distinct shadows and hence degrade the final result.
     4. The whole process depends on each frame having, as near as you can achieve, only one direction of illumination. Looking at your sample image, it looks as if there are other light sources, or something out of shot, throwing multiple-direction illumination on the spheres - that shouldn't affect the feature detection, but it will degrade the final result.
I'd highly recommend that you go through the RTI Capture Guide which CHI publish at http://culturalheritageimaging.org/What_We_Offer/Downloads/RTI_Hlt_Capture_Guide_v2_0.pdf - have a look at how the spheres and illumination appear in successful use. CHI also have tutorial data sets on their website which are known to work; as well as demonstrating good practice, they let you see what a successful capture looks like. cheers/Dave
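For anyone curious how the highlight method deduces the light direction, here is a minimal sketch of the geometry. It assumes an orthographic view along +z and a distant light source; real RTI software handles image-axis conventions and sub-pixel detection, which this deliberately skips:

```python
import math

def light_direction(cx, cy, r, hx, hy):
    """Recover the illumination direction from the highlight on a
    reflective sphere.  (cx, cy) = sphere centre in the image,
    r = sphere radius in pixels, (hx, hy) = detected highlight.
    Returns a unit vector pointing towards the light."""
    # Surface normal of the sphere at the highlight point.
    nx = (hx - cx) / r
    ny = (hy - cy) / r
    nz = math.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))
    # Mirror the view vector v = (0, 0, 1) about the normal:
    # L = 2 (n . v) n - v
    d = 2.0 * nz
    return (d * nx, d * ny, d * nz - 1.0)
```

A highlight dead-centre on the sphere means the light is directly overhead; the further the highlight sits from the centre, the lower the light angle - which is why an accurate, point-like 'spot' matters so much.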
  7. Krl, Can you post one of your source images with the spheres, and then a screenshot of where you say you "crop out the spheres"? Dave
  8. Phil, I have captured room-sized interior spaces, but haven't imaged or scanned a factory as such; so these are a few generic thoughts, and much depends on the context and on what your deliverables are.
     If the factory is occupied by equipment, pipes, ducting, conveyors, etc. then it is difficult for both laser scanners and photogrammetry, as there will be numerous areas which are occluded (hidden) behind/above/below obstructions, so you'll only be able to capture the nearest face. Photogrammetry can struggle more, because the processing software needs to 'see' and 'recognise' something in multiple images. With a laser scanner (at sufficient point density) you can pick up a number of points on the nearest face of, say, a duct - sufficient to estimate its diameter or section. Furthermore, you only need space for a single laser shot to get 'between' two items to capture at least a point behind, whereas with photogrammetry, to derive that point you need it in at least two images, ideally more; and if you move viewpoint to take the next frame you may lose sight of it (and even if you can still see it, is there enough to triangulate its location?).
     If it is an empty space, then that is probably more achievable with photogrammetry, subject to two factors: illumination and detectable tie points. In small rooms or ships' compartments, for example, a ring-flash can be a useful illuminator; but in a cavernous space you will be dependent on existing lighting, so you may be advised to use a tripod (so as to allow as long an exposure as you need) and shoot from fixed camera stations. Also in a cavernous space, especially if it is uniform, you may have difficulty establishing a sufficient number and quality of tie points between your images when processing, so the use of targets (ideally coded) would be worth considering - you could also use them as scale-markers.
Although there may be some in the Cultural Heritage world who have imaged/scanned factories or comparable spaces, I would suggest it might be productive to pose your query in more generic fora, such as Agisoft's own forum, or Facebook groups such as the one dedicated to Metashape or the generic 'LiDAR and Photogrammetry Review'. Dave
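The triangulation point above can be illustrated with a minimal midpoint-triangulation sketch: given two camera rays to the same feature, the estimated 3-D point is midway between the rays where they pass closest. Idealised rays, no lens model - real photogrammetry software does far more:

```python
def triangulate(p1, d1, p2, d2):
    """Midpoint triangulation: the 3-D point closest to two rays,
    each given as camera position p and direction d (x, y, z tuples).
    Raises if the rays are near-parallel (tiny baseline)."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def add(a, b): return tuple(x + y for x, y in zip(a, b))
    def scale(a, s): return tuple(x * s for x in a)

    w = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b              # ~0 when the rays are near-parallel
    if abs(denom) < 1e-9:
        raise ValueError("rays nearly parallel: baseline too small to triangulate")
    s = (b * e - c * d) / denom        # closest-approach parameter on ray 1
    t = (a * e - b * d) / denom        # closest-approach parameter on ray 2
    q1 = add(p1, scale(d1, s))
    q2 = add(p2, scale(d2, t))
    return scale(add(q1, q2), 0.5)     # midpoint between the two rays
```

The near-parallel failure case is the practical point: two views taken from almost the same spot cannot fix the depth of a feature, which is why moving the viewpoint (without losing sight of the feature) matters.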
  9. Interesting open-access article on ways to extract features from photogrammetry models: https://www.cambridge.org/core/journals/antiquity/article/3d-contour-detection-a-nonphotorealistic-rendering-method-for-the-analysis-of-egyptian-reliefs/3DF1102C5016098C8D14D203D9D41C7C
  10. Dag-Øyvind, I can't help directly, but a couple of suggestions, as it looks like it might be a graphics-driver-related issue:
     1) It might help if you gave some details of your laptop make/model, operating system, and what graphics card it has.
     2) Is the problem / that screenshot from the laptop's internal screen, or when using an external monitor?
     3) If you haven't done so already, it might be worth trying with an external monitor.
     4) It can sometimes be illuminating to temporarily put your laptop (internal screen and/or external monitor) into a different, lower resolution.
     5) It would be worth checking for graphics-card updates - from the operating system, and from both the laptop manufacturer and the graphics card manufacturer.
     6) You say this happens with all projects: do you have a project which you can view OK on one PC but which gives this faulty display when you copy the files (or access them over the network)?
     MVH Dave
  11. MBennett, I don't know if there is any way you can synchronise or import chunks, but one possibility, if you have network connectivity, is to have the various clients access one PostgreSQL database. Dave
  12. Ale, Particularly with PTM models, there is a range of visualisation aids, such as specularity, built into the viewer already. Re "Metallic" - I'm not sure exactly what you mean. RTI cannot by itself perform any analysis of the subject's elemental composition - you may get some clues from the visible-wavelength spectra in the photographs, but you will get more, for bulk composition especially, from, say, XRF (X-ray fluorescence), and hand-held XRF is becoming more common. Re "Roughness", there is plenty of work on machine-vision assessment of 'roughness' as a production-quality tool, using direct and scattered light and sometimes fractal analysis, but that usually relies on prior reference data acquired from a similar part of known Ra and Rz. I think it would be good if you could explain your target / sample type / size / what you're trying to investigate. Dave
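For what Ra and Rz mean in practice, a simplified sketch from a sampled height profile. Note the standard Rz definition averages five peak/valley pairs over sampling lengths; this sketch just takes total peak-to-valley, so treat it as illustrative only:

```python
def roughness_params(profile):
    """Two common surface-roughness parameters from a sampled height
    profile (all values in the same length unit):
    Ra - arithmetic mean of absolute deviations from the mean line,
    Rz - here simplified to the total peak-to-valley height."""
    mean = sum(profile) / len(profile)           # mean line of the profile
    deviations = [h - mean for h in profile]
    ra = sum(abs(d) for d in deviations) / len(deviations)
    rz = max(deviations) - min(deviations)
    return ra, rz
```

The point of the reference-part approach mentioned above is that optical measurements are compared against parts whose Ra/Rz were measured by contact profilometry, rather than computed from the images alone.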
  13. Ale, I've posted an answer in the RTI dome acquisition section of the forum, suggest you might delete this duplicate question. Dave
  14. Ale, The idea of using addressable LEDs is definitely attractive; but I think it would be harder than you expect, and not very successful, to use those strips.
     1) Those LEDs are not bright - you need a reasonably powerful light source to cast the shadows for the RTI process - or you need very long exposure times and a noisy high ISO.
     2) If you use such a strip, you will only be using a fraction of the LEDs.
     3) You can't just use, say, a sequential controller; you will need some form of programmable control - be it Arduino or PC or AVR or even TTL logic - to switch on the appropriate LED, let it settle, fire the camera, and then repeat. Or, if your off-the-shelf controller allows you to specify "switch on LED 74 at R=..G=..B=.." then you could do that manually, fire the camera, switch off LED 74 and then switch on, say, LED 81.
     4) If you use such a strip, you will struggle to get the LEDs in the optimum positions unless you cut the tape and solder fragments together (not actually difficult at all), and that is an attractive possibility (subject to note 1 above though).
     I have actually tried using strips similar to this as donors for a mini-dome: chopping them into individual LEDs, placing them appropriately around a dome, and chaining them with soldered jumpers (light wire for data in/out, slightly heavier for power and ground, with extra power tapped in periodically). However, results were not great due to the low light output from individual LEDs. Dave
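The switch-settle-fire-repeat loop in note 3 might look like this as a sketch. `set_led` and `trigger_camera` are hypothetical hooks, not any real library's API - you would wire them to your own controller (Arduino over serial, GPIO, a tethering tool, etc.):

```python
import time

def run_capture(led_indices, set_led, trigger_camera,
                settle_s=0.2, expose_s=1.0):
    """One-LED-at-a-time capture sequence for a dome of addressable LEDs.
    set_led(i, on) and trigger_camera() are caller-supplied hooks."""
    for i in led_indices:
        set_led(i, True)        # light exactly one LED
        time.sleep(settle_s)    # let the LED/driver settle
        trigger_camera()        # fire the shutter
        time.sleep(expose_s)    # wait out the exposure
        set_led(i, False)       # back to all-off before the next frame
```

Whatever the hardware, the invariant is the same: exactly one LED lit per frame, with the camera fired only once the light is stable.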
  15. Jackie, I regret I can't help directly (my last Apple PC was an Apple II, on which I used UCSD Pascal for my PhD research!), but when you say you have removed the newest version of Java, I wonder whether you have been able to remove all versions of Java. Certainly on Windows and some *nix operating systems you can have multiple Java versions installed, so it is possible that although you removed one (the newest), there is another slightly older, but still too new, version installed. One other thought: if you can't resolve the macOS installation, could you maybe run a virtual PC, clean-install Windows on it to give an environment hosting the required version of Java, and then install the RTI software in that virtual machine? Dave