Everything posted by Graeme

  1. Hi all, we are interested in possibilities for using RTI data for generating contours, largely as part of our scanning pipeline. There are a range of contouring approaches relevant to surface datasets, and rather fewer for point cloud data. The algorithmic rendering tools we have seen presented so far include a range of contouring approaches and we wondered if there are any tools that we could trial on some virtual RTI datasets derived from laser scanning at Portus? We could then automatically (or supervised semi-automatically) vectorise the contours if they were produced as raster datasets. Any ideas very gratefully received. Best wishes, Graeme Earl and James Miles
  2.
    OK. So here is my updated version. Please do all add in your thoughts. I know I have left out some key points but I would rather they were added by the people concerned - don't be shy! I have simplified the timeline data so it only contains a single date rather than a range. I have also added an extra item which is an image link. Hembo and I will use this in generating the interactive timeline from these data. Cheers, G &&& Publication of "Enhancement of Shape Perception by Surface Reflectance Transformation" by Tom Malzbender, Dan Gelb, Hans Wolters and Bruce Zuckerman. This work relates to the definition of BRDF by Nicodemus, Richmond and Hsia 1977, and work by Paul Debevec, Tim Hawkins, Chris Tchou, Haarm-Pieter Duiker, Westley Sarokin and Mark Sagar in 2000. http://www.hpl.hp.co...L-2000-38R1.pdf March 2000 http://www.hpl.hp.com/research/ptm/images/4tabfig-530.jpg >> Publication of "Polynomial Texture Maps" by Tom Malzbender, Dan Gelb and Hans Wolters http://www.hpl.hp.co.../papers/ptm.pdf and http://www.hpl.hp.co...PtmSig6Talk.pdf August 2001 https://sites.google.com/site/tommalzbender/_/rsrc/1362967457128/home/dome_oblique150.tif >> Publication of first on-line PTM viewer by Clifford Lyon. http://materialobjects.com/ptm/ December 2004 http://materialobjects.com/ptm/sd1_s.jpg >> Publication of "Surface enhancement using real-time photometric stereo and reflectance transformation" by Tom Malzbender, Bennett Wilburn, Dan Gelb and Bill Ambrisco https://docs.google....MTZkN2RiNjkwYzQ June 2006 https://sites.google.com/site/tommalzbender/_/rsrc/1362971573194/home/RealTimeRT150.tif >> First use of the highlight method in the field, at the Foz Coa Paleolithic petroglyph site in Portugal in June 2006. This work by CHI was followed up with work on other rock art material in Wyoming, USA in August 2006. The highlight method opened the floodgates for the capture of RTI.
http://culturalherit...tions/vast2006/ June 2006 >> Publication of "New Reflection Transformation Imaging Methods for Rock Art and Multiple-Viewpoint Display" by Mark Mudge, Tom Malzbender, Carla Schroer and Marlin Lum. First use of Reflectance Transformation Imaging. Introduction of Highlight RTI and PTM Object Movies. Release of PTM Builder. http://culturalherit...tions/vast2006/ and http://www.hpl.hp.com/research/ptm/HighlightBasedPtms/ November 2006 https://sites.google.com/site/tommalzbender/_/rsrc/1362971713871/home/Highlight150.jpg >> Publication of "High Quality PTM Acquisition: Reflection Transformation Imaging for Large Objects" by Matteo Dellepiane, Massimiliano Corsini, Marco Callieri and Roberto Scopigno. New method for orientating lights to capture large objects. http://vcg.isti.cnr.it/Publications/2006/DCCS06/ November 2006 http://vcg.isti.cnr.it/Publications/2006/DCCS06/system.png >> Publication of “Illustration of Complex Real-World Objects using Images with Normals” by Corey Toler-Franklin, Adam Finkelstein and Szymon Rusinkiewicz. This work is part of a long-standing collaboration between Szymon Rusinkiewicz and CHI around what has come to be known as Algorithmic Rendering – the use of non-photorealistic rendering (NPR) approaches to make clearer the information contained in Red, Green, Blue, Normal (RGBN) data. CHI subsequently employed this approach in a range of cultural heritage contexts. http://gfx.cs.princeton.edu/pubs/Toler-Franklin_2007_IOC/index.php and http://culturalheritageimaging.org/Technologies/Algorithmic_Rendering/ August 2007 http://gfx.cs.princeton.edu/pubs/Toler-Franklin_2007_IOC/pinecone.jpg >> David Potts created a side-by-side PTM viewer proof of concept based on the materialobjects PTM viewer. This allowed viewing in stereo of PTMs captured with a suitable eye separation. He also produced a version of the materialobjects browser that could be embedded in a webpage.
Subsequently Hembo Pagi produced a WordPress plugin to wrap up the same code. Note: code is no longer online but link below provides the original context, and information about the WP plugin. www.pinan.co.uk/ and http://acrg.soton.ac.uk/blog/467/ January 2008 >> Publication of "Image-Based Empirical Information Acquisition, Scientific Reliability, and Long-Term Digital Preservation for the Natural Sciences and Cultural Heritage" by Mark Mudge, Tom Malzbender, Alan Chalmers, Roberto Scopigno, James Davis, Oliver Wang, Prabath Gunawardane, Michael Ashley, Martin Doerr, Alberto Proenca and João Barbosa. Introduction of Empirical Provenance in the context of RTI, and also of HSH. http://culturalherit...2008/index.html April 2008 http://culturalheritageimaging.org/IMAGES/goldcoin_stripes.jpg >> Release of LpTracker software on SourceForge. This allowed automatic identification of highlights and was the predecessor of RTIBuilder. http://sourceforge.net/projects/lptracker/ August 2008 >> Release of RTIBuilder by University of Minho and CHI. RTIBuilder added a log file, the ability to manage the process and do different crops, and create different size PTMs from the same capture set. http://culturalheritageimaging.org/What_We_Offer/Downloads/Process/ January 2009 https://fbcdn-sphotos-d-a.akamaihd.net/hphotos-ak-prn1/p480x480/560018_537181906321156_617560788_n.jpg >> CHI awarded grant from the National Center for Preservation Technology and Training (NCPTT). Working with rock art experts, the grant funded a comprehensive 2-day workshop for 3D digital rock art documentation and preservation, and the first RTI web-based training materials. http://culturalheritageimaging.wordpress.com/2009/03/19/ncptt-grant-award/ March 2009 http://culturalheritageimaging.org/IMAGES/nyu_training.jpg >> Publication of "Material Classification using BRDF Slices" by Oliver Wang, Prabath Gunawardane, Steve Scher and James Davis. This introduced the use of HSH for image segmentation. 
http://users.soe.ucsc.edu/~prabath/wango_brdfseg.pdf June 2009 http://zurich.disneyresearch.com/~owang/pub/images/brdfseg.jpg >> Publication of "Optimized Image Sampling for View and Light Interpolation" by Prabath Gunawardane, Oliver Wang, Steven Scher, Ian Rickards, James Davis and Tom Malzbender. The paper compares Polynomial Texture Maps with 6 coefficients against Spherical Harmonics with 9 coefficients and shows that more terms increase the perception of shininess and reduce error. http://users.soe.ucsc.edu/~prabath/prabath_viewlight.pdf September 2009 http://zurich.disneyresearch.com/~owang/pub/images/viewlight.jpg >> Creation of the Leuven minidome. http://www.3d-coform.eu/index.php/tools/minidome and http://www.3d-coform.eu/downloads/3DC_D_4_1_WP4_YR1_FINAL.pdf November 2009 http://www.3d-coform.eu/images/stories/minidome.jpg >> Release of new version of RTIBuilder. This added the ability to use the HSH - Hemispherical Harmonics - algorithm for building RTIs. It also added the ability to reopen an existing "project" and so build the same data set using both algorithms quite easily. There were log file improvements, and other usability improvements as well. http://culturalheritageimaging.org/What_We_Offer/Downloads/Process/ January 2010 >> Blog post by Tom Goskar about Virtual RTI and LiDAR data. First attempt to use PTM fitted data as a means to interact with LiDAR landscape datasets. http://www.wessexarch.co.uk/blogs/computing/2010/08/26/interactive-landscape-relighting August 2010 http://www.wessexarch.co.uk/files/imagepicker/a/admin/thumbs/virtual-ptm-dome-stonehenge-whs-lidar.jpg >> Publication of “Polynomial Texture Mapping and 3D representations” by Lindsay MacDonald and Stuart Robson. This paper compared the results of PTM, photometric stereo and laser scanning. It concluded that photometric stereo produced the best normals. This is further developed in detail by MacDonald 2011.
He also produced a PTM fitter (and photometric stereo fitter) in MATLAB that crucially is able to fit arbitrarily large input images by tiling. http://www.isprs.org/proceedings/XXXVIII/part5/papers/152.pdf and http://ewic.bcs.org/upload/pdf/ewic_ev11_s8paper4.pdf June 2010 >> Publication of "Polynomial texture mapping and related imaging technologies for the recording, analysis and presentation of archaeological materials" by Earl, Beale, Martinez and Pagi. This published the virtual RTI method for using the PTM fitter and viewer as a means to interact with 3D data. It also provided an example of locating multiple PTMs in capture space. http://eprints.soton.ac.uk/153235/ June 2010 http://www.jcms-journal.com/article/viewFile/56/67/560 >> Publication of "SpiderGL: A JavaScript 3D Graphics Library for Next-Generation WWW" by Marco Di Benedetto, Federico Ponchio, Fabio Ganovelli and Roberto Scopigno. This WebGL viewer included a PTM shader. http://vcg.isti.cnr.it/Publications/2010/DPGS10/spidergl.pdf July 2010 http://spidergl.org/img/teaser.jpg >> Publication of "The Stirling Castle wood recording project" by Karten and Earl. This compared PTM and laser scan data and examined the degradation of surfaces during the conservation process, and as originals, casts and moulds. http://eprints.soton.ac.uk/342682/1/065_2010WEB.pdf July 2010 http://eprints.soton.ac.uk/342682/1.haspreviewThumbnailVersion/065_2010WEB.pdf >> CHI and Szymon Rusinkiewicz received funding from the National Science Foundation to develop the Automated Documentation and Illustration of Material Culture through the Collaborative Algorithmic Rendering Engine (CARE) tool. http://culturalheritageimaging.org/What_We_Do/Projects/nsf/index.html October 2010 http://culturalheritageimaging.org/IMAGES/soldiers_nsf_ar.jpg >> Publication of “Archaeological applications of polynomial texture mapping: analysis, conservation and representation” by Earl, Martinez and Malzbender.
This paper provided an evaluation of PTM across a range of archaeological applications. It demonstrates the value of the Malzbender and Ordentlich 2005 approach to maximum entropy lighting and the potential for batch processing of a PTM archive to identify the best views. We are building on this in our latest work on mining our RTI archive. http://eprints.soton.ac.uk/156253/1/EarlMartinezMalzbender2010.pdf December 2010 http://www.jcms-journal.com/article/viewFile/56/67/558 >> Creation via the AHRC RTISAD project of an annotation framework for RTI data, based on the addition of bookmarks to the RTI Viewer enabling viewer controls to be set according to the parameters saved in an XML bookmarks file. These files could be shared allowing for collaborative annotation. http://acrg.soton.ac.uk/tag/rtisad/ January 2011 >> Martin Hunt described his method for capturing underwater RTI via the RTISAD wiki. He captured a small set of PTMs in freshwater. He also examined a Cornish marine site but at this stage no clear marine PTMs were gathered. He made a fibreglass dome with apertures for placing a light in consistent positions. Data were then processed via a standard LP file. The work was first made public to my knowledge at the AHRC RTISAD workshop in Oxford. http://acrg.soton.ac.uk/blog/1528/ January 2011 >> First multispectral imaging undertaken by CHI. Subsequent work in multispectral includes ongoing research by Eleni Kotoula. CHI work was presented at CAA2012 amongst other places. https://www.ocs.soton.ac.uk/index.php/CAA/2012/paper/view/633 and http://acrg.soton.ac.uk/blog/1569/ March 2011 http://acrg.soton.ac.uk/files/2012/11/figure3left.jpg >> Poster presentation at CAAUK of “RTI in conservation examination, analysis and documentation” by Kotoula and Earl. This introduces work studying the application of RTI, including microscope RTI, to conservation. 
http://academia.edu/1175819/RTI_in_conservation_examination_analysis_and_documentation March 2011 http://culturalheritageimaging.files.wordpress.com/2012/04/microdome_panels4.jpg >> Four-day training programme by CHI in RTI at the NYU Institute of Fine Arts Conservation Center. CHI have so far delivered their four-day RTI training more than 20 times to more than 300 participants. The participants come from ~60 different institutions, many of them museums. March 2011 >> Creation of a specification via the AHRC RTISAD project for implementing an RTI repository. http://acrg.soton.ac.uk/tag/rtisad/ June 2011 >> Publication of “Reflectance transformation imaging systems for ancient documentary artefacts” by Earl, Basford, Bischoff, Bowman, Crowther, Dahl, Hodgson, Martinez, Isaksen, Pagi, Piquette and Kotoula. This paper introduced the RTISAD project and also included the use of RTI data in computer graphic rendering (at Catalhoyuk), microscope capture, RTI annotation and a minidome mounted on and controlled by a commodity camera. It also introduced the use of MentalRay for contour shading, enhancement and non-photorealistic rendering of RGBN data derived from RTI. http://eprints.soton.ac.uk/204531/1/ewic_ev11_s8paper3.pdf July 2011 >> Presentation by Lindsay MacDonald of integration of PTM, photometric stereo and laser scan data to analyse the original and cast of the Hunters Palette, an early Egyptian (c. 3100 BCE) stone slab in the British Museum. http://www.cosch.info/documents/10179/30087/Abstract_WG2_Lindsay+MacDonald.pdf/e21bf8cf-004c-4ce6-86eb-1ad597c6ec6c;jsessionid=52E1F1AB6A31E3E001A9AC7282D8BCC2?version=1.0 March 2012 >> Publication of “Printing Reflectance Functions”, a paper on printing surfaces with specular micro-geometry, by Tom Malzbender, Ramin Samadani, Steven Scher, Adam Crume, Douglas Dunn and James Davis. Whilst not directly related to RTI, it gives a teasing glimpse of a printed page that reacts to light orientation, using data that could be acquired via RTI.
https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnx0b21tYWx6YmVuZGVyfGd4OjcwMGQwNDVkMDM0Y2FhMWI and May 2012 http://graphics.soe.ucsc.edu/prf/Teaser.jpg >> Hembo Pagi wrote a blog post about a simple way of switching between screenshots from RTI Viewer and original input photographs. I find this an extremely useful way of sharing key information. The repository developments that continue at Southampton are designed to support this preview method, and we already make use of it extensively on the ACRG website. http://forums.culturalheritageimaging.org/index.php?/topic/202-example-of-2d-way-of-showing-rti-results-in-web/ and http://acrg.soton.ac.uk/tag/rti-example/ October 2012 http://www.arheovisioon.ee/wp-content/uploads/2012/10/ir-spec.jpg >> Eleni Kotoula introduces False Colour Imaging RTI and Transmitted RTI. http://acrg.soton.ac.uk/blog/2786/ and http://acrg.soton.ac.uk/blog/2796/ February 2013 http://acrg.soton.ac.uk/files/2013/02/Picture11.jpg >> Launch of project by David Selmo to capture underwater RTI data and to examine the hardware and software implications, and the issues imposed by the underwater environment. The first attempted capture of PTM that I am aware of took place in 2010 and used a frame with holes cut to provide a means to place lights in a consistent dome orientation. Turbidity proved a significant issue, hence the focus of Selmo’s work. Selmo produced the first underwater highlight RTI capture I know of. http://forums.culturalheritageimaging.org/index.php?/topic/230-rti-underwater-a-research-project-university-of-southampton and http://cma.soton.ac.uk/blog/2013/05/underwater-reflectance-transformation-imaging-a-success/ March 2013 http://cma.soton.ac.uk/files/2013/05/Dave-Notes-change-400x300.jpg >> Publication of "Multi-light Imaging for Heritage Applications" by Sarah Duffy. This provided an overview of RTI and included a range of examples. It demonstrates the move of RTI into the mainstream archaeological community.
http://www.english-heritage.org.uk/publications/multi-light-imaging-heritage-applications/Multi-light_Imaging_FINAL_low-res.pdf June 2013 >> Launch of the Leuven minidome online viewer for their data format. http://www.arts.kuleuven.be/info/ONO/Meso/cuneiformcollection July 2013 http://www2.arts.kuleuven.be/info/bestanden-div/images/MB D11 a1.preview.jpg
  3.
    This is great Carla! Other things to add would be underwater RTI (Dave, George and others) and the Dellepiane et al. method for large objects (http://vcg.isti.cnr.it/Publications/2006/DCCS06/) and the first WebGL viewer (SpiderGL). I think that paved the way for more recent attempts to create native browser RTI tools. Also I don't want to get into semantics re: RTI and other approaches but the Leuven minidome http://www.minidome.be/v01/home.php and their new WebGL viewer should be in there too: http://www.arts.kuleuven.be/info/ONO/Meso/cuneiformcollection Cheers, G
  4. Hi all, I was planning on submitting a proposal for a workshop at the Digital Heritage conference in Marseille 28 October - 1 November 2013. Details of the proposal process are below. I wondered if anyone had already submitted one on RTI? If not would people like to get involved in this one? Cheers, Graeme >> http://www.digitalheritage2013.org/special-sessions-workshops-panels-tutorials/ "Proposals for workshops and panels will be judged by the ability to bring together key researchers from the heritage as well the ICT domain in the state-of-the-art area, introduce a new area to the overall community, further develop the area, and help establishing a larger research community beyond the area. Special session proposals covering multi-disciplinary areas are particularly encouraged, as well as those proposals regarding common challenges, e.g. (and not restricted to) Methods in Archaeology, Museums and Technology, etc. Special Session submissions should provide a proposal (up to 4 A4-pages) after the mandatory abstract. Proposals for special sessions must indicate its nature, i.e. workshop, tutorial or panel within the ‘Title’ followed by a topics title, rationale, session outline, its motivations, a short description of the material to be covered, contacts information including: name; affiliation; email; mailing address; a short CV for each presenter, participant or authors who have agreed to participate with their results to the session, with a tentative title and short abstract (150 chars) for each presentation."
  5.
    Hi all, I am giving a talk in York, UK on Saturday 6 July 2013 at Digital Heritage 2013: Interfaces with the Past. http://www.york.ac.uk/digital-heritage/events/cdh-2013/ As part of that I thought I might create an MIT SIMILE "RTImeline" charting the development of RTI, and its components. (I don't think this exists already - of course if it does please let me know.) I realise this might quickly dissolve into disagreement (!) but do people have some ideas of key events in the development of RTI that should go on the timeline? If so respond to this with: - title - one or more URLs if possible - a date - an end date if you want to specify a range. I realise this is cheeky as I am the one giving the presentation but of course all contributions will be very gladly acknowledged! Cheers, Graeme >>> Some uncontroversial examples (I think - please let me know if I am wrong!): Publication of "Enhancement of Shape Perception by Surface Reflectance Transformation" by Tom Malzbender, Dan Gelb, Hans Wolters and Bruce Zuckerman http://www.hpl.hp.com/techreports/2000/HPL-2000-38R1.pdf March 2000 N/A >> Publication of "Polynomial Texture Maps" by Tom Malzbender, Dan Gelb and Hans Wolters http://www.hpl.hp.com/research/ptm/papers/ptm.pdf http://www.hpl.hp.com/research/ptm/papers/PtmSig6Talk.pdf August 2001 N/A >> Publication of first on-line PTM viewer by Clifford Lyon. http://materialobjects.com/ptm/ December 2004 N/A >> Publication of "Surface enhancement using real-time photometric stereo and reflectance transformation" by Tom Malzbender, Bennett Wilburn, Dan Gelb and Bill Ambrisco https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnx0b21tYWx6YmVuZGVyfGd4OjI4MzRiMTZkN2RiNjkwYzQ June 2006 N/A >> Publication of "New Reflection Transformation Imaging Methods for Rock Art and Multiple-Viewpoint Display" by Mark Mudge, Tom Malzbender, Carla Schroer and Marlin Lum. First use of Reflectance Transformation Imaging.
Introduction of Highlight RTI and PTM Object Movies. http://culturalheritageimaging.org/What_We_Do/Publications/vast2006/ November 2006 N/A >> Publication of "Image-Based Empirical Information Acquisition, Scientific Reliability, and Long-Term Digital Preservation for the Natural Sciences and Cultural Heritage" by Mark Mudge, Tom Malzbender, Alan Chalmers, Roberto Scopigno, James Davis, Oliver Wang, Prabath Gunawardane, Michael Ashley, Martin Doerr, Alberto Proenca and João Barbosa. Introduction of Empirical Provenance in the context of RTI http://culturalheritageimaging.org/What_We_Do/Publications/eurographics2008/index.html April 2008 N/A
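Since the request above is effectively for (title, URL, date) triples, here is a minimal sketch of how such entries could be packaged as an event source for an MIT SIMILE timeline. The JSON layout used here (a top-level "events" list whose entries carry "start", "title" and "link") is my assumption about the SIMILE Timeline event-source format, and the helper name and sample entries are illustrative only:

```python
import json

# Sample (title, url, date) entries taken from the post above.
entries = [
    ("Enhancement of Shape Perception by Surface Reflectance Transformation",
     "http://www.hpl.hp.com/techreports/2000/HPL-2000-38R1.pdf", "2000-03"),
    ("Polynomial Texture Maps",
     "http://www.hpl.hp.com/research/ptm/papers/ptm.pdf", "2001-08"),
]

def to_simile_events(entries):
    """Convert (title, url, date) tuples into a SIMILE-style event source."""
    return {
        "dateTimeFormat": "iso8601",
        "events": [
            {"start": date, "title": title, "link": url}
            for title, url, date in entries
        ],
    }

# Serialise for loading into the timeline widget.
print(json.dumps(to_simile_events(entries), indent=2))
```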
  6. Forgot to add, as per Lena's post I think it would be cool to include measurement options in the viewer. The MOO Viewer allows you to isolate pixel values so this in itself could be useful, but measuring distances and areas would also be fabulous. I wonder if this will come out of the box with the ResearchSpace framework? I hope so! I guess the issue will be providing the pixel scale factor information either as part of the RTI file or via a scale object in the viewer. Cheers, G
  7. Hi Dennis, sorry this has taken so long! I agree entirely. I think that moving the light gradually and interactively is key and I envisage this being exactly the interaction with the new viewer, whatever the resolution or zoom level. What the light position bookmarks allow you to do, however, is to contextualise annotations. So when you are trying to convince one of your colleagues you can send them the exact light and render mode setup you used, they look at it that way, and then modify it however they like to examine the veracity of your statement. To my mind this combines the best of a published figure in a paper with access to the physical object. In fact I would see this kind of presentation completely replacing published figures in years to come, much as Adobe PDF could with 3D data, but that is getting off track I guess. Cheers, Graeme
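As a concrete illustration of sharing an exact light and render mode setup, a bookmark could be serialised as a small XML document. The element and attribute names below are invented for illustration and are not the actual RTISAD/CHI bookmark schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical bookmark: pins the light direction, render mode and zoom so a
# colleague can reproduce (and then freely modify) the exact view.
def make_bookmark(name, light_uv, render_mode, zoom=1.0):
    bm = ET.Element("bookmark", name=name)
    ET.SubElement(bm, "light", u=str(light_uv[0]), v=str(light_uv[1]))
    ET.SubElement(bm, "render", mode=render_mode)
    ET.SubElement(bm, "zoom", level=str(zoom))
    return bm

bm = make_bookmark("incised-line-detail", (0.3, -0.6), "specular_enhancement")
xml_text = ET.tostring(bm, encoding="unicode")
print(xml_text)
```

A viewer receiving this file would simply parse it and apply each setting before rendering, which is what makes the bookmark a shareable "view" rather than a static screenshot.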
  8. Hi mjm, I have just posted an update on the project planning in the other forum http://forums.cultur...hrc-project-uk/. We now have cool developers ready to go so things are happening :-) All ideas gladly received. Cheers, Graeme
  9. We had a good meeting to discuss some elements of the @AHRCRTI project today. For administrative reasons the project now has a revised official start date of 1 August 2013 so expect more regular progress updates from then. Still, the meeting has reminded me that I owe an update to CHI Forums on current planning. As ever we *really* welcome comments and ideas. We are designing this out in the open and so I hope that the results will be a consequence of the RTI community as much as an individual research project. One of the core work-packages for the project is providing an open source RTI viewing framework for the web. Since we also have an interest at Southampton in gigapixel RTI capture and fitting, making the web framework support tiled data (as the RTI Viewer and SpiderGL viewers do) has been the driving factor. This in turn has led to us focussing on IIP Image as a mechanism for serving tiles.

RTI asset management

We have an archive of several thousand RTIs at Southampton now and managing the data behind these continues to be difficult. As outlined below we have made good progress on frameworks for managing these data in the long term but management during the lifecycle of a project is still complex. We wonder if a new framework is needed or whether existing DAMs can be adapted. Whilst this isn’t in scope for the current project it is something we are looking at alongside it.

RTI streaming server

We want this to be based on the RTI Viewer support of tiled data if at all possible. We really don’t want to fork the format off in a new direction – indeed it wouldn’t be RTI if it did :) Interesting stuff to consider here is how best to encode HSH and PTM in tiled formats (and to compress them) and what to do about linking the tiled data to the metadata component of the RTI specification. We also don’t want to look at local solutions for holding the tiled data. The standalone RTI Viewer already does everything needed for interacting with local data.
So, for these reasons we are concentrating on modification of IIP Image: http://iipimage.sourceforge.net/ Kirk was a developer of this (see e.g. http://eprints.soton.ac.uk/252876/1/122.html) and the community is active. IIP Image can be deployed on many server types, has a variety of existing clients, and is now an official Debian and Ubuntu package http://iipimage.sourceforge.net/2012/10/iipimage-now-an-official-ubuntu-package/

RTI ingestion tool

We will need a mechanism to allow non-server admins to ingest RTI data. We don’t have a sense yet of how this process will work but one option would be to have a user level on the server with permission to upload RTI. We certainly do not intend this server to act as an RTI repository. Whilst such a mechanism could be a side development it is not part of this project. We have already completed some work on RTI data management as part of the JISC DataPool and JISC DepositMORE projects, and the AHRC RTISAD project, which I can describe separately. Basically this facilitates ingestion of RTI data to a repository. Ideally I would like to add an IIP Image server to our Southampton RTI repository to deliver content by the end of this project.

RTI viewer

Development of this is focused on adding a WebGL layer to IIP MOO Viewer: http://iipimage.sourceforge.net/documentation/iipmooviewer/ In the first instance we are assuming WebGL support on devices, including tablets. However we also have ideas for a cut-down native HTML5 viewer. How far we progress with this will in part depend on how tablet support of WebGL changes in the current year.
For example, see http://www.techradar.com/news/internet/web/microsoft-s-internet-explorer-11-to-include-webgl-graphics-after-all--1141713 In terms of iOS I would still dearly love to avoid a separate app, particularly as the gyroscope and compass data are so readily accessed from within the browser, but at the moment I think we would still be restricted to very simple RTI rendering options in this case, e.g. analogous to hillshade approaches.

RTI annotation

We very much hope to be experimenting with the ResearchSpace IIP Image annotation framework soon. This will provide an annotation framework that immediately fits into a wide linked data community and its associated communities. Leif did some work on the RTISAD project defining a bookmark format for RTI and this has since been developed in discussions with CHI. Our plan is for the new viewer to support this bookmark framework so that a current set of views can be saved as a bookmark and so that a bookmark can be used to set the viewer settings in the browser, e.g. the light/s position/s, current filters, etc. One interesting idea that was raised about this at the meeting is that a coherent set of annotations on, for example, a text artefact would constitute a new publication - this is in fact a standard structure for publishing text artefacts. So, what is the best way to cite these annotations as a coherent argument? A DOI, or via the URI from ResearchSpace? We had thought previously about the ability to link to individual annotations but this feels somehow different to citing a collection of annotations. What makes it even more exciting is the ability to change the viewer settings for each annotation in turn, perhaps giving an insight into the reading process (in the case of the text artefacts). It would look nice played as an animation too - cycling through the annotation bookmarks in turn.

WordPress plugin to embed the RTI Viewer

The final component is easy deployment of the viewer.
Hembo already developed a WordPress plugin to use with the MaterialObjects browser and so it is a natural step for us to implement the new RTI Viewer in such a way that it can be easily embedded in WordPress and exposed to allow control from the WordPress page, e.g. setting viewer parameters based on a bookmark referenced on the page, or even perhaps using WordPress tags to define preset RTI views.
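As a back-of-the-envelope illustration of the tiled-data question above: a tiled pyramidal image of the kind IIP Image serves stores successively halved resolutions until the image fits in a single tile, so even a gigapixel RTI layer needs only a handful of levels. The 256-pixel tile size here is an assumption for illustration, not a requirement of IIP Image:

```python
import math

def pyramid_levels(width, height, tile=256):
    """Number of resolution levels in a pyramid that halves each level
    until the longest side fits within one tile."""
    longest = max(width, height)
    levels = 1
    while longest > tile:
        longest = math.ceil(longest / 2)
        levels += 1
    return levels

# A "gigapixel-scale" capture of 40000 x 30000 pixels:
print(pyramid_levels(40000, 30000))  # -> 9
```

Each RTI coefficient layer (PTM or HSH) would need its own such pyramid, which is why the encoding and compression of the tiled coefficient data is the interesting open question.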
  10. Thanks Eleni. These are great ideas! We know of ongoing work in most of the areas e.g. the CHI CARE tool and Lindsay MacDonald's application to interact with a pair of PTM files. In terms of the annotations, one reason for choosing IIP Image (if we do in the end) will be the ongoing work on the Andrew W. Mellon Foundation ResearchSpace project. All the best. Graeme
  11. Hi Taylor, we definitely aren't planning on replacing the RTI Viewer. The idea of the project is to promote sharing and commenting upon RTI data and to bring it to new audiences, and given constraints on processing, network access etc. I would see the RTI Viewer as the primary means of interaction in the future. Comments welcome though. I am actually just writing a post about the plans for the project, which starts officially on 1 June. As I said in the first post we are going to do all the development in the open so that way we can hopefully get loads of input from the RTI community. In the application we proposed IIP Image MOO Viewer http://iipimage.sourceforge.net/documentation/iipmooviewer/ as the framework to extend. I would really welcome thoughts on this. So far my thinking has been that IIP Image has a broad user base and works well for huge images. The HTML5 viewer works well on tablets and there has been some work to give tablet-specific functionality. So, I hope this can be one direction that this project can take. A big issue of course is the management of tiled RTI data. There has been work on this already, notably using SpiderGL for rendering PTM data. Keep the ideas coming please! All the best. Graeme
  12. The UK Arts and Humanities Research Council (AHRC) have provided follow-on funding to support the development of an on-line, open source RTI viewer that works on all platforms. We would welcome all input to the project and we will be sharing all of our progress as the project develops. More details to follow ASAP. There is a round-up of the activities from the previous project here: http://acrg.soton.ac.uk/tag/rtisad/ You can follow the project via @AHRCRTI and my input via @GraemeEarl
  13. We have started to use this as the default way of showing RTIs on the ACRG blog thanks to Hembo: http://acrg.soton.ac.uk/tag/rti-example/