Content archived on 2024-05-24

Being There - Without Going

Deliverables

The ‘Place Probe’ incorporates a range of stimuli and techniques aimed at articulating a person’s sense of place. It has been developed, used and revised. Drawing on the experience of the previous empirical studies, it was decided to include the following instruments within the probe.

The Visitors’ Book. Research by Turner and Turner (2003) has highlighted the written reports contained in visitors’ books as a source of rich data about place. Such reports have the advantage of being prompted by open-ended questions, e.g. ‘Please tell us about your experience’ rather than ‘Tell us about the lighting’; hence they do not steer people towards answers on specific topics.

Sketch Maps. Sketch maps provide information on the layout and key features of a location. In this case the accuracy of the map is not of prime concern; rather, it is the depiction of those aspects of the place that people remember, for example a tree, building or seating area. They can also be used to provide additional information, such as where people were standing or their paths through the environment.

Salient Features. This section of the probe asks participants to rate the three most salient features of the environment. The aim is to establish the most important characteristics of the place, both to advise the designers of a virtual place and to evaluate how effective the virtual scene is. The probe asks: “Pick 3 features of the environment that you remember and rank them in order of importance”.

Semantic Differentials. In this instrument participants are asked to rate various features of the environment. This part of the probe combines Osgood’s semantic differentials (Osgood, et al., 1953) with Relph’s (Relph, 1976) three aspects of place: physical features, activities afforded and affect engendered. Participants are asked to rate the environment on the scale shown in Table 2.
Each bipolar pair is rated on a five-point scale: Very - Quite - Neither - Quite - Very.

Attractive - Ugly
Big - Small
Colourful - Colourless
Noisy - Quiet
Temporary - Permanent
Available - Unavailable
Versatile - Limited
Interactive - Passive
Pleasant - Unpleasant
Interesting - Boring
Stressful - Relaxing

Table 2: Semantic differentials

Select a Photograph. A set of photographs is taken of the real-world location. These are given to the participants in the study, who are asked to select the one that best represents their experience of the location they had visited or were visiting.

Six Words. The final part of the probe asks people to write down the six words that best describe their experience of being in a particular place.

Work with the Place Probe version 1 indicated that there was clearly some ‘mileage’ in the approach, at least from the perspective of gathering rich, contextual data that could be used to critique virtual representations of real places. However, the second purpose of the probe, namely to communicate between evaluators, designers and engineers, had not been successful: the data from the Place Probe was too vague. It was also felt that there were important aspects of places that were not being captured.

The Place Probe version 3 demonstrates a number of developments over version 1. One key aspect that was only implicit in version 1 was sound. As the project progressed it became increasingly clear that the soundscape accompanying any visual representation is a key component of the sense of place. Accordingly, a separate section of the Place Probe was devoted to sound. The other main finding from version 1 was that the semantic differentials provided an effective and quick method of data collection and analysis. The aim of the probe was to provide data of use to the designers of virtual environments that seek to capture particular aspects of the real world.
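The differential ratings lend themselves to simple quantitative summaries. As an illustration only (the project's actual analysis procedure is not specified here), a minimal Python sketch that averages the Table 2 ratings across participants, mapping the five-point scale onto -2..+2:

```python
# Hypothetical sketch: aggregating Place Probe semantic differentials.
# The bipolar pairs are taken from Table 2; the five-point scale
# (Very - Quite - Neither - Quite - Very) is mapped onto -2..+2,
# negative values leaning towards the left-hand adjective.

PAIRS = [
    ("Attractive", "Ugly"), ("Big", "Small"), ("Colourful", "Colourless"),
    ("Noisy", "Quiet"), ("Temporary", "Permanent"),
    ("Available", "Unavailable"), ("Versatile", "Limited"),
    ("Interactive", "Passive"), ("Pleasant", "Unpleasant"),
    ("Interesting", "Boring"), ("Stressful", "Relaxing"),
]

def mean_profile(responses):
    """Average each differential across participants.

    responses: list of dicts mapping a (left, right) pair to a score in -2..2.
    Returns a dict mapping each pair to its mean rating, which can then be
    compared between the real place and its virtual counterpart.
    """
    return {pair: sum(r[pair] for r in responses) / len(responses)
            for pair in PAIRS}

# Two illustrative participants rating the same place:
p1 = {pair: 0 for pair in PAIRS}; p1[("Attractive", "Ugly")] = -2
p2 = {pair: 0 for pair in PAIRS}; p2[("Attractive", "Ugly")] = -1
profile = mean_profile([p1, p2])
print(profile[("Attractive", "Ugly")])  # -1.5
```

Comparing such mean profiles for a real place and its virtual rendition gives the quick, designer-readable summary that version 1 showed the differentials to be good at.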
An explicit intention of the probe was to find out what is missing from the experience of a virtual environment when compared with its real-world counterpart, rather than simply to provide a quantitative score for place or presence. It is contended that such an approach, when combined with traditional questionnaire methods such as the ITC-SOPI, ITQ and MEC, will provide greater insight into the level of presence experienced by people, and how this is affected by their sense of place. The method is qualitative in nature, which of course introduces a series of issues with data capture, interpretation and reporting. However, it is believed that appropriate methods of establishing inter-rater reliability, together with the multiple sources of data within the probe, overcome these issues.
An algorithm for the generation of new images was developed. The algorithm starts from a sequence of images taken by a camera that either moves sideways or rotates around a fixed point in space. By sampling strips from frames in the sequence, where the strip position varies as a function of the input camera location, the algorithm generates images that describe how the scene looks from new positions in space. The new images thus generated correspond to a new projection model defined by two slits, called the X-Slits projection. In this projection model every 3-D point is projected by a ray defined as the line that passes through that point and intersects the two slits. The intersection of the projection rays with the imaging surface defines the image. By simple manipulations of the strip sampling function, we can change the location of one of the virtual slits, providing a virtual walkthrough of an X-Slits camera. All this is done in real time and with photo-realistic quality. These qualities make the technology unique and potentially useful for real-time applications of image-based rendering.
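The strip-sampling idea can be illustrated with a toy sketch. Assuming frames from a sideways-moving camera stored as a NumPy array, the following composes a new view by pasting one narrow vertical strip per input frame, with the strip position given by a user-supplied function; the function names and the linear strip law are illustrative, not the project's implementation:

```python
import numpy as np

def xslits_mosaic(frames, strip_fn, strip_width=1):
    """Compose a new view by pasting one vertical strip from each input frame.

    frames:   array of shape (n_frames, height, width) from a camera that
              moves sideways through the scene.
    strip_fn: maps the frame index (i.e. the input camera position) to the
              column at which the strip is sampled. Changing this function
              moves a virtual slit, which is how new viewpoints are obtained.
    """
    n, h, w = frames.shape
    cols = []
    for i in range(n):
        c = int(strip_fn(i)) % w
        cols.append(frames[i, :, c:c + strip_width])
    return np.concatenate(cols, axis=1)

# Toy input: 4 frames of 2x6 pixels, frame i filled with the value i.
frames = np.stack([np.full((2, 6), i) for i in range(4)])

# A linear strip function: the sampled column drifts across the frame,
# emulating a virtual slit placed at a new position in space.
mosaic = xslits_mosaic(frames, lambda i: 1 + i)
print(mosaic.shape)  # (2, 4)
```

In the real system the sampling function is derived from the calibrated camera path and the desired slit positions, and the strips are blended rather than butted together; the sketch only shows the core indexing scheme.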
It is at present technically impossible to supply all the information that can possibly be collected by the human visual system. In creating virtual realities, it is therefore important to concentrate on improving those parameters of the display that are readily detected by the visual system, and to neglect those physical imperfections of the display that the visual system is unable to detect. In consequence, the relevant parameters of the display should be optimised as far as possible. Moreover, one may use viewing conditions that degrade the visual input, thereby making the virtual display harder to distinguish from the real world. Under such degraded conditions, even an imperfect simulation of the world will give an acceptable feeling of presence, since the visual system cannot detect the imperfections present in the display. One contribution of this project was to provide general advice on how the necessary imperfections of virtual reality may be hidden, and to perform quantitative psychophysical experiments on some parameters of the simulation that may be critical for the feeling of presence. The project started out with the question of how observers experience the non-perspective distortions introduced by a new technique for creating photo-realistic 'virtual realities', the two-slit approach, which lies at the heart of the BENOGO project. In this project, image-based rendering presentations (IBRs) are the central issue, and psychophysical techniques are used to measure the influence exerted by the projections inherently linked to this technique. In addition, two important parameters with a possible influence on the impression of Presence, luminance and contrast, have been investigated quantitatively. The initial aim was to evaluate to what extent, under what circumstances, and, primarily, due to which cues, IBR images created on the basis of the two-slit approach differ from traditional perspective images, and how these factors may hinder the sense of 'Presence'.
More specifically, we measured the influence of geometrical distortions of three-dimensional reconstructions caused by the two-slit technique, as well as the influence of variations in both luminance and contrast. Over the course of the project, three different lines of experimentation were pursued. The first was to study the sensitivity of the human visual system in detecting distortions for both stationary and moving objects, using well-defined (and hence relatively simple) stationary and moving visual stimuli of the type typically used in psychophysical experiments. It turned out that moving stimuli allow greater distortions, and hence greater projection discrepancies, than stationary ones, an insight that could be used in scenario design. The second aim was to apply these (and other) results so as to exploit the imperfections of human vision when creating virtual realities. The results of these investigations are presented in detail in Deliverable 7.1. The third and most complex approach was to investigate the effect of non-perspective imaging in spatial visualisations, to clarify the amount of perspective distortion tolerated for natural stimuli, using complex photo-realistic images. The purpose of this study was to inform the demonstrator design process concerning the necessary and relevant imperfections of the human visual system and how these can be exploited to hide imperfections in the visual renderings of the project's technology, as well as to determine the limits of distortion, and hence of spatial extrapolation, tolerated by different observers. New concepts had to be developed for psychophysical experiments serving this purpose. Since the first approach lays the foundations for the other two, we started there, measuring detection thresholds of human observers for distortions in both straight and curved line stimuli. The results were presented in Deliverable 7.1.
The present deliverable therefore concentrates on the results obtained towards the third aim: measuring the sensitivity of observers to geometrical distortions as well as to variations in both image luminance and contrast. To be able to perform sensible psychophysical measurements with ill-defined stimuli such as natural scenes, a new psychophysical approach was developed for measuring the quality of IBRs, as mentioned above. The results show a relatively high tolerance of observers for geometrical distortions, with a significant effect of perceptual learning, in the sense that prior experience exerts a certain influence on stimulus preference, and an unexpectedly large amount of inter-observer variance regarding the optimal level of contrast and luminance. This inter-individual variation suggests that individual observers should be enabled to choose the luminance and contrast levels they prefer. This simple adjustment may considerably increase the subjective impression of Presence.
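Detection thresholds of the kind measured here are classically estimated with adaptive staircase procedures. A minimal sketch of a 1-up/1-down staircase, shown purely as an illustration of the technique (the procedure actually used in the project may differ):

```python
def staircase_threshold(detects, start=10.0, step=1.0, n_reversals=8):
    """Simple 1-up/1-down staircase, a standard psychophysical procedure.

    detects(level) -> bool: whether the observer reports seeing the
    distortion at the given magnitude (a stand-in for a real trial).
    The distortion level is decreased after each detection and increased
    after each miss; the threshold is estimated as the mean of the levels
    at which the direction of change reversed.
    """
    level, direction, reversals = start, -1, []
    while len(reversals) < n_reversals:
        new_dir = -1 if detects(level) else +1   # detected: reduce magnitude
        if new_dir != direction:                 # direction change = reversal
            reversals.append(level)
            direction = new_dir
        level = max(0.0, level + new_dir * step)
    return sum(reversals) / len(reversals)

# Deterministic simulated observer with a true threshold of 4 units:
thr = staircase_threshold(lambda x: x >= 4.0)
print(thr)  # 3.5
```

With a real (noisy) observer the procedure converges to the level detected on roughly half the trials; variants such as 2-down/1-up target other points on the psychometric function.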
Techniques have been developed for visually augmenting real scenes with dynamic virtual objects. The techniques are applicable whether the real scenes are visualised to the user with Image-Based Rendering or simply shown as a live video stream. The developed techniques centre on emulating the illumination conditions of the real scene when rendering the augmented virtual objects: if the virtual objects are not rendered with illumination conditions consistent with those of the real scene, the augmented objects stand out as conspicuous. Specifically, three techniques addressing sub-problems in this area have been developed. First, a framework for emulating the illumination conditions in a static scene has been developed. The approach applies equally well to indoor and outdoor scenery. The emulation is based on estimating the parameters of a set of point light sources that simulate the illumination conditions in the scene. Secondly, a technique for relighting images of real scenes has been developed. The technique can completely alter the illumination conditions in images of a scene. It requires a full 3D model of the scene, and is therefore only applicable to scenes with man-made structures. Thirdly, a framework for estimating the changing illumination conditions of outdoor scenes has been developed. The framework makes it possible to model the illumination conditions from a video stream of a scene, responding correctly to changes in sun position, in the colour of the sun and the skylight, and in cloud cover. The main scientific result for all three techniques is that they are capable of running in real time on standard computers. All the listed results are applicable to interactive, real-time augmented reality systems. It is envisaged that augmented reality will be an important technology in many areas, ranging from entertainment and education to design prototyping and interactive apparatus repair.
Based on these results, a prototype augmented reality system is being developed for showcasing the technology to potential partners (research and/or commercial). The system will enable a user to visually explore a real scene by panning and tilting a flat-panel screen mounted on a pole. The screen will show a video stream of the real world as well as arbitrary augmentations of, for example, historic buildings or events. The augmentations will look very close to photo-realistic, since their illumination conditions will mimic those of the real scene.
The importance of sound in enhancing the sense of presence was investigated by augmenting the different scenarios with sonic cues. Several multichannel soundscapes were designed, using sounds recorded in different locations and spatialised in an 8-channel surround sound setup. Results confirmed the importance of auditory cues in enhancing the sense of presence when added to visual cues, and guidelines were provided concerning sound design for virtual reality. In identifying the distance and location of different objects, visual cues were dominant over auditory cues; however, auditory memory of objects present in the scenario was stronger than visual memory. Dynamic sounds enhanced navigation and immersion, especially in static scenarios such as the photorealistic ones present in this project. To analyse the importance of interactive sounds controlled by subjects, a footstep synthesizer based on physical models was designed. The footstep sounds were generated synthetically in real time while subjects navigated the VR environment wearing sandals fitted with pressure-sensitive sensors. Results show that self-produced sounds significantly enhance the sense of presence.
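The interactive footstep chain can be sketched as a threshold detector on the sandal pressure stream, triggering the physical-model synthesizer at each heel strike. All names, values and the detection rule below are illustrative, not the project's actual implementation:

```python
def footstep_events(pressure, threshold=0.5):
    """Detect heel strikes in a pressure-sensor stream (illustrative sketch).

    A footstep sound would be synthesised each time the pressure rises
    through the threshold; requiring the foot to lift (pressure falling
    below the threshold) before the next trigger avoids re-firing while
    the foot stays planted.
    """
    events, foot_lifted = [], True
    for t, p in enumerate(pressure):
        if p >= threshold and foot_lifted:
            events.append(t)          # here: trigger the footstep synthesizer
            foot_lifted = False
        elif p < threshold:
            foot_lifted = True
    return events

# Two steps in a short toy stream of pressure samples:
samples = [0.0, 0.2, 0.8, 0.9, 0.1, 0.0, 0.7, 0.3]
print(footstep_events(samples))  # [2, 6]
```

In the real system the trigger latency matters: the synthesized sound must follow the physical step closely enough to be perceived as self-produced, which is why the synthesis runs in real time.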
Image acquisition and image volume construction using circular field-of-view optics (180-degree view angle) for image-based rendering technology has been developed. The technology consists of motion platforms, optics calibration, image remapping and pre-processing software, and special image volume sampling functions. Existing digital cameras (e.g. Canon EOS-1Ds, 11 Mpix; Kyocera FineCam M410R, 4 Mpix) can be combined with existing optical components (Nikon FC-E8/9, Sigma 8mm f/4.0 EX) and mechanical components, together with the software developed, to acquire circular and linear areas up to 1.5 metres in diameter and 6 metres in length. The main innovative feature is the use of circular field-of-view images, whose calibration and manipulation allow complete fields of view to be captured and visualised, which is not possible with conventional perspective optics. Potential applications can be found in simulators, telecommunication, the games industry, interactive art, and cultural heritage preservation.
A software platform has been developed for real-time visualisation of real scenes using Image-Based Rendering (IBR). The visualisation platform enables a user to visually explore a real scene by moving around within a given region-of-exploration (REX). Everywhere inside the REX the user has full freedom to look in all directions (omni-directional viewing). The software platform supports a range of display systems: it supports Head-Mounted Displays as well as multi-surface display systems such as CAVEs. Regardless of the display technology employed, the scene is rendered in response to user tracking information giving the user's position and viewing direction. The scene is visualised in stereo at video frame rate. In addition to rendering the real scene based on IBR, the platform can also render virtual objects superimposed on the IBR imagery, and it can render sounds on multi-speaker configurations or through headphones. The size of the REX is limited by the number of images that can be stored on the computer: since all images have to be held in memory, there is a limit to how many images can be used for a given scenario, and thereby a limit to the size of the REX. When running the system on a single computer, the REX is a disc with a radius of up to 60 centimetres. Alternatively, the system can visualise the scene as viewed "through a window", in which case the user can move freely around relative to the window (although the window then limits the available field of view). The platform supports distributing the image database and the rendering across any number of networked computers, which in principle enables the system to run with an arbitrarily large image database without performance reduction.
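One way such a distributed image database could be partitioned, sketched here purely as an illustration (the project's actual partitioning scheme is not described in this text), is to shard images across render nodes by their capture position within the disc-shaped REX:

```python
import math

def assign_images_to_nodes(image_positions, n_nodes):
    """Partition an IBR image database across render nodes (hypothetical sketch).

    Each image is captured at a known (x, y) position inside the disc-shaped
    region-of-exploration (REX). Assigning images to nodes by angular sector
    keeps each node's share of the database in memory, so the total database
    can grow with the number of machines.
    """
    shards = [[] for _ in range(n_nodes)]
    for idx, (x, y) in enumerate(image_positions):
        angle = math.atan2(y, x) % (2 * math.pi)        # sector angle in [0, 2*pi)
        node = int(angle / (2 * math.pi) * n_nodes) % n_nodes
        shards[node].append(idx)
    return shards

# Four images on the axes of the REX disc, split over two nodes:
positions = [(0.5, 0.0), (0.0, 0.5), (-0.5, 0.0), (0.0, -0.5)]
print(assign_images_to_nodes(positions, 2))  # [[0, 1], [2, 3]]
```

Sector-based sharding also localises queries: rendering a view only touches images captured near the user's position, so each frame needs data from few nodes.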
The main scientific/engineering result of this effort is the demonstration that real-time image-based rendering can be performed in stereo on a single standard computer, requiring only a head-mounted display and tracking equipment, both of which are readily available commercially. This makes it feasible for any company or organisation to use the IBR technology to present users or customers with photo-realistic representations of real places and let them explore such places interactively. A transportable, stand-alone version of this system has been assembled and taken to exhibitions, where hundreds of people have experienced the technology first-hand. A copy of the system has been assembled and brought into operation at a scientific collaborator outside the project.
