Content archived on 2024-05-27

MULTImodal and multiSENSory interfaces for intEraction with muscolo-skeletal Models

Deliverables

Optical Object Tracking and Patient Evaluation is an innovative use of a single optical system for several distinct tasks. The first task is tracking the interaction objects used with the Multisense application; the second is evaluating a patient being assessed with the Multisense application (examples of evaluations the optical system could perform include pre- and post-operative walking ability, joint range-of-movement estimation, joint location estimation and segment length estimation). The two tasks have very different requirements: interaction-object tracking is typically a small-volume, real-time task, whereas patient assessment potentially requires a larger volume but is not necessarily real time. The challenge is twofold: the system must perform well in each use separately, and it must also switch from one use to the other without inconveniencing the user. The potential of such an optical tracking solution is significant, since it allows expensive hardware to serve multiple purposes, saving on installation cost. Furthermore, integrating patient evaluation into the Multisense application makes this very important part of the clinical process an integral part of the whole solution.
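A hedged sketch, in Python, of how such dual-use mode switching might be organised; the class names, tracking volumes and frame rates below are illustrative assumptions, not the project's actual configuration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrackingMode:
    """Capture settings for one use of the shared optical hardware."""
    name: str
    volume_m: tuple[float, float, float]  # tracked volume (x, y, z), metres
    frame_rate_hz: float                  # camera acquisition rate
    realtime: bool                        # stream poses live vs. record for later analysis

# Hypothetical presets reflecting the two uses described above.
INTERACTION = TrackingMode("interaction-object", (0.5, 0.5, 0.5), 120.0, realtime=True)
ASSESSMENT  = TrackingMode("patient-assessment", (4.0, 2.0, 2.5),  60.0, realtime=False)

class OpticalTracker:
    """Minimal mode-switching wrapper; `_apply` stands in for real camera setup."""
    def __init__(self) -> None:
        self.mode: TrackingMode | None = None

    def switch_to(self, mode: TrackingMode) -> None:
        if self.mode is not mode:
            self._apply(mode)   # reconfigure cameras without inconveniencing the user
            self.mode = mode

    def _apply(self, mode: TrackingMode) -> None:
        print(f"configuring cameras for {mode.name}: "
              f"{mode.volume_m} m volume at {mode.frame_rate_hz} Hz")

tracker = OpticalTracker()
tracker.switch_to(INTERACTION)  # small volume, real time
tracker.switch_to(ASSESSMENT)   # larger volume, offline gait analysis
```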
The hand exoskeleton is a device developed to provide force feedback to the index, middle and ring fingers and to the thumb. For the first three fingers, force feedback is provided at the proximal and distal phalanges, while for the thumb only the distal phalanx is currently active. The exoskeleton structure sits on the dorsal side of the hand and applies forces from that direction. The feedback forces are generated by DC motors mounted in a low-profile power pack and are transmitted to the fingers by low-friction pull-cables. Finger flexion is measured by a combination of flexible resistive sensors integrated into a soft Lycra glove and custom-made linear electromagnetic sensors embedded in the exoskeleton's metallic structure. Unlike in other systems, the glove is part of the exoskeleton structure, so the device is faster to put on and take off. The exoskeleton is designed to fit a range of hand sizes; for this purpose it incorporates adjustment levers that allow fast and easy adjustment of the metallic structure for the three fingers.
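As an illustration of the cable transmission and dual-sensor measurement described above, here is a minimal Python sketch; all numeric parameters (pulley radius, cable efficiency, flexion range) and the simple sensor-fusion rule are hypothetical assumptions, not the device's actual calibration:

```python
# A minimal sketch (all parameters hypothetical) of how a commanded fingertip
# force might map to motor torque in the cable transmission described above:
# motor torque appears as cable tension, which the linkage applies to the
# phalanx from the dorsal side.

CABLE_EFFICIENCY = 0.92        # assumed friction loss in the pull-cable run
MOTOR_PULLEY_RADIUS_M = 0.006  # assumed winch radius on the DC motor shaft

def motor_torque_for_force(fingertip_force_n: float) -> float:
    """Torque (N*m) the DC motor must produce for a given feedback force (N)."""
    tension = fingertip_force_n / CABLE_EFFICIENCY
    return tension * MOTOR_PULLEY_RADIUS_M

def finger_flexion_deg(resistive_norm: float, electromagnetic_norm: float) -> float:
    """Fuse the two normalised (0..1) sensor readings into one flexion estimate.
    A simple weighted average; the real device may use a calibrated model."""
    FLEXION_RANGE_DEG = 90.0   # assumed usable flexion range
    fused = 0.5 * resistive_norm + 0.5 * electromagnetic_norm
    return fused * FLEXION_RANGE_DEG

print(motor_torque_for_force(2.0))      # torque for a 2 N feedback force
print(finger_flexion_deg(0.40, 0.44))   # fused flexion estimate in degrees
```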
The Multimod Display and Interaction framework is a software architecture enabling the easy development of multimodal interfaces and applications integrating speech recognition, haptic feedback, autostereoscopic display and tracking devices. The framework architecture can be divided into three layers:
- The Multimod Foundation Layer (MFL): a portable software library providing all classes required to access, with reasonable abstraction, and efficiently manage the Multimod storage space, as well as to process the data it contains. This layer contains the Visualisation Toolkit (VTK), a 3D computer graphics and visualisation library available as public-domain software; all the classes necessary to manage the Multimod Storage Format; new high-performance visualisation classes; and VCollide, a collision detection library available as public-domain software. All the classes in the MFL have been integrated with the VTK library. The library also contains a new data representation called the Virtual Medical Entity (VME): a collection of MFL objects organised in a pre-defined structure that provides an effective abstraction of time and space, independent of the resolution of the available data. It also provides a general registration framework in which each base entity's pose (position plus orientation) is defined with respect to one global reference system, either directly or through a hierarchical set of transformations; this system also copes with the possibility that an object's shape changes over time.
- The low abstraction layer: a portable software library providing all classes needed to combine multiple MFL classes into objects of higher abstraction and functionality. The library provides a variety of services necessary to create a complete application on top of the MFL: complete GUI elements, undo, management of the display area, etc. These services are called MAF Operations and MAF Services. The graphical user interface of the MAF has been abstracted by adopting the wxWindows library, which is non-commercial and ensures portability between operating systems.
- Currently the framework is in place and the multimodal interaction interface has been designed; the multimodal interaction will be implemented in the coming months.
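As an illustration of the VME registration framework described above, here is a minimal Python sketch of hierarchical pose composition; the class and method names are illustrative, not the MAF's actual API:

```python
import numpy as np

class VME:
    """Sketch of a Virtual Medical Entity node: a pose (4x4 homogeneous
    transform) defined relative to a parent and composed up to the global
    reference system, as in the registration framework described above."""

    def __init__(self, name: str, pose: np.ndarray, parent: "VME | None" = None):
        self.name, self.pose, self.parent = name, pose, parent

    def global_pose(self) -> np.ndarray:
        """Compose transforms from the root down to this entity."""
        if self.parent is None:
            return self.pose
        return self.parent.global_pose() @ self.pose

def translation(x: float, y: float, z: float) -> np.ndarray:
    t = np.eye(4)
    t[:3, 3] = (x, y, z)
    return t

# A tiny hierarchy: pelvis in the global frame, femur relative to the pelvis.
pelvis = VME("pelvis", translation(0.0, 0.0, 1.0))
femur  = VME("femur",  translation(0.1, 0.0, -0.4), parent=pelvis)
print(femur.global_pose()[:3, 3])   # femur origin expressed in the global frame
```

Time-varying shape, as the text notes, would be handled by letting each node's data (and pose) be a function of time rather than a constant.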
To support two-handed haptic operation, a desktop haptic device was developed consisting of two 3-DOF closed-chain mechanisms, each a classic five-bar planar device. The haptic display is constructed primarily from aluminium, with some sections (joints) made from steel. Six 25 W motors provide actuation at the six active joints of the mechanism, and joint position is measured by digital optical encoders. Power from the motors to the joint pulleys is transmitted through a tensioned cable (capstan) drive that also provides a gear ratio of 11:1; this gear ratio enables the use of smaller motors, reducing the overall inertia of the system. To enable the execution of two-handed operations, as required by the surgical access task, the device can be configured to work in two different modes: a double independent configuration, 2 x (6-DOF input, 3-DOF feedback), or a coupled configuration, 1 x (6-DOF input, 5-DOF feedback). In the first mode the left and right mechanisms are independent, providing two separate haptic access points; in the second the two mechanisms are connected together to form a single system.
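A minimal Python sketch of the kinematic consequences of the 11:1 capstan drive; only the 11:1 ratio comes from the text, while the encoder resolution is an assumed value:

```python
import math

ENCODER_COUNTS_PER_REV = 4096   # assumed quadrature encoder resolution
CAPSTAN_RATIO = 11.0            # gear ratio of the tensioned cable drive (from the text)

def joint_angle_rad(encoder_counts: int) -> float:
    """Joint angle from motor-side encoder counts: the capstan drive divides
    motor rotation by 11, so joint resolution is 11x the encoder's."""
    motor_angle = 2.0 * math.pi * encoder_counts / ENCODER_COUNTS_PER_REV
    return motor_angle / CAPSTAN_RATIO

def motor_torque_for_joint_torque(joint_torque_nm: float) -> float:
    """The same 11:1 ratio multiplies motor torque at the joint, which is
    why smaller, lower-inertia motors suffice."""
    return joint_torque_nm / CAPSTAN_RATIO

print(math.degrees(joint_angle_rad(4096)))   # one motor revolution ~= 32.7 deg at the joint
print(motor_torque_for_joint_torque(1.1))    # 0.1 N*m at the motor gives 1.1 N*m at the joint
```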
A framework for the rapid design of speech interfaces aimed at controlling visualisation software has been developed. The speech framework is compatible with low-level speech technology components that are pre-existing know-how of KTH, but it is not explicitly dependent on them; in particular, any speech recogniser supporting context-free grammars and any speech synthesiser supporting SAPI can be connected with some effort. The speech interface is developed as a module compatible with the MAF; thus, exploitation will take place within the framework of MAF exploitation. An additional result is a speech utterance detector capable of determining the start and end points in time of a spoken utterance. Such a detector is often used as a pre-processor to a speech recogniser, whose task is to determine the textual content of the utterance. The result is a new algorithm for speech utterance detection.
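The project's new detection algorithm itself is not reproduced here; the following Python sketch shows a conventional energy-based endpoint detector of the kind such a pre-processor implements, with illustrative frame size and threshold:

```python
import numpy as np

def detect_utterance(samples: np.ndarray, rate_hz: int,
                     frame_ms: float = 20.0, threshold_db: float = -35.0):
    """Return (start_s, end_s) of the span of frames whose RMS level exceeds
    a threshold, or None if no speech is found. Frame size and threshold are
    illustrative defaults for signals normalised to [-1, 1]."""
    frame_len = int(rate_hz * frame_ms / 1000.0)
    n_frames = len(samples) // frame_len
    active = []
    for i in range(n_frames):
        frame = samples[i * frame_len:(i + 1) * frame_len].astype(np.float64)
        rms = np.sqrt(np.mean(frame ** 2)) + 1e-12   # avoid log of zero
        active.append(20.0 * np.log10(rms) > threshold_db)
    if True not in active:
        return None
    start = active.index(True)
    end = len(active) - 1 - active[::-1].index(True)
    return (start * frame_len / rate_hz, (end + 1) * frame_len / rate_hz)

# 1 s of silence, 0.5 s of 'speech', 0.5 s of silence (signal illustrative).
rate = 16000
sig = np.concatenate([np.zeros(rate),
                      0.3 * np.sin(2 * np.pi * 200 * np.arange(rate // 2) / rate),
                      np.zeros(rate // 2)])
print(detect_utterance(sig, rate))   # approx (1.0, 1.5)
```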
An arm-based haptic interface has been designed, constructed and tested for applications in VR-based simulation of “open” surgical procedures. The exoskeleton uses novel actuator design methods to produce a system with high dexterity (7 DOF from shoulder to wrist), low mass (arm mass 1.2 kg) and high power and torque (peak torque at the shoulder 125 Nm). The range of exoskeletal movement is extensive and covers most of the human arm's work volume. Sensory feedback is provided from each joint in the form of position, force and muscle-pressure data. Software has been developed and tested for coordination and control of the joint motions; using this code in conjunction with the sensing capabilities, the exoskeleton can simultaneously control both position and force (joint stiffness/compliance). This control is regulated to provide restraining/constraining and augmentation feedback forces at the shoulder, elbow and wrist joints, producing a generalised arm-restraint sensation. The novel actuators provide intrinsic compliance regulation, which is well suited to simulating human tissue and to safety in close proximity to the operator. The first prototype has been completed and installed at CINECA, and some initial levels of integration have been achieved ahead of schedule.
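A minimal Python sketch of a joint-stiffness (impedance) control law consistent with the simultaneous position/force control described above; the gains and the per-joint layout are illustrative assumptions, not the project's controller:

```python
import numpy as np

def joint_torques(q: np.ndarray, q_des: np.ndarray, qdot: np.ndarray,
                  stiffness: np.ndarray, damping: np.ndarray,
                  tau_ff: np.ndarray) -> np.ndarray:
    """Generic per-joint impedance law: tau = K*(q_des - q) - D*qdot + tau_ff.
    Low K yields a compliant, tissue-like feel; tau_ff carries the
    restraining/augmentation force offsets."""
    return stiffness * (q_des - q) - damping * qdot + tau_ff

# 7-DOF arm (shoulder to wrist, per the text); one sample control step.
q      = np.zeros(7)           # measured joint angles (rad)
q_des  = np.full(7, 0.1)       # commanded posture
qdot   = np.zeros(7)           # measured joint velocities (rad/s)
K      = np.full(7, 20.0)      # illustrative stiffness gains (N*m/rad)
D      = np.full(7, 1.5)       # illustrative damping for stability
tau_ff = np.zeros(7)           # feedforward restraint torques
print(joint_torques(q, q_des, qdot, K, D, tau_ff))
```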
A complete musculo-skeletal model of the lower limb (both right and left sides) has been developed starting from the data of the Visible Human Project. The model is complete with the surfaces of all bones, muscles and tendons, and includes the origins and insertions of all muscles on the bones as well as the direction of the main muscle fibres. A spring model of normal human muscle and ligament anatomy around the hip was created to represent muscle deformation: for each muscle or ligament there is a surface model, a curvilinear medial axis, and a line-of-action axis used to configure the springs.
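A minimal Python sketch of one such spring, configured along its line of action; the stiffness, rest length and coordinates are illustrative values:

```python
import numpy as np

def spring_force(origin: np.ndarray, insertion: np.ndarray,
                 rest_length: float, k: float) -> np.ndarray:
    """Hookean force on the insertion point of one muscle/ligament spring
    acting along the line from origin to insertion."""
    axis = insertion - origin
    length = np.linalg.norm(axis)
    direction = axis / length
    # Positive extension pulls the insertion back toward the origin.
    return -k * (length - rest_length) * direction

origin    = np.array([0.0, 0.0, 0.0])     # muscle origin on the pelvis
insertion = np.array([0.0, 0.05, -0.40])  # insertion on the femur
print(spring_force(origin, insertion, rest_length=0.38, k=500.0))
```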
HipOp_MS is a pre-operative planning environment for hip replacement. It is a flexible, versatile and articulated software system able to integrate stereoscopic and non-stereoscopic visualisation, speech recognition, tracking and haptic functionality, and innovative interaction paradigms. The system integrates advanced visualisation algorithms; support for musculo-skeletal modelling (skin incision and muscle retraction); positioning of the prosthetic components; simulation modules for the evaluation of functional indicators; and modules for the simulation of surgical phases such as femoral neck resection and dislocation. Moreover, the system is a test bed for user validation, requiring flexible configurations to meet the necessary experimental conditions.
The MML pre-processing unit is a software application that allows the creation of a patient-specific musculo-skeletal model. The software imports the patient's CT scan (in DICOM format) and from it obtains the surfaces of bones, muscles and skin, together with the origin and insertion points of all muscles and the main muscle action lines. The musculo-skeletal model is obtained by registering a complete musculo-skeletal atlas to the patient's CT dataset, using both a virtual palpation procedure and an advanced operation that adapts the atlas surfaces to the desired shape.
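A minimal Python sketch of the landmark-based rigid fit that a virtual palpation step typically produces; this is a standard Kabsch/SVD solution, offered as an assumption about the approach rather than the project's exact method:

```python
import numpy as np

def rigid_register(atlas_pts: np.ndarray, patient_pts: np.ndarray):
    """Least-squares rigid registration of corresponding landmarks:
    anatomical points are picked on the atlas and on the patient's CT,
    then the atlas is moved onto the patient. Returns rotation R and
    translation t such that patient ~= R @ atlas + t."""
    ca, cp = atlas_pts.mean(axis=0), patient_pts.mean(axis=0)
    H = (atlas_pts - ca).T @ (patient_pts - cp)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cp - R @ ca
    return R, t

# Three palpated landmarks (coordinates illustrative).
atlas   = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
patient = np.array([[1., 1., 0.], [1., 2., 0.], [0., 1., 0.]])
R, t = rigid_register(atlas, patient)
print(np.round((R @ atlas.T).T + t, 6))   # lands on the patient landmarks
```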
The visualisation software will cover a variety of forms (surface, volume, point, hybrid, X-ray) to produce a flexible package useful for many forms of medical data. It will be written to be compatible with the MAF; thus, exploitation will take place within the framework of MAF exploitation, currently designed to have a public-domain core with (possibly) compatible commercial extensions for the more specialised modules. Most of the modelling will relate to muscles and other deformable soft tissue, where specific models accounting for geometry, deformation during active and passive movement, and surface texturing will be considered; behaviour during cutting will also be included. The overall work will feed back to inform other results within the project.
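A minimal sketch of the volume form using the Python bindings of VTK, the public-domain library already adopted by the MFL; the file name and transfer-function values are placeholders:

```python
import vtk

# Load a CT volume (placeholder dataset name).
reader = vtk.vtkMetaImageReader()
reader.SetFileName("patient_ct.mha")

# Map CT intensities to opacity and colour (values illustrative).
opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(0, 0.0)
opacity.AddPoint(500, 0.15)
opacity.AddPoint(1500, 0.85)
color = vtk.vtkColorTransferFunction()
color.AddRGBPoint(0, 0.0, 0.0, 0.0)
color.AddRGBPoint(1500, 1.0, 0.9, 0.8)

prop = vtk.vtkVolumeProperty()
prop.SetScalarOpacity(opacity)
prop.SetColor(color)
prop.ShadeOn()

mapper = vtk.vtkSmartVolumeMapper()
mapper.SetInputConnection(reader.GetOutputPort())
# An X-ray-like form can be had with mapper.SetBlendModeToMaximumIntensity().
volume = vtk.vtkVolume()
volume.SetMapper(mapper)
volume.SetProperty(prop)

renderer = vtk.vtkRenderer()
renderer.AddVolume(volume)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()
```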
The outcome of this work was the construction of a finger-mounted tactile feedback array: a surface-shape display consisting of a 4x4 pin array actuated by miniature motors located remotely on the forearm. The display can generate forces of up to 1.3 N per tactor and pin displacements of up to 2 mm. The bandwidth at the full 2 mm displacement is greater than 12 Hz, with significantly higher frequencies at the lower displacements needed in texture simulation. The total weight of the device is approximately 275 g, of which less than 15 g loads the finger. Moreover, the device uses Bluetooth wireless technology for truly portable operation and easy interfacing. Because this tactile display is wearable, highly portable and easy to interface with other devices, it is a good candidate as a tactile feedback component in immersive and virtual reality environments.
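A hedged sketch of how such a display might be driven over a Bluetooth serial link; the frame format, port name and byte encoding are hypothetical, as the device's actual protocol is not described:

```python
# Most Bluetooth SPP devices appear as a serial port, so one byte per tactor
# can carry the commanded pin height (hypothetical encoding: 0..255 = 0..2 mm).
import serial  # pyserial

MAX_DISPLACEMENT_MM = 2.0   # per the device description above

def frame_from_heights(heights_mm):
    """Pack 16 pin heights (mm) into a 16-byte frame."""
    assert len(heights_mm) == 16
    return bytes(
        min(255, max(0, round(h / MAX_DISPLACEMENT_MM * 255))) for h in heights_mm
    )

with serial.Serial("/dev/rfcomm0", 115200, timeout=1) as port:  # hypothetical port
    bump = [0.0] * 16
    bump[5] = bump[6] = bump[9] = bump[10] = 2.0   # raise the centre 2x2 pins fully
    port.write(frame_from_heights(bump))
```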
