Final Report Summary - CS-ORION (Compressed Sensing for Remote Imaging in Aerial and Terrestrial Surveillance)
http://www.cs-orion.eu/
Project Objectives:
Remote sensing systems such as unmanned aerial vehicles (UAVs) and terrestrial sensor networks are increasingly used in surveillance, reconnaissance, and intelligence-gathering roles at both the civilian and battlegroup levels. These systems benefit from advances in communications and computing technology which enable the design of low-cost devices that incorporate multimodal sensing, processing, and communication capabilities. Modern remote sensing systems carry payloads providing high-resolution day and night imagery, accurate target geolocation, communications relay, and synthetic aperture radar (SAR).
In CS-ORION, our focus was on the design, testing, and evaluation of compressive sensing architectures for enhancing the high-quality video acquisition and delivery capabilities of remote sensing devices, enabling them to provide efficient remote imaging in aerial and terrestrial surveillance. The project addressed limitations of current video coding methods, which restrict remote sensing devices to offering only low-quality streaming video to the user. In a nutshell, the technical objectives of this project were to employ, implement, and validate the concepts of compressed sensing in the capture, coding, transmission, and reconstruction of image and video for power-constrained remote surveillance systems. Our goal was to pursue a long-term, multi-layer approach that combines expertise from statistical signal processing, data representation theory, and video coding and transmission, to enable robust and high-quality remote imaging. Our approach was to employ compressed sensing signal acquisition principles and Bayesian reconstruction methods, addressing the need for novel video compression techniques that achieve good performance with a computationally light encoder, possibly shifting some of the system complexity to the decoder.
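The encoder/decoder asymmetry described above can be illustrated with a toy NumPy sketch (the signal model, sensing matrix, and ISTA solver below are illustrative assumptions, not project code): the encoder only computes a few random linear measurements, while the decoder carries the cost of sparse recovery.
```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8                          # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # k-sparse signal

Phi = rng.standard_normal((m, n)) / np.sqrt(m)                # random sensing matrix
y = Phi @ x                                                    # computationally light "encoder"

# Decoder: ISTA iterations for min_z 0.5*||y - Phi z||^2 + lam*||z||_1
lam = 0.05
step = 1.0 / np.linalg.norm(Phi, 2) ** 2
z = np.zeros(n)
for _ in range(500):
    grad = Phi.T @ (Phi @ z - y)
    u = z - step * grad
    z = np.sign(u) * np.maximum(np.abs(u) - step * lam, 0.0)

print("relative reconstruction error:", np.linalg.norm(z - x) / np.linalg.norm(x))
```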
Project Achievements:
CS-ORION achieved significant results in a number of technical areas, most notably in:
(i) Design of a Compressive Video Sensing (CVS) Architecture for Remote Sensing Applications; (ii) Compressed Video Classification; (iii) Active Range Imaging; (iv) High Dynamic Range Imaging; (v) Efficient Location Sensing using Compressed Sensing Signal-Strength Fingerprints; (vi) Software Engineering Methods on Deploying CS Algorithms Utilizing GPU Hardware; (vii) Compressed Hyperspectral Sensing; and (viii) Low Light Image Enhancement via Sparse Representations.
More specifically:
• We designed a video compression scheme that addresses the limitations of MPEG and MJPEG compression techniques and can be integrated into onboard video sensing devices with restricted resources. The proposed compressive video sensing method combines a simplified encoding process, obtained by embedding a CS module in an MJPEG-like encoder, with a refinement phase based on inter-frame prediction, transferring motion estimation and compensation to the compressed measurement domain at the decoder.
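The measurement-domain idea behind this encoder can be sketched as follows (the block size, subsampling ratio, and stand-in frames are illustrative assumptions, not the project's parameters): because the block-wise projections are linear, the difference between the measurements of consecutive frames equals the measurements of the inter-frame residual, which is what allows prediction and refinement to take place in the compressed domain at the decoder.
```python
import numpy as np

rng = np.random.default_rng(1)
B, ratio = 16, 0.25                           # block size and subsampling ratio (illustrative)
m = int(ratio * B * B)
Phi = rng.standard_normal((m, B * B)) / np.sqrt(m)   # one sensing matrix reused for every block

def cs_encode(frame):
    """Return per-block compressive measurements for one frame."""
    H, W = frame.shape
    blocks = (frame.reshape(H // B, B, W // B, B)
                   .swapaxes(1, 2).reshape(-1, B * B))
    return blocks @ Phi.T                     # m measurements per block

prev = rng.random((64, 64))                   # stand-in frames
curr = np.clip(prev + 0.01 * rng.standard_normal((64, 64)), 0.0, 1.0)

y_prev, y_curr = cs_encode(prev), cs_encode(curr)
y_residual = cs_encode(curr - prev)
# Linearity: the measurement-domain difference equals the measurements of the residual
print(np.allclose(y_curr - y_prev, y_residual))
```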
• We addressed the problem of video classification from a set of compressed features. We designed and implemented a novel approach to video classification that directly exploits the properties of linear random projections in the framework of CS. This can be of great importance in decision systems with limited power, processing, and bandwidth resources, since classification is performed without handling the original high-resolution video data.
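The principle can be illustrated with a toy nearest-centroid classifier operating directly on compressed measurements (the synthetic feature vectors and the centroid rule are illustrative assumptions, not the project's classifier): random projections approximately preserve pairwise distances, so the compressed data alone support the decision.
```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 4096, 64                               # ambient and compressed dimensions
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

def make_class(center, num):                  # synthetic stand-ins for video feature vectors
    return center + 0.1 * rng.standard_normal((num, n))

c0, c1 = rng.standard_normal(n), rng.standard_normal(n)
train = np.vstack([make_class(c0, 20), make_class(c1, 20)])
labels = np.array([0] * 20 + [1] * 20)

Y = train @ Phi.T                             # work entirely in the measurement domain
centroids = np.vstack([Y[labels == 0].mean(axis=0), Y[labels == 1].mean(axis=0)])

test = make_class(c1, 5) @ Phi.T
pred = np.argmin(np.linalg.norm(test[:, None, :] - centroids[None, :, :], axis=2), axis=1)
print(pred)                                   # expected: all samples assigned to class 1
```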
• We designed an active range imaging system that achieves high-quality depth map reconstruction from significantly fewer frames. In the proposed system, the depth map is constructed from a small number of frames, where each frame accumulates a large number of returning laser pulses. The system employs a random gating function in which the shutter opens and closes at random intervals during each frame.
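A toy simulation of the random gating principle is given below (an idealised square return pulse, a noiseless sensor, and brute-force delay matching are simplifying assumptions; the project's reconstruction method is not reproduced here): each frame applies a different random shutter pattern, and the few gated intensity values per pixel suffice to identify the pulse delay, i.e. the depth.
```python
import numpy as np

rng = np.random.default_rng(3)
T, F = 200, 12                                # time bins per frame, number of frames
gates = rng.integers(0, 2, size=(F, T))       # random open/close shutter pattern per frame

def returned_pulse(delay, width=10):
    """Idealised square laser return arriving after 'delay' time bins."""
    p = np.zeros(T)
    p[delay:delay + width] = 1.0
    return p

true_delay = 87                               # depth-dependent time of flight (in bins)
y = gates @ returned_pulse(true_delay)        # one gated intensity value per frame

# Recovery: pick the candidate delay whose gated response best matches y
candidates = np.arange(T - 10)
errors = [np.linalg.norm(y - gates @ returned_pulse(d)) for d in candidates]
print("estimated delay:", int(candidates[np.argmin(errors)]))
```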
• We explored a novel approach to high dynamic range (HDR) imaging that significantly reduces the number of images required. The proposed system employs a random exposure mechanism in which each pixel of a single frame collects light for a random amount of time. By collecting a small number of such images, the full sequence of low dynamic range images can be reconstructed and subsequently used for HDR generation.
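A per-pixel sketch of the coded-exposure measurement model follows (the number of frames, the exposure distribution, and the smoothness-regularised least-squares recovery are illustrative assumptions standing in for the actual reconstruction): each coded image integrates the incremental irradiance up to a random per-pixel exposure length, and a few such measurements constrain the full temporal sequence.
```python
import numpy as np

rng = np.random.default_rng(4)
T, J = 16, 6                                  # LDR frames in the sequence, coded images captured

x = np.abs(np.cumsum(rng.standard_normal(T)))       # incremental irradiance of one pixel over time
exposures = rng.integers(1, T + 1, size=J)          # random exposure length per coded image

A = np.zeros((J, T))
for j, e in enumerate(exposures):
    A[j, :e] = 1.0                            # the j-th coded image integrates the first e increments
y = A @ x                                     # coded-exposure measurements for this pixel

# Recovery with a temporal-smoothness (Tikhonov) prior: min ||A z - y||^2 + lam*||D z||^2
D = np.diff(np.eye(T), axis=0)
lam = 0.1
x_hat = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```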
• We exploited the framework of CS to perform accurate localization based on signal-strength measurements, while significantly reducing the amount of information transmitted from a wireless device with limited power, storage, and processing capabilities to a central server.
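One way to picture this, sketched below under illustrative assumptions (the fingerprint database, noise level, and nearest-match rule are not the project's actual formulation), is that the device transmits only a few random projections of its received-signal-strength (RSS) fingerprint, and the server matches them against the projected fingerprint database.
```python
import numpy as np

rng = np.random.default_rng(5)
n_aps, n_cells, m = 50, 100, 10               # access points, candidate map cells, measurements sent

database = -40.0 - 30.0 * rng.random((n_aps, n_cells))   # stored RSS fingerprint per cell (dBm)
Phi = rng.standard_normal((m, n_aps)) / np.sqrt(m)

true_cell = 42
rss = database[:, true_cell] + 2.0 * rng.standard_normal(n_aps)   # noisy on-device measurement

y = Phi @ rss                                 # the only data the device transmits to the server
scores = np.linalg.norm(Phi @ database - y[:, None], axis=0)
print("estimated cell:", int(np.argmin(scores)))                  # expected: 42
```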
• We ported the CPU implementation of the CVS module of the MJPEG-based encoder to an integrated, self-contained GPU library and compared the two implementations to demonstrate the effectiveness of compressed sensing on GPU devices. The optimized GPU version of the CVS encoder achieved substantial execution speedups over the single-core and multicore versions of the prototype algorithm.
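For illustration only (the sketch below uses CuPy as a stand-in; the project's library was a dedicated, self-contained GPU implementation of the CVS module, not this code): the compressive measurement step amounts to dense matrix products over many blocks, which is the kind of workload that maps naturally onto GPU hardware.
```python
import numpy as np
import cupy as cp                             # assumes a CUDA-capable GPU and the cupy package

rng = np.random.default_rng(6)
Phi = rng.standard_normal((1024, 4096)).astype(np.float32)      # sensing matrix
blocks = rng.standard_normal((512, 4096)).astype(np.float32)    # vectorised frame blocks

Phi_gpu, blocks_gpu = cp.asarray(Phi), cp.asarray(blocks)
y_gpu = blocks_gpu @ Phi_gpu.T                # all blocks measured in a single GPU GEMM
y = cp.asnumpy(y_gpu)                         # back to the host for storage/transmission
print(y.shape)
```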
• We proposed a novel hyperspectral imaging (HSI) architecture that achieves high-quality reconstruction of the hypercube from a limited number of frames, without resorting to moving parts, by exploiting the theory of Compressed Sensing. The proposed HSI system is composed of the following elements: (i) a coding mask that passes or blocks the incoming light according to a dynamically changing sampling pattern, a mechanism that can be implemented using a Digital Micromirror Device (DMD); (ii) an array of optical filters that filters the incoming light, allowing only a specific set of spectral bands to propagate; and (iii) an array of lenses, also called a lenslet array, that focuses the filtered light onto an imaging sensor such as a CCD or CMOS device.
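A simplified forward model of this architecture is sketched below (assuming, for illustration only, that the filter array assigns one spectral band to each detector pixel and that the DMD applies a different binary mask in every frame; reconstruction of the hypercube from the coded frames is not shown).
```python
import numpy as np

rng = np.random.default_rng(7)
H, W, B, F = 32, 32, 8, 4                     # spatial size, spectral bands, coded frames

cube = rng.random((H, W, B))                  # ground-truth hyperspectral cube
band_of_pixel = rng.integers(0, B, size=(H, W))    # filter array: band seen by each pixel

frames = []
for _ in range(F):
    mask = rng.integers(0, 2, size=(H, W))    # dynamically changing DMD sampling pattern
    seen = np.take_along_axis(cube, band_of_pixel[..., None], axis=2)[..., 0]
    frames.append(mask * seen)                # coded measurement recorded on the CCD/CMOS sensor

frames = np.stack(frames)
print(frames.shape)                           # F coded frames from which the hypercube is recovered
```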
• Finally, we proposed a novel approach for enhancing images captured under low illumination conditions, based on the mathematical framework of Sparse Representations (SR). In our model, we use the sparse representation of low-light image patches in an appropriate dictionary to approximate the corresponding day-time images. We consider two dictionaries: a night dictionary for low-light conditions and a day dictionary for well-illuminated conditions. The effectiveness of our system was evaluated by comparison against ground-truth images; compared to other methods for night-time image enhancement, our system achieves better results both quantitatively and qualitatively.
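A minimal sketch of the coupled-dictionary idea (the random dictionaries and the basic orthogonal matching pursuit solver below are illustrative assumptions, not the project's dictionaries): a low-light patch is sparse-coded over the night dictionary, and applying the same coefficients to the day dictionary yields the enhanced, day-like patch.
```python
import numpy as np

rng = np.random.default_rng(8)
p, K, k = 64, 256, 5                          # patch dimension (8x8), dictionary atoms, sparsity

D_day = rng.standard_normal((p, K))
D_day /= np.linalg.norm(D_day, axis=0)                           # unit-norm day atoms
D_night = 0.2 * D_day + 0.01 * rng.standard_normal((p, K))       # assumed coupled night atoms

coeffs = np.zeros(K)
coeffs[rng.choice(K, k, replace=False)] = rng.standard_normal(k)
night_patch = D_night @ coeffs                # synthetic low-light patch

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: select k atoms, refit by least squares."""
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    x = np.zeros(D.shape[1])
    x[support] = sol
    return x

alpha = omp(D_night, night_patch, k)          # sparse code in the night dictionary
day_patch = D_day @ alpha                     # day-like estimate of the same patch
target = D_day @ coeffs
print("relative error:", np.linalg.norm(day_patch - target) / np.linalg.norm(target))
```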