15th European Signal Processing Conference EUSIPCO 2007

Special Sessions


Special Sessions Chair Prof. Maciej Niedzwiecki, Gdańsk University of Technology

E-mail: maciekn@eti.pg.gda.pl


Multidimensional Signal Processing and Systems

Organizers:
Krzysztof Gałkowski, University of Wuppertal, Germany
Anton Kummert, University of Wuppertal, Germany

The session aims to present progress in various recent applications of multidimensional systems and signal processing.

Implementations of MIMO Wireless Communication Systems

Organizer:
Andreas Burg, Integrated Systems Laboratory, ETH Zurich, Switzerland

Multiple-input multiple-output (MIMO) technology has started to appear in numerous standards for wireless communication. For example, MIMO is part of IEEE 802.16e and will be part of IEEE 802.11n. It is also considered for 3GPP Release 7 and for 3GPP LTE. However, to ensure the commercial success of MIMO technology, some critical implementation aspects remain to be solved. In the past, Moore's Law has certainly helped silicon technology to keep up with the more complex algorithms employed for wireless communication systems. Nevertheless, it has also become clear that the tremendous growth in computational complexity associated with MIMO requires considerable research efforts towards developing low-complexity algorithms and corresponding optimized hardware architectures.
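
As a rough illustration of the trade-off discussed above, the sketch below (in Python with NumPy; the antenna count, constellation and noise level are illustrative assumptions, not drawn from the invited papers) implements zero-forcing detection, one of the simplest low-complexity MIMO detection schemes.

    import numpy as np

    def zf_mimo_detect(H, y, constellation):
        # Zero-forcing detection for y = H x + n: equalize with the
        # pseudo-inverse (low complexity) and slice to the nearest symbol.
        x_hat = np.linalg.pinv(H) @ y
        return np.array([constellation[np.argmin(np.abs(constellation - s))]
                         for s in x_hat])

    # Illustrative 2x2 MIMO link with QPSK (parameters are assumptions only).
    rng = np.random.default_rng(0)
    qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
    x = qpsk[rng.integers(0, 4, size=2)]
    H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
    y = H @ x + 0.05 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
    print("detected:", zf_mimo_detect(H, y, qpsk), "sent:", x)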

A particularly challenging area of research is the real-time implementation of multi-antenna systems on reconfigurable hardware architectures such as FPGAs or software-defined radio (SDR) platforms. On the one hand, such experimental systems have been recognized as an essential tool for the evaluation and verification of system-level performance and implementation aspects, especially for the exploration of new concepts such as multi-user MIMO systems. On the other hand, software-defined radio platforms are the next step in technology, taking reconfigurable architectures from a prototyping tool to a technology that can be deployed in a product.

For this session proposal, we have collected five excellent contributions from leading research groups discussing real-time implementations of MIMO communication systems on reconfigurable hardware platforms. The invited papers report on algorithms optimized for hardware implementation, on corresponding efficient system and hardware architectures, and on measurement results for emerging multi-antenna wireless communication systems, including experiments with multiuser MIMO concepts. The contributions highlight the performance of real-world systems and the impact of implementation trade-offs and constraints.

Since experience shows that efficient implementations of next generation wireless communication systems require joint consideration of algorithm and VLSI implementation aspects, we believe that such an interdisciplinary session dealing with actual implementations of MIMO systems would constitute a most valuable contribution to EUSIPCO 2007.

The Semantic Gap in Visual Information Retrieval

Organizer:
Ebroul Izquierdo, Queen Mary University of London, United Kingdom

In order to get closer to the vision of useful multimedia-based search and retrieval, annotation and search technologies need to be efficient and use semantic concepts that are natural to the user. Finding a specific holiday photo stored somewhere on the hard disk of a personal computer can become a time-consuming task; searching for a specific picture in a large digital archive is even more difficult; and looking for an image on the web can be extremely difficult, if not impossible. If images have been manually labelled with an identifying string, e.g. "My holidays in Africa, wild life", then the problem may appear to have been finessed. However, the adequacy of such a solution depends on human interaction, which is expensive, time-consuming and therefore infeasible for many applications. Even the annotation of a few hundred personal images captured over the last years is a tedious task that nobody wants to do.

Furthermore, such semantic-based annotation is completely subjective and depends on the accuracy of the semantic description of the images. While one person might label an image as "wild life", someone else might prefer "Elephant and trees". The latter problem could be alleviated by constraining the annotation to words extracted from a pre-defined dictionary, taxonomy or advanced ontology. Nevertheless, the challenge of automatic annotation and retrieval using semantic structures natural to humans remains critical.

Developing technology able to produce accurate levels of abstraction in order to annotate and retrieve content using queries that are natural to humans is the breakthrough needed to bridge the semantic gap. Bridging this gap is a challenge that has captured the attention of researchers in computer vision, pattern recognition, image processing and other related fields, evidencing both the difficulty and importance of such technology and the fact that the problem remains unsolved. The underlying technology offers the possibility of adding the audiovisual dimension to well-established text databases, enabling multimedia-based information retrieval. In this session, selected talks on advanced research aimed at bridging the semantic gap in visual information retrieval will be presented.

Bayesian Localization and Data Fusion Methods for Distributed Systems

Organizer:
Joaquin Miguez, Universidad Carlos III de Madrid, Spain

Distributed systems consisting of a potentially large number of wireless nodes with communication and sensing capabilities will soon become ubiquitous because of their suitability for a broad range of emerging applications, such as environmental monitoring, surveillance and security, vehicle navigation, tracking and logistics. Virtually all of these applications involve the detection, classification and tracking of signals, with the distinguishing feature that large amounts of possibly multimodal data must be handled and integrated in order to perform the prescribed tasks.

From this perspective, there has been a surge of interest in the development of new signal processing techniques suited to the distributed scenario. Two features are common to most applications of wireless distributed systems and have therefore become particularly active research topics: data fusion and localization.

The aim of the proposed special session is to bring together researchers working on data fusion and localization (including positioning, navigation and tracking) for wireless distributed systems, in order to provide an up-to-date picture of this rapidly advancing field of research. Emphasis is placed on powerful Bayesian techniques, including sequential Monte Carlo methods and recent extensions of the Kalman filtering methodology, which have become state-of-the-art tools because of their flexibility and accuracy.
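
As a minimal sketch of the sequential Monte Carlo methodology mentioned above, the following Python example runs a bootstrap particle filter on a toy one-dimensional tracking problem; the motion model, noise levels and particle count are illustrative assumptions rather than a description of any particular contribution.

    import numpy as np

    rng = np.random.default_rng(1)
    N, T, dt = 500, 50, 1.0        # particles, time steps, sampling period
    q, r = 0.1, 0.5                # process / measurement noise std (assumed)

    # Simulate a constant-velocity target and noisy position measurements.
    x_true = np.zeros((T, 2))      # state = [position, velocity]
    x_true[0] = [0.0, 0.4]
    for t in range(1, T):
        x_true[t, 1] = x_true[t - 1, 1] + q * rng.standard_normal()
        x_true[t, 0] = x_true[t - 1, 0] + dt * x_true[t, 1]
    z = x_true[:, 0] + r * rng.standard_normal(T)

    # Bootstrap particle filter: propagate, weight by likelihood, resample.
    particles = rng.standard_normal((N, 2))
    estimates = []
    for t in range(T):
        particles[:, 1] += q * rng.standard_normal(N)           # velocity random walk
        particles[:, 0] += dt * particles[:, 1]                  # position update
        w = np.exp(-0.5 * ((z[t] - particles[:, 0]) / r) ** 2)   # Gaussian likelihood
        w /= w.sum()
        estimates.append(w @ particles[:, 0])                    # weighted-mean estimate
        particles = particles[rng.choice(N, size=N, p=w)]        # multinomial resampling

    print("final position error:", abs(estimates[-1] - x_true[-1, 0]))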

Advanced Digital Signal Processing for Future Wireless Systems

Organizers:
Krzysztof Wesołowski, Poznań University of Technology, Poland
Hanna Bogucka, Poznań University of Technology, Poland

Future wireless systems are envisaged to embody diversified radio access technologies related to various standards and coverage areas, user mobility patterns and service classes. In order to meet the required data rates, network coverage, reliability and Quality of Service requirements, several special measures have to be taken in different layers of future wireless systems and networks. Among them, advanced signal processing algorithms applied in several tasks of the physical and higher layers of wireless networks are of primary importance.

This two-part special session is focused on several topics associated with these algorithms. The first part is devoted to:

The second part of this special session is devoted to challenging topics concerning research on efficient user-centric access to future systems. These relate to the development of flexible terminal platforms able to roam seamlessly among different wireless systems, to their situational awareness and learning capabilities, and to the management and control of their adaptation mechanisms, including the study of effective radio resource allocation schemes for arranging communication traffic over heterogeneous wireless networks.

An important direction of research on flexible radio platform design encompasses new approaches to digital waveform description, such as Generalized Multi-Carrier (GMC) signalling, as well as efficient time-frequency descriptions of wireless channels applied in flexible radios.

In future wireless communications, the cognitive radio concept will play an important role. This concept defines a sensing-learning-decision cycle for radio terminals (radio nodes), allowing efficient and 'cognitive' access to a network, spectrum acquisition and reliable communication without causing interference to licensed users. To this end, cognitive radio nodes will include specialized sensors, learning algorithms and a large number of adaptation and reconfiguration features in order to learn and react to changing environmental conditions and to acquire the required radio resources.

The second part of this special session will tackle the above-mentioned problems and challenges.

Multiple Description Coding (MDC)

Organizers:
Peter Schelkens, Vrije Universiteit Brussel, Belgium
Adrian Munteanu, Vrije Universiteit Brussel, Belgium

Multicast applications require several representations of the same content to be sent to a variety of end users with different bandwidth provisions and playback capabilities. In addition, data delivered through best-effort packet-switched networks may be corrupted or incomplete. Furthermore, participants in multicast sessions cannot rely on retransmission mechanisms, as a large number of requests and the consequent reiterated packet transmissions would burden the network, thus increasing the risk of congestion. In such application scenarios, it is essential to develop scalable multimedia coding techniques that are capable of adapting the transmitted bitstream to the available channel capacity and user demands and, at the same time, provide a controllable degree of resilience against erasures. In an error-free transmission setting, scalable coding technologies enable media providers to generate a single compressed bitstream from which appropriate subsets can be extracted to meet the preferences and bit-rate requirements of a broad range of clients. From a complementary perspective, multiple description coding (MDC) has been introduced to efficiently overcome channel impairments over diversity-based systems by allowing the decoders to extract meaningful information from an incompletely received bitstream. MDC systems generate more than one description of the source such that (1) each description independently describes the source with a certain fidelity, and (2) when more descriptions are available at the decoder, they can be combined to enhance the quality of the decoded signal.
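
A minimal sketch of the MDC principle, assuming the simplest possible scheme of odd/even sample splitting (illustrative only; practical MDC systems use far more sophisticated redundancy allocation), is given below in Python.

    import numpy as np

    # Two descriptions by odd/even splitting: receiving both restores the source
    # exactly; receiving one still allows a coarse reconstruction.
    def mdc_split(x):
        return x[0::2], x[1::2]

    def mdc_reconstruct(d1=None, d2=None, n=None):
        if d1 is not None and d2 is not None:        # both descriptions received
            x = np.empty(n)
            x[0::2], x[1::2] = d1, d2
            return x
        d = d1 if d1 is not None else d2             # single description received:
        return np.repeat(d, 2)[:n].astype(float)     # hold missing samples

    x = np.sin(2 * np.pi * 0.02 * np.arange(64))
    d1, d2 = mdc_split(x)
    print("error, both descriptions:", np.max(np.abs(mdc_reconstruct(d1, d2, n=64) - x)))
    print("error, one description  :", np.max(np.abs(mdc_reconstruct(d1=d1, n=64) - x)))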

This special session addresses researchers active in the area of multiple description coding and encourages submission of contributions ranging from theoretical analysis to practical instantiations of novel MDC systems for images, videos and multimedia data. Additionally, in light of the increasing popularity of scalable video coding (SVC), contributions are solicited on the implementation of the MDC concept into SVC-AVC-compliant codecs as well as other SVC architectures.

Biometrics Recognition

Organizer:
Paulo Lobato Correia, Instituto Superior Tecnico, Portugal

Reliable and secure access control systems are required for many applications, such as access to restricted areas, border control, etc. Traditionally, passwords or ID cards have been used for restricting user access. However, these types of security control methods present serious disadvantages: passwords can easily be stolen or lost, and ID cards can be forged. The increased demand for secure access control systems has been accompanied by a growing research interest in biometric technologies. Biometric recognition systems perform the automatic recognition, or verification, of a person's identity based on physical, physiological or behavioural characteristics. Biometric features are typically based on characteristics such as fingerprints, face, hand geometry, palmprints, iris, voice and signature. The usage of multiple biometrics in the same system is also often considered in order to increase the reliability of the recognition results. The major advantage over traditional methods is that biometric features cannot be lost or easily stolen, while being unique to each person. This Special Session addresses biometrics research topics, targeting practical solutions for reliable recognition of people.

Quality Evaluation in Image and Video Processing

Organizer:
Paulo Lobato Correia, Instituto Superior Tecnico, Portugal

Nowadays there are numerous published algorithms for performing a given image or video processing task, such as image and video coding or segmentation. In order to determine the adequacy of a given solution for an envisaged application, or to select one among a set of candidates, an evaluation of the proposed processing algorithms is required. Often the assessment of algorithms for a given image or video processing task is done using subjective evaluation methodologies, requiring the participation of human evaluators in a very well controlled evaluation environment; this is an expensive procedure and subjective by nature. Many efforts have been made to automate the evaluation procedures, and if a consensual objective metric is available, it is often preferred, even if it presents some limitations (as is the case with PSNR for image and video quality evaluation). Lately, there have been many developments in this area, both concerning objective evaluation methodologies that compare the results against a reference taken as the ground truth (full-reference or relative evaluation) and methodologies that evaluate without a reference (no-reference or standalone evaluation). The operating environment (application, screen size, network limitations) also needs to be taken into account when evaluating the quality perceived by the user. This session addresses novel methods to automatically estimate the quality of image and video processing results.
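
As an example of the kind of full-reference objective metric referred to above, the following short Python sketch computes PSNR between a reference image and a processed version; the image size and noise level are illustrative assumptions.

    import numpy as np

    def psnr(reference, processed, peak=255.0):
        # Full-reference metric: peak signal-to-noise ratio in dB.
        mse = np.mean((reference.astype(float) - processed.astype(float)) ** 2)
        return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    # Illustrative example: an 8-bit image compared against a noisy copy of itself.
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, size=(256, 256)).astype(np.uint8)
    noisy = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
    print(f"PSNR = {psnr(ref, noisy):.2f} dB")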

New Methods for the Analysis of Deformation Signals of Natural and Artificial Structures

Organizer:
Andreas Eichhorn, Institute for Geodesy and Geophysics, Vienna University of Technology, Austria

Monitoring and analysis of short- and long-term deformation signals of buildings (e.g. dams, bridges) and natural structures (e.g. local and regional geodynamic processes, slopes, etc.) is a central task for geodetic applications in environmental monitoring. It is an essential precondition for the development of reliable alert systems and the prevention of human and material damage.

In recent years, a fundamental change in methodology has taken place:
  1. Highly sensitive sensors enable the installation of area-wide monitoring networks. The data are used as input for real-time alert systems (e.g. in tunnelling).
  2. Purely geometrical analysis dealing with kinematic displacement fields has given way to so-called "causative" approaches which also quantify monitored trigger events such as traffic loads, temperature, wind, etc. One challenge is now to find parametric or non-parametric deformation transfer functions for complex structures which relate the influence quantities to the deformation signals, and to separate the (often considerable) disturbances. In this context, AI methods (e.g. neural networks) are also trained and used.
  3. Development of new methods for modelling sensor errors. The classical stochastic view is extended to fuzzified approaches quantifying uncertainties in the error distribution. The fuzzification enables new concepts for error propagation and statistical tests.

The proposed session will give an overview of the current state of the art and the latest developments in geodetic deformation analysis.

Optimization of Wireless Multiuser MIMO Communication Systems

Organizer:
Christoph Mecklenbräuker, Vienna University of Technology, Austria

The goal of this special session is to highlight optimization techniques for multi-user MIMO communications both in cellular networks and in ad-hoc mode. This session will highlight recent advances in multi-user MIMO techniques on the physical-, medium access-, and radio link control layers. Currently, little is published about how to optimally leverage the new degrees of freedom resulting from MIMO terminals in a multi-user context.

The wireless industry has started to integrate single-user MIMO techniques into existing multi-user cellular standards and to define new cellular standards based on MIMO. Wireless networks should not be regarded as a set of concurrent point-to-point links. More appropriate models are many-to-one links where many users transmit to a single base station and one-to-many links where a single base station transmits to many users.

Sparse Modelling Approaches to Audio and Music Signal Processing

Organizer:
Simon Godsill, Cambridge University, United Kingdom

Recent advances in methodology for modelling signals using sparse representations selected from overcomplete dictionaries of elementary atoms, such as wavelets or Gabor atoms, are having a strong impact on signal processing applications for audio. When combined with state-of-the-art analysis and inference methods, these techniques are providing powerful developments in topics such as source separation, restoration, coding and enhancement, to list but a few. Methodologies are being successfully developed from several points of view, ranging from fully Bayesian models to nonparametric classical approaches, with crossover between disciplines in many cases.
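
As a minimal illustration of sparse approximation over an overcomplete dictionary, the Python sketch below runs plain matching pursuit with randomly parameterized Gabor-like atoms; the dictionary construction and test signal are illustrative assumptions, not taken from any of the session contributions.

    import numpy as np

    def matching_pursuit(x, D, n_iter=5):
        # Greedy sparse approximation of x over a dictionary D (unit-norm columns).
        residual, coeffs = x.copy(), np.zeros(D.shape[1])
        for _ in range(n_iter):
            corr = D.T @ residual              # correlate residual with every atom
            k = np.argmax(np.abs(corr))        # pick the best-matching atom
            coeffs[k] += corr[k]
            residual -= corr[k] * D[:, k]      # remove its contribution
        return coeffs, residual

    # Toy overcomplete dictionary of unit-norm Gabor-like atoms (windowed cosines).
    rng = np.random.default_rng(0)
    n, n_atoms = 128, 512
    t = np.arange(n)
    D = np.empty((n, n_atoms))
    for j in range(n_atoms):
        f, c, s = rng.uniform(0, 0.5), rng.uniform(0, n), rng.uniform(4, 32)
        atom = np.exp(-0.5 * ((t - c) / s) ** 2) * np.cos(2 * np.pi * f * t)
        D[:, j] = atom / np.linalg.norm(atom)

    x = 2.0 * D[:, 7] + 1.0 * D[:, 100] + 0.01 * rng.standard_normal(n)
    coeffs, res = matching_pursuit(x, D)
    print("relative residual energy:", np.linalg.norm(res) / np.linalg.norm(x))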

LTE Signal Processing

Organizer:
Thomas Kaiser, Leibniz University of Hannover, Germany

"Long Term Evolution" (LTE) is the next release of UMTS (more precisely E-UTRA) and was initiated to ensure the competitiveness of 3G technology during the next 10 years and beyond. LTE will be a completely packet-optimized radio-access technology with a new air interface. Similiar to other wireless standards "MIMO" meanwhile outgrows its research childhood and becomes a mandatory wireless technology - also for LTE. Moreover, OFDMA is applied for the downlink in order to achieve data rates of up to 100 Mbps and with SC-FDMA in the uplink the power efficiency of a mobile device remains high. The aim of this special session on "LTE Signal Processing" is to discuss the impact of the mentioned modulation schemes for LTE and to point out further special features from algorithms to architectures.

Signal Processing in Wireless Testbeds and Prototyping

Organizer:
Markus Rupp, Vienna University of Technology, Austria

While research on modern communication systems, in particular in connection with multiple-antenna techniques, has blossomed over the past decade, measurement and experimental work is seriously lagging behind. The general belief in this field of research is that simple channel models such as i.i.d. Rayleigh fading are sufficient to describe wireless scenarios.

The complexity of systems is typically estimated by the order of multiply-add instructions in the required algorithms, and Matlab floating-point simulations alone form the basis for wireless system design. However, once a system is built with real hardware components, the true challenges show up that were not addressed in simulation. Conducting experiments with real-time measurement equipment, as is common in wireless testbeds, allows assessing the influence of physical wireless channels and of RF components such as antennas, mixers and low-noise amplifiers.

Prototyping, on the other hand, allows building first samples of real-time hardware, typically running in fixed- rather than floating-point arithmetic. Here the precision of AD and DA converters, hardware cost, and energy consumption become visible and may have an equal impact on a future wireless product.

This session allows experimental researchers to share their experience in wireless testbeds and prototyping, and informs theoretical researchers in this field about unexpected hurdles encountered when converting algorithmic ideas into working products.

Multimodal Biometrics and Smart Cards

Organizer:
Andrzej Drygajło, Swiss Federal Institute of Technology at Lausanne (EPFL), Switzerland

With the increase in identity fraud and the emphasis on security, there is a growing and urgent need to efficiently identify humans both locally and remotely on a routine basis. This special session addresses the issues surrounding the imminent global breakthrough of biometrics in identity verification technology, in terms of its use in identity documents and corresponding applications. Biometric person identity verification (biometric authentication) is a multimodal signal processing technology in its own right, with many potential applications.

The major technological challenge in this area is the critical need to understand and exploit a variety of biometric modalities with respect to modelling biometric signals, creating templates of these models and scaling them to the needs and limits of smart cards. Acquiring biometric signals of sufficient quality and suitability and using them for reliable decision making is of critical importance for automatic multimodal biometric authentication systems. The quality of biometric data, the fusion of modalities and the recognition performance for each modality are of pivotal importance for large-scale applications.

Nonstationary Process Analysis, Estimation, and Applications

Organizer:
Antonio Napolitano, Parthenope University of Naples, Italy

Nonstationary signal analysis has been considered from several different points of view. Among them are time-frequency and time-scale analysis, wavelet analysis, fractional Fourier transform analysis, cyclostationary signal analysis, self-similar signal analysis, evolutionary spectral analysis, and adaptive system and signal analysis. These approaches, although different, have similarities and overlaps. The aim of this special session is to allow comparisons of different points of view on nonstationary signal analysis. Overviews, theoretical results, and application-oriented contributions are welcome.
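
As a small illustration of one of these viewpoints, the following Python sketch applies a plain short-time Fourier transform to a linear chirp, a simple nonstationary signal whose time-varying spectral content a single global Fourier spectrum would hide; all signal parameters are illustrative.

    import numpy as np

    # Short-time Fourier analysis of a linear chirp whose instantaneous frequency
    # sweeps from 0 Hz to fs/4.
    fs, T = 1000, 2.0
    t = np.arange(0, T, 1 / fs)
    x = np.cos(2 * np.pi * 62.5 * t ** 2)          # instantaneous frequency 125*t Hz

    win, hop = 128, 64
    frames = [x[i:i + win] * np.hanning(win) for i in range(0, len(x) - win, hop)]
    stft = np.array([np.fft.rfft(frame) for frame in frames])

    # The dominant bin drifts upward from frame to frame.
    peak_hz = np.argmax(np.abs(stft), axis=1) * fs / win
    print(peak_hz[::5])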

Coding and Processing of Multi-View Video

Organizer:
André Kaup, University of Erlangen, Germany

Next generation video systems will offer the sensual experience of immersive video, a new concept for future television entertainment. Immersive media can be regarded as an extension of the successful stereoscopic widescreen presentation of panoramic views known from IMAX technology. Using image-based rendering techniques on multi-view recordings of real-world scenes, the user gets the impression of being actively present at a remote event. This principle, which is similar to lightfield recording and rendering, will give users the possibility to navigate through the event, with the projected scene changing according to their actual position.

At the same time, no scene model needs to be generated, as would be the case for classical model-based scene rendering. A variety of challenges are associated with multi-view video; among these are efficient coding of the multiple camera data streams, rendering of intermediate views from given camera positions, estimation of depth information, camera rectification and calibration, as well as 3D scene analysis and reconstruction. This special session will address these issues in selected talks.

Hypercomplex Digital Signal Processing: Fundamentals and Applications

Organizer:
Heinz Göckler, Ruhr University Bochum, Germany

Hypercomplex representation and processing of signals is a gradually emerging extension of commonly used real and complex DSP. Hence, hypercomplex DSP is an active field of fundamental research encompassing current applications in signal and (colour) image processing and coding, computer graphics, signal and system representation and analysis, digital filter banks, biomedical signal processing, seismology, etc. Moreover, in physics, hypercomplex algebras have found widespread application, for instance in special relativity, quantum mechanics, gauge theory and motion analysis. New vistas and novel applications in multimedia and general DSP are therefore encouraged. This Special Session serves as a forum for related discussion and exchange of peer experience.

Since hypercomplex algebras (e.g. quaternions, octonions, Grassmann-Clifford algebras, etc.) and the processing of hypercomplex signals are not really familiar to large parts of the DSP community, this Special Session starts with a tutorial contribution presenting a survey of hypercomplex algebras in conjunction with some beneficial applications in DSP. Potential advantages and drawbacks of hypercomplex DSP are discussed. Subsequently, the utility of these concepts in DSP is demonstrated by contributions discussing applications in fields such as digital filtering based on hyperbolic number systems and multirate filter banks.
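
For readers unfamiliar with hypercomplex arithmetic, the following Python sketch implements the quaternion (Hamilton) product and uses it to rotate an RGB colour vector about the grey axis, a basic building block of quaternion colour-image filtering; the pixel values and rotation angle are illustrative assumptions.

    import numpy as np

    def qmul(p, q):
        # Hamilton product of quaternions given as (w, x, y, z) arrays.
        pw, px, py, pz = p
        qw, qx, qy, qz = q
        return np.array([pw*qw - px*qx - py*qy - pz*qz,
                         pw*qx + px*qw + py*qz - pz*qy,
                         pw*qy - px*qz + py*qw + pz*qx,
                         pw*qz + px*qy - py*qx + pz*qw])

    # Rotate an RGB pixel, encoded as the pure quaternion (0, R, G, B), about the
    # grey axis; pixel value and angle are assumed for illustration.
    axis = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
    theta = np.pi / 6
    r = np.concatenate([[np.cos(theta / 2)], np.sin(theta / 2) * axis])
    r_conj = r * np.array([1, -1, -1, -1])
    pixel = np.array([0.0, 0.8, 0.2, 0.1])
    print(qmul(qmul(r, pixel), r_conj))             # scalar part stays (numerically) zero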

Content and Network Aware Video Streaming

Organizers:
Béatrice Pesquet-Popescu, École Nationale Supérieure des Télécommunications (ENST), France
Markus Rupp, Vienna University of Technology, Austria

Multimedia applications have experienced explosive growth in the last decade and have become pervasive over the Internet. Despite the growing availability of broadband technology and progress in video compression, the quality of video streaming systems is still not on par with SDTV and even further from HDTV. This is due to the best-effort nature of the network, which does not offer any guaranteed quality of service (QoS) and in which bandwidth, delay and losses may vary unexpectedly. The advent of wireless last hops and the emergence of multi-hop ad-hoc networks bring additional challenges such as interference, shadowing and mobility. Achieving high and constant video quality, together with low startup delays and end-to-end latencies, is a daunting task in such environments. The aim of this special session is to focus on advances related to error resilience, congestion control and multi-path video delivery.