
Completed projects

MBVC (Model Based Video Coding) 2011-2017
This project studies the interaction of structure from motion and video coding, and specifically how representations of 3D structure and camera motion can be used to improve video coding efficiency.
VGS (Virtual Global Shutters for CMOS Cameras) 2009-2014
This project studies how to model and correct rolling-shutter distortions. Such distortions are present in most CMOS image sensors, e.g. in nearly all cellphones and most camcorders. We also study push-broom sensors for hyper-spectral aerial imaging, as these have a similar geometry.
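Rolling-shutter distortion arises because image rows are exposed sequentially rather than all at once, so a moving camera captures each row at a slightly different time. A minimal sketch of this timing model (assuming constant image-plane velocity and hypothetical parameter values, not the project's actual method) might look like:

```python
# Minimal rolling-shutter timing model (illustrative sketch only).
# Each image row r is captured at its own time t(r); under known camera
# motion, a point observed in row r can be shifted back to a common
# reference time, approximating a global-shutter image.

def row_capture_time(row, frame_start, readout_time, num_rows):
    """Capture time of a given row: rows are read out sequentially
    over the frame's readout interval."""
    return frame_start + readout_time * row / num_rows

def correct_point(x, y, velocity, readout_time, num_rows, ref_row=0):
    """Shift a point to where it would appear if the whole frame had
    been captured at the reference row's time. Assumes constant
    image-plane velocity in pixels/second (a simplification)."""
    dt = readout_time * (y - ref_row) / num_rows
    vx, vy = velocity
    return (x - vx * dt, y - vy * dt)

# Example: 30 ms readout, 720 rows, camera panning 100 px/s horizontally.
corrected = correct_point(320.0, 360.0, (100.0, 0.0), 0.030, 720)
```

A point halfway down the frame is captured 15 ms after the first row, so with a 100 px/s pan it is shifted 1.5 px horizontally relative to the reference row.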
EVOR (Embodied Visual Object Recognition) 2009 - 2012
This project studies visual object recognition on robot platforms. Object recognition is an enabling competence for human-robot interaction in environments designed for people (e.g. homes and offices). We conduct research on active learning for object model acquisition, active object recognition, and visual search.
DIPLECS (Dynamic Interactive Perception-action LEarning in Cognitive Systems) 2007 - 2010
The DIPLECS project aims to design an Artificial Cognitive System capable of learning and adapting to respond in the everyday situations humans take for granted. The primary demonstration of its capability will be providing assistance and advice to the driver of a car. The system will learn by watching how humans act and react while driving, building models of their behaviour and predicting what a driver would do in a specific driving scenario.
CAIRIS (Contents Associative Indexing and Retrieval of Image Sequences) 2007-2008
Many smaller TV broadcasting companies face the problem of managing their digitized broadcast material and of accessing suitable contributions from different media providers and agencies. One major problem is the management of the vast amount of information involved. Often a substantial amount of multimedia information (including audio, video and still images) has to be manually processed before the material can be compiled for a new broadcast project. Automatic content-based processing of this information would therefore be desirable.
Garnics (Gardening with a Cognitive System)
The GARNICS project aims at 3D sensing of plant growth and at building perceptual representations for learning the links to the actions of a robot gardener. Plants are complex, self-changing systems whose complexity increases over time. Actions performed on plants (like watering) have strongly delayed effects. Monitoring and controlling plants is thus a difficult perception-action problem requiring advanced predictive cognitive capabilities, which so far only experienced human gardeners can provide.
IVSS (Image processing, test Vehicle, SimulatorS) 2005-2009
The IVSS program was set up to stimulate research and development for the road safety of the future. The end result will probably be new, smart technologies and new IT systems that will help reduce the number of traffic-related fatalities and serious injuries. IVSS projects shall meet the following three criteria: road safety, economic growth and commercially marketable technical systems.
COSPAL (COgnitive Systems using Perception-Action Learning) 2004-2007, special issue
In the COSPAL architecture we combine techniques from the field of artificial intelligence (AI) for symbolic reasoning with artificial neural networks (ANNs) for the bidirectional association of percepts and states. We establish feedback loops through the continuous and symbolic parts of the system, which allow perception-action feedback at several levels. After an initial bootstrapping phase, incremental learning techniques are used to train the system simultaneously at different levels, allowing adaptation and exploration. We expect the COSPAL architecture to allow the design of systems that exhibit largely autonomous behaviour.
MATRIS (Markerless real-time Tracking for Augmented Reality Image Synthesis) 2004-2007
Augmented reality (AR) is a growing field, with many diverse applications ranging from TV and film production, to industrial maintenance, medicine, education, entertainment and games. The central idea is to add virtual objects into a real scene, either by displaying them in a see-through head-mounted display, or by superimposing them on an image of the scene captured by a camera. Depending on the application, the added objects might be virtual characters in a TV or film production, instructions for repairing a car engine, or a reconstruction of an archaeological site.
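The core geometric step in superimposing a virtual object on a camera image is projecting its 3D points through the camera's estimated pose and intrinsics. A minimal pinhole-projection sketch (intrinsics and pose values below are hypothetical, not from the MATRIS system):

```python
import numpy as np

# Pinhole projection of a virtual 3D point into the camera image
# (illustrative sketch; all parameter values are made-up examples).

def project(point_3d, K, R, t):
    """Project a world point into pixel coordinates using camera
    intrinsics K and pose (R, t): x ~ K (R X + t)."""
    p_cam = R @ point_3d + t           # world -> camera coordinates
    p_img = K @ p_cam                  # camera -> homogeneous pixel coords
    return p_img[:2] / p_img[2]        # perspective divide

K = np.array([[800.0,   0.0, 320.0],   # focal length 800 px,
              [  0.0, 800.0, 240.0],   # principal point (320, 240)
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # camera aligned with world axes
t = np.array([0.0, 0.0, 2.0])          # world origin 2 m in front of camera

uv = project(np.array([0.1, 0.0, 0.0]), K, R, t)
# A virtual point 0.1 m right of the world origin projects right of
# the image centre; drawing the object at uv overlays it on the scene.
```

Markerless tracking supplies the pose (R, t) per frame from natural scene features; with an accurate pose, rendered objects stay registered to the real scene as the camera moves.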
VISATEC (Vision-based Integrated Systems Adaptive to Task and Environment with Cognitive abilities) 2002-2005
The overall objective of the project is to design a learning-based cognitive vision architecture and to implement essential generic components for automatic adaptation to the underlying tasks and environments. The objectives include adequate representation schemes for multi-dimensional images and object shapes, reliable feature extraction and multi-cue integration, dynamic adaptation and learning mechanisms, and their purposive use for the detection of 3D objects in images and attentive object/situation analysis.
COMETS (Real-time coordination and control of multiple heterogeneous unmanned aerial vehicles) 2002-2005
This project aims at the development of technologies and tools for real-time coordination and control of multiple heterogeneous unmanned aerial vehicles (UAVs). It will exploit the complementarities of different aerial systems (helicopters and airships) in missions where the only way to guarantee success is cooperation between several vehicles, and where each aerial system can benefit from the data gathered by the others. This approach leads to redundant solutions offering greater fault tolerance and flexibility. The project will demonstrate the capabilities of the system in real-time forest fire detection and monitoring. Major innovations will be a multi-UAV decentralized control system, a new hybrid control architecture, new UAV control techniques, real-time fault-tolerant communications, cooperative environment perception, and a new relevant application.
WITAS (Wallenberg laboratory on Information Technology and Autonomous Systems) 1997-2003
The Wallenberg Laboratory on Information Technology and Autonomous Systems (WITAS) consists of three research groups at the Department of Computer and Information Science and the Computer Vision Laboratory. WITAS has been engaged in goal-directed basic research in the area of intelligent autonomous vehicles and other autonomous systems since 1997. The major goal is to demonstrate, before the end of 2003, an airborne computer system that is able to make rational decisions about the continued operation of the aircraft, based on various sources of knowledge including pre-stored geographical knowledge, knowledge obtained from vision sensors, and knowledge communicated to it by radio.
AIIR (Autonomous Inspection and Intervention Robots) 1993-1999
The purpose of the project is to develop procedures that allow a land-based robot to navigate, find objects, and servo in on objects in order to move or manipulate them. The project includes a study of several mechanisms for robots working in complex environments, and a demonstrator has been developed for industrial AGV/LGV docking systems.
VAP (Vision As Process) 1989-1995
VAP aims at continually operating and observing computer vision systems that are capable of interpreting actions in a dynamically changing scene. Active vision (purposive control of camera motion) and goal-directed control of processing are employed to simplify and accelerate visual perception. Coupled with the exponential growth in the processing power of common microprocessors, these techniques have made it possible to build relatively low-cost, real-time vision systems.

Last updated: 2021-09-22