Vision, Graphics and Interactive Systems at Aalborg University is a two-year master's programme (120 ECTS). Below are short descriptions of the four semesters. See also the curriculum for the master’s programme in Vision, Graphics and Interactive Systems. Here, you may find details on courses and projects as well as information on the programme’s legal basis, etc.
COMPULSORY FOR ALL NON-AAU BACHELORS
All students enrolled in the programme who have not obtained their bachelor's degree from Aalborg University must take part in a course on problem-based learning (PBL) as part of the 1st-semester project. If non-AAU bachelors receive credit transfer for the 1st semester, they will instead be asked to take part in a course ensuring that they are trained in working according to the PBL model. Read more about PBL here.
1st Semester; computer graphics and problem-based learning
The objective of the semester project under the heading "Computer graphics and problem-based learning" is two-fold: 1) to provide you with core competencies within the area of real-time 3D computer graphics, enabling you to design and implement software systems that use synthetically generated images as their output modality, and 2) to train you in working according to a scientific method and in reporting results in scientific forms such as papers and posters. The PBL part focuses on training you in working according to the PBL concept at Aalborg University.
1st semester courses
- Computer graphics programming
- User experience design for multi-modal interaction
- Machine learning
The course in Computer graphics programming provides an introduction to real time computer graphics concepts and techniques. Focus is on programmable functionalities as offered by graphics APIs (Application Programming Interfaces), supplemented by a presentation of the relevant underlying theories. The course also introduces the concepts of Virtual Reality and Augmented Reality, and how computer graphics is used in the context of these application areas.
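Much of real-time graphics programming revolves around transform matrices of the kind graphics APIs expect. As a hedged illustration (not course material; the OpenGL-style conventions and parameter values are assumptions for the example), here is a perspective projection in plain Python:

```python
import math

def perspective_matrix(fov_y_deg, aspect, near, far):
    # OpenGL-style right-handed perspective projection (column-vector
    # convention): maps the view frustum to clip space.
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def project(m, p):
    # Multiply a homogeneous point by the matrix, then divide by w
    # (the perspective divide) to obtain normalised device coordinates.
    x, y, z, w = (sum(m[r][c] * p[c] for c in range(4)) for r in range(4))
    return (x / w, y / w, z / w)
```

With this convention, points on the near plane map to depth -1 and points on the far plane to +1, which is exactly what a depth buffer relies on.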
In the course in User experience design for multi-modal interaction, you are trained to research, analyse, prototype and conceptualise design considering all system aspects, including the social and cultural contexts of use. The course gives comprehensive knowledge of user involvement in the design process, going beyond traditional methods such as usability lab testing. The objectives are realised by presenting methods and tools in a case-based framework and through the students’ active participation in workshops and assignments.
The Machine learning course gives a comprehensive introduction to machine learning, a field concerned with learning from examples that has roots in computer science, statistics and pattern recognition. The objective is realised by presenting methods and tools of proven value and by addressing specific application problems.
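"Learning from examples" can be made concrete with a toy classifier. The nearest-centroid sketch below is purely illustrative (it is not a method the course description names; the data are made up):

```python
def nearest_centroid_fit(examples):
    # examples: list of (feature_vector, label) pairs.
    # Returns a dict mapping each label to the mean of its feature vectors.
    sums, counts = {}, {}
    for x, y in examples:
        counts[y] = counts.get(y, 0) + 1
        s = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def nearest_centroid_predict(centroids, x):
    # Predict the label whose centroid is closest in squared Euclidean distance.
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda y: dist2(centroids[y], x))
```

Training reduces each class to its mean feature vector; prediction is just a distance comparison, which is why such baselines are often the first thing tried on a new problem.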
2nd Semester; computer vision
The 2nd semester project has as its overall theme "Computer vision". The purpose of the project is to provide you with core competencies within the field of computer vision, enabling you to design and implement software systems for automatic or semi-automatic analysis of an image or sequence of images.
2nd semester courses
- Image processing and computer vision
- Robot vision (elective)
- Computer graphics and visualisation (elective)
- Scientific computing and sensor modelling (elective)
Cameras capture visual data from the surrounding world, and building systems which can automatically process such data requires computer vision methods. Through the course in Image processing and computer vision, you will come to understand the nature of digital images and video, gain insight into relevant theories and methods within computer vision, and understand their applicability.
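A simple example of the kind of operation such a course covers is spatial filtering. Here is a minimal 3x3 mean (box) filter in pure Python (an illustrative sketch, not course code; borders are handled by clamping, one of several common choices):

```python
def mean_filter3x3(img):
    # img: grayscale image as a list of rows of numbers.
    # Each output pixel is the average of its 3x3 neighbourhood;
    # coordinates outside the image are clamped to the nearest edge.
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            acc = 0.0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr = min(max(r + dr, 0), h - 1)
                    cc = min(max(c + dc, 0), w - 1)
                    acc += img[rr][cc]
            out[r][c] = acc / 9.0
    return out
```

Averaging suppresses pixel noise at the cost of blurring edges, which is why such low-pass filters usually appear as a preprocessing step before detection or segmentation.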
In the elective course, Robot vision, you will be presented with the basics of robotics: Denavit-Hartenberg coordinate transformations, forward and inverse kinematics, etc. There will also be lectures on image processing topics such as colour detection, shape detection, orientation detection, filtering, blob analysis, etc. The course also presents several graph theory concepts as well as fuzzy logic programming. The best part is the project: you will design a system that detects Lego bricks, picks them up with an industrial robot and builds simple stacks of 3 blocks (2013 theme).
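The Denavit-Hartenberg transforms mentioned above chain per-link 4x4 matrices into an end-effector pose. A small generic sketch (not the course's own code; the two-link planar arm in the usage example is hypothetical):

```python
import math

def dh_matrix(theta, d, a, alpha):
    # Standard Denavit-Hartenberg link transform (theta, d, a, alpha)
    # as a 4x4 homogeneous matrix in nested-list form.
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(dh_rows):
    # Multiply the link transforms in order; the last column of the
    # result holds the end-effector position in the base frame.
    t = [[float(i == j) for j in range(4)] for i in range(4)]
    for row in dh_rows:
        t = matmul(t, dh_matrix(*row))
    return t
```

For a planar two-link arm with unit link lengths, rotating the first joint by 90 degrees places the end effector at (0, 2) in the base frame, which the matrix chain reproduces.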
The goal of the elective course in Computer graphics and visualisation is to provide the foundations necessary to perform advanced work in computer graphics and visualisation on the 3rd and 4th semesters. You will explore state-of-the-art theories and techniques in a formalised manner by analysing a selection of research texts fundamental to computer graphics and visualisation through e.g. critical annotations, paper presentations, reproduction of experiments, etc.
The elective course in Scientific computing and sensor modelling covers various topics in scientific computing and behavioural sensor modelling. The course is composed of three parts: 1) computation and programming; 2) mathematical background; and 3) modelling and simulation.

The first part includes an introduction to modern, state-of-the-art computer and software platforms (CPUs, GPUs, multi-core, etc.), an introduction to the Python programming language (data types, programming style, packages and libraries, unit testing, profiling, etc.), scientific computing aspects (floating-point representation, algorithmic complexity, condition numbers, etc.), parallel computing methodologies (classification, memory models, load balancing, Amdahl's and Gustafson-Barsis' laws, etc.) and Python multiprocessing programming (pools and processes, asynchronous computation, shared data, etc.).

The second part is devoted mainly to the mathematical representation of signals of different types (bandwidths of signals, Fourier series descriptions, passband and complex baseband representations, signal transformations, signal power, resampling, etc.).

The third and final part includes behavioural simulation techniques (simulation process, behavioural models, computation models, software platform, etc.), a system simulation framework (signal types, functional block representation, signal decomposition, etc.), generators (sinusoidal, random, passband, etc.), linear functional blocks (filters, amplifiers, etc.) and nonlinear functional blocks (power amplifiers, etc.).

Upon completing this course, you will be able to map algorithms to sequential and parallel CPU architectures, develop high-quality scientific software in Python and perform behavioural-based simulations of various functional blocks.
During the course, you will also develop good software coding skills, including proper structuring and sound code development procedures with emphasis on readability, maintainability and performance, as well as the use of profiling and debugging for testing and validating software.
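Amdahl's law, mentioned among the parallel computing topics, bounds the speedup obtainable when only part of a program parallelises. A one-function sketch of the formula:

```python
def amdahl_speedup(parallel_fraction, n_workers):
    # Amdahl's law: if a fraction p of the runtime parallelises perfectly
    # over n workers while (1 - p) stays serial, the overall speedup is
    # 1 / ((1 - p) + p / n).
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_workers)
```

Even with 90% parallel work, ten workers give only about a 5.3x speedup, and no number of workers can push it past 1 / (1 - p) = 10x; this is the usual argument for profiling the serial portion before buying more cores.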
3rd Semester; interactive systems
The objective of the 3rd semester project under the theme "Interactive systems" is to equip you with the abilities to design, build and test advanced multi-modal user interfaces, integrating the more traditional information sources with information derived from e.g. computer vision techniques, speech recognition and contextual knowledge, such as location. Information visualisation and presentation must be considered and integrated as well. You get to choose your focus freely within the above-mentioned fields; however, interaction design issues must be considered, and elements of user involvement, such as user requirements gathering and end-user tests, must be treated.
3rd semester courses
- Platforms and methods for multi-modal system architectures
- Research in Vision, graphics and interactive systems
The course in Platforms and methods for multi-modal system architectures will enable you to understand the principles of multi-modal user interaction, including speech-based interaction and computer vision, and to extend the methods for HCI GUI design to analyse, design and synthesise multi-modal user interaction.
In the course in Research in Vision, graphics and interactive systems, you will be introduced to state-of-the-art theories and methods within the core topics of the programme, i.e., vision, graphics and interactive systems.
4th Semester; Master’s thesis
This is the final project of the programme. You will work on a subject alone or in a group of 2-3 fellow students. During the master’s thesis work, you will learn to independently initiate and carry out collaboration, both within the discipline and across disciplines, and to take professional responsibility for your choices.
Master's thesis examples
Title: “The Virtual Window Wall"
by Casper Pedersen
The thesis described the design and implementation of a prototype system called the Virtual Window Wall, which made it possible to manoeuvre a virtual camera in a 2D space, producing virtual views of a scene from a number of reference cameras. The prototype was implemented as a C++ application utilising the OpenCL framework for computational speed, using the GPU to execute algorithms in parallel. The application works by capturing a number of calibration images with the reference cameras, which are used to perform a camera calibration. The calibration determines the camera calibration matrices and the maps used to rectify input images from the cameras; this is done offline. A rectified image pair was then used to determine disparity maps, and several algorithms, including some novel ones, were implemented for this purpose. The disparity maps indirectly give the depth, which makes it possible to produce the virtual view. This was done with the novel algorithm of backward 3D warping with disparity search, which utilises the epipolar constraint to determine disparities for the virtual view. Perceptually realistic results were achieved with only minor artefacts within a reasonable region of the reference cameras; the most noticeable artefact was due to occlusion. By executing the algorithms on the GPU, interactive manoeuvring of the virtual camera was achieved at 17 FPS. The cost space aggregation step was computationally heavy and hindered the whole program from running interactively, even though this part ran 17 times faster on the GPU than on the CPU.
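The way disparity "indirectly gives the depth" is the standard rectified-stereo relation Z = fB/d. A small sketch of this relation (the parameter values in the usage example are illustrative, not taken from the thesis):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # Rectified stereo: a scene point at depth Z metres projects with
    # disparity d = f * B / Z pixels, where f is the focal length in
    # pixels and B the camera baseline in metres. Invert for Z.
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, with a 700-pixel focal length and a 10 cm baseline, a 35-pixel disparity corresponds to a depth of 2 m; small disparities mean distant points, which is why disparity errors hurt most in the far field.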
Title: “Gaze Directed Hybrid Rendering using Photon Mapping"
by Jeppe Jensen & Simon J.K. Pedersen
It is computationally prohibitive to render photo-realistic computer graphics in real time, so the field of computer graphics is divided into two subfields, focusing on real-time performance and physical realism, respectively. This thesis merged the two by taking advantage of the fact that humans only have high-acuity vision in a very narrow cone at the centre of the visual field. The human retina has two regions: 1) the periphery, which covers a field of view of approximately 180 degrees but has very low spatial resolution, poor colour perception and little detail, and 2) the fovea, which sees in colour and at very high resolution. In this project, a hybrid computer graphics rendering approach was developed and tested. The hybrid combined a relatively low-quality, but real-time, rendering for the main part of the image with a very realistic rendering for the part of the image the user is actually looking at. The area being looked at was tracked with a gaze-direction tracker, which can determine the point on the screen being looked at by the user at any given time. Results showed that the approach yields a five-times speed-up compared to rendering the whole image in high quality, without sacrificing subjectively perceived visual quality.
Title: “Automatic Ship Identification using Computer Vision"
by Gabrielle Tranchet
This thesis presented a system that identifies an unknown ship in a picture by comparing it to a database of pictures of known ships. This is a problem that the Danish Navy faces on a daily basis: each time a ship enters Danish waters, they need a positive identification of it. The thesis first described the context and concept of the problem in detail. From this, the methods of existing related works were discussed, which led to the choice of designing the system around the Scale-Invariant Feature Transform (SIFT) for object detection and the GrabCut algorithm for preparing the database by segmenting the images. Finally, the two methods were combined with user interactivity to complete the system. The system was evaluated by stress testing it to find its capabilities and limits. The results showed that the system achieved a better success rate without the segmentation step, identifying 86.45% of the images out of a database of 105 images. It was also shown that the system remains efficient with images of lower quality, and that results improve with a bigger database. False positives were unproblematic, as they could be discarded with a dynamic threshold, thus markedly improving the results.
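SIFT-based identification pipelines typically compare descriptors by nearest-neighbour search with Lowe's ratio test. The following is a generic illustration of that matching idea, not the thesis's implementation, and the toy two-dimensional "descriptors" are made up:

```python
def match_descriptors(desc_a, desc_b, ratio=0.75):
    # For each descriptor in desc_a, find its two nearest neighbours in
    # desc_b (squared Euclidean distance) and accept the best match only
    # if it is clearly better than the second best (Lowe's ratio test).
    def dist2(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((dist2(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] < (ratio ** 2) * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

The ratio test is one standard way to suppress ambiguous matches: a descriptor that is almost equally close to two database entries is rejected rather than guessed, which keeps false positives manageable, much as the thesis's dynamic threshold does.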
Title: “Building an Analysis and Visualisation Tool for Media Research Purposes"
by Maxime Coupez
The aim of this thesis was to provide a media researcher with a tool for analysing high-level data recorded from a brain-computer interface. DR (Denmark’s Radio) wants to conduct media research on the emotional states of an audience watching, e.g., a pilot of a TV show. Mediathand is a company providing interactive services for on-demand and live media on mobile devices, and they had been asked to produce a system which collects this information. This thesis was carried out in close collaboration with DR and Mediathand and described the development of a tool for analysing and visualising data, integrated into an overall system which gathers and processes information about a user's emotional response via a brain-computer interface. The thesis took its starting point in a cross-disciplinary field involving design, psychology, computer science and interaction design. Several prototypes (including a final, fully functional prototype) were developed, employing a novel mix of usage- and user-centred design approaches, and all prototypes were validated in end-user tests.
Title: “Eye Gaze Tracking for Tracking Reading Progress"
by Julia Alexandra Vigo
The purpose of this thesis was to follow subjects' reading process using an eye gaze tracker. To this end, tests were conducted with dyslexic persons, using an eye gaze tracker together with an already implemented reading tutor based on speech recognition. The core task of the thesis was to map gaze data points to the corresponding words in the text. Algorithms were implemented to compensate for issues inherent in using eye gaze tracking to follow reading. To evaluate the accuracy of the system, the tracking error rate was calculated, using the manual transcriptions of the speech recorded during the experiments as references and the mapped gaze data points as hypotheses. The resulting tracking error rate was 114.6%, which is very high. However, when speech recognition is fused with eye gaze tracking for the reading tutor, the accuracy improves: eye gaze tracking actually improves the already implemented reading tutor that used only speech recognition.
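An error rate above 100%, as reported here, is possible whenever the metric counts insertions alongside substitutions and deletions relative to the reference, as in word-error-rate style scoring. As an assumption about the kind of formula involved (the thesis's exact metric is not specified here), a token-level edit-distance sketch:

```python
def tracking_error_rate(reference, hypothesis):
    # WER-style rate: (substitutions + insertions + deletions) divided by
    # the number of reference tokens, via the Levenshtein distance on
    # token lists. Because insertions count, the rate can exceed 1.0.
    r, h = reference, hypothesis
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[len(r)][len(h)] / len(r)
```

With a two-word reference and a four-word hypothesis containing two spurious words, the rate is already 100%; a hypothesis much longer than the reference can push it well past that, as in the 114.6% figure above.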