Research at PARsE
We perform research in the field of parallel processing, which covers the following tightly connected areas:
- The processor architecture area includes topics such as (multi-)processor design, instruction set architectures, and accelerators (GPUs, FPGAs).
- We research the code generation and tuning process for parallel architectures, including (semi-)automatic code generation, compilation, and optimization techniques.
- We perform architecture-aware mapping of applications, mainly in the area of computer vision and image processing.
Contents of the research section
The research section of this website provides the following:
- The projects page lists current and past research projects, with a short explanation for each. Project duration varies from several weeks up to multiple years.
- The publications page gives an overview of all official publications from our group, including theses, conference papers, and journal articles. Unofficial publications, such as internship reports, are also available.
- The algorithms and tools page contains executables and source code for tools and mapped applications developed at our group.
- The conferences page provides an up-to-date list of conferences, including calls for papers and important dates.
Parallel processor architecture research
The shift towards parallel processing
Traditional processor design was driven by frequency scaling; today, the trend is towards hardware multithreading and computationally dense SIMD architectures. In other words, one complex high-frequency processor is being replaced by many simple parallel processors. This shift in hardware is accompanied by a shift in programming model: sequential programming languages are being replaced by parallel programming languages.
General purpose GPU programming
Graphics processing units (GPUs) are a hot topic in computer architecture design, since they offer higher raw compute potential than traditional CPUs. The GPU is an example of an architecture consisting of multiple SIMD processors that support hardware multithreading. For GPUs, NVIDIA introduced CUDA as a parallel programming environment for general-purpose applications. With CUDA, the programmer can exploit the GPU's parallelism to accelerate general-purpose applications.
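To illustrate the CUDA programming model described above, here is a minimal sketch (not code from our group): a vector-addition kernel in which each GPU thread computes one output element, launched from the host with enough thread blocks to cover the whole array.

```cuda
#include <cstdio>

// Each thread adds one pair of elements; the grid covers the full vector.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                                      // guard against overrun
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    // Unified (managed) memory keeps the host code short in this sketch.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();  // wait for the GPU before reading results

    printf("c[0] = %f\n", c[0]);  // 1.0 + 2.0 = 3.0

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The `<<<blocks, threads>>>` launch configuration is the key CUDA idiom: it maps the computation onto the GPU's many SIMD processors, with hardware multithreading hiding memory latency.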
GPU research in Eindhoven
Currently, thorough hardware and algorithm knowledge is required to use a GPU's compute power efficiently. The goal of our group is to make this massively parallel processing power accessible to a larger group of users. We mainly focus on computer vision and image processing applications.
Parallel architecture research in Eindhoven
Apart from GPUs, we also research other parallel architectures (such as FPGAs), as well as parallel computing in general. We focus on the architecture, the code generation and optimization process, and the mapping of applications onto these architectures.