Best of EuroPar '14
Posted on 1-9-2014 by Cedric Nugteren Tags: GPU, programming, compiler, conference

This year’s EuroPar was held in Porto, a city at the mouth of the river Douro in the north of Portugal.

The conference started with two full days of workshops, including the 12th HeteroPar, the 7th MuCoCoS, and the 7th UCHPC. Some highlights from these workshops:

  • “A visual programming model to implement coarse-grained DSP applications on parallel and heterogeneous clusters”. An image-processing language to create flows of kernels.
  • “An Empirical Evaluation of GPGPU Performance Models”. A summary of several existing GPU performance models, evaluated on a couple of test cases.
  • “A Study of the Potential of Locality-Aware Thread Scheduling for GPUs”. This is my own work on optimising thread scheduling for multi-threaded architectures such as the GPU.
  • “Exploiting Hidden Non-uniformity of Uniform Memory Access on Manycore CPUs”. The point of this work was to demonstrate the non-uniformity effects in the Xeon Phi co-processors. Some effects were not fully understood.

The main program included the following interesting talks:

Oh, and here is a picture of my presentation:

[Photo: presenting at EuroPar ’14]
A Detailed GPU Cache Model Based on Reuse Distance Theory
Posted on 28-2-2014 by Gert-Jan van den Braak Tags: GPU, cache model

Last week we presented our paper on a GPU cache model at the 20th IEEE International Symposium on High Performance Computer Architecture (HPCA) in Orlando, Florida. The slides of the presentation are now available, and the source code of the cache model is available on GitHub. You can find the full publication on our publications page.

Abstract:
As modern GPUs rely partly on their on-chip memories to counter the imminent off-chip memory wall, the efficient use of their caches has become important for performance and energy. However, optimising cache locality systematically requires insight into and prediction of cache behaviour. On sequential processors, stack distance or reuse distance theory is a well-known means to model cache behaviour. However, it is not straightforward to apply this theory to GPUs, mainly because of the parallel execution model and fine-grained multi-threading. This work extends reuse distance to GPUs by modelling: 1) the GPU’s hierarchy of threads, warps, threadblocks, and sets of active threads, 2) conditional and non-uniform latencies, 3) cache associativity, 4) miss-status holding-registers, and 5) warp divergence. We implement the model in C++ and extend the Ocelot GPU emulator to extract lists of memory addresses. We compare our model with measured cache miss rates for the Parboil and PolyBench/GPU benchmark suites, showing a mean absolute error of 6% and 8% for two cache configurations. We show that our model is faster and even more accurate compared to the GPGPU-Sim simulator.
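To make the notion of reuse distance concrete, below is a minimal C++ sketch of the classic sequential definition (my own illustration, not the extended GPU model from the paper): the reuse distance of a memory access is the number of distinct addresses touched since the previous access to the same address, and a fully-associative LRU cache of capacity C hits exactly on accesses with reuse distance smaller than C.

```cpp
#include <cstdio>
#include <unordered_map>
#include <unordered_set>
#include <vector>

// Reuse distance of an access = number of distinct addresses touched
// since the previous access to the same address (-1 for a first access).
// A fully-associative LRU cache of capacity C hits exactly on accesses
// with reuse distance < C.
std::vector<long> reuse_distances(const std::vector<unsigned>& trace) {
    std::unordered_map<unsigned, std::size_t> last_seen;  // address -> last index
    std::vector<long> dists;
    for (std::size_t i = 0; i < trace.size(); ++i) {
        auto it = last_seen.find(trace[i]);
        if (it == last_seen.end()) {
            dists.push_back(-1);  // cold (compulsory) miss
        } else {
            // Count distinct addresses between the two accesses. This naive
            // scan is O(N^2) overall; real tools use a balanced tree for
            // O(N log N), but the quantity computed is the same.
            std::unordered_set<unsigned> distinct;
            for (std::size_t j = it->second + 1; j < i; ++j)
                distinct.insert(trace[j]);
            dists.push_back(static_cast<long>(distinct.size()));
        }
        last_seen[trace[i]] = i;
    }
    return dists;
}

int main() {
    const std::vector<unsigned> trace = {1, 2, 3, 1, 2, 4, 1};
    for (long d : reuse_distances(trace))
        std::printf("%ld ", d);  // prints: -1 -1 -1 2 2 -1 2
    std::printf("\n");
    return 0;
}
```

The contribution of the paper is extending this sequential notion to the GPU’s parallel execution model and cache organisation, as listed in the abstract above.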

Download attachment: PDF
Computing Laws: Origins, Standing, and Impact
Posted on 10-1-2014 by Zhenyu Ye Tags: architecture, computing laws

In the last group meeting, we had a casual discussion about the unreasonable effectiveness of simple laws in computing. It turns out that the December 2013 issue of IEEE Computer has a special section on Computing Laws: Origins, Standing, and Impact, which covers several classic laws.
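As a flavour of how simple such laws really are, here is a tiny C++ sketch of one well-known example, Amdahl’s law. This is purely my own illustration; I have not checked whether Amdahl’s law is among the laws covered in the special section.

```cpp
#include <cstdio>

// Amdahl's law: if a fraction p of a program's execution time can be
// parallelised over n processors, the overall speedup is bounded by
//   S(n) = 1 / ((1 - p) + p / n)
double amdahl_speedup(double p, double n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    // Even with many processors, a 95%-parallel program is capped at
    // a 20x speedup (the limit of S(n) as n grows).
    for (double n : {2.0, 8.0, 64.0, 1024.0})
        std::printf("n = %6.0f  speedup = %5.2f\n", n, amdahl_speedup(0.95, n));
    return 0;
}
```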

Three Fingered Jack: Productively Addressing Platform Diversity
Posted on 29-11-2013 by Zhenyu Ye Tags: CPU, GPU, FPGA, programming, architecture, compiler, vision, OpenCL, multicore, SIMD, High Level Synthesis

Three Fingered Jack: Productively Addressing Platform Diversity is the PhD thesis of David Sheffield from ParLab. The thesis addresses the issue of implementing computer vision applications (among other applications) on different targets, including multicore processors, data-parallel processors, and custom hardware. This work is related to some of our ongoing research projects.

History of GPGPU architectures
Posted on 28-10-2013 by Gert-Jan van den Braak Tags: GPU, architecture

As daylight saving time has ended, the days get shorter and the evenings longer, and more time becomes available for some reading by the fireplace. In case you would like to read up on GPU architectures, you may find below an introduction to the GPGPU architectures of the last couple of years.

Programmable GPU architectures have been around for about seven years now. In November 2006 NVIDIA launched its first fully programmable GPU architecture, the G80-based GeForce 8800. In June 2008 a major revision was introduced, the GT200. This first architecture is described in detail in IEEE Micro volume 28, issue 2 (March-April 2008). The article NVIDIA Tesla: A Unified Graphics and Computing Architecture describes not only the history of NVIDIA GPUs from dedicated graphics accelerators to a unified architecture suitable for GPGPU workloads, but also the CUDA programming model. Many architecture details of the GT200 have been revealed by benchmarks in the paper Demystifying GPU Microarchitecture through Microbenchmarking (PDF).

In 2010 NVIDIA launched its next big architecture: Fermi. Many details are described in the Fermi whitepaper and in the AnandTech article NVIDIA’s GeForce GTX 480 and GTX 470. Later that year an update of the Fermi architecture, oriented more towards gaming than GPGPU compute, was introduced: the GF104 in the GeForce GTX 460. More (architecture) details are described by AnandTech in NVIDIA’s GeForce GTX 460.

The latest GPGPU architecture by NVIDIA, Kepler, was released in 2012. Another NVIDIA whitepaper describes the GK110 architecture used in the Tesla K20 GPGPU compute card. A gaming version of Kepler was also made: the GK104 used in the GeForce GTX 680. A couple of articles on AnandTech describe the architecture in more detail: the GK104 and the GK110.

For the history of AMD’s programmable GPGPU architectures, the best place to start is the AMD Graphics Core Next (GCN) Architecture Whitepaper. It describes the evolution of AMD GPUs from fixed-function GPUs to the programmable VLIW5 and VLIW4 GPUs and finally the GCN architecture. Again, AnandTech gives some nice insights into the transition from VLIW5 to VLIW4 in the article AMD’s Radeon HD 6970 & Radeon HD 6950, and from VLIW to GCN in AMD’s Graphics Core Next Preview.

Top conferences and journals
Posted on 1-8-2013 by Cedric Nugteren Tags: conferences, journals

Google Scholar has released its 2013 list of top conferences/journals. In the category Computing Systems the following conferences/journals of interest are ranked in the top 20:

1. ISCA
2. Transactions on Parallel and Distributed Systems
3. ASPLOS
4. Supercomputing (SC)
5. IPDPS
12. HPCA
13. IEEE Micro
15. PPoPP

Rankings of interest are:

How to Build a Bad Research Center
Posted on 17-6-2013 by Zhenyu Ye Tags: architecture, research

David Patterson has recently published a tech report titled How to Build a Bad Research Center. His research centers date back to the X-Tree in 1977, and include the famous RISC, RAID, and Network of Workstations projects. His recent research centers include the Par Lab, the AMP Lab, and the ASPIRE Lab. In this report he summarizes eight pitfalls of building research centers and provides suggestions for avoiding them.

p.s. The Par Lab had an end-of-project celebration on May 31, 2013, with talk slides available. A book titled The Berkeley Par Lab: Progress in the Parallel Computer Landscape will be published soon.

What to Expect for the Coming Top500 Supercomputer List
Posted on 30-5-2013 by Zhenyu Ye Tags: architecture, supercomputer, Top500, CPU

Update: the June 2013 Top500 list is available now.

The Top500 Supercomputer List will be updated at the International Supercomputing Conference on June 16. From many sources, we may expect an Intel MIC (i.e., Xeon Phi) based system to top the list. According to HPCWire, the new system will have a peak performance of 53-55 petaflops and a LINPACK performance of 27-29 petaflops, i.e. roughly 50% efficiency. For the moment these numbers remain rumours until the official announcement, assuming the system can be tested in time to make the list.

Update: Details of the system are now available in HPCWire. It also links to a report on Tianhe-2 by Jack Dongarra.