David Patterson recently published a tech report titled How to Build a Bad Research Center. His research centers date back to the X-Tree project in 1977 and include the famous RISC, RAID, and Network of Workstations (NOW) projects; more recent examples are the Par Lab, the AMP Lab, and the ASPIRE Lab. In the report, Patterson summarizes eight pitfalls of building research centers and offers suggestions for avoiding them.
P.S. The Par Lab held an end-of-project celebration on May 31, 2013, with the talk slides available online. A book titled The Berkeley Par Lab: Progress in the Parallel Computer Landscape will be published soon.
Update: the June 2013 Top500 list is available now.
The Top500 supercomputer list will be updated at the International Supercomputing Conference on June 16. From several sources, we may expect an Intel MIC (i.e., Xeon Phi) based system to top the list. According to HPCwire, the new system will have a peak performance of 53-55 petaflops and a LINPACK performance of 27-29 petaflops. For the moment these numbers remain rumours, pending the official announcement, and assuming the system can be benchmarked in time to make the list.
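Taken at face value, the rumoured figures imply a LINPACK efficiency (Rmax/Rpeak) of roughly 50%. A quick back-of-the-envelope check, using only the rumoured numbers above (not official Top500 data):

```python
# Back-of-the-envelope LINPACK efficiency from the rumoured figures:
# peak (Rpeak) 53-55 Pflop/s, LINPACK (Rmax) 27-29 Pflop/s.
def linpack_efficiency(rmax_pflops, rpeak_pflops):
    """Return the LINPACK efficiency Rmax/Rpeak as a fraction."""
    return rmax_pflops / rpeak_pflops

# Pessimistic and optimistic ends of the rumoured ranges.
low = linpack_efficiency(27.0, 55.0)
high = linpack_efficiency(29.0, 53.0)
print(f"rumoured efficiency: {low:.0%} to {high:.0%}")  # roughly 49% to 55%
```

If these rumours hold, the efficiency would be noticeably lower than the roughly 65-80% typical of recent CPU-only or GPU-accelerated systems at the top of the list, which would be unsurprising for a first large Xeon Phi deployment.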
A paper titled Cache-aware Roofline model: Upgrading the loft is to appear in IEEE Computer Architecture Letters. The ideas and experiments in this paper are closely related to the ongoing research of PARSE members.
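For readers unfamiliar with the model the paper upgrades: the classic Roofline bound caps a kernel's attainable performance by the lower of the machine's peak compute rate and its memory bandwidth times the kernel's arithmetic intensity. A minimal sketch (the peak and bandwidth numbers below are made up for illustration; the cache-aware variant in the paper refines how the memory-traffic term is measured):

```python
# Classic Roofline bound: attainable performance is limited either by
# peak compute or by memory bandwidth times arithmetic intensity.
def roofline(peak_gflops, bandwidth_gbs, arithmetic_intensity):
    """Attainable performance (Gflop/s) for a kernel with the given
    arithmetic intensity (flops per byte of memory traffic)."""
    return min(peak_gflops, bandwidth_gbs * arithmetic_intensity)

# Hypothetical machine: 100 Gflop/s peak, 50 GB/s memory bandwidth.
# A streaming kernel at 0.25 flops/byte is memory-bound (12.5 Gflop/s);
# a dense kernel at 10 flops/byte hits the compute roof (100 Gflop/s).
print(roofline(100.0, 50.0, 0.25))
print(roofline(100.0, 50.0, 10.0))
```

The "loft upgrade" of the paper's title refers to making this bound cache-aware, i.e., accounting for traffic at each level of the memory hierarchy rather than only at DRAM.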
NVIDIA has updated their GPU and Tegra roadmaps at the GPU Technology Conference (GTC), held in the last week of March 2013.
The GPU roadmap introduces Volta as the successor to Maxwell, which in turn succeeds the current Kepler architecture. Volta will be NVIDIA's first 3D-stacked GPU, stacking DRAM dies on top of the logic die; using this technology, Volta is said to achieve a memory bandwidth of 1 TB/s.
The Tegra roadmap introduces the Tegra 5 (Logan) and Tegra 6 (Parker) SoCs. Logan will be the first Tegra to include a desktop-class GPU (Kepler architecture), allowing it to run CUDA programs. Parker will feature NVIDIA's own ARM-based CPU architecture (Denver) and an updated GPU core.
The proceedings of the ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA) 2013 are now available on the ACM Digital Library. There are two papers on polyhedral optimization:
The programme of ASPLOS 2013 was recently published online. There are a number of interesting publications related to GPUs:
Co-located with ASPLOS is the 6th edition of the GPGPU workshop, whose programme is also expected to be published in the coming weeks.
Location, Location, Location: The Role of Spatial Locality in Asymptotic Energy Minimization (PDF), by André DeHon, is a seminal paper presented at the ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA) 2013. It is an elegant paper with great insight; we need more papers like this!
Here is the article:
Redefining the Role of the CPU in the Era of CPU-GPU Integration (PDF), in IEEE Micro Nov.-Dec. 2012.
This article points out that, in the context of CPU-GPU integration, the workload characteristics on the CPU side change significantly, and it provides great insights into the resulting CPU architecture design issues. To name a few: