10.30 - 12.30 (including coffee break)
Meeting of Scientific Committee
12.30 - 1.30: Lunch
1.30 - 5.00
Workshop on Advances in High Performance Scientific Computing
1.30 - 1.35 Opening remarks
1.35 - 2.20
Stan Scott (Queen's University Belfast)
High performance scientific computation: numerical music or numerical noise? Slides
2.20 - 3.05
George A Constantinides (Imperial College London)
Computer Arithmetic in High Performance Reconfigurable Computing. Slides
3.05 - 3.30: Coffee break
3.30 - 4.15
Simon McIntosh-Smith (University of Bristol)
The impact of many-core computer architectures on numerical libraries: past, present and future. Slides
4.15 - 5.00
Peter Jimack (University of Leeds)
Development and application of parallel numerical tools for the adaptive multilevel simulation of phase-change problems. Slides
Research in high performance computation is often focused solely on performance, with scant attention paid to the accuracy of the results. Arguably, the correctness of the results is equally if not more important than the speed of computation: there is little merit in being able to compute numerical noise faster than anyone else. It will be argued that scientific computation is inherently flawed and that it is essential that computational scientists understand and control the approximations used in the mathematical model, the computational model and, most importantly, in the computer implementation. The latter, arising from the use of fixed-size floating point arithmetic, is a significant source of error which is exacerbated in high performance environments, where its insidious and poorly understood effects can cause a serious reduction in the accuracy of scientific computation. These issues will be illustrated through a case study.
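The rounding effects the abstract alludes to can be seen in a few lines. This is an illustrative sketch, not an example from the talk itself: IEEE 754 double-precision addition is not associative, so changing the evaluation order (as a parallel reduction schedule does) can change the computed result, and naive summation can silently lose terms to cancellation.

```python
import math

# Associativity fails even for innocuous-looking constants.
left = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)
print(left == right)   # False: 0.6000000000000001 vs 0.6

# Cancellation: a naive left-to-right sum absorbs and then cancels the 1.0,
# while compensated summation (math.fsum) returns the correctly rounded sum.
values = [1e16, 1.0, -1e16]
print(sum(values))        # 0.0
print(math.fsum(values))  # 1.0
```

On a parallel machine the reduction order typically varies from run to run, so the same kind of reordering error appears non-deterministically, which is exactly what makes it insidious and hard to diagnose.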
This talk will look at the role of computer arithmetic in achieving high performance computation for low energy or silicon area requirements. We will discuss the standards and deviations from those standards, as well as some more exotic alternatives. The implications of computer arithmetic properties for future algorithm specification, compilation, and implementation will be considered.
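The accuracy cost of trading word length for energy or area can be illustrated with the two standard IEEE 754 binary formats (my example, not the speaker's): round-tripping a double through single precision shows the representation error that a narrower datapath incurs.

```python
import struct

# Represent 0.1 in IEEE 754 binary64 (Python float) and binary32 ('f' format).
x64 = 0.1
x32 = struct.unpack('f', struct.pack('f', x64))[0]

print(x32 == x64)       # False: binary32 keeps only 24 significand bits
print(abs(x32 - x64))   # roughly 1.5e-9 relative to 0.1
```

Hardware implementations (e.g. on FPGAs) can go further and tailor the exponent and significand widths per operation, which is where the area/energy/accuracy trade-off becomes a genuine design parameter.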
Computer architectures are going through their most significant changes in decades. Rapidly increasing core counts and vector widths have replaced increases in clock speeds as the drivers of performance. More recently there has been an increase in heterogeneity of core types, with some processors integrating two or three different types of programmable core. All of these changes make it much more difficult to design and develop high performance numerical libraries.
In this talk we shall review these hardware trends and analyse recent developments in numerical libraries, including the PLASMA and MAGMA projects at ICL in Tennessee. We shall also discuss the implications for numerical libraries in the future, and touch on issues such as auto-tuning, active libraries and parallel composability ("who owns the parallelism?").
A widely used class of mathematical models for the description of phase-change problems is based around the phase-field formulation. In this approach the mathematically sharp interface between the solid and liquid phases is assumed diffuse, allowing the definition of a continuous (differentiable) order parameter which represents the phase of the material (typically −1 in the liquid and +1 in the solid regions). The evolution of this phase variable is governed by a free energy functional, and the resulting equations can be solved using standard techniques for partial differential equations (PDEs) without explicitly tracking the solid-liquid interface, thus allowing the simulation of arbitrarily complex morphologies. In this talk we will consider one such class of model, based upon a system of highly nonlinear parabolic PDEs, for the simulation of the solidification of a non-isothermal binary alloy. The challenges of this model include the need to resolve a moving feature (the solid-liquid interface) at very small length scales, the existence of vastly different time scales (leading to severe stiffness) and the desire for simulations to be in three space dimensions. In order to develop efficient and reliable simulation software it has been necessary to incorporate mesh adaptivity (for locally enhanced spatial resolution), an implicit stiff integrator in time, and the use of multigrid methods to solve the resulting nonlinear algebraic system of equations at each time step. Furthermore, applications in three space dimensions require the use of parallel implementations of the above on the high performance computing (HPC) facility at Leeds.
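The general structure of such a model can be sketched with the generic single-field, isothermal Ginzburg-Landau form below; this is an illustrative sketch only, not the specific non-isothermal binary-alloy system considered in the talk.

```latex
% Free energy functional over the domain \Omega, with interface-width
% parameter \epsilon and a double-well potential f whose minima sit at
% the two phases \phi = \pm 1:
F[\phi] = \int_\Omega \left( \frac{\epsilon^2}{2}\,|\nabla\phi|^2
          + f(\phi) \right) \mathrm{d}V,
\qquad f(\phi) = \tfrac{1}{4}\left(1 - \phi^2\right)^2 .

% Gradient-flow (Allen--Cahn type) evolution of the order parameter,
% a nonlinear parabolic PDE of the kind the abstract describes:
\tau \, \frac{\partial \phi}{\partial t}
    = -\frac{\delta F}{\delta \phi}
    = \epsilon^2 \nabla^2 \phi + \phi - \phi^3 .
```

The small parameter ε sets the diffuse interface width, which is why the interface must be resolved at very small length scales, and the stiff nonlinear reaction term is one source of the severe time-scale disparity mentioned above.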