Semantics-aware optimization framework for multi-scale computational methods

Muhammad Hasan Jamal, Purdue University


An important emerging problem domain in computational science and engineering is the development of multi-scale computational methods for complex problems that span multiple spatial and temporal scales. An attractive approach to solving these problems is recursive decomposition: the problem is broken up into a tree of loosely coupled sub-problems that can be solved independently at different scales and granularities and then coupled back together to obtain the desired solution. Given a mesh decomposition, a particular problem can be solved in myriad ways by coupling the sub-problems together according to different tree schedules. As we argue in this thesis, the space of possible schedules is vast, the performance gap between an arbitrary schedule and the best schedules is potentially quite large, and the likelihood that a domain scientist can find the best schedule to solve a problem on a particular machine is vanishingly small. Moreover, a given undecomposed mesh can be partitioned in an exponentially large number of ways, and effective mesh partitioning is essential for good performance of multi-scale computational methods. Because the computational cost associated with different scales can vary by multiple orders of magnitude, finding an optimal partitioning of the mesh, choosing appropriate timescales for the partitions, and determining the number of partitions at each timescale is non-trivial. Existing partitioning tools, such as METIS, overlook the constraints posed by multi-scale methods, leading to sub-optimal partitions with a high performance penalty. To handle multi-scale problems appropriately, partitioners and schedulers need to be equipped with domain-specific knowledge that helps generate near-optimal partitions and coupling schedules.
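To make the claim about the size of the schedule space concrete, the following toy sketch (not from the thesis; all names are illustrative) builds a balanced binary decomposition tree over a list of element ids and counts the valid bottom-up coupling orders. Since every child sub-problem must be solved before its parent, these orders are the linear extensions of the tree poset, which the hook-length formula for forests counts as n! divided by the product of all subtree sizes.

```python
from math import factorial

def decompose(elems):
    """Recursively bisect a list of elements into a coupling tree (toy stand-in
    for a real mesh partitioner)."""
    if len(elems) <= 2:  # leaf sub-problem
        return {"elems": elems, "children": []}
    mid = len(elems) // 2
    return {"elems": elems,
            "children": [decompose(elems[:mid]), decompose(elems[mid:])]}

def subtree_sizes(node, sizes):
    """Record the node count of every subtree rooted in the tree."""
    n = 1 + sum(subtree_sizes(c, sizes) for c in node["children"])
    sizes.append(n)
    return n

def num_schedules(root):
    """Count bottom-up coupling orders: n! / prod(subtree sizes)."""
    sizes = []
    n = subtree_sizes(root, sizes)
    denom = 1
    for s in sizes:
        denom *= s
    return factorial(n) // denom

tree = decompose(list(range(32)))   # a 32-element toy mesh
print(num_schedules(tree))          # already an astronomically large count
```

Even for this tiny 32-element mesh the count exceeds 10^22, which is why exhaustive search over schedules is hopeless and a guided heuristic is needed.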
In this thesis, we present a semantics-aware optimization framework that exploits domain-specific knowledge to produce optimized mesh partitionings automatically and to generate efficient coupling schedules for solving these complex multi-scale computational methods via recursive decomposition. Our framework adopts the inspector-executor paradigm: the problem is inspected, and a novel heuristic finds an effective implementation, i.e., a decomposition and its schedule, based on domain properties evaluated by a cost model. Experimental results show that the derived implementation achieves optimal sequential and parallel performance when executed by a parallel runtime system (Cilk). We demonstrate that our cost model is highly correlated with actual application runtime. Mesh decompositions produced by our approach perform as well as, if not better than, decompositions produced by state-of-the-art partitioners such as METIS, and even those manually constructed by domain scientists. The schedule generated by our domain-specific heuristic also outperforms alternative scheduling strategies, as well as over 99% of randomly generated recursive decompositions sampled from the space of possible solutions. We explore two problem domains within solid mechanics, structural dynamics and peridynamics, and show that with our framework a good domain-specific cost model is all that is required to support a broad range of computational applications in each domain, without having to rewrite libraries for each one.
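The inspector-executor loop described above can be sketched as follows. This is a minimal illustration, not the thesis's actual framework: each candidate "implementation" is reduced to a list of (element count, timescale) partitions, and the stand-in cost model charges every partition work proportional to the number of time steps it must take over the simulated horizon. All names and numbers here are assumptions for illustration.

```python
T = 1.0  # total simulated time (assumed for this sketch)

def estimated_cost(partitions):
    """Toy cost model: a partition of n_elems elements stepping at timescale
    dt performs T/dt steps, each costing work proportional to n_elems."""
    return sum(n_elems * (T / dt) for n_elems, dt in partitions)

def inspect(candidates):
    """Inspector: evaluate each candidate implementation with the cost model
    and keep the cheapest one."""
    return min(candidates, key=estimated_cost)

def execute(partitions):
    """Executor: placeholder for handing the chosen plan to a parallel
    runtime (Cilk, in the thesis)."""
    return [f"solve {n} elems at dt={dt}" for n, dt in partitions]

candidates = [
    [(100, 0.01)],             # one partition, everything at the fine scale
    [(10, 0.01), (90, 0.1)],   # fine timescale confined to 10 elements
]
best = inspect(candidates)
plan = execute(best)
```

The example shows why multi-scale-aware partitioning pays off: confining the fine timescale to the 10 elements that need it cuts the estimated cost from 10000 to 1900 units, a gap a scale-oblivious partitioner would never see.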




Kulkarni, Purdue University.

Subject Area

Computer Engineering|Engineering
