ZISC Symposium on Algorithms for Linear Algebra


It is our pleasure to announce an ad-hoc ZISC Symposium on

Algorithms for Linear Algebra (a.k.a. Franconian “Sparse Days”)

with three guest speakers visiting the Lehrstuhl für Simulation (LSS, Inf 10) this week.

 

Time: Thursday, November 28, 13:15 – 17:00

Room: ZISC Seminar Room (Casa Huber)

The abstracts of the talks are included below.

 

Program:

13:15    Parallel performance of an iterative solver based on the Golub-Kahan bidiagonalization

Carola Kruse, CERFACS, Toulouse

 

14:00    Doubly stochastic scaling for the detection of block structures in sparse matrices

Daniel Ruiz, ENSEEIHT-IRIT, Toulouse

 

14:45    Break

 

15:00    Parallel solution of large sparse systems by direct and hybrid methods

Iain Duff, STFC RAL, UK and CERFACS

 

15:45    HyTeG: A multigrid framework for systems with a trillion (10^12) degrees of freedom and beyond

Nils Kohl and Dominik Thönnes, LSS-FAU, Erlangen

 

16:30   Open Discussion

 

All guests will also be available on Friday for interaction and discussion.

Uli Ruede, LSS, Inf 10

 

Abstracts

 

Parallel performance of an iterative solver based on the Golub-Kahan bidiagonalization

Carola Kruse, CERFACS, Toulouse

 

Abstract:

We present an iterative method based on a generalization of the Golub-Kahan bidiagonalization for solving indefinite linear systems with a 2×2 block structure. We focus in particular on our recent implementation of the algorithm using the parallel numerical library PETSc. Since the algorithm is a nested solver, we investigate different choices for parallel inner solvers and show its strong scalability for two Stokes test problems. The algorithm is found to be scalable for large sparse problems.
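
As an illustration of the setting (our own sketch, not part of the abstract), such 2×2 block systems are typically of saddle-point form,

\begin{bmatrix} M & A \\ A^{T} & 0 \end{bmatrix}
\begin{bmatrix} u \\ p \end{bmatrix}
=
\begin{bmatrix} f \\ g \end{bmatrix},
\qquad M = M^{T} \ \text{symmetric positive definite},

as they arise, for example, from mixed finite-element discretizations of the Stokes equations. The overall matrix is symmetric but indefinite, which rules out plain conjugate gradients and motivates bidiagonalization-based solvers.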

 

Doubly stochastic scaling for the detection of block structures in sparse matrices

Daniel Ruiz, ENSEEIHT-IRIT, Toulouse

 

Abstract:

The detection of block structures in matrices is an important challenge, first of all in data analysis, where matrices are a key tool for data representation, for example as data tables or adjacency matrices. For the former, finding a co-clustering is equivalent to finding a row and column block structure of the matrix; for the latter, finding a structure of dominant diagonal blocks leads to a clustering of the data. Moreover, block structure detection is also useful for the solution of linear systems: for instance, it helps to create efficient Block Jacobi preconditioners or to find groups of rows that are strongly decorrelated in order to apply a solver such as Block Cimmino.

In this work, we focus our analysis on the detection of dominant diagonal block structures by symmetrically permuting the rows and columns of matrices. Many algorithms have been designed to highlight such structures; among them, spectral algorithms play a key role. They can be divided into two kinds. The first kind consists of algorithms that project the matrix rows onto a low-dimensional space spanned by the leading eigenvectors of the matrix and then apply a procedure such as k-means to the reduced data; their main drawback is that the number of clusters to uncover must be known in advance. The second kind consists of iterative procedures that, at step k, look for the k-th best partition of the matrix into two sub-blocks; however, if the matrix structure contains more than two blocks, the best partition into two blocks may be a poor fit to the ground-truth structure of the matrix. We therefore propose a spectral algorithm that addresses both issues. To that end, we preprocess the matrix with a doubly stochastic scaling, which leverages the numerical information across blocks. We first show the benefits of such a scaling by using it as a preprocessing step for Louvain's algorithm to uncover community structures in networks. We also investigate several global modularity measures designed to quantify the consistency of a block structure; we generalise them so that they can handle doubly stochastic matrices, and we observe that our scaling tends to unify these measures.
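
As a rough sketch of this kind of preprocessing (our own toy illustration in Python/NumPy; the function name is ours and this is not necessarily the scaling algorithm used by the authors), a doubly stochastic scaling of a nonnegative matrix A can be obtained with a Sinkhorn-Knopp style iteration that alternately normalizes the row and column sums; the scaled matrix is diag(r) A diag(c):

    import numpy as np

    def doubly_stochastic_scaling(A, tol=1e-8, max_iter=1000):
        """Sinkhorn-Knopp style iteration: alternately rescale rows and
        columns of a nonnegative matrix until all row and column sums are
        close to 1. Returns scaling vectors r, c such that
        diag(r) @ A @ diag(c) is (approximately) doubly stochastic."""
        A = np.asarray(A, dtype=float)
        c = np.ones(A.shape[1])
        for _ in range(max_iter):
            r = 1.0 / (A @ c)       # make row sums of diag(r) A diag(c) equal to 1
            c = 1.0 / (A.T @ r)     # then make the column sums equal to 1
            row_sums = (A * np.outer(r, c)).sum(axis=1)
            if np.max(np.abs(row_sums - 1.0)) < tol:  # row sums drift after the column step
                break
        return r, c

    # Toy example: a matrix with two hidden diagonal blocks
    A = np.array([[4.0, 3.0, 0.1, 0.2],
                  [2.0, 5.0, 0.3, 0.1],
                  [0.2, 0.1, 6.0, 2.0],
                  [0.1, 0.3, 3.0, 4.0]])
    r, c = doubly_stochastic_scaling(A)
    S = np.diag(r) @ A @ np.diag(c)   # scaled, nearly doubly stochastic matrix

The intent of such a scaling is to balance the weight of rows and columns so that no single large entry dominates the subsequent spectral analysis.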

Then, we describe our algorithm, which is based on spectral elements of the scaled matrix. Our method builds on the principle that the leading singular vectors of a doubly stochastic matrix should exhibit a staircase pattern when their coordinates are sorted in increasing order, provided the matrix has a hidden block structure. Tools from signal processing, initially designed to detect jumps in signals, are applied to the sorted vectors in order to detect these steps and thus find the separations between the blocks. However, these tools are not specifically designed for this purpose, so we also describe the procedures we have implemented to address the issues encountered.
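
To make the staircase idea concrete (again a simplified toy sketch of ours: it assumes the number of blocks is known and uses the naive "largest gaps" rule instead of the signal-processing step detectors described in the talk), one can sort the entries of a leading singular vector of the scaled matrix and cut at the largest jumps between consecutive sorted values:

    import numpy as np

    def block_separators(S, n_blocks):
        """Toy block detection: sort a leading singular vector of the
        scaled matrix S and cut it at the (n_blocks - 1) largest jumps."""
        U, _, _ = np.linalg.svd(S)
        # For a doubly stochastic matrix the first singular vector is
        # (nearly) constant, so the second one carries the block information.
        v = U[:, 1]
        order = np.argsort(v)                     # row permutation that sorts v
        gaps = np.diff(v[order])                  # jumps between sorted values
        cuts = np.sort(np.argsort(gaps)[-(n_blocks - 1):]) + 1
        return order, cuts                        # permuted rows + cut positions

    # order, cuts = block_separators(S, n_blocks=2)  # S: scaled matrix from the sketch above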

We finally illustrate the approach with applications of matrix block structure detection, such as community detection in networks.

 

Parallel solution of large sparse systems by direct and hybrid methods

Iain Duff, STFC RAL, UK and CERFACS

 

Abstract:

We discuss a range of algorithms and codes for the solution of sparse systems that we have developed in the EU Horizon 2020 project NLAFET, which finished on 30 April 2019.

We used two approaches to get good single node performance. For symmetric systems we used task-based algorithms based on an assembly tree representation of the factorization. We then used runtime systems for scheduling the computation on both multicore CPU nodes and GPU nodes [4]. The second approach was to design a new parallel threshold Markowitz algorithm [2] based on Luby’s method [5] for obtaining a maximal independent set in an undirected graph. This was a significant extension since our graph model is a directed graph.
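
For background (a generic Python sketch of Luby's randomized method on an undirected graph; the directed-graph extension of [2] is considerably more involved), each round lets every still-active vertex draw a random priority, adds the local minima to the independent set, and removes them together with their neighbours:

    import random

    def luby_mis(adj):
        """Luby's randomized maximal independent set algorithm.
        adj: dict mapping each vertex to the set of its neighbours
        (undirected graph). Returns a maximal independent set."""
        mis = set()
        active = set(adj)                    # vertices not yet decided
        while active:
            # Each active vertex draws a random priority.
            priority = {v: random.random() for v in active}
            # A vertex enters the MIS if it beats all of its active neighbours.
            winners = {v for v in active
                       if all(priority[v] < priority[u]
                              for u in adj[v] if u in active)}
            mis |= winners
            # Remove the winners and their neighbours from further consideration.
            removed = set(winners)
            for v in winners:
                removed |= adj[v]
            active -= removed
        return mis

    # Toy example: a path graph 0 - 1 - 2 - 3
    graph = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
    print(luby_mis(graph))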

We then extended the scope of these two approaches to exploit distributed memory parallelism. In the first case, we base our work on the block Cimmino algorithm [3] using the ABCD software package coded by Zenadi in Toulouse [6]. The kernel for this algorithm is the direct factorization of a symmetric indefinite submatrix, for which we use the above symmetric code. To extend the unsymmetric code to distributed memory, we use the Zoltan code from Sandia [1] to partition the matrix into singly bordered block diagonal form and then use the above unsymmetric code on the blocks on the diagonal.
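
For orientation (a standard textbook formulation, not the specific ABCD implementation), the block Cimmino method partitions the rows of Ax = b into blocks A_i x = b_i and iterates

x^{(k+1)} = x^{(k)} + \omega \sum_{i=1}^{p} A_i^{+}\bigl(b_i - A_i x^{(k)}\bigr),

where A_i^{+} denotes the pseudoinverse of the i-th row block and \omega is a relaxation parameter; the more mutually orthogonal the row blocks are, the faster the iteration converges, which is why partitionings that decouple the blocks are attractive.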

We show the performance of our codes on industrial-strength large test problems on a heterogeneous platform. Our codes, which are available on GitHub, are shown to outperform other state-of-the-art codes.

 

References

[1] E. BOMAN, K. DEVINE, L. A. FISK, R. HEAPHY, B. HENDRICKSON, C. VAUGHAN, U. CATALYUREK, D. BOZDAG, W. MITCHELL, AND J. TERESCO, Zoltan 3.0: Parallel Partitioning, Load-balancing, and Data Management Services; User’s Guide, Sandia National Laboratories, Albuquerque, NM, 2007. Tech. Report SAND2007-4748W. http://www.cs.sandia.gov/Zoltan/ug_html/ug.html.

[2] T. A. DAVIS, I. S. DUFF, AND S. NAKOV, Design and implementation of a parallel Markowitz threshold algorithm, Technical Report RAL-TR-2019-003, Rutherford Appleton Laboratory, Oxfordshire, England, 2019. NLAFET Working Note 22. Submitted to SIMAX.

[3] I. S. DUFF, R. GUIVARCH, D. RUIZ, AND M. ZENADI, The augmented block Cimmino distributed method, SIAM J. Scientific Computing, 37 (2015), pp. A1248–A1269.

[4] I. S. DUFF, J. HOGG, AND F. LOPEZ, A new sparse symmetric indefinite solver using a posteriori threshold pivoting, Technical Report RAL-TR-2018-012, Rutherford Appleton Laboratory, Oxfordshire, England, 2018. NLAFET Working Note 21. Submitted to SISC.

[5] M. LUBY, A simple parallel algorithm for the maximal independent set problem, SIAM J. Computing, 15 (1986), pp. 1036–1053.

[6] M. ZENADI, The solution of large sparse linear systems on parallel computers using a hybrid implementation of the block Cimmino method, PhD thesis (Thèse de Doctorat), Institut National Polytechnique de Toulouse, Toulouse, France, December 2013.

 

 

HyTeG: A multigrid framework for systems with a trillion (10^12) degrees of freedom and beyond

Nils Kohl and Dominik Thönnes, LSS-FAU, Erlangen

 

Abstract:

Insightful, finely resolved simulations of physical models such as Earth-mantle convection require the solution of systems of equations of enormous size. A global resolution of the Earth's mantle of about 1 km results in more than a trillion (10^12) unknowns. Only solvers with optimal complexity, such as multigrid methods, can achieve the required scalability.
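
A rough back-of-the-envelope check of this number (our own estimate, not part of the abstract): with an Earth radius of roughly 6371 km and a core radius of roughly 3480 km, the mantle volume is

V \approx \tfrac{4}{3}\pi\left(6371^{3} - 3480^{3}\right)\,\mathrm{km}^{3} \approx 9 \times 10^{11}\,\mathrm{km}^{3},

so a mesh with cells of about 1 km^3 indeed contains on the order of 10^12 cells, and correspondingly more than a trillion unknowns.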

In this talk we present results from the project TerraNeo within the SPPEXA priority program of the DFG. The HyTeG framework implements parallel, matrix-free finite-element multigrid solvers for extreme-scale simulations such as those required by modern geophysical applications. The framework combines excellent performance, scalability, and geometric flexibility through structured refinement of unstructured meshes and fully distributed domain partitioning.