
Schedule and Abstracts - Monash Workshop on Numerical PDEs, 15-19 Feb, 2016


Schedule
(abstracts are included below the schedule)
(all talks are held in auditorium S14 on the Monash Clayton Campus) 
Monday 9:00-10:00 registration and coffee (maths building, ground floor lobby)
10:00-11:00 Ian Sloan, UNSW PDE with random coefficients – a high-dimensional problem (invited) SLIDES
11:00-12:00 Rob Falgout, Livermore Multigrid Methods: Scalable algorithms for extreme-scale computing (invited) SLIDES
12:00-1:30 lunch (maths building, ground floor lobby)
1:30-2:00 William McLean, UNSW Subdiffusion in a nonconvex polygon SLIDES
2:00-2:30 Kyle Talbot, Monash Uniform temporal convergence of numerical schemes for miscible displacement through porous media SLIDES
2:30-3:00 coffee (maths building, ground floor lobby)
3:00-3:30 Johannes Reiner, U Queensland Progressive Failure Modelling in Composite Laminates SLIDES
3:30-4:00 Krishna Saxena, Auckland Finite Element Modelling of Auxetic Metamaterials
4:30-6:00 welcome reception (maths building, 3rd floor lobby)
Tuesday 9:30-10:30 Remi Abgrall, Zuerich Recent advances in numerical approximation of hyperbolic systems: some history and future (invited)
10:30-11:00 coffee (maths building, ground floor lobby)
11:00-12:00 Jacob Schroder, Livermore Multigrid Reduction in Time: A flexible and scalable approach to parallel-in-time (invited)
12:00-1:30 lunch (maths building, ground floor lobby)
1:30-2:00 Duy Minh Dang, U Queensland Optimal mean-variance portfolio allocation: a Hamilton-Jacobi-Bellman PDE approach
2:00-2:30 Quoc Thong Le Gia, UNSW Higher order Quasi-Monte Carlo integration for Bayesian Estimation SLIDES
2:30-3:00 coffee (maths building, ground floor lobby)
3:00-3:30 Michael Feischl, UNSW A posteriori error estimates for the Eddy-Current-LLG equations SLIDES
3:30-4:00 Alexander Howse, Waterloo Nonlinearly Preconditioned Optimization on Grassmann Manifolds for Tucker Tensor Approximations SLIDES
Wednesday 9:30-10:30 Neela Nataraj, IIT Bombay Finite element methods for Plate Bending Problems (invited)
10:30-11:00 coffee (maths building, ground floor lobby)
11:00-12:00 Clinton Groth, Toronto High-Order Anisotropic Adaptive Mesh Refinement Finite-Volume Schemes for Multi-Scale Physically-Complex Flows (invited) SLIDES
12:00-1:30 lunch (maths building, ground floor lobby)
1:30-2:00 Linda Stals, ANU Adaptive refinement recovery after fault simulation SLIDES
2:00-2:30 Jesse Chan, Virginia Tech GPU-accelerated Bernstein-Bezier DG methods for wave problems SLIDES
2:30-3:00 coffee (maths building, ground floor lobby)
3:00-3:30 Ryan McClarren, Texas A&M High Fidelity, Moment-Based Methods for Particle Transport: The confluence of PDEs, Optimization, and HPC SLIDES
3:30-4:00 Tony Roberts, Adelaide Modeling, analysis and scientific computation of complex multiscale systems SLIDES
4:00-4:30 Yahya Alnashri, Monash A generic Framework for Variational Inequalities
4:30-5:00 Alexander Gilbert, UNSW Applying quasi-Monte Carlo methods to an eigenproblem with a random coefficient
6:00-8:00 BBQ (rock garden, in front of maths building)
Thursday 9:30-10:30 Markus Hegland, ANU A review of the sparse grid combination technique for the solution of partial differential equations (invited) SLIDES
10:30-11:00 coffee (maths building, ground floor lobby)
11:00-12:00 Jörg Frauendiener, Otago Computational Gravity (invited) SLIDES
12:00-1:30 lunch (maths building, ground floor lobby)
1:30-2:00 Jordan Pitt, ANU Numerically Solving the 1D Serre Equations in the Presence of Discontinuities SLIDES
2:00-2:30 Zhenquan Li, Charles Sturt U A new computational technique for fluid flows SLIDES
2:30-3:00 coffee (maths building, ground floor lobby)
3:00-3:30 Santosh Kumar, Thapar U Finite volume approximation and analysis of conservation laws arising in neuronal variability SLIDES
3:30-4:00 Hans De Sterck, Monash High-Order Finite Volume Methods for Magnetohydrodynamics on Adaptive Cubed-Sphere Grids SLIDES
Friday 9:30-10:30 Gianmarco Manzini, IMATI CNR The Mimetic Finite Difference Method and its application to diffusion problems (invited)
10:30-11:00 coffee (maths building, ground floor lobby)
11:00-11:30 Marian Moldenhauer, ZIB Berlin Optimal Hip Implant Positioning SLIDES
11:30-12:00 Jerome Droniou, Monash An arbitrary-order scheme for convection-diffusion equations SLIDES

Abstracts – invited speakers (in the order of the presentations)

Ian Sloan (I.Sloan@unsw.edu.au) University of New South Wales, Australia

Title: PDE with random coefficients – a high-dimensional problem

Abstract: This talk describes recent computational developments in partial differential equations with random coefficients treated as a high-dimensional problem. The prototype of such problems is the underground flow of water or oil through a porous medium, with the permeability of the material treated as a random field. (The stochastic dimension of the problem is high if the random field needs a large number of random variables for its effective description.) There are many approaches to the problem, ranging from the polynomial chaos method initiated by Norbert Wiener to the Monte Carlo and (of particular interest to my group) Quasi-Monte Carlo methods. In recent years there has been significant progress in the development and analysis of algorithms in these areas.
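
To make the high-dimensional-integral viewpoint concrete, here is a minimal sketch (not taken from the talk) in which the expected value of a point value of the solution to a toy one-dimensional diffusion problem with an s-term random coefficient is estimated both by plain Monte Carlo and by a Halton quasi-Monte Carlo point set; the coefficient expansion, the quantity of interest and the point set are illustrative assumptions only.

# Toy illustration (not from the talk): estimate E[u(1/2)] for the 1D problem
# -(a(x,y) u')' = 1 on (0,1), u(0) = u(1) = 0, where the coefficient
# a(x,y) = 2 + sum_j y_j sin(j*pi*x)/j^2 depends on s parameters y_j ~ U(-1,1).
# Plain Monte Carlo is compared with a Halton quasi-Monte Carlo point set;
# both treat the expectation as an s-dimensional integral.
import numpy as np

s, m = 8, 63                          # stochastic dimension, interior grid points
h = 1.0 / (m + 1)

def solve(y):
    """Conservative finite-difference solve of -(a u')' = 1 for parameters y."""
    xf = np.linspace(0.0, 1.0, m + 2)             # nodes including the boundary
    xh = 0.5 * (xf[:-1] + xf[1:])                 # face midpoints (m+1 of them)
    ah = 2.0 + sum(y[j] * np.sin((j + 1) * np.pi * xh) / (j + 1) ** 2
                   for j in range(s))             # coefficient at faces (> 0)
    A = (np.diag(ah[:-1] + ah[1:])
         - np.diag(ah[1:-1], 1) - np.diag(ah[1:-1], -1)) / h ** 2
    u = np.linalg.solve(A, np.ones(m))
    return u[m // 2]                              # quantity of interest: u(1/2)

def halton(n, dim):
    """First n points of the Halton sequence in [0,1]^dim."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19][:dim]
    pts = np.zeros((n, dim))
    for d, p in enumerate(primes):
        for i in range(n):
            f, k, v = 1.0, i + 1, 0.0
            while k > 0:
                f /= p
                v += f * (k % p)
                k //= p
            pts[i, d] = v
    return pts

n = 256
rng = np.random.default_rng(0)
mc  = np.mean([solve(2 * rng.random(s) - 1) for _ in range(n)])
qmc = np.mean([solve(2 * p - 1) for p in halton(n, s)])
print(f"MC estimate  of E[u(1/2)]: {mc:.6f}")
print(f"QMC estimate of E[u(1/2)]: {qmc:.6f}")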

Rob Falgout (rfalgout@llnl.gov) Lawrence Livermore National Laboratory, USA

Title: Multigrid Methods: Scalable algorithms for extreme-scale computing

Abstract: Multigrid methods are important techniques for efficiently solving huge linear systems and they have already been shown to scale effectively on parallel computers with millions of cores. Future exascale architectures will require solvers to exhibit even higher levels of concurrency (1B cores), minimize data movement, exploit machine heterogeneity, and demonstrate resilience to faults. While considerable research and development remains to be done, multigrid approaches are ideal for addressing these challenges. In this talk, we will introduce the multigrid method and discuss its essential features. We will begin with classical geometric multigrid and then move on to algebraic multigrid (AMG). We will also discuss the added complexity of developing parallel multigrid methods and software, especially in the context of the exascale machines on the horizon, and touch on some current research topics as well.
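
As a minimal illustration of the geometric multigrid ingredients mentioned above (smoothing, restriction, prolongation and coarse-grid correction), here is a hedged sketch of a recursive V-cycle for the 1D Poisson problem; the weighted-Jacobi smoother, transfer operators and problem sizes are illustrative choices, not taken from the talk.

# Minimal geometric multigrid sketch: a recursive V-cycle for -u'' = f on (0,1)
# with homogeneous Dirichlet data, weighted-Jacobi smoothing, full-weighting
# restriction and linear-interpolation prolongation.
import numpy as np

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h ** 2
    return r

def jacobi(u, f, h, sweeps=3, w=2.0 / 3.0):
    for _ in range(sweeps):
        u[1:-1] += w * 0.5 * (u[:-2] + u[2:] + h ** 2 * f[1:-1] - 2 * u[1:-1])
    return u

def restrict(r):
    """Full-weighting restriction from a fine grid to the next coarser grid."""
    rc = np.zeros((len(r) + 1) // 2)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    return rc

def prolong(ec, n):
    """Linear-interpolation prolongation of a coarse correction to n fine points."""
    e = np.zeros(n)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return e

def v_cycle(u, f, h):
    if len(u) <= 3:                               # coarsest grid: one unknown
        u[1] = 0.5 * h ** 2 * f[1] + 0.5 * (u[0] + u[2])
        return u
    u = jacobi(u, f, h)                           # pre-smoothing
    r = residual(u, f, h)
    ec = v_cycle(np.zeros((len(u) + 1) // 2), restrict(r), 2 * h)
    u += prolong(ec, len(u))                      # coarse-grid correction
    return jacobi(u, f, h)                        # post-smoothing

n = 129
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)                # exact solution u = sin(pi x)
u = np.zeros(n)
for k in range(10):
    u = v_cycle(u, f, h)
    print(f"cycle {k + 1}: residual norm {np.linalg.norm(residual(u, f, h)):.2e}")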

Remi Abgrall (remi.abgrall@math.uzh.ch) Universität Zürich, Switzerland

Title: Recent advances in numerical approximation of hyperbolic systems: some history and future

Abstract: In this talk, I will explain some problems arising from the numerical approximation of hyperbolic systems by means of finite element-like techniques. In order to convey the ideas, I will mostly focus on simple scalar problems, and show the extension to systems at the end. I will start with some simple facts about these problems, recall some classical techniques and explain what the issues are. In a second part, I will provide some hints on recent advances: parameter-free methods, unsteady problems, shock stabilisation.

Jacob Schroder (schroder2@llnl.gov) Lawrence Livermore National Laboratory, USA

Title: Multigrid Reduction in Time: A flexible and scalable approach to parallel-in-time

Abstract: The need for parallel-in-time algorithms is currently being driven by the rapidly changing nature of computer architectures. Future speedups will come through ever increasing numbers of cores, but not faster clock speeds, which are stagnant. Previously, increasing clock speeds could compensate for traditional sequential time stepping algorithms as the problem size increased. However, this is no longer the case, leading to the sequential time integration bottleneck and the need to parallelize in time. In this talk, we examine an optimal-scaling parallel time integration method, multigrid reduction in time (MGRIT). MGRIT applies multigrid to the time dimension by solving the (non)linear systems that arise when solving for multiple time steps simultaneously. The result is a versatile approach that is nonintrusive and wraps existing time evolution codes. MGRIT allows for various time discretizations (e.g., Runge-Kutta and multistep) and for adaptive refinement/coarsening in time and space. Nonlinear problems are handled through full approximation storage (FAS) multigrid. Some recent theoretical results, as well as practical results for a variety of problems, will be presented, e.g., explicit/implicit time integration, nonlinear diffusion and compressible Navier-Stokes.
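
The following sketch (not the speaker's code) illustrates the two-level idea on the scalar test problem u'(t) = λu(t): for a linear problem, two-level MGRIT with F-relaxation reduces to the parareal update shown below, in which the expensive fine propagations are independent across coarse intervals; the propagators, step sizes and test problem are illustrative assumptions.

# Minimal two-level parallel-in-time sketch for u'(t) = lam*u(t), u(0) = 1.
# The fine propagator F takes m backward-Euler steps per coarse interval; the
# coarse propagator G takes one.  The iteration
#   U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) - G(U_n^k)
# is the parareal update, which coincides with two-level MGRIT with
# F-relaxation for this linear problem.
import numpy as np

lam, T = -1.0, 8.0
N, m = 32, 16                       # coarse intervals, fine steps per interval
dT = T / N
dt = dT / m

def F(u):                           # fine propagator over one coarse interval
    for _ in range(m):
        u = u / (1.0 - lam * dt)    # backward Euler
    return u

def G(u):                           # coarse propagator: one backward-Euler step
    return u / (1.0 - lam * dT)

# serial fine solution at the coarse time points, for reference
ref = np.empty(N + 1)
ref[0] = 1.0
for n in range(N):
    ref[n + 1] = F(ref[n])

# initial guess from the coarse propagator alone
U = np.empty(N + 1)
U[0] = 1.0
for n in range(N):
    U[n + 1] = G(U[n])

for k in range(5):
    Fu = np.array([F(U[n]) for n in range(N)])    # embarrassingly parallel part
    Unew = np.empty(N + 1)
    Unew[0] = 1.0
    for n in range(N):                            # cheap sequential coarse sweep
        Unew[n + 1] = G(Unew[n]) + Fu[n] - G(U[n])
    U = Unew
    print(f"iteration {k + 1}: max error vs serial fine solve "
          f"{np.max(np.abs(U - ref)):.2e}")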

Neela Nataraj (neela@math.iitb.ac.in) Indian Institute of Technology Bombay, India

Title: Finite element methods for Plate Bending Problems

Abstract: In this talk, after giving a brief introduction to some plate models, we consider the von Karman equations that describe the bending of thin elastic plates. Conforming and non-conforming finite element methods are employed to approximate the displacement and Airy stress functions. Techniques for deriving optimal-order theoretical error estimates are explained. The results of numerical experiments which justify the theoretical estimates are presented.

Clinton Groth (groth@utias.utoronto.ca) University of Toronto, Canada

Title: High-Order Anisotropic Adaptive Mesh Refinement Finite-Volume Schemes for Multi-Scale Physically-Complex Flows

Abstract: A family of high-order central essentially non-oscillatory (CENO) finite-volume schemes with adaptive mesh refinement (AMR) is described for the prediction of a range of multi-scale physically-complex flows having both disparate and anisotropic spatial scales. The CENO schemes are based on a hybrid solution reconstruction procedure that combines an unlimited high-order k-exact, least-squares reconstruction technique, following from a fixed central stencil, with a monotonicity-preserving limited piecewise linear least-squares reconstruction. Switching in the hybrid procedure is determined by a solution smoothness indicator that detects whether or not the solution is accurately represented on the mesh. The solution smoothness indicator can also be used in the formulation of refinement criteria for directing mesh adaptation. The proposed approach avoids some of the complexities associated with the original essentially non-oscillatory (ENO) and other weighted ENO (WENO) schemes and is thereby well suited for solution reconstruction on irregular and unstructured meshes. The development of the high-order finite-volume approach for both multi-block body-fitted and more generally unstructured meshes in three dimensions is considered. In the case of the former, the scheme has been developed and applied in conjunction with an efficient and highly scalable anisotropic AMR that uses an unstructured binary tree hierarchical data structure to permit local anisotropic refinement of the grid in a preferred coordinate direction. The anisotropic AMR scheme and block connectivity permit coarsening of the grid blocks in a manner that is independent of refinement history and allow the mesh to rapidly re-adapt for unsteady applications. Applications will be discussed for a range of problems including high-speed compressible flows, viscous incompressible and compressible flows, as well as reactive flows. The potential of the combined CENO and anisotropic AMR schemes for the simulation of physically-complex flows in an efficient and accurate manner will be demonstrated.
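
As a rough, simplified illustration of the hybrid switching idea (not the authors' scheme), the sketch below performs an unlimited least-squares cubic reconstruction on a fixed central stencil in 1D and falls back to a limited linear reconstruction where a crude smoothness indicator flags the data as non-smooth; the indicator, the test data and all parameters are stand-ins for the actual CENO ingredients.

# Simplified 1D sketch of the hybrid switching idea (illustrative only; the
# indicator below is a crude stand-in for the actual CENO smoothness
# indicator).  Each cell first receives an unlimited least-squares cubic
# reconstruction from a fixed 5-cell stencil; if the cubic does not represent
# the data smoothly, the cell falls back to a minmod-limited linear one.
import numpy as np

n = 40
x = (np.arange(n) + 0.5) / n                        # cell centres
u = np.where(x < 0.5, np.sin(2 * np.pi * x), 0.2)   # smooth region + a jump

def minmod(a, b):
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

limited_slope = minmod(np.r_[0.0, np.diff(u)], np.r_[np.diff(u), 0.0]) * n

flags = []
for i in range(2, n - 2):
    sten = slice(i - 2, i + 3)
    coeffs = np.polyfit(x[sten], u[sten], 3)        # unlimited cubic (k-exact stand-in)
    fit = np.polyval(coeffs, x[sten])
    # crude smoothness indicator: relative misfit of the cubic on its stencil
    misfit = np.sum((fit - u[sten]) ** 2)
    scale = np.sum((u[sten] - u[sten].mean()) ** 2) + 1e-14
    smooth = misfit / scale < 1e-3
    flags.append((i, "cubic" if smooth else "limited linear"))

for i, kind in flags:
    if kind == "limited linear":
        print(f"cell {i:2d} near x = {x[i]:.3f}: limited linear, slope {limited_slope[i]:+.2f}")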

Markus Hegland (Markus.Hegland@anu.edu.au) The Australian National University, Australia

Title: A review of the sparse grid combination technique for the solution of partial differential equations

Abstract: The sparse grid combination technique uses extrapolation to enhance the performance of given numerical solvers. In particular, the approximation takes the form

u_c = Σ_γ c_γ u(γ),

where u(γ) is the given numerical solution with numerical parameters γ and where the c_γ are scalar factors. This method is particularly well suited for parallel computing and high-dimensional problems. In the talk this approach will be illustrated for the solution of the gyrokinetic equations of plasma physics based on the given solver GENE developed at TU Munich. The performance of the method depends on the choice of the parameters γ and the coefficients. It will be seen how the choice of the γ and the c_γ leads to quasi-optimal approximations and may even be used to deal with computer hardware faults. We will also consider the solution of elliptic PDEs and eigenvalue problems and, if time permits, PDEs originating in the determination of density estimators and machine learning.
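
As a concrete special case (not spelled out in the abstract), the classical two-dimensional combination technique chooses the parameters γ = (i, j) to be the levels of anisotropic grids and the coefficients c_γ to be ±1:

% Classical 2D combination technique: combine solutions u_{i,j} computed on
% anisotropic grids of levels (i, j) to approximate the level-n solution.
\[
  u^{\mathrm{c}}_{n} \;=\; \sum_{i+j=n} u_{i,j} \;-\; \sum_{i+j=n-1} u_{i,j},
\]
% i.e. c_{i,j} = +1 on the diagonal i + j = n and c_{i,j} = -1 on i + j = n-1;
% each u_{i,j} is computed independently on a comparatively small grid, which
% is what makes the technique attractive for parallel computing.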

Jörg Frauendiener (joergf@maths.otago.ac.nz) University of Otago, New Zealand

Title: Computational Gravity

Abstract: Computational gravity is the part of computational physics that is concerned with the solution of Einstein's field equations of general relativity. This is a theory describing space, time and matter on a very fundamental and geometrical level. The geometric aspect of the theory entails several problems for the numerical treatment. In this talk I will discuss these fundamental issues and show some of the applications that have been developed over the recent years.

Gianmarco Manzini (marco.manzini@imati.cnr.it) IMATI CNR, Italy

Title: The Mimetic Finite Difference Method and its application to diffusion problems

Abstract: Mimetic discretizations provide a mathematical framework that allows the construction of families of schemes for the numerical solution of partial differential equations. From an historical viewpoint, the mimetic approach dates back to the work on Geometric Integration of Whitney and the discretization of differential operators by a duality principle from the Russian school of Samarskii. In this talk we review the Mimetic Finite Difference (MFD) method and its application to diffusion problems [1]. The MFD method works on unstructured meshes with cells of very general geometric shape, polygons in 2D and polyhedra in 3D. The construction of the method is based on the two main concepts of polynomial consistency and local stability, which respectively determine the order of accuracy of the approximation and the well-posedness of the discrete problem. In particular, any order of accuracy can be attained by changing the degree of the polynomials that are used to formulate the method. The duality principle is used to establish a variational form of the method, which allows an efficient implementation by a local construction on each cell and a global assembly as in the finite element method. A variational reformulation in a finite element setting is possible and provides an equivalent family of methods called "virtual elements". The MFD method also presents very strong connections with other families of numerical schemes whose design is based on similar principles. These connections are also discussed in this talk.

[1] L. Beirão da Veiga, K. Lipnikov, G. Manzini, "The Mimetic Finite Difference Method for Elliptic Problems", MS&A: Modeling, Simulation and Applications, Springer, 2014.


Abstracts – contributed talks (in the order of the presentations)

William McLean (w.mclean@unsw.edu.au) University of New South Wales, Australia

Title: Subdiffusion in a nonconvex polygon

Abstract: We consider the spatial discretisation of a time-fractional diffusion equation in a polygonal domain using continuous, piecewise-linear finite elements. If the domain is convex, then the method is known to be second-order accurate in L^2, but if the domain has a re-entrant corner then the error analysis breaks down because the associated Poisson problem is no longer H^2-regular. For a quasi-uniform family of triangulations with mesh parameter h, the error is of order h^(1+β) if the largest re-entrant corner has angle π/β with 1/2 < β < 1, but a suitable local refinement strategy restores h^2 convergence. Analogous results for the classical heat equation were proved in 2006 by Chatzipantelidis, Lazarov, Thomée and Wahlbin.

This is joint work with Bishnu Lamichhane (Newcastle) and Kim-Ngan Le (UNSW).

Kyle Talbot (kyle.talbot@monash.edu) Monash University, Australia

Title: Uniform temporal convergence of numerical schemes for miscible displacement through porous media

Abstract: The single-phase, miscible displacement through a porous medium of one incompressible fluid by another is described by a nonlinearly coupled elliptic-parabolic system. Convergence analyses exist for a variety of methods for the numerical approximation of the solution to this system, including finite elements, finite volumes and discontinuous Galerkin. These analyses typically demonstrate that the approximation to the concentration variable converges in a space-time averaged sense, e.g. in L^2(0,T; L^2(Ω)). I will illustrate that for a family of numerical methods that includes hybrid finite volumes, mixed finite volumes and mimetic finite differences, the concentration can be approximated uniformly in time, i.e. in L^∞(0,T; L^2(Ω)), thereby providing an admissible approximation to the concentration at any given point in time. This convergence is possible without assuming uniqueness or regularity of the solution to the continuous problem.

Johannes Reiner (j.reiner@uq.edu.au) University of Queensland, Australia

Title: Progressive Failure Modelling in Composite Laminates

Abstract: Reliable and efficient failure simulation within finite elements (FE) is an ongoing and challenging task. The Phantom Node Method (PNM) allows for arbitrary modelling of discontinuities while preserving elemental locality and standard FE techniques. It accounts for geometrical and material nonlinearities by incorporating the total Lagrangian formulation for large deformation and the cohesive concept at discontinuity surfaces, respectively. The PNM is further extended to simulate different failure modes and their interaction in composite laminates. It is shown that the advanced PNM is able to predict typical fracture measures such as crack density or stiffness reduction with good accuracy. Furthermore, progressive failure interaction is quantitatively and qualitatively evaluated. Results agree well with experimental findings.

Krishna Saxena (ksax995@aucklanduni.ac.nz) University of Auckland, New Zealand

Title: Finite Element Modelling of Auxetic Metamaterials

Abstract: Auxetic materials exhibit a negative Poisson's ratio, i.e. they display a lateral expansion when stretched longitudinally, and vice versa. The classical theory of elasticity restricts the Poisson's ratio of isotropic solids to be no less than -1. Metamaterials, on the other hand, can display extreme negative Poisson's ratios by manipulating the geometry of the unit cell.

A family of 2D and 3D auxetic structures was created using computer-aided design. They were simulated using the finite element analysis package Abaqus to determine the presence of negative Poisson's ratios and to test whether they are a possible solution for potential applications. The effect of element type on the negative Poisson's ratio of these structures was studied. The effect of the geometry of the unit cell on the negative Poisson's ratio of these structures was also investigated using FEM. These structures were then tested for synclasticity (out-of-plane bending) using FEM. With the finite element approach, we will demonstrate how material properties can be tuned by changing the geometry, irrespective of their composition. The presentation will give an overview of finite element modelling of auxetic materials and structures.

Duy Minh Dang (duyminh.dang@uq.edu.au) University of Queensland, Australia

Title: Optimal mean-variance portfolio allocation: a Hamilton-Jacobi-Bellman PDE approach

Abstract: In this talk, we discuss a numerical Hamilton-Jacobi-Bellman partial differential equation approach for the mean-variance portfolio allocation problem under jump diffusion models. The focus of this talk is on how to handle realistic portfolio constraints, jumps in the risky asset, and a semi-self-financing strategy which involves positive cash withdrawals but gives superior results in terms of mean-variance criteria. Tests based on estimation of parameters from historical time series show that the strategy is robust to estimation ambiguities.

Quoc Thong Le Gia (qlegia@unsw.edu.au) University of New South Wales, Australia

Title: Higher order Quasi-Monte Carlo integration for Bayesian Estimation

Abstract: We analyze Quasi-Monte Carlo numerical integration methods in Bayesian estimation of solutions to parametric operator equations with holomorphic dependence on the parameters. Such problems arise in numerical uncertainty quantification and in Bayesian inversion of operator equations with distributed uncertain inputs, such as uncertain coefficients, uncertain domains or uncertain source terms and boundary data.

We establish error bounds for higher-order Quasi-Monte Carlo quadrature for Bayesian estimation. The holomorphic dependence on the parameters implies, in particular, regularity of the parametric solution and of the parametric Bayesian posterior density in SPOD weighted spaces. This, in turn, implies that the Quasi-Monte Carlo quadrature methods are applicable to these problem classes, with dimension-independent convergence rates O(N^(-1/p)) of N-point HoQMC approximated Bayesian estimates, where 0 < p < 1 depends only on the sparsity class of the uncertain input in the Bayesian estimation.

This is joint work with Josef Dick (UNSW), Robert Gantner and Christoph Schwab (ETH Zürich).

Michael Feischl (m.feischl@unsw.edu.au) University of New South Wales, Australia

Title: A posteriori error estimates for the Eddy-Current-LLG equations

Abstract: We analyze a numerical method for the coupled system of the eddy current equation in three space dimensions with the Landau-Lifshitz-Gilbert equation in a bounded domain. The unbounded domain is discretized by means of finite-element/boundary-element coupling. Even though the considered problem is strongly nonlinear, the numerical approach is constructed such that only two linear systems per time step have to be solved. We prove unconditional weak convergence (of a subsequence) towards a weak solution as well as strong convergence with a priori error estimates if a sufficiently smooth strong solution exists. In this case, the strong solution is unique and coincides with each weak solution.

Alexander Howse (ajmhowse@gmail.com) University of Waterloo, Canada

Title: Nonlinearly Preconditioned Optimization on Grassmann Manifolds for Tucker Tensor Approximations

Abstract: Two new accelerated optimization algorithms are presented for computing approximate Tucker tensor decompositions by minimizing error measured in the Frobenius norm, subject to orthonormality constraints on factor matrices.

The first is a nonlinearly preconditioned conjugate gradient (NPCG) algorithm, wherein a nonlinear preconditioner is used to generate a direction which replaces the gradient in the standard nonlinear conjugate gradient (NCG) iteration. The second is a nonlinear generalized minimal residual (N-GMRES) algorithm, in which a linear combination of past iterates and a tentative new iterate, generated by a nonlinear preconditioner, is minimized to produce an improved search direction. The higher order orthogonal iteration (HOOI), the standard workhorse algorithm for computing approximate Tucker decompositions, is used as the nonlinear preconditioner in NPCG and N-GMRES.

The Euclidean versions of these methods are extended to the manifold setting, where optimization over a Cartesian product of Grassmann manifolds is used to handle orthonormality constraints and to allow isolated minimizers. A Grassmann manifold, Gr(n,p), is the set of all p-dimensional subspaces of R^n, and a given element may be represented by an orthonormal matrix. Several modifications are required for use on manifolds: logarithmic maps are used to determine required tangent vectors, retraction mappings are used in the line search update step, vector transport operators are used to compute linear combinations of tangent vectors, and the Euclidean gradient is replaced by the manifold equivalent.
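
As a small illustration of two of these ingredients (not the authors' implementation), the sketch below represents a point on Gr(n, p) by an orthonormal matrix, projects a Euclidean gradient onto the tangent (horizontal) space, and uses a QR-based retraction, which is one common choice, inside a plain Riemannian gradient-descent step for a simple trace objective; logarithmic maps and vector transports are not shown, and all names and parameters are illustrative.

# Grassmann-manifold sketch: a point on Gr(n, p) is represented by an n x p
# matrix Y with orthonormal columns; we project a Euclidean gradient onto the
# horizontal space and retract with a QR factorisation.
import numpy as np

def project_to_tangent(Y, G):
    """Horizontal-space projection of a Euclidean gradient: (I - Y Y^T) G."""
    return G - Y @ (Y.T @ G)

def retract(Y, xi):
    """QR-based retraction: map the tangent vector xi back onto the manifold."""
    Q, R = np.linalg.qr(Y + xi)
    # fix the column-sign ambiguity of QR so the retraction is well defined
    return Q * np.sign(np.sign(np.diag(R)) + 0.5)

# Riemannian gradient descent for f(Y) = -trace(Y^T A Y), whose minimiser is
# the dominant p-dimensional invariant subspace of the symmetric matrix A.
rng = np.random.default_rng(0)
n, p = 20, 3
M = rng.standard_normal((n, n))
A = M + M.T
Y = np.linalg.qr(rng.standard_normal((n, p)))[0]

for _ in range(500):
    G = -2.0 * (A @ Y)                     # Euclidean gradient of f
    xi = project_to_tangent(Y, G)
    Y = retract(Y, -0.02 * xi)             # small fixed step, for the sketch only

top = np.sort(np.linalg.eigvalsh(A))[-p:]
print("sum of top eigenvalues:        ", top.sum())
print("trace(Y^T A Y) after descent:  ", np.trace(Y.T @ A @ Y))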

Several variants are provided for the update parameter in NPCG, two for each of the Polak-Ribière, Hestenes-Stiefel, and Hager-Zhang formulae. NPCG and N-GMRES are compared to HOOI, NCG, a limited-memory BFGS quasi-Newton algorithm, and a manifold trust-region algorithm using randomly generated and real-life tensor data, with and without noise, arising from applications in computer vision and handwritten digit recognition. Numerical results show that N-GMRES and NPCG with update parameters determined by modified Polak-Ribière and Hestenes-Stiefel rules accelerate HOOI significantly for large tensors, in cases where there are significant amounts of noise in the data, and when high-accuracy results are required, and are a clear improvement over state-of-the-art methods.

Linda Stals (linda.stals@anu.edu.au) Australian National University, Australia

Title: Adaptive refinement recovery after fault simulation

Abstract: The use of adaptive refinement techniques in combination with finite element methods is well established. Furthermore, iterative techniques that incorporate information about the grid structure, such as the multigrid method, have been shown to be a very efficient approach to solving various types of partial differential equations. These techniques now form an integral part of many sophisticated parallel software packages. However, the advent of larger and larger parallel machines leads to a very modern twist of this tale, and that is how to recover if a fault occurs in one of the processors.

In this talk we present a parallel adaptive multigrid method that uses dynamic data structures to store a nested sequence of meshes and the evolving solution. After a fail-stop fault, the data residing on the faulty processor will be lost. However, the neighboring processors contain enough information such that a consistent mesh can be reconstructed in the faulty domain with the goal of resuming the computation without having to restart from scratch.

Jesse Chan (jchan985@gmail.com) Virginia Tech, USA

Title: GPU-accelerated Bernstein-Bezier DG methods for wave problems

Abstract: The computationally intensive nature of time-explicit nodal discontinuous Galerkin methods is well-suited to implementation on Graphics Processing Units (GPUs). We evaluate the use of Bernstein-Bezier bases as an alternative to nodal polynomials for discontinuous Galerkin discretizations and show how to exploit properties of derivative and lift operators specific to Bernstein polynomials. Issues of efficiency and numerical stability are discussed in the context of a model wave propagation problem, and computational experiments comparing high-order nodal bases and high-order Bernstein bases are presented.
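
One reason Bernstein bases are attractive, hinted at above, is the sparsity of their derivative operators; the following small sketch (illustrative only, not the authors' GPU code) builds the one-dimensional Bernstein differentiation matrix, which has only two nonzeros per row, and verifies it against a finite-difference derivative.

# If p(x) = sum_i c_i B_i^N(x) with B_i^N the degree-N Bernstein polynomials
# on [0,1], then p'(x) has degree-(N-1) Bernstein coefficients N*(c_{i+1}-c_i),
# so the differentiation matrix is bidiagonal (a dense matrix would be needed
# for a nodal basis of the same degree).
import math
import numpy as np

def bernstein_basis(N, x):
    """Matrix whose columns are B_0^N(x), ..., B_N^N(x) at the points x."""
    x = np.asarray(x)
    return np.column_stack([math.comb(N, i) * x ** i * (1 - x) ** (N - i)
                            for i in range(N + 1)])

N = 5
rng = np.random.default_rng(1)
c = rng.standard_normal(N + 1)                   # Bernstein coefficients of p

# sparse Bernstein differentiation: degree-N coefficients -> degree-(N-1)
D = N * (np.eye(N, N + 1, 1) - np.eye(N, N + 1))
d = D @ c

x = np.linspace(0.01, 0.99, 7)
p_prime_bernstein = bernstein_basis(N - 1, x) @ d
# reference: differentiate p numerically from its degree-N representation
eps = 1e-6
p = lambda t: bernstein_basis(N, t) @ c
p_prime_fd = (p(x + eps) - p(x - eps)) / (2 * eps)

print("max difference vs finite differences:",
      np.max(np.abs(p_prime_bernstein - p_prime_fd)))
print("nonzeros per row of D:", np.count_nonzero(D, axis=1))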

Ryan McClarren (rgm@tamu.edu) Texas A&M, USA

Title: High Fidelity, Moment-Based Methods for Particle Transport: The confluence of PDEs, Optimization, and HPC

Abstract: The calculation of the transport of particles is important in many applications including rarefied gas dynamics, plasma physics, and nuclear energy systems. In this talk I will motivate the choice of moment-based methods for solving particle transport problems, and discuss the difficulties such approaches have. To obtain physically meaningful solutions, the discretization of the original PDEs can depend on the solution to an optimization problem. I will show how these optimization problems arise, what methods perform best in terms of cost and accuracy, and how these problems can be well suited for high performance computing.

Tony Roberts (anthony.roberts@adelaide.edu.au) University of Adelaide, Australia

Title: Modeling, analysis and scientific computation of complex multiscale systems

Abstract: We are developing a systematic approach, both analytic and computational, to extract compact, accurate, system-level models of complex physical and engineering systems. The wide-ranging methodology is to develop and support the patch scheme, which empowers large-scale simulation and prediction through computations on only small, well-separated patches of microscale simulators. The continuing challenge is to couple these microscale simulations on microscale patches across un-simulated space and to establish efficiency, accuracy, consistency and stability on the macroscale. Comprehensively accounting for multiscale interactions between subgrid processes and macroscale variations ensures stability and accuracy. In particular, I will discuss meso-time coupling between patches designed for exascale computing. Based on dynamical systems theory and analysis, our approach empowers systematic analysis and understanding for optimal macroscopic simulation for forthcoming exascale computing.

Yahya Alnashri (yahya.alnashri@monash.edu) Monash University, Australia

Title: A generic framework for variational inequalities

Abstract: Gradient schemes form a generic framework that offers a unified convergence analysis of many conforming and non-conforming numerical methods for second-order PDEs.

In this talk I will apply the gradient schemes framework to different kinds of elliptic variational inequalities, which have various applications in, for example, fluid dynamics, elasticity and biomathematics. With the theoretical results of this framework, we can recover the convergence rates previously established for some methods applied to variational inequalities. I will also focus on completely new results coming from establishing the Hybrid Mixed Mimetic (HMM) method for variational inequalities. Finally, besides providing test cases taken from the literature, I will present an entirely different test case which has an available exact solution.

Alexander Gilbert (alexander.gilbert@student.unsw.edu.au) University of New South Wales, Australia

Title: Applying quasi-Monte Carlo methods to an eigenproblem with a random coefficient

Abstract: In this talk we consider an eigenvalue problem whose coefficient depends on a finite, but possibly high, number of stochastic parameters; as such, the eigenvalues and corresponding eigenfunctions also depend on this stochasticity. The aim is to approximate the expected value of the principal eigenvalue by formulating it as a high-dimensional integral, so that quasi-Monte Carlo (QMC) quadrature may be used. First we discretise in space using finite element (FE) methods and then apply QMC methods to the FE approximations. We show that the principal eigenvalue belongs to the spaces required for QMC theory and provide numerical results.
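
A hedged sketch of the overall pipeline described above is given below for a toy one-dimensional eigenproblem: a finite-difference (rather than finite-element) discretisation in space, the smallest eigenvalue computed per parameter sample, and the expectation approximated by a simple, unoptimised rank-1 lattice rule; the coefficient expansion and the generating vector are illustrative assumptions (in practice lattice generating vectors are constructed by component-by-component algorithms).

# Toy version (not from the talk): approximate E[lambda_1] for the eigenproblem
# -(a(x, y) u')' = lambda u on (0,1), u(0) = u(1) = 0, where
# a(x, y) = 2 + sum_j y_j sin(j*pi*x)/j^2 and y_j ~ U(-1/2, 1/2).  A finite
# difference discretisation gives a symmetric matrix whose smallest eigenvalue
# approximates lambda_1; its expectation over y is an s-dimensional integral
# evaluated here with a deliberately simple rank-1 lattice rule.
import numpy as np

s, m = 8, 63                                 # stochastic dimension, interior nodes
h = 1.0 / (m + 1)
xf = np.linspace(0.0, 1.0, m + 2)
xh = 0.5 * (xf[:-1] + xf[1:])                # coefficient evaluated at cell faces

def smallest_eigenvalue(y):
    ah = 2.0 + sum(y[j] * np.sin((j + 1) * np.pi * xh) / (j + 1) ** 2
                   for j in range(s))
    A = (np.diag(ah[:-1] + ah[1:])
         - np.diag(ah[1:-1], 1) - np.diag(ah[1:-1], -1)) / h ** 2
    return np.linalg.eigvalsh(A)[0]

N = 257                                                  # number of lattice points
g = np.array([1, 69, 113, 33, 87, 55, 21, 101])[:s]      # illustrative generating vector
lattice = np.outer(np.arange(N), g) % N / N              # rank-1 lattice in [0,1)^s

vals = [smallest_eigenvalue(p - 0.5) for p in lattice]   # map points to [-1/2, 1/2)^s
print(f"QMC estimate of E[lambda_1] with N = {N}: {np.mean(vals):.6f}")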

Jordan Pitt (jordan.pitt@anu.edu.au) The Australian National University, Australia

Title: Numerically Solving the 1D Serre Equations in the Presence of Discontinuities

Abstract: The Serre equations are a shallow water approximation to the incompressible Euler equations that retain the terms of the Shallow Water Wave Equations while introducing dispersive terms that make the Serre equations more relevant when wave amplitude is significant compared to water depth. Most of the literature numerically solves these equations for smooth initial conditions, however, in real world applications such as the Dam-Break problem it is important to handle discontinuous initial conditions.

Thus the numerical scheme of Le Metayer et al. (2010) has been extended to build second- and third-order methods to investigate the capabilities of this scheme in the presence of discontinuities, which we expect to be good since it utilises a finite volume method.

These methods were validated and their order of convergence was confirmed for smooth initial conditions using the analytic soliton solution. The methods also compared well with the experimental results of Hammack and Segur (1978), which contain a discontinuity.

To further investigate the behaviour of discontinuities, a smooth approximation of the dam-break problem was used to observe how the numerical solutions behaved as the smooth initial conditions approached a discontinuous change in water depth. The results of these methods were compared to the results of two second-order finite difference methods: the second-order centred finite difference approximation to the Serre equations, and the scheme of El et al. (2006). These schemes showed the same behaviour in the presence of steep gradients as those derived from the numerical scheme of interest, and included all the behaviour observed thus far in the literature for discontinuous and smooth initial conditions.

References:

Hammack, J. L. and Segur, H. (1978). "The Korteweg-de Vries equation and water waves. Part 3. Oscillatory waves." Journal of Fluid Mechanics, 84(2), 337–358.

El, G., Grimshaw, R. H. J., and Smyth, N. F. (2006). "Unsteady undular bores in fully nonlinear shallow-water theory." Physics of Fluids, 18, 027104.

Le Metayer, O., Gavrilyuk, S., and Hank, S. (2010). “A numerical scheme for the Green-Naghdi model.” Journal of Computational Physics, 229(6), 2034–2045.

Zhenquan Li (jali@cdsu.edu.au) Charles Sturt University, Australia

Title: A new computational technique for fluid flows

Abstract: Mathematicians and physicists believe that the explanation and prediction of flows can be found through an understanding of solutions to the Navier-Stokes equations or their extensions, such as the k-ϵ model for turbulence. Currently, analytical solutions of the Navier-Stokes equations or their extensions are not available. After extensive accuracy analysis, it was found that meshing is one of the main issues in finding accurate numerical solutions of differential equations. I have proposed two mesh refinement methods (one for both 2D and 3D) and two streamline tracking methods for computational (or CFD) velocity fields based on the qualitative theory of differential equations. I have obtained positive results when verifying the computational accuracy of these proposed methods with analytical velocity fields. I have also conducted a sensitivity analysis which examines whether the same results for analytical velocity fields are preserved by numerical solutions of the Navier-Stokes equations. The comparisons of the outputs from the proposed methods with analytical velocity fields and numerical benchmarks show that the proposed methods can identify singular points, asymptotic lines (planes) and separation curves. In summary, we have achieved positive outcomes on the accuracy and reliability of the proposed methods. After the development of the computer programs, the proposed methods can be widely applied to many problems related to fluid flows in our daily life.

I will briefly introduce the foundation, the proposed mesh refinement methods, accuracy and reliability verifications, and the computational complexity of the new computational methods in my presentation. The comparisons for singular points and asymptotic lines between exact and numerical results for analytical velocity fields will be presented with illustrations. Some of the comparisons between the benchmarks and numerical results for lid-driven flow will also be provided. A number of examples and demonstrations are used for explanation. Possible applications in practice and future research in computational science, computing science and other relevant disciplines will be introduced at the end. If the time provided is not enough, I will omit some of the parts listed above from my presentation.

Santosh Kumar (santosh2365@gmail.com) Thapar University, India

Title: Finite volume approximation and analysis of conservation laws arising in neuronal variability

Abstract: The objective of this work is to present and analyze a numerical approximation of a single neuronal model. Firstly, we derive a hyperbolic conservation law for the distribution of neuronal firing intervals containing pointwise delay as well as advance. We have modified the classical neuronal model and included the intensity of the incoming current. Thereafter we propose a numerical approximation based on a finite volume scheme for conservation laws with a source term. In this scheme the homogeneous part is solved by a finite volume approximation and the source term is approximated by a linear interpolation. The developed numerical method is analyzed for stability and convergence. We prove bounded-variation stability and also derive convergence estimates. In the last part, we present some numerical experiments to verify the theory developed for the numerical approximation.
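
The splitting idea described above, the homogeneous part advanced by a finite-volume step and the source handled separately, can be illustrated by the following generic sketch (not the paper's scheme); the flux, the source term and the explicit source update used here are deliberately simple stand-ins for the neuronal model's flux and its delay/advance source.

# Generic fractional-step finite-volume update for u_t + f(u)_x = s(u), with
# f(u) = u (advection) and s(u) = -u (decay) as simple stand-ins: the
# homogeneous part is advanced with an upwind finite-volume step and the
# source is then applied separately.
import numpy as np

n, T = 200, 0.5
dx = 1.0 / n
dt = 0.4 * dx                                   # CFL-limited time step
x = (np.arange(n) + 0.5) * dx
u = np.exp(-200 * (x - 0.3) ** 2)               # initial cell averages

t = 0.0
while t < T:
    # homogeneous step: upwind fluxes F_{i+1/2} = u_i, since f(u) = u with speed 1 > 0
    flux = np.r_[0.0, u]                        # inflow boundary value 0 on the left
    u = u - dt / dx * (flux[1:] - flux[:-1])
    # source step: explicit update for s(u) = -u
    u = u + dt * (-u)
    t += dt

print("total mass after transport with decay:", (u * dx).sum())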

Hans De Sterck (Hans.DeSterck@monash.edu) Monash University, Australia

Title: High-Order Finite Volume Methods for Magnetohydrodynamics on Adaptive Cubed-Sphere Grids

Abstract: Simulations in spherical geometries have important applications in space physics and geoscience. We describe highly accurate finite-volume methods for hyperbolic conservation laws on parallel dynamically adaptive cubed-sphere grids. In their most simple form, cubed-sphere grids are obtained starting from a Cartesian grid on a cube that is deformed into a sphere, resulting in a grid with quasi-uniform spacing and without polar singularities. We develop a fourth-order central essentially non-oscillatory (CENO) finite-volume scheme for hyperbolic conservation laws on these adaptive cubed-sphere grids. Specific challenges include formulating high-order accurate discretizations on computational cells with non-planar surfaces, maintaining high-order accuracy at the sector boundaries and corners of the adaptive cubed-sphere grid, and maintaining the divergence-free property of the magnetic fields in the magnetohydrodynamics equations that are of special interest in our applications. The 3D CENO scheme is implemented in a parallel dynamically adaptive simulation framework. Numerical tests demonstrate accuracy and efficiency of the approach and show excellent parallel scalability on thousands of computing cores. This is joint work with Lucian Ivan and Clinton Groth.

Marian Moldenhauer (moldenhauer@zib.de) Konrad-Zuse-Zentrum für Informationstechnik (ZIB), Germany

Title: Optimal Hip Implant Positioning

Abstract: In an aging society where the number of joint replacements rises, it is important to also increase the longevity of implants. In particular, hip implants have a lifetime of at most 15 years. This derives primarily from pain due to migration, wear, inflammation, and dislocation, all of which are affected by the positioning of the implant during surgery. Current joint replacement practice uses 2D software tools and the experience of surgeons. The 2D tools especially fail to take into account the patient's natural range of motion as well as the stress distribution in the joint induced by different daily motions. Optimizing the hip joint implant position for all possible parametrized motions under the constraint of a dynamic contact problem is prohibitively expensive, as there are too many motions and every position change demands a recalculation of the contact problem. To reduce the computational effort, we use adaptive refinement on the parameter domain. A coarse initial grid is locally refined using goal-oriented error estimation. This approach will be combined with multigrid optimization such that numerical errors are reduced.

Jerome Droniou (jerome.droniou@monash.edu) Monash University, Australia

Title: An arbitrary-order scheme for convection-diffusion equations

Abstract: Convection-diffusion equations permeate a variety of fluid flow models, including in particular flows in porous media. In such models, the natural diffusion can be in some places much smaller than the convection driven by the Darcy velocity, and it is therefore essential to have numerical methods that can automatically and locally adapt to the flow regime (diffusion-dominated or convection-dominated). Some practical constraints must also be taken into account, such as the capacity of the method to be efficiently implemented in a parallel environment.

In this talk, we will present a numerical scheme of arbitrary order for convection-diffusion equations. This scheme uses separate degrees of freedom on cells and faces, and has a local connectivity (each cell is only connected to its faces), which makes it a good candidate for parallel implementations. The discretisation of the convective terms uses a stabilisation which automatically adjusts to all regimes (including vanishing viscosity). The error estimates we obtain are optimal in all regimes, thanks to the use of local Péclet numbers.

This is joint work with D. Di Pietro and A. Ern.