Title: Geometry Processing Tools for Hexahedral Meshing
Biography: Alla Sheffer is a professor of Computer Science at the University of British Columbia. She investigates algorithms for geometry processing, focusing on computer graphics applications. She is particularly interested in geometric interpretation of designer intent when conveying shape. Alla regularly publishes at selective computer graphics venues such as SIGGRAPH and SIGGRAPH Asia. She holds 5 recent patents on methods for shape communication and hexahedral mesh generation. She received an IBM faculty award, Killam Research Fellowship, NSERC DAS, NSERC I2I, and the Audi Production Award. She served on the PCs for SIGGRAPH, SIGGRAPH Asia, Eurographics, and other key graphics conferences; co-chaired the PCs for SGP'06, Sketches & Posters at SIGGRAPH Asia'10, IEEE SMI'13, and IMR'2001; and will co-chair the PC for Eurographics'18. She served on the editorial boards of ACM TOG, IEEE TVCG, Computer Graphics Forum, Graphical Models, Computers & Graphics, and CAGD.
Abstract: Automatic generation of quality hexahedral meshes has been considered the holy grail of finite element meshing for several decades. While hexahedral mesh elements are preferred by a variety of simulation techniques, automatic construction of quality all-hex meshes of general shapes has remained elusive. My talk will present recent techniques for hexahedral mesh generation and mesh optimization that significantly improve the state of the art, enabling better-quality, fully automatic hex-meshing of complex shapes. Our hexing method is centered around three key observations. First, we note that given a low-distortion mapping between the input model and a PolyCube (a solid formed from a union of cubes), one can hex-mesh the input model by simply transferring a regular hex grid from the PolyCube to the input model using this mapping. For a given input model, our challenge therefore is to construct a suitable PolyCube and a corresponding volumetric map. Second, we note that for a given PolyCube base-complex, the PolyCube geometry and mapping computation can be cast as a distortion-minimizing constrained deformation problem, which can be solved using classical geometry processing techniques. Lastly, we observe that, given an arbitrary input mesh, the computation of a suitable PolyCube base-complex can be formulated as associating, or labeling, each input mesh triangle with one of six signed principal axis directions. Most of the criteria for a desirable PolyCube labeling can be satisfied using a multi-label graph-cut optimization with suitable local unary and pairwise terms. However, the highly constrained nature of PolyCubes, imposed by the need to align each chart with one of the principal axes, enforces additional global constraints that the labeling must satisfy.
To enforce these constraints, we develop a constrained discrete optimization technique, PolyCut, which embeds a graph-cut multi-label optimization within a hill-climbing local search framework that looks for solutions that minimize the cut energy while satisfying the global constraints. We further optimize our generated PolyCube base-complexes through a combination of distortion-minimizing deformation, followed by a labeling update and a final PolyCube parameterization step. Our approach enables fully automatic generation of high-quality hexahedral meshes for complex shapes and improves on the state of the art in hexahedral meshing.
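The labeling idea described above can be illustrated with a toy sketch: assign each triangle the signed principal axis closest to its normal (a simple unary term), then smooth the labels with a naive iterated-conditional-modes pass. This stands in for, and is much weaker than, the actual multi-label graph-cut plus hill-climbing optimizer of PolyCut; all function names and the smoothing weight are illustrative.

```python
import numpy as np

# The six signed principal axis directions a triangle can be labeled with.
AXES = np.array([[ 1, 0, 0], [-1, 0, 0],
                 [ 0, 1, 0], [ 0, -1, 0],
                 [ 0, 0, 1], [ 0, 0, -1]], dtype=float)

def unary_costs(normals):
    """Cost of assigning each (unit) face normal to each signed axis:
    1 - <n, axis>, smallest when the normal aligns with the axis."""
    return 1.0 - normals @ AXES.T

def label_faces(normals, adjacency, smooth=0.5, iters=10):
    """Greedy stand-in for the graph-cut step: start from the per-face
    best axis, then re-label each face, penalizing disagreement with
    its neighbours (a Potts-style pairwise term)."""
    U = unary_costs(np.asarray(normals, float))
    labels = U.argmin(axis=1)
    for _ in range(iters):
        for f, nbrs in enumerate(adjacency):
            cost = U[f].copy()
            for g in nbrs:
                cost += smooth * (np.arange(6) != labels[g])
            labels[f] = cost.argmin()
    return labels
```

On an axis-aligned cube each face normal coincides with one signed axis, so each face simply receives that axis as its label; the interesting (and hard) cases, which PolyCut's global constraints address, arise on curved and diagonal surfaces.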
The usability of hexahedral meshes depends on the degree to which the shape of their elements deviates from a perfect cube; a single concave or inverted element makes a mesh unusable. While a range of methods exist for discretizing 3D objects with an initial, topologically suitable hex mesh, their output meshes frequently contain poorly shaped and even inverted elements, requiring a further quality optimization step. I will describe our novel framework for optimizing hex-mesh quality, capable of generating inversion-free, high-quality meshes from such poor initial inputs. We recast hex quality improvement as an optimization of the shape of overlapping cones, or unions, of tetrahedra surrounding every directed edge in the hex mesh, and show the two to be equivalent. We then formulate cone shape optimization as a sequence of convex quadratic optimization problems, where hex convexity is encoded
via simple linear inequality constraints. We validate our algorithm by comparing it against previous work, and demonstrate a significant improvement in both worst and average element quality.
Title: Image-based anatomy reconstruction - a core tool for the realization of the digital patient
Biography: Hans-Christian Hege is head of the Visual Data Analysis Department at Zuse Institute Berlin (ZIB). After studying physics and mathematics, he performed research in computational physics and quantum field theory at Freie Universität Berlin. He then joined ZIB, first as a scientific consultant for high-performance computing and then as head of the Scientific Visualization Department, which he started. His group performs research in data analysis, anatomy reconstruction and visualization, and develops software such as Amira/Avizo. He is also a co-founder of Mental Images (now the NVIDIA Advanced Rendering Center), Indeed-Visual Concepts (now Visage Imaging), and Lenné3D. He is a member of the Editorial Boards of “Computing and Visualization in Science” and of the book series “Mathematics + Visualization” (Springer). He has taught as guest professor at Universitat Pompeu Fabra, Barcelona, and as honorary professor at the German Film School (University for Digital Media Production). His research interests include visual computing and applications in life sciences, natural sciences and engineering.
Abstract: In healthcare, the focus is increasingly on "precision medicine", i.e. on medical decisions, practices, or products that are significantly more tailored to the individual patient than has hitherto been the case. One focus of this effort is the creation of an increasingly faithful digital representation of the patient, supporting the selection, simulation and optimization of the treatment. While the majority of techniques used in precision medicine work at the molecular level, there are also cases in which the organ and system levels are relevant. Prominent examples are surgical simulations and physiological simulations. Here the individual anatomy of the patient comes into play. The geometric reconstruction of anatomies from medical 3D image data has been possible for more than 20 years, but it is only now that the necessary techniques have matured to the point that routine clinical application is becoming possible. The talk will present the complete anatomy reconstruction pipeline, from raw image data to volumetric meshes for finite element simulations. Current developments will be discussed, such as the effective use of statistical shape models and the reconstruction of 3D anatomy models from only a few 2D X-ray projections. Furthermore, examples of numerical simulations and optimizations of therapies will be presented.
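The statistical shape models mentioned above are commonly built by PCA over corresponding landmark coordinates; a minimal sketch of that construction follows. It assumes the shapes are already in point correspondence and Procrustes-aligned, and all function names are illustrative, not from any particular talk or software package.

```python
import numpy as np

def build_ssm(shapes, n_modes):
    """Minimal PCA statistical shape model.

    `shapes`: (n_samples, n_points * 3) matrix of corresponding,
    pre-aligned landmark coordinates. Returns the mean shape, the
    leading variation modes, and their singular values.
    """
    X = np.asarray(shapes, float)
    mean = X.mean(axis=0)
    # SVD of the centred data yields the principal modes directly.
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_modes], s[:n_modes]

def reconstruct(mean, modes, weights):
    """Instance = mean shape + weighted sum of variation modes."""
    return mean + weights @ modes

def fit(mean, modes, target):
    """Project a new (aligned) shape onto the model subspace,
    e.g. to regularize a reconstruction from sparse image data."""
    return (target - mean) @ modes.T
```

Fitting such a low-dimensional model to image evidence is one way a plausible 3D anatomy can be recovered even from only a few 2D X-ray projections.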
Title: Anisotropic mesh adaptation, from the lab to the end-user
Biography: Prof. Simona Perotto received her Ph.D. in Computational Mathematics and Operations Research from the University of Milano in 1999. After a postdoc position in Scientific Computing and Mathematical Modeling at the EPFL, Lausanne, she was appointed Assistant Professor and then Associate Professor at the Department of Mathematics of Politecnico di Milano. Her primary research fields cover anisotropic mesh adaptation for Computational Fluid Dynamics problems and adaptive model reduction in the framework of a finite element approximation of Partial Differential Equations. Professor Perotto has been Co-PI of the FIRB2008 project, “Advanced Statistical and Numerical Methods for the Analysis of High-Dimensional Functional Data in Life Sciences and Engineering.” She is currently Co-PI of the NSF project DMS 1419060, “Model Reduction Techniques for Incompressible Fluid-Dynamics and Fluid-Structure Interaction Problems.” She is also guest editor of the volume “New Challenges in Grid Generation and Adaptivity for Scientific Computing,” SEMA SIMAI Springer Series (2015). One of her papers, published in 2014 in the Journal of Scientific Computing, was listed among the most notable papers of 2014 for the class Mathematics of Computing (in a pool of six) by the Association for Computing Machinery. Finally, she has organized several international and national mini-symposia, workshops and conferences, among which we cite the latest editions of the International Conference on Adaptive Modeling and Simulation (ADMOS 2017) and the International Conference on Finite Elements in Flow Problems (FEF 2017).
Abstract: Anisotropic mesh adaptation has proved to be a powerful strategy for improving the quality and the efficiency of scientific modeling, essentially due to the computational savings it guarantees. Examples of anisotropic phenomena are present in several contexts of Computational Science and Engineering, e.g., shocks in compressible flows, steep boundary or internal layers in viscous flows around bodies, and fronts of various kinds that must be sharply tracked. The intrinsic directionality of such phenomena demands a strict control of the shape, the size and the orientation of mesh elements, in contrast to standard isotropic meshes, where only the size is tunable by the mesher.
Metric-based techniques usually drive anisotropic mesh adaptation, with the metric derived by either heuristic or theoretical approaches. In the first case, the metric is identified via a numerical approximation of the Hessian or of the gradient of the discrete solution, coupled with an a priori error estimator. More rigorous, theoretically grounded approaches start instead from an a posteriori error analysis, i.e., from an explicit control of the discretization (or functional) error, enriched with the main directional features of the problem at hand.
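The Hessian-based heuristic mentioned above can be sketched concretely: the recovered Hessian is eigen-decomposed, and each eigenvalue prescribes a target element size along the corresponding eigenvector, so that directions of small curvature receive long, thin elements. This is a generic version of the standard recipe, with assumed tolerance handling and illustrative names, not Prof. Perotto's specific estimator.

```python
import numpy as np

def hessian_metric(H, err_tol, h_min=1e-4, h_max=1.0):
    """Anisotropic metric tensor from a (recovered) solution Hessian.

    Target size along eigenvector q_i is h_i = sqrt(err_tol / |lambda_i|),
    clamped to [h_min, h_max]; the metric is M = Q diag(1/h_i^2) Q^T.
    Large directional second derivatives force refinement, small ones
    allow stretched elements.
    """
    lam, Q = np.linalg.eigh(np.asarray(H, float))
    h = np.sqrt(err_tol / np.maximum(np.abs(lam), 1e-30))
    h = np.clip(h, h_min, h_max)
    return Q @ np.diag(1.0 / h**2) @ Q.T

def target_sizes(M):
    """Recover the per-direction element sizes encoded by a metric."""
    lam, _ = np.linalg.eigh(M)
    return 1.0 / np.sqrt(lam)
```

For instance, a Hessian of diag(100, 1) with tolerance 0.01 yields target sizes 0.01 and 0.1, i.e., an aspect ratio of 10 aligned with the solution's dominant curvature direction.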
In this presentation, we focus on both heuristic and rigorous anisotropic error estimators, applied to three different contexts relevant to engineering practice, i.e., propagation of cracks in brittle materials, topology optimization of structures, and image segmentation.
Title: Mesh generation to support the Aerospace Industry
Biography: Bill Dawes is the Francis Mond Professor of Aeronautical Engineering and Director of the Whittle Laboratory at Cambridge University. After completing a PhD at Cambridge he worked for the Central Electricity Generating Board, where he developed and applied early computer-based flow simulation methods to steam turbine operational problems. Returning to Cambridge in 1984 as a Lecturer, he then worked on a range of numerical methods (by now called Computational Fluid Dynamics, CFD) aimed at predicting fully 3D viscous flow in turbomachines. His structured blade-to-blade Navier-Stokes code (BT0B3d) became an industry-standard design tool and was licensed to over 50 companies & organisations around the world. He then developed a state-of-the-art, solution-adaptive, unstructured version of this (NEWT) and broadened the application base beyond blading to shroud flows, secondary air systems and blade cooling. Bill's research then expanded into CFD process integration and automatic design optimisation, especially using coupled sets of modelling hierarchies. His research draws inspiration from advanced computer graphics and physics-based animation and attempts to enable seamless and tactile integration between solid modelling, mesh generation, geometry editing and flow simulation. The resulting software & environment, BOXER, and the BOXER vision of an integrated End-to-End Parallel Simulation System, are being further developed in Cambridge Flow Solutions Ltd. Bill is a Fellow of the Royal Academy of Engineering and of the Royal Aeronautical Society and is a Chartered Engineer.
Abstract: The role of a mesh is to deliver geometry to aero-thermal-mechanical simulation. Engineering designers change geometry in response to that simulation to deliver the required functional product performance. The key to this enterprise is scope & speed. On complex products like gas turbines, the earlier a higher-fidelity, fully-featured geometry can be created and simulated, the better the functional performance can be understood and the lower the risk associated with the product. Design is now moving on from a component-based focus to a system-based approach. Faster meshing & simulation means that more design cycles can be performed per unit time, more of the final product can be simulated earlier, and in some cases the overall product itself can actually be brought within the economic simulation time-frame. This talk will discuss these issues and reflect on the current and likely future status of mesh generation in this context.