"Topics in Applied Algebraic Topology with a Measure-Theoretic Perspective" by Patrick D. Gillespie

Doctoral Dissertations

Date of Award

12-2024

Degree Type

Dissertation

Degree Name

Doctor of Philosophy

Major

Mathematics

Major Professor

Vasileios Maroulas

Committee Members

Conrad Plaut, Vyron Vellis, Henry Adams

Abstract

There have been a growing number of applications of algebraic topology to data analysis and machine learning in recent years. Persistent homology, one of the main such applications, constructs a filtered simplicial complex from a finite dataset, often using the Vietoris--Rips complex, and then computes algebraic invariants of the filtered complex which can yield insight into the dataset. Beyond the application of persistent homology in data analysis, ideas from topology have also been applied to machine learning. Neural network architectures, such as simplicial convolutional networks, have been designed to use a topology-aware message-passing operation. Building on this, cellular sheaves have been used to define a sheaf convolution operation which combats the problem of over-smoothing encountered in graph and simplicial convolutional networks.
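
The Vietoris--Rips construction mentioned above can be sketched in a few lines: a subset of the data spans a simplex exactly when all of its pairwise distances are at most the scale parameter. The function and variable names below are illustrative only, not taken from the dissertation; this is a minimal sketch at a single fixed scale, whereas persistent homology tracks the complex across all scales.

```python
# Minimal sketch of the Vietoris--Rips complex at one scale r.
# A set of points spans a simplex iff every pair lies within distance r.
from itertools import combinations
from math import dist

def vietoris_rips(points, r, max_dim=2):
    """Return the simplices (as index tuples) of the Vietoris--Rips
    complex of `points` at scale `r`, up to dimension `max_dim`."""
    n = len(points)
    close = {(i, j) for i, j in combinations(range(n), 2)
             if dist(points[i], points[j]) <= r}
    simplices = [(i,) for i in range(n)]  # vertices
    for k in range(2, max_dim + 2):       # simplices on k vertices
        simplices += [s for s in combinations(range(n), k)
                      if all(p in close for p in combinations(s, 2))]
    return simplices

# Four corners of a unit square: at r = 1 the four sides appear but
# the diagonals (length sqrt(2)) do not, so the complex is a 4-cycle.
square = [(0, 0), (1, 0), (0, 1), (1, 1)]
complex_r1 = vietoris_rips(square, r=1.0)
edges = [s for s in complex_r1 if len(s) == 2]
print(edges)  # four edges, no diagonals, no triangles
```

The 4-cycle in this toy example carries a nontrivial first homology class; as r grows past sqrt(2) the diagonals and triangles fill in and that class dies, which is exactly the kind of birth/death information persistent homology records.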

In this dissertation, we study Vietoris complexes and cellular sheaves from a measure-theoretic perspective. First, we strengthen the known relationship between Vietoris complexes and Vietoris metric thickenings: the Vietoris metric thickening is a metric analogue of the Vietoris complex, equipped with a Wasserstein metric obtained by regarding its points as probability measures. This allows Vietoris complexes to be studied through their metric thickening counterparts, and as an example, we prove a Hausmann-like result for Vietoris--Rips complexes of Euclidean submanifolds. We also investigate Vietoris complexes and metric thickenings of absolute neighborhood retracts.

Next, we turn our attention to cellular sheaves and their use within sheaf neural networks. Though sheaf neural networks have been demonstrated to have advantages for certain types of data, they are sensitive to the choice of cellular sheaf used within their architecture. To mitigate this sensitivity, we replace the cellular sheaf in the network with a distribution over a space of cellular sheaves, which is then sampled from during inference. We train the resulting network using variational Bayesian inference, yielding what we refer to as a Bayesian sheaf neural network. As part of this work, we strengthen a result on the linear separation power of sheaf diffusion processes and define a novel family of probability distributions on special orthogonal groups via the Cayley transform.
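
The Cayley transform mentioned above maps a skew-symmetric matrix A to the rotation (I - A)(I + A)^{-1}, so pushing any distribution on skew-symmetric matrices through it yields a distribution on the special orthogonal group. The sketch below illustrates this for SO(2) with a Gaussian on the single free parameter; the function name and the choice of Gaussian are illustrative assumptions, not the specific family constructed in the dissertation.

```python
# Hedged sketch: sampling a random rotation in SO(2) via the Cayley
# transform Q = (I - A)(I + A)^{-1} of a skew-symmetric matrix A.
import random

def cayley_so2(a):
    """Cayley transform of A = [[0, -a], [a, 0]].
    For 2x2 matrices it evaluates in closed form to a rotation with
    cos = (1 - a^2)/(1 + a^2), sin = -2a/(1 + a^2), always in SO(2)."""
    d = 1.0 + a * a
    c, s = (1.0 - a * a) / d, -2.0 * a / d
    return [[c, -s], [s, c]]

# Push a Gaussian on the skew-symmetric parameter through the
# transform; the resulting matrix is orthogonal with determinant 1.
a = random.gauss(0.0, 1.0)
Q = cayley_so2(a)
det = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]
print(round(det, 10))  # 1.0
```

Because the Cayley transform is a rational map with a rational inverse, densities transform with a tractable Jacobian, which is one reason it is a convenient device for building distributions on orthogonal groups.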

