PhD Preliminary Examination of Christina Cole

May 9, 2023 @ 2:00 pm - 3:00 pm

Advisor: Dr. Michael Kirby
Co-Advisor: Dr. Margaret Cheney

Committee: Dr. Chris Peterson, Dr. Nathaniel Blanchard

Title: The Geometry and Topology of Learning

Abstract: The "manifold hypothesis" refers to the conjecture that
high-dimensional data have low intrinsic dimension, meaning they can be
expressed (without loss of information) with fewer parameters than the
number of basis vectors that span the high-dimensional space in which
the data live. Although this idea was first introduced in the 1990s,
explicit characterization of these data manifolds has not yet been done.
We believe that ReLU neural networks can shed light on these objects.
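
To make the hypothesis concrete, here is a minimal Python sketch
(illustrative only, not part of the talk; the data and names are
hypothetical) that embeds data with two intrinsic parameters in a
50-dimensional ambient space and inspects the PCA spectrum, which
concentrates in a handful of directions:

import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Two intrinsic parameters, mapped through nonlinear features.
u, v = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
latent = np.stack([u, v, u * v, u**2 - v**2], axis=1)

# Linear embedding into a 50-dimensional ambient space.
A = rng.normal(size=(latent.shape[1], 50))
X = latent @ A

# PCA spectrum: the variance concentrates in a few directions,
# far fewer than the 50 ambient basis vectors.
Xc = X - X.mean(axis=0)
svals = np.linalg.svd(Xc, compute_uv=False)
var = svals**2 / np.sum(svals**2)
print("explained variance of top 6 PCs:", np.round(var[:6], 3))

Note that PCA only recovers the linear rank of the embedding (four here,
from the four nonlinear features), an upper bound on the true intrinsic
dimension of two; characterizing the manifold itself requires nonlinear
tools, which motivates the approach below.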

ReLU neural networks have been characterized as partitioning their input
space into polyhedral decompositions determined by their neuron firing
patterns. On each polyhedron in such a partition, the network computes an
affine map from the input space to the output space. The structure of the
resulting mesh is not well understood, but we consider it a ReLU neural
network’s implicitly learned representation of the data manifold. In a
series of experiments using natural and adversarial images, we show that
polyhedra are smaller around original images than around their perturbed
counterparts. We have begun to probe these representations of the
underlying data manifold for additional geometric and topological
information. Preliminary experiments show that these representations for
a given dataset vary with neural network architecture; we plan to explain
these variations mathematically and hope they will help explain why some
networks are more adversarially vulnerable than others.
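
The firing-pattern view above can be demonstrated directly. The following
NumPy sketch (a toy illustration with assumed layer sizes, not the
experiments' code) computes the binary firing pattern of a small random
ReLU network and checks that, within one polyhedron, the network behaves
as a single affine map:

import numpy as np

rng = np.random.default_rng(1)

# A tiny ReLU network, 2 -> 8 -> 8 -> 1, with random weights.
Ws = [rng.normal(size=(2, 8)), rng.normal(size=(8, 8)), rng.normal(size=(8, 1))]
bs = [rng.normal(size=8), rng.normal(size=8), rng.normal(size=1)]

def firing_pattern(x):
    # Binary on/off pattern of every hidden neuron at input x.
    pattern, h = [], x
    for W, b in zip(Ws[:-1], bs[:-1]):
        pre = h @ W + b
        pattern.append(pre > 0)
        h = np.maximum(pre, 0)
    return np.concatenate(pattern)

def f(x):
    h = x
    for W, b in zip(Ws[:-1], bs[:-1]):
        h = np.maximum(h @ W + b, 0)
    return (h @ Ws[-1] + bs[-1])[0]

# Two nearby inputs usually share a pattern, i.e. lie in the same polyhedron.
x = rng.normal(size=2)
x_eps = x + 1e-6 * rng.normal(size=2)
print("same polyhedron:", np.array_equal(firing_pattern(x), firing_pattern(x_eps)))

# Inside one polyhedron the map is affine: finite differences along a
# fixed direction give the same slope (constant Jacobian).
d = rng.normal(size=2); d /= np.linalg.norm(d)
g1 = (f(x + 1e-7 * d) - f(x)) / 1e-7
g2 = (f(x + 2e-7 * d) - f(x + 1e-7 * d)) / 1e-7
print("directional slopes agree:", np.isclose(g1, g2))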

Additionally, fixing a particular network architecture, many
configurations of its weights can fit the training data of interest.
Basic questions about this collection remain unanswered: (1) What
geometric structure does it have? (2) Is it convex, or even connected?
(3) What is the volume of each piece? We explore a particular
architecture’s parameter space, representing the training loss at each
point of that space as a surface called the loss landscape. We have begun
to seek answers to the three questions above in the topology of a neural
network’s loss landscape, toward understanding model trainability and
robustness. Based on the limited relevant literature, we hypothesize that
loss-landscape topology correlates with architectural features of the
associated network, and we hope these correlations can explain why
certain architectures are naturally more robust and trainable than
others.
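
As one concrete way to probe such a landscape, the sketch below
(hypothetical names and task; a common visualization technique, not
necessarily the speaker's method) evaluates a toy network's training loss
on a 2D slice of parameter space spanned by two random directions through
a reference configuration:

import numpy as np

rng = np.random.default_rng(2)

# Toy regression task.
X = rng.normal(size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

def loss(theta):
    # Mean squared error of a 2 -> 16 -> 1 ReLU net with flat parameters theta.
    W1 = theta[:32].reshape(2, 16); b1 = theta[32:48]
    W2 = theta[48:64].reshape(16, 1); b2 = theta[64]
    pred = (np.maximum(X @ W1 + b1, 0) @ W2).ravel() + b2
    return np.mean((pred - y) ** 2)

theta0 = rng.normal(size=65) * 0.5          # a reference configuration
d1 = rng.normal(size=65); d1 /= np.linalg.norm(d1)
d2 = rng.normal(size=65); d2 /= np.linalg.norm(d2)

# 2D slice of the loss landscape through theta0 along d1 and d2.
ticks = np.linspace(-1, 1, 25)
grid = np.array([[loss(theta0 + a * d1 + b * d2) for b in ticks] for a in ticks])
print("loss range on slice:", grid.min().round(3), "to", grid.max().round(3))

Connectivity of sublevel sets of grids like this one is an elementary
computational starting point for questions (1)-(3) above.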

You may also attend on Zoom.

Join Zoom Meeting: https://us06web.zoom.us/j/89375936619?pwd=ck1wejJxcGticS93MVdhM2NEMDNOQT09
Meeting ID: 893 7593 6619
Passcode: 438308


Venue: Engineering E-206
