Date/Time: 15/09/2023, 3:30 pm - 4:30 pm
Location: HH 305
Speaker: Miroslav Lovric
Title: Is 1+1 always equal to 2?
Abstract: The belief that transferring abstract mathematical knowledge learnt in school or university into concrete, authentic, real-life contexts (referred to as numeracy) happens automatically and without effort was abandoned many decades ago. In fact, the evidence from the last few decades suggests that many people, both children and adults, lack basic levels of numeracy. The difference between the “necessary numeracy” and the “actual numeracy” – called the numeracy gap – needs to be understood, explained, and ultimately eliminated. After briefly conceptualizing numeracy, I will focus on the projects that my collaborators Andie Burazin (UTM) and Taras Gula (George Brown College) and I have been working on, in both research and teaching, to close this gap. I will argue that, by engaging with numeracy tasks, students can improve their logical reasoning, thinking, and communication skills.
Speaker: Pratheepa Jeganathan
Title: Spatial Distortion-based Spatial Latent Dirichlet Allocation (SD-SLDA)
Abstract: Spatial omics analysis is crucial in identifying complex patterns in multi-type spatial data, particularly at the single-cell level. These patterns, often represented as spatial point patterns (SPP), reveal substantial heterogeneity within biological systems. One key objective in SPP analysis is the characterization of neighborhoods within spatial partitions. Recent studies have introduced spatial latent Dirichlet allocation (SLDA) to identify these neighborhoods as mixtures of point types grouped into topics. However, a significant limitation of SLDA lies in the selection of the spatial weight matrix (SWM), which fails to account for local variations in spatial dependence. In this talk, we introduce a spatial distortion-based (SD) analysis, which enhances SLDA by computing tessellations and specifying the SWM for more accurate topic estimation. A simulation study shows that SD-SLDA outperforms standard SLDA in uncovering spatial partitions and their associated neighborhoods. Finally, we showcase the practical application of SD-SLDA in studying the tumor microenvironment in triple-negative breast cancer patients.
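For readers unfamiliar with the neighborhood-as-topics idea behind SLDA, the sketch below is a rough illustration only, not the speaker's SD-SLDA method: it forms a "document" for each point from the type counts of its k nearest neighbours in a synthetic multi-type point pattern and fits ordinary latent Dirichlet allocation. The k-nearest-neighbour weighting and the synthetic data are stand-ins for the tessellation-based spatial weight matrix described in the abstract.

```python
# Illustrative sketch only: approximates the SLDA idea by building a
# "document" per point from the type counts of its spatial neighbours,
# then fitting ordinary LDA. The k-nearest-neighbour weighting stands in
# for the tessellation-based spatial weight matrix of SD-SLDA.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
n_points, n_types, k_neighbours, n_topics = 500, 5, 15, 3

# Synthetic multi-type spatial point pattern (coordinates + point-type labels).
coords = rng.uniform(0.0, 1.0, size=(n_points, 2))
types = rng.integers(0, n_types, size=n_points)

# Neighbourhood composition: count the types among each point's k nearest neighbours.
tree = cKDTree(coords)
_, neighbour_idx = tree.query(coords, k=k_neighbours + 1)  # first column is the point itself
docs = np.zeros((n_points, n_types), dtype=int)
for i, idx in enumerate(neighbour_idx[:, 1:]):
    np.add.at(docs[i], types[idx], 1)

# Fit LDA: each topic is a mixture over point types; each point, a mixture over topics.
lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
topic_weights = lda.fit_transform(docs)
neighbourhood = topic_weights.argmax(axis=1)  # dominant "neighbourhood" label per point
print(neighbourhood[:10])
```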
Speaker: Anastasis Kratsios
Title: Deep Kalman Filters Can Filter
Abstract: In this talk, I’ll showcase some of my recent work on the mathematical foundations of geometric deep learning and its applications to finance. Together, we’ll take a glance at the following problem:
Deep Kalman filters (DKFs) are a class of neural network models that generate Gaussian probability measures from sequential data. Though DKFs are inspired by the Kalman filter, they lack concrete theoretical ties to the stochastic filtering problem, thus limiting their applicability to areas where traditional model-based filters have been used, e.g., model calibration for bond and option prices in mathematical finance. We address this issue in the mathematical foundations of deep learning by exhibiting a class of continuous-time DKFs which can approximately implement the conditional law of a broad class of non-Markovian and conditionally Gaussian signal processes given noisy continuous-time measurements. Our approximation results hold uniformly over sufficiently regular compact subsets of paths, where the approximation error is quantified by the worst-case 2-Wasserstein distance computed uniformly over the given compact set of paths.
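As background, the sketch below is a minimal discrete-time illustration of the general DKF construction, not the continuous-time DKFs analysed in the talk: a recurrent network reads a noisy measurement path and outputs the mean and log-variance of a Gaussian approximation to the conditional law of the signal, trained by Gaussian negative log-likelihood on synthetic data. The architecture, hyperparameters, and random-walk signal are illustrative assumptions.

```python
# Minimal discrete-time sketch of the deep-Kalman-filter idea: a recurrent
# network maps a noisy measurement path to per-step Gaussian parameters
# approximating the conditional law of the signal. Illustration only; this is
# not the continuous-time construction from the talk.
import torch
import torch.nn as nn

class DeepKalmanFilter(nn.Module):
    def __init__(self, obs_dim, signal_dim, hidden_dim=64):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2 * signal_dim)  # mean and log-variance

    def forward(self, observations):
        h, _ = self.rnn(observations)             # (batch, time, hidden_dim)
        mean, log_var = self.head(h).chunk(2, dim=-1)
        return mean, log_var                      # per-step Gaussian parameters

def gaussian_nll(signal, mean, log_var):
    # Negative log-likelihood of the signal under the predicted Gaussians (constants dropped).
    return 0.5 * (log_var + (signal - mean) ** 2 / log_var.exp()).sum(dim=-1).mean()

# Toy usage: fit to a noisily observed latent path (all data synthetic).
batch, T, obs_dim, signal_dim = 32, 50, 1, 1
signal = torch.cumsum(0.1 * torch.randn(batch, T, signal_dim), dim=1)  # random-walk signal
observations = signal + 0.3 * torch.randn(batch, T, obs_dim)           # noisy measurements

model = DeepKalmanFilter(obs_dim, signal_dim)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    mean, log_var = model(observations)
    loss = gaussian_nll(signal, mean, log_var)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```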
Coffee and cookies will be served in HH 216 at 3 pm – All are welcome