Title: Model-free inference and zero-shot forecasts in dynamical systems
Abstract: How much can we learn from a time series when models are unavailable and data is limited? Surprisingly, quite a lot. In the first half of this talk, I will introduce a model-free approach that infers causal hypergraphs directly from time series data. Applied to neurophysiological datasets, the method offers new insights into the importance of nonpairwise interactions in the brain. In the second half, I will explore the use of large language models (LLMs) for forecasting chaotic systems. Unlike traditional approaches, which train a model specifically on the system to be predicted, LLMs can generate zero-shot forecasts for entirely new systems without retraining or fine-tuning. I will discuss the mechanisms LLMs use for such tasks and compare them with classical strategies from nonlinear dynamics.
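The announcement itself contains no technical detail, but for readers curious what one of the "classical strategies from nonlinear dynamics" mentioned above looks like, here is a minimal, hypothetical sketch of Lorenz's method of analogues: a model-free forecast that predicts the future of a chaotic system by finding the closest past state in the data and replaying what happened next. The system, parameters, and horizons are illustrative choices, not taken from the talk.

# Minimal sketch (not from the talk): Lorenz-63 data plus a classical
# "method of analogues" forecast, a nonlinear-dynamics baseline that
# zero-shot LLM forecasts are often compared against.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, xyz, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = xyz
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Simulate a long trajectory; the last 500 steps are held out as the
# "unseen" future we want to predict.
t_eval = np.arange(0, 200, 0.01)
sol = solve_ivp(lorenz, (0, 200), [1.0, 1.0, 1.0],
                t_eval=t_eval, rtol=1e-9, atol=1e-9)
traj = sol.y.T                      # shape (n_steps, 3)
history, future = traj[:-500], traj[-500:]

# Analogue forecast: locate the historical state nearest to the present
# one and replay its continuation -- no model of the equations is used.
present = history[-1]
dists = np.linalg.norm(history[:-500] - present, axis=1)
best = np.argmin(dists)
forecast = history[best + 1 : best + 1 + 500]

# Chaos makes the error grow with lead time, which is why short-horizon
# accuracy and long-horizon attractor statistics are judged separately.
err = np.linalg.norm(forecast - future, axis=1)
print(f"analogue forecast error at t+0.5: {err[49]:.3f}")
print(f"analogue forecast error at t+5.0: {err[499]:.3f}")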
Bio: Yuanzhao is an Omidyar Fellow at the Santa Fe Institute, where he studies collective dynamics on networks and data-driven modeling of nonlinear systems. Previously, he was a Schmidt Science Fellow at Cornell's Center for Applied Mathematics. He received his Ph.D. in Physics from Northwestern University in 2020. Yuanzhao's work develops mathematical and computational tools for understanding complex systems, addressing questions in mathematical biology (how do circadian clocks re-synchronize as we recover from jet lag?), neuroscience (how important are nonpairwise interactions in shaping macroscopic brain dynamics?), data-driven modeling (when can digital twins generalize to previously unseen conditions?), and scientific machine learning (is zero-shot forecasting of chaotic systems possible?).
Speaker
Yuanzhao Zhang