Hypergraphs
Hypergraph SIS
Published in the Machine Learning and the Physical Sciences Workshop, NeurIPS 2022
This paper is about physics-informed neural networks (PINNs) for branched flows.
Published in NeurIPS 2025
This paper is about higher-order information in relational learning.
You can find the slides here.
An introduction to tropical geometry.
Talk about my Bachelor project. A Gröbner basis is a subset of a polynomial ideal with desirable algorithmic properties. Every set of polynomials can be transformed into a Gröbner basis through a process that generalises both Gaussian elimination for solving linear systems of equations and the Euclidean algorithm for computing the greatest common divisor of two univariate polynomials. Introduced in Bruno Buchberger’s 1965 dissertation, the ideas behind Gröbner bases date back to earlier sources, including a paper written in 1900 by the invariant theorist Paul Gordan. Buchberger named his method after his advisor, Wolfgang Gröbner, and devised an algorithm to compute a Gröbner basis from any generating set of an ideal I: this is what is now known as Buchberger’s algorithm. However, even the best implementations of the classical Buchberger algorithm do not succeed in computing Gröbner bases for complicated problems. Major improvements are due to Jean-Charles Faugère, who introduced the F4 algorithm in 1999 and the F5 algorithm in 2002. The idea of F4 remains similar to Buchberger’s original algorithm; the novelty is that it reduces multiple S-pairs at once. F5 takes a whole new approach based on the idea of signature reductions.
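Below is a minimal sketch, assuming SymPy is available, of what such a computation looks like in practice: SymPy's built-in `groebner` routine (a Buchberger-style implementation) computes a Gröbner basis, which then gives an ideal-membership test by reduction. The example ideal, variable names, and monomial order are illustrative choices, not taken from the talk.

```python
# Sketch: compute a Groebner basis and test ideal membership with SymPy.
# The ideal I = <x^2 + y^2 - 1, x*y - 1> and the lex order are example choices.
from sympy import symbols, groebner, reduced

x, y = symbols('x y')

# Generating set of the example ideal I
F = [x**2 + y**2 - 1, x*y - 1]

# Groebner basis of I with respect to the lexicographic order x > y
G = groebner(F, x, y, order='lex')
print(list(G))

# Ideal membership: reduce a polynomial modulo G; a zero remainder
# means the polynomial lies in I.
f = x**2*y + x*y**2 - x - y  # = (x + y) * (x*y - 1), so f is in I
_, remainder = reduced(f, list(G), x, y, order='lex')
print(remainder)  # 0, confirming membership
```

With a lexicographic order, the resulting basis eliminates variables in turn, which is what makes Gröbner bases usable for solving polynomial systems, much as row echelon form does for linear ones.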
Extension course, Harvard University (online), 2022
Bedrock Data Science is an introduction to data science that provides students with the fundamental skills in math, statistics, and programming needed to undertake an undergraduate course in machine learning, data science, or AI.
Undergraduate course, Harvard University, 2022
CS181 provides a broad and rigorous introduction to machine learning, probabilistic reasoning, and decision making in uncertain environments. Topics include: supervised learning, ensemble methods and boosting, neural networks, support vector machines, kernel methods, clustering and unsupervised learning, maximum likelihood, graphical models, hidden Markov models, inference methods, and reinforcement learning.