Machine Learning

Graph Metanetworks for Processing Diverse Neural Architectures

Neural networks efficiently encode learned information within their parameters. Consequently, many tasks can be unified by treating neural networks themselves as input data. When doing so, recent studies demonstrated the importance of accounting for …

Individualized Dosing Dynamics via Neural Eigen Decomposition

Dosing models often use differential equations to model biological dynamics. Neural differential equations in particular can learn to predict the derivative of a process, which permits predictions at irregular time points.
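The derivative-prediction idea can be sketched with a plain fixed-step Euler integrator: once a model predicts dx/dt, the state can be read off at arbitrary, irregular times. The function name and the toy decay derivative below are illustrative stand-ins for a learned neural derivative, not the paper's method.

```python
import numpy as np

def predict_at_times(f, x0, times, dt=0.01):
    """Integrate dx/dt = f(x) from t=0 with fixed-step Euler,
    reporting the state at arbitrary (possibly irregular) time points."""
    preds, x, t = [], np.asarray(x0, dtype=float), 0.0
    for target in times:
        while t < target:
            step = min(dt, target - t)
            x = x + step * f(x)   # f would be a neural network in practice
            t += step
        preds.append(x.copy())
    return preds

# Toy "learned" derivative: exponential decay, e.g. first-order drug elimination.
decay = lambda x: -0.5 * x
out = predict_at_times(decay, x0=[1.0], times=[0.3, 1.0, 2.7])
```

Because integration carries the state forward continuously, the query times need no fixed sampling grid.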

Optimization or Architecture: How to Hack Kalman Filtering

Since the Kalman filter (KF) assumptions are often violated in practice, noise estimation is not a proxy for MSE optimization. Instead, our method, the Optimized KF (OKF), optimizes the MSE directly. In particular, neural network models should be tested against OKF rather than the non-optimized KF, in contrast to common practice in the literature.
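The gap between noise estimation and direct MSE optimization can be illustrated with a 1-D random-walk Kalman filter whose process-noise parameter is tuned by a brute-force MSE sweep; the sweep is a crude stand-in for the gradient-based optimization the paper uses, and all names below are illustrative.

```python
import numpy as np

def scalar_kf_mse(q, r, zs, xs_true):
    """Run a 1-D Kalman filter (random-walk model) with process-noise q
    and measurement-noise r; return the MSE of the filtered state."""
    x, p, err = 0.0, 1.0, 0.0
    for z, x_true in zip(zs, xs_true):
        p = p + q                 # predict: variance grows by process noise
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the measurement
        p = (1 - k) * p
        err += (x - x_true) ** 2
    return err / len(zs)

rng = np.random.default_rng(0)
# Ground truth violating KF assumptions: heavy-tailed process noise.
xs = np.cumsum(rng.standard_t(df=2, size=500))
zs = xs + rng.normal(0.0, 1.0, size=500)

# Tune q by direct MSE minimization instead of estimating it from data.
qs = np.geomspace(0.01, 100, 50)
best_q = min(qs, key=lambda q: scalar_kf_mse(q, 1.0, zs, xs))
```

Under model mismatch like the heavy-tailed noise above, the MSE-optimal q generally differs from the true noise variance, which is the point of optimizing the error directly.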

Train Hard, Fight Easy: Robust Meta Reinforcement Learning

We introduce RoML, a meta-algorithm that takes any meta-learning baseline algorithm and generates a robust version of it. Consider, for example, a test task corresponding to a high body mass, which is typically more difficult to control.

Expressive Sign Equivariant Networks for Spectral Geometric Learning

Recent work has shown the utility of developing machine learning models that respect the symmetries of eigenvectors. These works promote sign invariance, since for any eigenvector the negation is also an eigenvector. In this work, we demonstrate that sign equivariance is useful for applications such as building orthogonally equivariant models and link prediction. To obtain these benefits, we develop novel sign equivariant neural network architectures. These models are based on our analytic characterization of the sign equivariant polynomials and thus inherit provable expressiveness properties.
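The sign-equivariance property itself, f(-v) = -f(v), is easy to state concretely: any map of the form v * h(v * v) is odd, because the elementwise square is sign-invariant. The toy network below is a minimal instance of that odd structure, not the paper's architecture.

```python
import numpy as np

def sign_equivariant(v, W1, W2):
    """A simple sign-equivariant map: scale v elementwise by a function
    of v*v. Since v*v ignores the sign of v, f(-v) = -f(v) exactly."""
    h = np.tanh(W2 @ np.tanh(W1 @ (v * v)))   # sign-invariant factor
    return v * h                               # odd overall

rng = np.random.default_rng(1)
d = 5
W1, W2 = rng.normal(size=(8, d)), rng.normal(size=(d, 8))
v = rng.normal(size=d)
```

Flipping the sign of the input eigenvector flips the output in exactly the same way, so downstream predictions do not depend on the arbitrary sign choice.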

Norm-guided latent space exploration for text-to-image generation

Text-to-image diffusion models show great potential in synthesizing a large variety of concepts in new compositions and scenarios. However, their latent seed space is still not well understood and has been shown to have an impact on generating new …

Domain-Agnostic Tuning-Encoder for Fast Personalization of Text-To-Image Models

Text-to-image (T2I) personalization allows users to guide the creative image generation process by combining their own visual concepts in natural language prompts. Recently, encoder-based techniques have emerged as a new effective approach for T2I personalization, reducing the need for multiple images and long training times.

Equivariant Architectures for Learning in Deep Weight Spaces

Designing machine learning architectures for processing neural networks in their raw weight matrix form is a newly introduced research direction. Unfortunately, the unique symmetry structure of deep weight spaces makes this design very challenging. …
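The core symmetry can be verified in a few lines: permuting the hidden neurons of an MLP changes its weight matrices but not the function it computes, so any architecture consuming raw weights must treat such permuted weight sets identically. This is a toy illustration of the symmetry, not the paper's architecture.

```python
import numpy as np

def mlp(x, W1, b1, W2):
    """A one-hidden-layer MLP with tanh activation."""
    return W2 @ np.tanh(W1 @ x + b1)

rng = np.random.default_rng(2)
W1, b1, W2 = rng.normal(size=(6, 4)), rng.normal(size=6), rng.normal(size=(3, 6))
x = rng.normal(size=4)

# Permute the hidden neurons: reorder rows of W1 and b1, columns of W2.
perm = rng.permutation(6)
W1p, b1p, W2p = W1[perm], b1[perm], W2[:, perm]
```

The permuted weights (W1p, b1p, W2p) define exactly the same input-output map, which is why weight-space architectures are designed to be equivariant to these neuron permutations.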

Equivariant Polynomials for Graph Neural Networks

Graph Neural Networks (GNNs) are inherently limited in their expressive power. Recent seminal works (Xu et al., 2019; Morris et al., 2019b) introduced the Weisfeiler-Lehman (WL) hierarchy as a measure of expressive power. Although this hierarchy has …
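The 1-WL test at the base of this hierarchy is short to sketch: iteratively recolor each node by hashing its color together with the multiset of its neighbors' colors. The classic failure case below (a 6-cycle vs. two disjoint triangles) shows the kind of limit message-passing GNNs inherit; the function name is illustrative.

```python
def wl_refine(adj, rounds=3):
    """1-dimensional Weisfeiler-Lehman color refinement. Different final
    color histograms prove non-isomorphism; equal histograms are
    inconclusive, which bounds standard GNN expressiveness."""
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in adj}
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        colors = {v: relabel[sig[v]] for v in adj}
    return sorted(colors.values())

# Two non-isomorphic 6-node graphs that 1-WL cannot distinguish:
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}          # 6-cycle
tri2 = {0: [1, 2], 1: [0, 2], 2: [0, 1],
        3: [4, 5], 4: [3, 5], 5: [3, 4]}                         # 2 triangles
```

Both graphs are 2-regular, so every node keeps the same color and the histograms coincide even though the graphs are not isomorphic.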

Graph Positional Encoding via Random Feature Propagation

Two main families of node feature augmentation schemes have been explored for enhancing GNNs: random features and spectral positional encoding. Surprisingly, however, there is still no clear understanding of the relation between these two …
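One concrete bridge between the two families is power iteration: repeatedly propagating a random feature vector with a graph operator drives it toward a dominant eigenvector, i.e., toward a spectral positional encoding. The sketch below illustrates that connection under this assumption; it is not necessarily the paper's exact scheme, and the names are illustrative.

```python
import numpy as np

def propagate_random_features(A, k, seed=0):
    """Propagate a random node feature vector k times with operator A,
    renormalizing each step (power iteration). For a suitable operator,
    the result converges to the dominant eigenvector."""
    rng = np.random.default_rng(seed)
    r = rng.normal(size=A.shape[0])
    for _ in range(k):
        r = A @ r
        r /= np.linalg.norm(r)
    return r

# Path graph on 5 nodes.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0

# Propagate with the lazy operator A + I to avoid bipartite oscillation.
r = propagate_random_features(A + np.eye(5), k=200)
w, V = np.linalg.eigh(A)
top = V[:, -1]   # eigenvector of A's largest eigenvalue
```

After enough propagation steps, r aligns (up to sign) with the top eigenvector, so random-feature propagation can recover spectral structure without an explicit eigendecomposition.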