Paper: ACM Transactions on Graphics (to be presented at SIGGRAPH 2026)
Gabor Fields introduce a novel volume representation that supports continuous frequency filtering without extra memory overhead. By selectively pruning primitives and stochastically sampling frequencies and orientations, the method improves rendering performance, reduces aliasing, and enables efficient applications such as procedural cloud rendering.
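The stochastic-sampling idea above can be illustrated with a hedged toy sketch (not the paper's actual algorithm): treat the field value at a point as a sum of Gabor primitives (Gaussian-windowed cosines, each with its own center, width, frequency, and phase), then evaluate only a random subset per query and rescale to keep the estimate unbiased. All names and parameter choices here are illustrative assumptions.

```python
import math
import random

def gabor(x, center, sigma, freq, phase):
    # One Gabor primitive: Gaussian envelope times an oscillation.
    d = x - center
    return math.exp(-d * d / (2.0 * sigma * sigma)) * math.cos(2.0 * math.pi * freq * d + phase)

random.seed(1)
primitives = [(random.uniform(-2, 2),            # center
               random.uniform(0.3, 1.0),         # sigma (envelope width)
               random.uniform(0.5, 4.0),         # frequency
               random.uniform(0, 2 * math.pi))   # phase
              for _ in range(32)]

def field_exact(x):
    # Full sum over all primitives.
    return sum(gabor(x, *p) for p in primitives)

def field_stochastic(x, k=8):
    # Evaluate only k of the N primitives; rescaling by N/k keeps the
    # estimator unbiased, trading a little noise for fewer evaluations.
    subset = random.sample(primitives, k)
    return (len(primitives) / k) * sum(gabor(x, *p) for p in subset)
```

Averaging many calls to `field_stochastic` converges to `field_exact`, which is the Monte Carlo property the frequency/orientation sampling relies on.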
A novel no-reference metric that spatially assesses the quality of 3D reconstructed views using perceptual embeddings, via an efficient patch-wise similarity computation between the training dataset and the evaluated view. It surpasses even full-reference metrics in correlation with human quality assessments. We leverage it for automatic, recursive inpainting that restores reconstruction artifacts.
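A minimal sketch of the patch-wise scoring idea: each patch of the evaluated view is compared against patches from the training views, and its best similarity serves as a per-patch quality score. A real system would embed patches with a pretrained perceptual network; here a flattened, L2-normalized patch stands in as a placeholder embedding, and all names are hypothetical.

```python
import numpy as np

PATCH = 8  # patch side length in pixels

def patch_embeddings(img, patch=PATCH):
    # Slice a grayscale image into non-overlapping patches and L2-normalize
    # each one (placeholder for a perceptual embedding).
    h, w = img.shape
    embs = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            v = img[y:y + patch, x:x + patch].ravel().astype(np.float64)
            embs.append(v / (np.linalg.norm(v) + 1e-8))
    return np.stack(embs)

def quality_map(eval_view, train_views):
    # For every patch of the evaluated view, take the maximum cosine
    # similarity to any training patch as its quality score.
    train = np.concatenate([patch_embeddings(t) for t in train_views])
    evals = patch_embeddings(eval_view)
    return (evals @ train.T).max(axis=1)

rng = np.random.default_rng(0)
train_views = [rng.random((32, 32)) for _ in range(3)]
good = train_views[0].copy()  # a view whose content was seen during training
scores = quality_map(good, train_views)
```

Patches that match training content score near 1.0, while hallucinated or corrupted regions would score lower, yielding a spatial quality map without a ground-truth reference.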
In my master’s thesis, I improved novel scene reconstruction methods such as Gaussian Splatting. I introduced a new approach that assesses reconstruction quality by leveraging the input multi-view content as a prior for evaluating novel views. I then demonstrated the method’s effectiveness by devising a strategy that repairs incorrectly reconstructed parts of a scene by hallucinating plausible replacements.
In this paper, we replicate PolyGCL, which frames self-supervised graph contrastive learning as a spectral-polynomial fusion of high- and low-pass graph filters. Our study mostly reproduced the reported gains on both homophilic and heterophilic graphs, confirming its ability to learn expressive node embeddings, while noting that the results still depend on careful hyper-parameter calibration that relies on labeled data, which undermines the fully self-supervised premise.
In this project, we built a small simulator and implemented Markov localization to maintain a full probability grid of robot poses after each motion and sensor reading. For larger or continuous maps, we switched to Monte Carlo localization, replacing the exhaustive grid with a compact set of weighted particles (see blue points in the animation).
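The particle-based step described above can be sketched in a minimal 1D form, assuming a hypothetical toy world: a robot on a circular corridor with doors at known positions senses only whether it faces a door, and a set of weighted particles replaces the exhaustive probability grid. The constants and helper names are illustrative.

```python
import random

DOORS = [2.0, 5.0, 8.0]   # known door positions along the corridor
WORLD_LEN = 10.0          # corridor wraps around at 10 m
N_PARTICLES = 500

def near_door(x, tol=0.5):
    return any(abs(x - d) < tol for d in DOORS)

def motion_update(particles, move, noise=0.1):
    # Shift every particle by the commanded motion plus Gaussian noise.
    return [(p + move + random.gauss(0, noise)) % WORLD_LEN for p in particles]

def sensor_update(particles, saw_door):
    # Weight particles by agreement with the door observation, then resample
    # proportionally to the weights (the Monte Carlo localization step).
    weights = [0.9 if near_door(p) == saw_door else 0.1 for p in particles]
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
particles = [random.uniform(0, WORLD_LEN) for _ in range(N_PARTICLES)]
particles = sensor_update(particles, saw_door=True)  # robot sees a door
particles = motion_update(particles, move=3.0)       # robot moves 3 m
particles = sensor_update(particles, saw_door=True)  # sees a door again
estimate = sum(particles) / len(particles)           # crude pose estimate
```

After a few motion/sensor cycles the particles concentrate on the poses consistent with the observation history, which is what the blue points in the animation visualize.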
In this study, we explore whether users prefer interacting with virtual avatars that align with their visual preferences. In a VR experiment with 13 participants, we found that avatar choices were influenced primarily by color and style. Results indicate a clear preference for visually congruent avatars.