In my master’s thesis, I improved on recent scene-reconstruction methods such as Gaussian Splatting. I introduced a new approach that assesses reconstruction quality by leveraging the input multi-view content as priors for evaluating novel views, and demonstrated its effectiveness by devising a strategy that hallucinates incorrectly reconstructed parts of a scene.
In this paper, we replicate PolyGCL, which frames self-supervised graph contrastive learning as a spectral fusion of high- and low-pass polynomial graph filters. Our study largely reproduced the reported gains on both homophilic and heterophilic graphs, confirming the method's ability to learn expressive node embeddings. We note, however, that the results still depend on careful hyper-parameter calibration that relies on labeled data, which compromises the fully self-supervised claim.
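The filtering idea can be sketched in a few lines of NumPy. This is an illustrative toy, not PolyGCL itself: the actual model learns the polynomial coefficients end-to-end, whereas here the low-pass and high-pass coefficients are fixed by hand, and the 4-node cycle graph is an assumption made for the demo.

```python
import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def poly_filter(L, X, theta):
    """Apply the polynomial graph filter g(L) X = sum_k theta_k L^k X."""
    out = np.zeros_like(X)
    Lk = np.eye(len(L))          # L^0
    for t in theta:
        out += t * (Lk @ X)
        Lk = Lk @ L              # advance to the next power of L
    return out

# Toy graph: a 4-cycle, with random node features.
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
L = normalized_laplacian(A)
X = np.random.default_rng(0).normal(size=(4, 3))

# Hand-picked (hypothetical) coefficients:
# low-pass  (I - L/2)^2 = I - L + L^2/4  ->  [1, -1, 0.25]
# high-pass (L/2)^2     = L^2/4          ->  [0,  0, 0.25]
Z_low  = poly_filter(L, X, [1.0, -1.0, 0.25])
Z_high = poly_filter(L, X, [0.0, 0.0, 0.25])
Z = np.concatenate([Z_low, Z_high], axis=1)   # fused embedding
```

On this regular graph a constant signal is preserved by the low-pass filter and annihilated by the high-pass one, which is exactly the division of labor the paper exploits on homophilic vs. heterophilic graphs.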
In this project, we built a small simulator and implemented Markov localization, which maintains a full probability grid over robot poses and updates it after each motion and sensor reading. For larger or continuous maps, we switched to Monte Carlo localization, replacing the exhaustive grid with a compact set of weighted particles (shown as blue points in the animation).
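The predict–weight–resample loop behind Monte Carlo localization can be sketched in a 1-D toy world. This is not the project's simulator: the corridor, the landmark positions, and the noise levels are all assumptions made for this illustration; the sensor reports the (noisy) distance to each landmark.

```python
import math
import random

LANDMARKS = [2.0, 5.0, 8.0]   # hypothetical landmark map along a corridor

def gaussian(mu, sigma, x):
    """Likelihood of observing x under a Gaussian with mean mu, std sigma."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def sense(pose, noise=0.0):
    """Distances from a pose to every landmark (optionally noisy)."""
    return [abs(pose - l) + random.gauss(0, noise) for l in LANDMARKS]

def mcl_step(particles, motion, measurement, motion_noise=0.1, sensor_noise=0.2):
    # 1) Predict: move every particle, adding motion noise.
    moved = [p + motion + random.gauss(0, motion_noise) for p in particles]
    # 2) Weight: likelihood of the measurement given each particle's pose.
    weights = []
    for p in moved:
        w = 1.0
        for z, d in zip(measurement, sense(p)):
            w *= gaussian(d, sensor_noise, z)
        weights.append(w)
    if sum(weights) == 0.0:      # all weights underflowed: keep the set as-is
        return moved
    # 3) Resample: draw a new particle set proportionally to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
true_pose = 1.0
for _ in range(8):
    true_pose += 1.0                          # robot moves one unit per step
    particles = mcl_step(particles, 1.0, sense(true_pose, noise=0.05))
estimate = sum(particles) / len(particles)    # point estimate from the particle cloud
print(f"true pose {true_pose:.1f}, estimate {estimate:.2f}")
```

After a few iterations the particle cloud collapses around the true pose, which is what the weighted blue points in the animation visualize in 2-D.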
In this study, we explore whether users prefer interacting with virtual avatars that align with their visual preferences. In a VR experiment with 13 participants, we found that avatar choices were influenced primarily by color and style, and the results indicate a clear preference for visually congruent avatars.
In this paper, we present a compact pipeline that trains a bespoke generative adversarial network to create convincing “Fakémon” sprites. Using a curated, uniformly pre-processed corpus of original Pokémon graphics, the model learns franchise-specific aesthetics and outputs novel, game-ready creatures. The paper details data acquisition and network design, showcases fan-project applications, and highlights future optimisation paths to boost fidelity and training efficiency.