Nikolaos Giakoumoglou

Research Postgraduate at Imperial College London

Nikolaos Giakoumoglou is a postgraduate researcher in the Communications and Signal Processing (CSP) group of the Department of Electrical and Electronic Engineering (EEE) at Imperial College London, where he is pursuing his PhD under the supervision of Professor Tania Stathaki. Before his current role, he worked as a Research Assistant at the Information Technologies Institute (ITI) of the Centre for Research and Technology Hellas (CERTH). He obtained his Diploma in Electrical and Computer Engineering from Aristotle University of Thessaloniki in 2021. His research focuses on Artificial Intelligence, Machine Learning, and Deep Learning, with a special interest in applications in Computer Vision.

News

Mar 11, 2026: Started working on Joint Embedding Predictive Architectures (JEPA), following Yann LeCun's vision
Feb 1, 2026: Started working on Vision-Language Models
Jan 13, 2026: Paper accepted (oral) at ISBI 2026 (London, UK)
Dec 19, 2025: Paper accepted at VISAPP 2026 (Marbella, Spain)
Nov 4, 2025: Paper accepted at MedEurIPS 2025 (Copenhagen, Denmark)
Aug 26, 2025: Paper accepted at ICCVW 2025 (Hawaii, US)
May 20, 2025: Paper accepted at ICIP 2025 (Alaska, US)
Apr 20, 2025: Paper accepted at CVPRW 2025 (Nashville, US)
Apr 2, 2025: Paper presented at Imperial Research Computing Showcase Day 2025 (London, UK)

Research Interests

Artificial Intelligence, Machine Learning, and Deep Learning, with a special interest in Computer Vision.

Education

Imperial College London, London, United Kingdom
PhD in Electrical and Electronic Engineering, supervised by Prof. Tania Stathaki
January 2024 — Present
Aristotle University of Thessaloniki, Thessaloniki, Greece
Integrated MSc in Electrical and Computer Engineering, GPA: 8.91/10.00
September 2016 — November 2021

Teaching Experience

Graduate Teaching Assistant
Lab demonstrator for Digital Image Processing (ELEC70078)
September 2024 — Present

Accepted Papers for Publication

2026

  1. ISBI
    Expert Clustering and Knowledge Transfer for Whole Slide Image Classification
    K. M. Papadopoulos, N. Giakoumoglou, A. Floros, P. L. Dragotti, T. Stathaki
    In ISBI (Oral), 2026
    Multiple Instance Learning (MIL) is widely adopted for Whole Slide Image (WSI) classification. Existing MIL methods suffer from representation bottlenecks where slide-level aggregation compresses diverse patch information, limiting performance. Our proposed Divide-and-Distill (D&D) framework addresses this by partitioning the feature space into representation-coherent clusters, training specialized expert models on each cluster, and distilling their collective knowledge into a unified model. Experiments across three datasets and six MIL methods demonstrate consistent performance gains without added inference cost.
  2. VISAPP
    Caption-Matching: A Multimodal Approach for Cross-Domain Image Retrieval
    L. Iijima, N. Giakoumoglou and T. Stathaki
    In VISAPP, 2026
    Cross-Domain Image Retrieval (CDIR) is a challenging task in computer vision, aiming to match images across different visual domains such as sketches, paintings, and photographs. This paper introduces a novel unsupervised approach to CDIR that incorporates textual context by leveraging pre-trained vision-language models. Our method, dubbed Caption-Matching (CM), uses generated image captions as a domain-agnostic intermediate representation, enabling effective cross-domain similarity computation without labeled data or fine-tuning. We evaluate our method on standard CDIR benchmark datasets, demonstrating state-of-the-art performance in unsupervised settings with improvements of 24.0% on Office-Home and 132.2% on DomainNet over previous methods.
    @conference{iijima2024caption,
      author={Lucas Iijima and Nikolaos Giakoumoglou and Tania Stathaki},
      title={Caption-Matching: A Multimodal Approach for Cross-Domain Image Retrieval},
      booktitle={Proceedings of the 21st International Conference on Computer Vision Theory and Applications - Volume 3: VISAPP},
      year={2026},
      pages={600-607},
      publisher={SciTePress},
      organization={INSTICC},
      doi={10.5220/0014460000004084},
      isbn={978-989-758-804-4},
      issn={2184-4321}
    }

2025

  1. NeurIPS
    Mitigating Representation Bottlenecks in Multiple Instance Learning
    K. M. Papadopoulos, N. Giakoumoglou, A. Floros, P. L. Dragotti, T. Stathaki
    In NeurIPS Workshop "MedEurIPS", 2025
    Multiple Instance Learning (MIL) is widely used for Whole Slide Image classification in computational pathology, yet existing approaches suffer from a representation bottleneck where diverse patch-level features are compressed into a single slide-level embedding. We propose Divide-and-Distill (D&D), which clusters the feature space into coherent regions, trains expert models on each cluster, and distills their knowledge into a unified model. Experiments demonstrate that D&D consistently improves six state-of-the-art MIL methods in both accuracy and AUC while maintaining single-model inference efficiency.
    @inproceedings{papadopoulos2025mitigating,
      title={{Mitigating Representation Bottlenecks in Multiple Instance Learning}},
      author={Papadopoulos, Kleanthis Marios and Giakoumoglou, Nikolaos and Floros, Andreas and Dragotti, Pier Luigi and Stathaki, Tania},
      booktitle={Medical Imaging meets NeurIPS Workshop (MedNeurIPS)},
      year={2025},
      url={https://openreview.net/forum?id=nywAT7N8Do}
    }
  2. ICIP
    Cluster Contrast for Unsupervised Visual Representation Learning
    N. Giakoumoglou, T. Stathaki
    In ICIP, 2025
    We introduce Cluster Contrast (CueCo), a novel approach to unsupervised visual representation learning that effectively combines the strengths of contrastive learning and clustering methods. CueCo is designed to simultaneously scatter and align feature representations within the feature space. Our method achieves 91.40% top-1 classification accuracy on CIFAR-10, 68.56% on CIFAR-100, and 78.65% on ImageNet-100 using linear evaluation with a ResNet-18 backbone.
    @inproceedings{giakoumoglou2025cluster,
      title={{Cluster Contrast for Unsupervised Visual Representation Learning}},
      author={Giakoumoglou, Nikolaos and Stathaki, Tania},
      booktitle={2025 IEEE International Conference on Image Processing (ICIP)},
      pages={133--138},
      year={2025},
      organization={IEEE}
    }
  3. ICCV
    Fake & Square: Training Self-Supervised Vision Transformers with Synthetic Data and Synthetic Hard Negatives
    N. Giakoumoglou, A. Floros, K. M. Papadopoulos, T. Stathaki
    In ICCV Workshop "LIMIT", 2025
    We build on existing self-supervised learning approaches for vision, drawing inspiration from the adage "fake it till you make it". We investigate two forms of "faking it" in vision transformers: leveraging synthetic data from generative models and generating synthetic hard negatives in the representation space. Our framework, dubbed Syn2Co, combines both approaches and evaluates whether synthetically enhanced training can lead to more robust and transferable visual representations on DeiT-S and Swin-T architectures.
    @inproceedings{giakoumoglou2025fake,
      title={{Fake \& Square: Training Self-Supervised Vision Transformers with Synthetic Data and Synthetic Hard Negatives}},
      author={Nikolaos Giakoumoglou and Andreas Floros and Kleanthis Marios Papadopoulos and Tania Stathaki},
      booktitle={Representation Learning with Very Limited Resources: When Data, Modalities, Labels, and Computing Resources are Scarce},
      year={2025},
      url={https://openreview.net/forum?id=TJUfbYKo2c}
    }
  4. CVPR
    Unsupervised Training of Vision Transformers with Synthetic Negatives
    N. Giakoumoglou, A. Floros, K. M. Papadopoulos, T. Stathaki
    In CVPR Workshop "Visual Concepts", 2025
    We address the neglected potential of hard negative samples in self-supervised learning. Previous works explored synthetic hard negatives but rarely in the context of vision transformers. We build on this observation and integrate synthetic hard negatives to improve vision transformer representation learning. This simple yet effective technique notably improves the discriminative power of learned representations. Our experiments show performance improvements for both DeiT-S and Swin-T architectures.
    @inproceedings{giakoumoglou2025unsupervised,
      title={{Unsupervised Training of Vision Transformers with Synthetic Negatives}},
      author={Nikolaos Giakoumoglou and Andreas Floros and Kleanthis Marios Papadopoulos and Tania Stathaki},
      booktitle={Second Workshop on Visual Concepts},
      year={2025},
      url={https://openreview.net/forum?id=dg8FuaOKnC}
    }

Under Review

  1. Under Review
    Discriminative and Consistent Representation Distillation
    N. Giakoumoglou and T. Stathaki
    Under review
    @misc{giakoumoglou2024dcd,
      title={{Discriminative and Consistent Representation Distillation}},
      author={Nikolaos Giakoumoglou and Tania Stathaki},
      year={2024},
      eprint={2407.11802},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2407.11802}
    }
  2. Under Review
    What Makes Pretraining Data Good for Self-Supervised Learning?
    N. Giakoumoglou, A. Floros, K. M. Papadopoulos, T. Stathaki
    Under review
  3. Under Review
    Open-World Semantic Segmentation with Sensitivity Modeling
    A. R. Varvarigos, N. Giakoumoglou, T. Stathaki
    Under review
  4. Under Review
    A Review on Discriminative Self-supervised Learning Methods in Computer Vision
    N. Giakoumoglou, T. Stathaki, A. Gkelias
    Under review
    @misc{giakoumoglou2024review,
      title={{A Review on Discriminative Self-supervised Learning Methods in Computer Vision}},
      author={Nikolaos Giakoumoglou and Tania Stathaki and Athanasios Gkelias},
      year={2025},
      eprint={2405.04969},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2405.04969}
    }
  5. Under Review
    A Review on Artificial Intelligence Methods for Plant Disease and Pest Detection
    N. Giakoumoglou, D. Kapetas, K. M. Papadopoulos, P. Christakakis, T. Stathaki, E. M. Pechlivani
    Under review

Positions of Responsibility

Reviewer
ICML 2026, IJCNN 2026, ECCV 2026, CVPR 2026, WACV 2026, VISAPP 2026, AAAI 2026, ISBI 2026, ICCV 2025, BMVC 2025 (Exceptional Reviewer), ICASSP 2025, ICIP 2025, WACV 2025, DSP 2025, CVPR 2025, CVPR 2024, Smart Agriculture Technology (ScienceDirect), Agriculture (MDPI), and more.