Research Postgraduate, Imperial College London, South Kensington, United Kingdom
Nikolaos Giakoumoglou is a postgraduate researcher at Imperial College London in the Department of Electrical and Electronic Engineering (EEE), within the Communications and Signal Processing (CSP) group, where he is pursuing his PhD under the supervision of Professor Tania Stathaki. Before his current role, he worked as a Research Assistant at the Information Technologies Institute (ITI) of the Centre for Research and Technology Hellas (CERTH). He obtained his Diploma in Electrical and Computer Engineering from Aristotle University of Thessaloniki in 2021. His research focuses on Artificial Intelligence, Machine Learning, and Deep Learning, with a particular interest in applications in Computer Vision.
SynCo: Synthetic Hard Negatives in Contrastive Learning for Better Unsupervised Visual Representations
N. Giakoumoglou and T. Stathaki
[arXiv] [pdf] [suppl] [code] [bibtex] [paperswithcode]
Contrastive learning has become a dominant approach in self-supervised visual representation learning. Hard negatives, samples that closely resemble the anchor, are key to enhancing the discriminative power of learned representations. However, efficiently leveraging hard negatives remains challenging. We introduce SynCo (Synthetic negatives in Contrastive learning), a novel approach that improves model performance by generating synthetic hard negatives in the representation space. Building on the MoCo framework, SynCo introduces six strategies for creating diverse synthetic hard negatives on the fly with minimal computational overhead. SynCo achieves faster training and better representation learning, reaching 67.9% top-1 accuracy on ImageNet ILSVRC-2012 linear evaluation after 200 pretraining epochs, surpassing MoCo's 67.5% with the same ResNet-50 encoder. It also transfers more effectively to detection tasks: on PASCAL VOC, it outperforms both the supervised baseline and MoCo with 82.5% AP; on COCO, it sets new benchmarks with 40.9% AP for bounding box detection and 35.5% AP for instance segmentation. Our synthetic hard negative generation approach significantly enhances visual representations learned through self-supervised contrastive learning. Code is available at https://github.com/giakoumoglou/synco.
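To make the idea concrete, below is a minimal sketch of one way synthetic hard negatives can be built in the representation space: interpolating pairs of the hardest negatives in a MoCo-style queue. This illustrates only one family of strategies (the paper proposes six), and the function name, tensor shapes, and hyperparameters are illustrative assumptions rather than the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def synthesize_hard_negatives(q, queue, n_hard=64, n_synth=16):
    """Sketch: create synthetic hard negatives by interpolating pairs of the
    hardest negatives in the queue. q: (D,) query embedding, queue: (K, D)
    negative embeddings; both are assumed L2-normalized."""
    # Rank queue negatives by similarity to the query (hardest first).
    sims = queue @ q                       # (K,)
    hard_idx = sims.topk(n_hard).indices   # indices of the hardest negatives
    hard = queue[hard_idx]                 # (n_hard, D)

    # Interpolate random pairs of hard negatives with random mixing weights.
    i = torch.randint(0, n_hard, (n_synth,))
    j = torch.randint(0, n_hard, (n_synth,))
    alpha = torch.rand(n_synth, 1)
    synth = alpha * hard[i] + (1 - alpha) * hard[j]
    synth = F.normalize(synth, dim=1)      # keep embeddings on the unit hypersphere

    # Extended negative set to be used by the InfoNCE loss.
    return torch.cat([queue, synth], dim=0)
```

Because the synthesis happens on already-computed embeddings, it adds only a few matrix operations per step, which is why such negatives can be generated on the fly with little overhead.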
DCD: Discriminative and Consistent Representation Distillation
N. Giakoumoglou and T. Stathaki
[arXiv] [pdf] [suppl] [code] [bibtex]
Knowledge Distillation (KD) aims to transfer knowledge from a large teacher model to a smaller student model. While contrastive learning has shown promise in self-supervised learning by creating discriminative representations, its application in knowledge distillation remains limited and focuses primarily on discrimination, neglecting the structural relationships captured by the teacher model. To address this limitation, we propose Discriminative and Consistent Distillation (DCD), which employs a contrastive loss along with a consistency regularization to minimize the discrepancy between the distributions of teacher and student representations. Our method introduces learnable temperature and bias parameters that adapt during training to balance these complementary objectives, replacing the fixed hyperparameters commonly used in contrastive learning approaches. Through extensive experiments on CIFAR-100 and ImageNet ILSVRC-2012, we demonstrate that DCD achieves state-of-the-art performance, with the student model sometimes surpassing the teacher's accuracy. Furthermore, we show that DCD's learned representations exhibit superior cross-dataset generalization when transferred to Tiny ImageNet and STL-10. Code is available at https://github.com/giakoumoglou/distillers.
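The sketch below shows how a discriminative (contrastive) term and a consistency term can be combined with learnable temperature and bias parameters. It is a simplified illustration under assumed design choices (InfoNCE-style cross-entropy for the discriminative part, a KL term between batch similarity distributions for consistency); the exact loss in the paper may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DCDLossSketch(nn.Module):
    """Sketch of a discriminative + consistency distillation loss with
    learnable temperature and bias. Student and teacher features are assumed
    to be projected to the same dimension."""
    def __init__(self):
        super().__init__()
        self.log_tau = nn.Parameter(torch.zeros(1))   # learnable temperature (log-space)
        self.bias = nn.Parameter(torch.zeros(1))      # learnable bias

    def forward(self, f_s, f_t):
        f_s = F.normalize(f_s, dim=1)
        f_t = F.normalize(f_t, dim=1)
        tau = self.log_tau.exp()

        # Discriminative term: student feature i should match teacher feature i
        # against all other teacher features in the batch.
        logits = f_s @ f_t.t() / tau + self.bias      # (B, B)
        labels = torch.arange(f_s.size(0), device=f_s.device)
        loss_disc = F.cross_entropy(logits, labels)

        # Consistency term: align the student's batch similarity distribution
        # with the teacher's.
        log_p_s = F.log_softmax(f_s @ f_s.t() / tau, dim=1)
        p_t = F.softmax((f_t @ f_t.t() / tau).detach(), dim=1)
        loss_cons = F.kl_div(log_p_s, p_t, reduction="batchmean")

        return loss_disc + loss_cons
```

Making the temperature and bias trainable lets the optimizer rebalance the two objectives over training, instead of fixing them as hyperparameters.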
Relational Representation Distillation
N. Giakoumoglou and T. Stathaki
[arXiv] [pdf] [suppl] [code] [bibtex]
Knowledge Distillation (KD) is an effective method for transferring knowledge from a large, well-trained teacher model to a smaller, more efficient student model. Despite its success, one of the main challenges in KD is ensuring the efficient transfer of complex knowledge while maintaining the student's computational efficiency. While contrastive learning methods typically push different instances apart and pull similar ones together, applying such constraints to KD can be too restrictive: contrastive methods focus on instance-level information but pay little attention to the relationships between different instances. We propose Relational Representation Distillation (RRD), which improves knowledge transfer by maintaining structural relationships between feature representations rather than enforcing strict instance-level matching. Specifically, our method employs sharpened distributions of pairwise similarities among different instances as a relation metric, which is used to match the feature embeddings of the student and teacher models. Our approach demonstrates superior performance on CIFAR-100 and ImageNet ILSVRC-2012, outperforming traditional KD and, when combined with KD, sometimes even the teacher network. It also transfers successfully to other datasets such as Tiny ImageNet and STL-10. Code is available at https://github.com/giakoumoglou/distillers.
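A minimal sketch of the relational idea is shown below: instead of forcing each student embedding to equal its teacher counterpart, the student's distribution of similarities over the other instances in the batch is matched to a sharpened (lower-temperature) teacher distribution. The temperatures, the use of in-batch instances rather than a memory bank, and the KL objective are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def rrd_loss_sketch(f_s, f_t, tau_s=0.1, tau_t=0.05):
    """Sketch: match the student's pairwise-similarity distribution over the
    other instances in the batch to a sharpened teacher distribution.
    f_s, f_t: (B, D) student and teacher features."""
    f_s = F.normalize(f_s, dim=1)
    f_t = F.normalize(f_t, dim=1)
    B = f_s.size(0)

    # Pairwise similarities among instances, excluding self-similarity.
    mask = ~torch.eye(B, dtype=torch.bool, device=f_s.device)
    sim_s = (f_s @ f_s.t())[mask].view(B, B - 1)
    sim_t = (f_t @ f_t.t())[mask].view(B, B - 1)

    # Student distribution vs. sharpened (lower-temperature) teacher target.
    log_p_s = F.log_softmax(sim_s / tau_s, dim=1)
    p_t = F.softmax(sim_t / tau_t, dim=1).detach()
    return F.kl_div(log_p_s, p_t, reduction="batchmean")
```

Because only the relative similarities must agree, the student is free to place embeddings differently from the teacher as long as the structure among instances is preserved.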
A Multimodal Approach for Cross-Domain Image Retrieval
L. Iijima, N. Giakoumoglou and T. Stathaki
Cross-Domain Image Retrieval (CDIR) is a challenging task in computer vision, aiming to match images across different visual domains such as sketches, paintings, and photographs. Traditional approaches focus on visual image features and rely heavily on supervised learning with labeled data and cross-domain correspondences, and they often struggle to bridge the significant domain gap. This paper introduces a novel unsupervised approach to CDIR that incorporates textual context by leveraging pre-trained vision-language models. Our method, dubbed Caption-Matching (CM), uses generated image captions as a domain-agnostic intermediate representation, enabling effective cross-domain similarity computation without the need for labeled data or fine-tuning. We evaluate our method on standard CDIR benchmark datasets, demonstrating state-of-the-art performance in unsupervised settings with improvements of 24.0% on Office-Home and 132.2% on DomainNet over previous methods. We also demonstrate our method's effectiveness on a dataset of AI-generated images from Midjourney, showcasing its ability to handle complex, multi-domain queries.
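The retrieval pipeline can be summarized in a few lines: caption every image with a pre-trained vision-language model, embed the captions with a text encoder, and rank the gallery by similarity to the query caption. In the sketch below, `caption_fn` and `embed_fn` are placeholders for whichever captioning model and text encoder are used; they are assumptions for illustration, not the paper's specific models.

```python
import numpy as np

def caption_matching_retrieval(query_image, gallery_images, caption_fn, embed_fn):
    """Sketch of caption-based cross-domain retrieval.
    caption_fn: image -> str (pre-trained captioning model, placeholder)
    embed_fn:   list[str] -> np.ndarray of shape (N, D) (text encoder, placeholder)
    Returns gallery indices ranked from most to least similar to the query."""
    # Captions act as a domain-agnostic intermediate representation.
    captions = [caption_fn(img) for img in [query_image] + list(gallery_images)]

    # Embed captions and L2-normalize so dot products are cosine similarities.
    emb = embed_fn(captions)                          # (1 + N, D)
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)

    # Rank gallery captions by similarity to the query caption.
    sims = emb[1:] @ emb[0]
    return np.argsort(-sims)
```

Because all domains are mapped to natural-language captions before matching, no labeled cross-domain pairs or fine-tuning are required.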