BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//132.216.98.100//NONSGML kigkonsult.se iCalcreator 2.20.4//
BEGIN:VEVENT
UID:20260416T080038EDT-29755vpSw0@132.216.98.100
DTSTAMP:20260416T120038Z
DESCRIPTION:Abstract\n\nClustering is crucial in pattern recognition and
  machine learning for extracting key information from unlabeled data.
  Deep learning-based clustering methods have proven effective in image
  segmentation\, social network analysis\, face recognition\, and machine
  vision.\n\nTraditional deep clustering methods seek a single global
  embedding for all data clusters. In Section 3.1\, we introduce a deep
  multirepresentation learning (DML) framework\, in which each
  hard-to-cluster data group has its own optimized latent space\, while
  easy-to-cluster groups share a common latent space. Autoencoders
  generate these latent spaces\, and a novel loss function with weighted
  reconstruction and clustering losses emphasizes samples that are likely
  to belong to their assigned clusters. The DML work was published in the
  IEEE Transactions on Neural Networks and Learning Systems (TNNLS).\n\nIn
  Section 3.2\, we introduce a novel deep clustering framework with
  self-supervision using pairwise data similarities (DCSS). DCSS tackles
  two main challenges in DML: the computational expense of training
  multiple deep networks and the neglect of pairwise data similarity in
  its loss function. DCSS has two phases. First\, we form hypersphere-like
  groups of similar samples in the latent space of a single autoencoder\,
  using a cluster-specific loss function. Second\, we use pairwise data
  similarities to create a K-dimensional space that handles more complex
  cluster distributions\, improving clustering accuracy. Here\, K is the
  number of clusters. The latent space from the first phase serves as the
  input to the second phase. Portions of the DCSS results were published
  at the International Joint Conference on Neural Networks (IJCNN).\n\nIn
  Section 3.3\, we extend our DCSS framework by leveraging pairwise
  similarity in contrastive clustering (CC). CC models create positive
  and negative pairs for each data instance via data augmentation and
  learn a feature space that groups instance-level and cluster-level
  representations. Existing algorithms often overlook cross-instance
  patterns\, which are crucial for improving clustering accuracy. To
  address this\, we introduce Cross-instance guided Contrastive Clustering
  (C3)\, a method that incorporates cross-sample relationships to increase
  the number of positive pairs and to reduce false negatives\, noise\, and
  anomalies. Our new loss function identifies similar instances based on
  their instance-level representations and encourages their aggregation.
  We also propose a novel weighting method that selects negative samples
  more efficiently. The C3 methodology was published at the 34th British
  Machine Vision Conference (BMVC).\n\nIn Section 3.4\, we leverage our
  contrastive clustering expertise to develop a novel approach for
  streaming data\, where data arrives sequentially and previous data is
  inaccessible. Unsupervised Continual Learning (UCL) enables neural
  networks to learn tasks sequentially without labels. Catastrophic
  Forgetting (CF)\, where models forget previous tasks upon learning new
  ones\, is a significant challenge\, especially in UCL\, where no labeled
  data are available. CF mitigation strategies such as knowledge
  distillation and replay buffers suffer from memory inefficiency and
  privacy concerns. Current UCL research addresses CF but lacks algorithms
  for unsupervised clustering. To fill this gap\, we introduce
  Unsupervised Continual Clustering (UCC) and propose Forward-Backward
  Knowledge Distillation for Continual Clustering (FBCC) to counteract
  CF. FBCC employs a single continual learner (the 'teacher') with a
  cluster projector and multiple student models. It has two phases:
  Forward Knowledge Distillation\, where the teacher learns new clusters
  while retaining previous knowledge with guidance from specialized
  students\, and Backward Knowledge Distillation\, where a student model
  mimics the teacher to retain task-specific knowledge\, aiding the
  teacher in subsequent tasks.\n
DTSTART:20241108T160000Z
DTEND:20241108T180000Z
LOCATION:Room 603\, McConnell Engineering Building\, 3480 rue University\,
  Montreal\, QC H3A 0E9\, CA
SUMMARY:PhD defence of Mohammadreza Sadeghi – Unsupervised Representation L
 earning for Data Clustering
URL:https://www.mcgill.ca/ece/channels/event/phd-defence-mohammadreza-sadeg
 hi-unsupervised-representation-learning-data-clustering-360821
END:VEVENT
END:VCALENDAR
