Machine Learning and Artificial Intelligence (AI)
Research Area Faculty
- Center Director, Prof. Dr. Constantine Dovrolis
- Assoc. Prof. Dr. Mihalis Nicolaou
Research Area Overview
The group's research revolves around the design of novel machine learning algorithms that are robust, scalable, and efficient, addressing core challenges in the development of AI such as generalization, interpretability, and fairness.
By leveraging expertise in areas such as deep representation learning, computer vision, and signal processing, the group develops methods that:
- Draw inspiration from the brain and other biological systems to generate new machine learning architectures that are sparse, modular and hierarchical.
- Reduce the computational requirements of training neural nets, increasing democratization and accessibility while reducing the carbon footprint of AI.
- Interpret the inner workings of deep networks by designing appropriate decompositions of network weights and activations (see the sketch after this list).
- Provide explainable decisions to build trust in user communities.
- Incorporate prior knowledge and geometry-aware learning.
- Mitigate classifier bias and improve fairness and diversity metrics.
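As a toy illustration of the weight-decomposition bullet above, the Python sketch below applies a plain SVD to a (randomly initialized, stand-in) weight matrix and checks how much spectral energy the leading directions capture. This is a generic low-rank summary of the kind such interpretability analyses start from, not any specific method used by the group:

```python
import torch

# Stand-in for a trained layer's weight matrix (random here, for illustration).
W = torch.randn(256, 128)

# Decompose into rank-1 directions: W = U @ diag(S) @ Vh.
U, S, Vh = torch.linalg.svd(W, full_matrices=False)

# Fraction of spectral energy captured by the top-k directions: a quick
# check of how much of the layer a low-rank summary explains.
k = 10
energy = ((S[:k] ** 2).sum() / (S ** 2).sum()).item()
print(f"top-{k} components explain {energy:.1%} of the spectral energy")

# Project a batch of layer inputs onto the leading right-singular
# directions to inspect which input patterns each component picks up.
x = torch.randn(32, 128)
codes = x @ Vh[:k].T
```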
The resulting methods and insights are transferred to critical interdisciplinary applications in areas such as health and medical imaging, climate change, Earth observation, and smart farming.
Research Highlights
Research Highlight 1
Title: Sparsity-Driven AI: Path Optimization and Hierarchical Modular Networks
Related people: Constantine Dovrolis and Shreyas Malakarjun Patil (PhD student at Georgia Tech)
Graphical Abstract: Sparse neural networks derived through PHEW and Neural Sculpting, highlighting hierarchical modularity and optimized paths, which offer performance benefits in terms of generalization and learning speed.
- PHEW (Paths with Higher Edge Weights; a minimal sketch follows this list):
- Uses biased random walks to identify high-weight paths within dense networks.
- Constructs sparse sub-networks that retain critical architectural properties.
- Demonstrates robust generalization and fast convergence.
- Neural Sculpting:
- Iteratively prunes network connections to uncover hierarchical modularity.
- Aligns network architecture with task-specific structures.
- Enhances interpretability and reduces overfitting in data-scarce scenarios.
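The path-selection idea behind PHEW can be sketched in a few lines of Python. This is a simplified illustration under assumed details (walks start at random input units, transition probabilities are proportional to absolute weight magnitude, and each walk traverses the full depth), not the published PHEW procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer MLP: weight matrices stored as (out_dim, in_dim).
weights = [rng.standard_normal((16, 8)), rng.standard_normal((4, 16))]
masks = [np.zeros_like(w, dtype=bool) for w in weights]  # edges kept so far

target_edges = 40  # assumed sparsity budget: total number of edges to keep

def sample_path(weights, masks, rng):
    """One input-to-output walk; at each layer the next unit is drawn
    with probability proportional to the connecting weight's magnitude."""
    unit = rng.integers(weights[0].shape[1])  # random input unit
    for w, m in zip(weights, masks):
        probs = np.abs(w[:, unit])
        nxt = rng.choice(w.shape[0], p=probs / probs.sum())
        m[nxt, unit] = True  # keep the traversed edge
        unit = nxt

# Sample biased walks until the edge budget is filled.
while sum(m.sum() for m in masks) < target_edges:
    sample_path(weights, masks, rng)

sparse_weights = [w * m for w, m in zip(weights, masks)]
print("kept edges per layer:", [int(m.sum()) for m in masks])
```

Neural Sculpting works in the complementary direction: rather than growing a sparse sub-network along high-weight paths, it iteratively prunes the weakest connections and then analyses the surviving graph for hierarchical modules. A bare-bones pruning round (again an assumption-laden sketch; the retraining between rounds and the network analysis step are omitted) could look like:

```python
def prune_round(weights, masks, frac=0.2):
    """One sculpting-style round: globally drop the weakest `frac`
    of the surviving edges (retraining between rounds is omitted)."""
    alive = np.concatenate([np.abs(w[m]) for w, m in zip(weights, masks)])
    cutoff = np.quantile(alive, frac)
    for w, m in zip(weights, masks):
        m &= np.abs(w) >= cutoff

# Starting from dense (all-True) masks, repeated rounds expose a sparse
# backbone whose connectivity graph can then be analysed for modules.
dense_masks = [np.ones_like(w, dtype=bool) for w in weights]
for _ in range(5):
    prune_round(weights, dense_masks)
```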
References
- Patil, S. M., & Dovrolis, C. (2021). PHEW: Constructing Sparse Networks That Learn Fast and Generalize Well. International Conference on Machine Learning (ICML).
- Patil, S. M., Michael, L., & Dovrolis, C. (2023). Neural Sculpting: Uncovering Hierarchically Modular Task Structure in Neural Networks Through Pruning and Network Analysis. Neural Information Processing Systems (NeurIPS).
Research Highlight 2
Title: Neuro-Inspired Architectures for Continual Learning
Related people: Constantine Dovrolis and Burak Gürbüz (PhD student at Georgia Tech)
- NISPA Framework (ICML 2022): NISPA balances stability and plasticity in neural networks through modularity and sparse connectivity. It dynamically adjusts its structure in response to new tasks, preserving connections critical for prior tasks while leaving the rest free to adapt (a minimal stability-plasticity sketch follows this list).
- NICE Framework (CVPR 2024): NICE extends these ideas with a neurogenesis-inspired approach in which new "neural modules" are selectively generated to handle task-specific features. Through contextual encoding, NICE keeps knowledge disentangled across tasks, achieving superior accuracy on class-incremental learning benchmarks.
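As a rough, hypothetical illustration of the stability-plasticity balance (it omits NISPA's unit selection and rewiring as well as NICE's neurogenesis and contextual encoding; the `consolidate` helper and the 30% keep fraction are invented for the example), the sketch below freezes the highest-magnitude weights after each task by zeroing their gradients during later tasks:

```python
import torch
import torch.nn as nn

# Toy network; `frozen` marks weights consolidated for earlier tasks.
net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
frozen = {n: torch.zeros_like(p, dtype=torch.bool)
          for n, p in net.named_parameters()}

def train_task(net, batches, lr=1e-2):
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for x, y in batches:
        opt.zero_grad()
        loss_fn(net(x), y).backward()
        # Stability: cancel updates to weights kept for previous tasks.
        for n, p in net.named_parameters():
            p.grad[frozen[n]] = 0.0
        opt.step()

def consolidate(net, keep_frac=0.3):
    # After finishing a task, freeze its largest-magnitude weights;
    # the remaining (plastic) weights stay free to adapt to new tasks.
    for n, p in net.named_parameters():
        k = max(1, int(keep_frac * p.numel()))
        thresh = p.detach().abs().flatten().kthvalue(p.numel() - k + 1).values
        frozen[n] |= p.detach().abs() >= thresh

# Two tasks on random data: the weights consolidated after task 1
# remain untouched while task 2 is learned.
batches = [(torch.randn(8, 10), torch.randint(0, 2, (8,))) for _ in range(5)]
train_task(net, batches)
consolidate(net)
train_task(net, batches)
```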
References
- Gürbüz, M. B., & Dovrolis, C. (2022). NISPA: Neuro-Inspired Stability-Plasticity Adaptation for Continual Learning in Sparse Networks. International Conference on Machine Learning (ICML).
- Gürbüz, M. B., Moorman, J. M., & Dovrolis, C. (2024). NICE: Neurogenesis Inspired Contextual Encoding for Replay-Free Class Incremental Learning. Conference on Computer Vision and Pattern Recognition (CVPR).
Selected Publications
- Y. Panagakis et al., ‘Tensor Methods in Computer Vision and Deep Learning’, Proc. IEEE, vol. 109, no. 5, pp. 863–890, May 2021, doi: 10.1109/JPROC.2021.3074329.
- M. B. Gurbuz and C. Dovrolis, ‘NISPA: Neuro-inspired stability-plasticity adaptation for continual learning in sparse networks’, International Conference on Machine Learning (ICML), 2022. Available: https://icml.cc/virtual/2022/spotlight/16096
- J. Oldfield, C. Tzelepis, Y. Panagakis, M. A. Nicolaou, and I. Patras, ‘PandA: Unsupervised Learning of Parts and Appearances in the Feature Maps of GANs’, arXiv:2206.00048, Feb. 2023. Available: http://arxiv.org/abs/2206.00048
- J. Oldfield, C. Tzelepis, Y. Panagakis, M. A. Nicolaou, and I. Patras, ‘Parts of Speech-Grounded Subspaces in Vision-Language Models’, arXiv:2305.14053, Nov. 2023. Available: http://arxiv.org/abs/2305.14053
- S. M. Patil, L. Michael, and C. Dovrolis, ‘Neural Sculpting: Uncovering hierarchically modular task structure in neural networks through pruning and network analysis’, Neural Information Processing Systems (NeurIPS), 2023. Available: http://arxiv.org/abs/2305.18402
- M. B. Gurbuz, J. M. Moorman, and C. Dovrolis, ‘NICE: Neurogenesis Inspired Contextual Encoding for Replay-free Class Incremental Learning’, in 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). DOI: 10.1109/CVPR52733.2024.02233.