Machine Learning
Harnessing the remarkable progress in high-performance computing infrastructure and the abundance of large-scale datasets, machine learning models have achieved an extraordinary surge in performance. This progress paves the way for an exciting era of innovation in our increasingly digitalized world, where the versatility and accessibility of these models lower barriers to adoption and open up new possibilities across domains.
At MIND, we dive deep to understand the statistical regularities behind real-world data, seeking out patterns, correlations, trends, and dependencies that hold invaluable insights. Ultimately, we hope to design machine learning algorithms that are not only generalizable and robust but also capable of transforming industries. To achieve this, we build upon a solid foundation of large-scale and multimodal data. From there, we push the boundaries with cutting-edge techniques such as transfer learning, federated learning, graph learning, automated machine learning, and trustworthy machine learning. These advanced algorithms empower us to enhance the representation power, generalization, robustness, interpretability, fairness, and scalability of our models. We actively engage with academic and industry partners to unlock the full potential of our developed algorithms in realms like Industry 4.0, healthcare, materials science, and beyond. Together, we strive to shape the future by harnessing the power of data-driven innovation.
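To give a concrete flavor of one of the techniques named above, the sketch below illustrates federated learning in its simplest form, federated averaging (FedAvg): clients train locally on their own data and a server averages the returned model weights, weighted by client sample counts, so raw data never leaves the clients. This is a minimal toy example on linear regression, not a description of MIND's own systems; all function names and the synthetic data are illustrative assumptions.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on linear regression."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_averaging(clients, rounds=20, dim=3):
    """FedAvg: each round, every client trains locally from the current global
    weights; the server then averages the returned weights, weighted by each
    client's number of samples. Only weights are exchanged, never raw data."""
    global_w = np.zeros(dim)
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    for _ in range(rounds):
        local_ws = [local_update(global_w, X, y) for X, y in clients]
        global_w = np.average(local_ws, axis=0, weights=sizes)
    return global_w

# Toy demo: three clients of different sizes share one underlying linear model.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for n in (40, 60, 80):  # heterogeneous client dataset sizes
    X = rng.normal(size=(n, 3))
    clients.append((X, X @ true_w))

w = federated_averaging(clients)
print(np.round(w, 2))  # recovers true_w closely without ever pooling the data
```

The sample-count weighting makes the update equivalent (in expectation) to training on the pooled dataset, which is why FedAvg works well when client data distributions are similar; handling heterogeneous (non-IID) clients is where much of the research effort lies.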
Publications
[1] C. Zhang, P. Lim, A. K. Qin, K. C. Tan, “Multiobjective Deep Belief Networks Ensemble for Remaining Useful Life Estimation in Prognostics,” in IEEE Transactions on Neural Networks and Learning Systems, vol. 28, pp. 2306-2318, 2016.
[2] R. Ren, T. Hung, K. C. Tan, “A Generic Deep-learning-based Approach for Automated Surface Inspection,” in IEEE Transactions on Cybernetics, vol. 48, pp. 929-940, 2017. (2020 Outstanding Paper Award, IEEE Transactions on Cybernetics)
[3] Z. Lu, R. Cheng, Y. Jin, K. C. Tan, K. Deb, “Neural Architecture Search as Multiobjective Optimization Benchmarks: Problem Formulation and Performance Assessment,” in IEEE Transactions on Evolutionary Computation, 2023.
[4] Z. Wang, L. Cao, W. Lin, M. Jiang, K. C. Tan, “Robust Graph Meta-Learning via Manifold Calibration with Proxy Subgraphs,” in the Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-23), Washington DC, USA, February 7-14, 2023.
[5] C. Yang, Y. M. Cheung, J. Ding, K. C. Tan, “Concept Drift-tolerant Transfer Learning in Dynamic Environments,” in IEEE Transactions on Neural Networks and Learning Systems, vol. 33, 2021.