Minh-Hao Van
Postdoctoral Fellow in Computer Science, EECS, University of Arkansas

SAIL Lab (JBHT-345)
227 N Harmon Ave
Fayetteville, AR 72701
Hi, my name is Minh-Hao Van (Văn Minh Hào in Vietnamese), and I am currently a Postdoctoral Fellow at the University of Arkansas. I earned my Ph.D. in Computer Science from the University of Arkansas, advised by Prof. Xintao Wu. My research interests include Machine Learning and Deep Learning, with a particular emphasis on exploring and improving the trustworthiness of AI/ML models. Recently, my work has centered on addressing robustness, safety, privacy, and fairness challenges in Large (Vision) Language Models. I also explore resource-efficient approaches for adapting large models to novel tasks, as well as the development of AI foundation models for scientific domains such as biomedical engineering and materials science. Please see my CV here.
Prior to joining the University of Arkansas, I earned my B.Eng. degree in Computer Science with Honors from HCMC University of Technology (Bach Khoa University), VNU-HCM, in 2019. During my undergraduate studies, I worked with Dr. Khuong Nguyen-An on Machine Learning methods for predicting cryptocurrency price movements and the impact of social media news on prices.
Feel free to contact me via email or LinkedIn if you are interested in my research. I am also happy to have a coffee chat when you drop by the Northwest Arkansas area.
news
Jun 25, 2025 | Check out our new survey of AI for materials science, covering foundation models, LLM agents, and the datasets and tools used to develop AI foundation models for materials science tasks: A Survey of AI for Materials Science: Foundation Models, LLM Agents, Datasets, and Tools.
Jan 22, 2025 | Our paper Soft Prompting for Unlearning in Large Language Models was accepted to the main conference of NAACL 2025. Congratulations!
Jan 15, 2025 | I will serve as a reviewer for WWW'25, PAKDD'25, and IJCNN'25.
Sep 24, 2024 | Minh-Hao was awarded the Reginald R. 'Barney' & Jameson A. Baxter and EECS Graduate Fellowship for the academic year 2024-2025. Congratulations!
Jun 30, 2024 | Check out our new work on machine unlearning in LLMs via soft prompting, Soft Prompting for Unlearning in Large Language Models, and on effective influence analysis for selecting in-context demonstration examples, In-Context Learning Demonstration Selection via Influence Analysis.
Mar 15, 2024 | One co-authored paper, Evaluating the Impact of Local Differential Privacy on Utility Loss via Influence Functions, was accepted at IJCNN'24. Congratulations!
Feb 16, 2024 | Our paper On Large Visual Language Models for Medical Imaging Analysis: An Empirical Study was accepted at IEEE CHASE'24 as a short paper. Congratulations!
Jan 30, 2024 | Our paper Robust Influence-based Training Methods for Noisy Brain MRI was accepted at PAKDD'24 as a full paper with an oral presentation. Congratulations!
Sep 15, 2023 | Our paper HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks was accepted at the 2023 IEEE International Conference on Data Mining as a regular paper (acceptance rate 9.37%). Congratulations!
selected publications
- arXiv: A Survey of AI for Materials Science: Foundation Models, LLM Agents, Datasets, and Tools. arXiv preprint arXiv:2506.20743, 2025.
- JDSA: Influence-based Approaches for Tumor Classification in Noisy Brain MRI with Deep Learning and Vision-Language Models. International Journal of Data Science and Analytics, 2025.
- NAACL'25: Soft Prompting for Unlearning in Large Language Models. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), 2025.
- arXiv: Fair In-Context Learning via Latent Concept Variables. arXiv preprint arXiv:2411.02671, 2024.
- IJCNN'24: Evaluating the Impact of Local Differential Privacy on Utility Loss via Influence Functions. In 2024 International Joint Conference on Neural Networks (IJCNN), 2024.
- PAKDD'24: Robust Influence-Based Training Methods for Noisy Brain MRI. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, 2024.
- Defactify@AAAI'24: Detecting and Correcting Hate Speech in Multimodal Memes with Large Visual Language Model. arXiv preprint arXiv:2311.06737, 2023.
- ICDM'23: HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks. In 2023 IEEE International Conference on Data Mining (ICDM), 2023.
- BigData'22: Defending Evasion Attacks via Adversarially Adaptive Training. In 2022 IEEE International Conference on Big Data (Big Data), 2022.
- DASFAA'22: Poisoning Attacks on Fair Machine Learning. In International Conference on Database Systems for Advanced Applications, 2022.
- ACOMP'19: Predicting Cryptocurrency Price Movements Based on Social Media. In 2019 International Conference on Advanced Computing and Applications (ACOMP), 2019.