Wei Fu 付 伟

I am an NSF-funded PhD student in Computer Science at North Carolina State University, advised by Dr. Tim Menzies.

Before coming to NC State, I was a visiting student in the Department of Automation at Tsinghua University from Mar. 2012 to May 2013, where I worked with Dr. Feifei Gao, focusing mainly on signal processing in wireless communication networks. I earned my M.S. in Electrical Engineering from Beijing University of Posts and Telecommunications, China, in Mar. 2012, and my B.S. in Electrical Engineering from Nanjing Tech University, China, in Jun. 2009. In the summer of 2016, I was an intern with the Software Engineering group at the ABB Corporate Research Center, US.

I'm on the job market, looking for both industry and academic positions.

Email  /  CV  /  Google Scholar /  LinkedIn /  Github


My research focuses on applying the next generation of AI techniques to improve software quality and aid the software process. Techniques I am interested in include (but are not limited to) transfer learning, deep learning, and search-based optimization. My research question is always: can we do "it" better and faster?

Why is Differential Evolution Better than Grid Search for Tuning Defect Predictors?
Wei Fu, Tim Menzies, Vivek Nair
arXiv Preprint, 2017

Grid search has been widely used as a parameter tuning method in the software engineering community. However, taking defect prediction as a case study, we find that differential evolution as a parameter tuner performs at least as well as grid search, while running 210X faster.

What is Wrong with Topic Modeling? (and How to Fix it Using Search-based Software Engineering)
Amritanshu Agrawal, Wei Fu, Tim Menzies
arXiv Preprint, 2017

LDA suffers from "order effects". Applying a differential evolution algorithm to tune LDA parameters dramatically reduces clustering instability and also improves performance for both supervised and unsupervised learning.

Revisiting Unsupervised Learning for Defect Prediction
Wei Fu, Tim Menzies

This paper repeats and refutes Yang et al.'s FSE'16 paper: (1) there is much variability in the efficacy of the Yang et al. models, so some supervised data is required to prune weaker models; (2) when we repeat their analysis on a project-by-project basis, supervised methods are seen to work better.

Easy over Hard: A Case Study on Deep Learning
Wei Fu, Tim Menzies

An SVM with simple differential evolution-based parameter tuning can outperform a deep learning (CNN) method on the knowledge-unit relatedness classification task on Stack Overflow. At the same time, it is 84X faster than the deep learning method.

Too Much Automation? The Bellwether Effect and Its Implications for Transfer Learning
Rahul Krishna, Tim Menzies, Wei Fu

We find a "bellwether" effect in software analytics. Given N data sets, we find which one produces the best predictions on all the others. This "bellwether" data set is then used for all subsequent predictions.

Tuning for Software Analytics: is it Really Necessary?
Wei Fu, Tim Menzies, Xipeng Shen
Information and Software Technology (IST) 76 (2016): 135-146.

We applied a differential evolution algorithm to explore the hyper-parameter space and find the best parameters for defect prediction, which improves learners' performance in most cases and terminates quickly.
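As a rough illustration of the idea (a minimal sketch, not the paper's actual implementation), a DE/rand/1/bin loop for exploring a small parameter space might look like the following; the `toy_score` objective and parameter ranges are hypothetical stand-ins for a learner's cross-validation score and its tunable parameters.

```python
import random

def differential_evolution(objective, bounds, pop_size=10, f=0.75, cr=0.3, iters=20):
    """Minimal DE/rand/1/bin sketch: maximize `objective` over the box `bounds`."""
    dim = len(bounds)
    clip = lambda x, k: max(bounds[k][0], min(bounds[k][1], x))
    # Random initial population inside the bounds.
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [objective(ind) for ind in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # Pick three distinct candidates other than the current one.
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            # Mutate and crossover: mix a + f*(b - c) into the current candidate.
            trial = [clip(a[k] + f * (b[k] - c[k]), k) if random.random() < cr
                     else pop[i][k] for k in range(dim)]
            s = objective(trial)
            if s > scores[i]:  # greedy selection: keep the better candidate
                pop[i], scores[i] = trial, s
    best = max(range(pop_size), key=lambda i: scores[i])
    return pop[best], scores[best]

# Hypothetical objective standing in for a learner's performance score;
# its peak is at (0.5, 4), so the tuner should land near there.
def toy_score(params):
    x, y = params
    return -((x - 0.5) ** 2 + (y - 4) ** 2)

best_params, best_score = differential_evolution(toy_score, [(0, 1), (1, 10)])
```

With only pop_size * iters objective calls, this kind of search visits far fewer configurations than an exhaustive grid, which is where the speedup over grid search comes from.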

Heterogeneous Defect Prediction
Jaechang Nam, Wei Fu, Sung Kim, Tim Menzies, Lin Tan
IEEE Transactions on Software Engineering (TSE), 2017 (accepted).

Our HDP approach conducts metric selection and metric matching to build a prediction model between projects with heterogeneous metric sets. Our empirical study on 28 subjects shows that about 68% of predictions using our approach outperform or are comparable to within-project defect prediction (WPDP) with statistical significance.


ECE220 Foundations of Electrical & Computer Engineering (Lab)

ECE212 Fundamentals of Logic Design (Lab)

ECE109 Introduction to Computer Systems (Lab)

I like this website! Last updated: Jun. 15, 2017