Wei Yang

Associate Professor - Computer Science
 
+1 (972) 883-4173
ECSS 4.225
Personal Website
Google Scholar
ORCID

Currently accepting undergraduate and graduate students

Research Areas

I am broadly interested in topics related to software engineering and security.


Efficiency Robustness

My current research is primarily driven by the need to adapt AI to edge devices such as mobile devices, IoT devices, and autonomous vehicles. Our line of work on efficiency robustness, pioneered by our group in 2019, began with the observation that different inputs may incur very different computation costs on the same neural network. We have developed a variety of attacks, including white-box (CVPR 2020), black-box (ICSE 2022), and feed-forward (ASE 2022) attacks, against a range of applications such as Neural Machine Translation (FSE 2022), Neural Image Caption Generation (CVPR 2022), Transformer-based Language Models (ACL 2023, TOSEM 2024), and Neural ODEs (ICCV-W 2023).
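To make the underlying observation concrete, here is a minimal Python sketch (purely illustrative, with invented weights, sizes, and thresholds; it is not taken from any of the papers cited above) of an early-exit style classifier whose computation cost depends on its input: confident inputs leave through the cheap exit, while inputs that remain uncertain are forced through the heavier stage, which is the kind of per-input cost gap that efficiency attacks try to maximize.

    # Toy early-exit classifier: computation cost varies with the input.
    # All weights, sizes, and thresholds below are made up for illustration only.
    import numpy as np

    rng = np.random.default_rng(0)

    W1, W2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 3))   # cheap stage + early-exit head
    W3, W4 = rng.normal(size=(16, 64)), rng.normal(size=(64, 3))  # heavier second stage

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def forward(x, exit_threshold=0.9):
        """Return (predicted class, number of matrix multiplications executed)."""
        h = np.tanh(x @ W1)
        early = softmax(h @ W2)
        if early.max() >= exit_threshold:        # confident: stop early, cheap path
            return early.argmax(), 2
        deep = softmax(np.tanh(h @ W3) @ W4)     # uncertain: pay for the heavy stage
        return deep.argmax(), 4

    for i in range(5):
        x = rng.normal(size=8) * (i + 1)         # crudely vary how "easy" each input is
        pred, cost = forward(x)
        print(f"input {i}: prediction={pred}, matmuls executed={cost}")

In this toy setting, an efficiency attack would perturb the input so that the early exit is never taken, driving every input down the most expensive path.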


Infrastructure Support for AI Deployment

Another line of work for enabling AI on edge devices is providing infrastructure support. To this end, our main effort is building a compiler toolchain (ISSTA 2023, IJCAI 2022) that supports compilation of dynamic-shaped neural networks. We have also investigated the security of such deployments on IoT devices (CCS 2019).
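The central difficulty can be seen without any compiler machinery: once a dimension is known only symbolically (for example, a batch size fixed at run time), shape inference and every optimization that relies on concrete sizes must carry symbols instead of integers. The Python sketch below is my own toy illustration of that issue; its names and structure are invented and say nothing about the actual toolchain's design.

    # Toy symbolic shape inference: a dimension is either a concrete int or a symbol.
    from dataclasses import dataclass
    from typing import Tuple, Union

    Dim = Union[int, str]            # e.g. 128, or "N" for a run-time batch size

    @dataclass
    class Tensor:
        shape: Tuple[Dim, ...]

    def matmul_shape(a: Tensor, b: Tensor) -> Tensor:
        """Infer the shape of (..., M, K) @ (K, P), keeping symbolic dims symbolic."""
        *batch, m, k1 = a.shape
        k2, p = b.shape
        if isinstance(k1, int) and isinstance(k2, int) and k1 != k2:
            raise ValueError(f"inner dimensions disagree: {k1} vs {k2}")
        # If either inner dimension is symbolic, the check must be deferred to
        # run time; tracking such deferred obligations is part of what makes
        # compiling dynamic-shaped networks harder than compiling static ones.
        return Tensor(shape=(*batch, m, p))

    x = Tensor(shape=("N", 128))     # batch size unknown until run time
    w = Tensor(shape=(128, 10))
    print(matmul_shape(x, w).shape)  # ('N', 10): the symbol propagates to the output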


Mobile Testing

We have been working on mobile testing since we built one of the first automated mobile testing tools in 2012. We have conducted several studies of existing mobile testing tools (FSE 2016, ICSE 2017, ASE 2018), and based on their results we have focused on bottleneck issues such as generating textual inputs (IEEE S&P 2020) and avoiding exploration tarpits (FSE 2021).



Malware Detection

We have proposed the notion of expectation context, which contrasts user expectations with program behaviors to detect malware. This notion has opened up the new field of text analytics for mobile security. Specifically, on the user-expectation side, we extract information such as app descriptions (USENIX Security 2013, RE 2018), contextual events (HotSoS 2014, ICSE 2015, JCS 2016, HotSoS 2017), ad content (NDSS 2016), and on-screen messages (VL/HCC 2018) to capture what users expect to happen in an app. On the program-behavior side, we have been developing techniques such as entity-based program analysis (ICSE 2018), centrality analysis (ASE 2019), intimacy analysis (TOSEM 2021), homophily analysis (ISSTA 2021), and contrastive learning (TDSC 2022) to detect potentially unwanted apps (PUAs) and malware.
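As a toy illustration of the contrast (a deliberately simplified keyword heuristic written for this page, not the NLP or program-analysis techniques from the papers above), one can derive the capabilities that a description leads users to expect and flag any observed behavior the description never hints at:

    # Toy "expectation context" check: expected capabilities (from the app
    # description) vs. observed behaviors (e.g., from static analysis).
    # The keyword table and capability names are hypothetical examples.
    from typing import Set

    KEYWORD_TO_CAPABILITY = {
        "photo": "CAMERA",
        "navigate": "LOCATION",
        "message": "SEND_SMS",
        "backup": "READ_CONTACTS",
    }

    def expected_capabilities(description: str) -> Set[str]:
        text = description.lower()
        return {cap for kw, cap in KEYWORD_TO_CAPABILITY.items() if kw in text}

    def unexpected_behaviors(description: str, observed: Set[str]) -> Set[str]:
        """Behaviors the app exhibits that its description gives no reason to expect."""
        return observed - expected_capabilities(description)

    description = "A flashlight app that also lets you take photos in the dark."
    observed = {"CAMERA", "SEND_SMS", "READ_CONTACTS"}
    print(unexpected_behaviors(description, observed))  # flags SEND_SMS and READ_CONTACTS

In the real systems, the expectation side draws on richer sources (descriptions, contextual events, ads, on-screen text) and the behavior side on program analysis, but the mismatch between the two is the signal used to flag potentially unwanted apps.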


SE/Security for Deep Learning

We have also investigated other topics in the software engineering and security of DL models. We were among the first to propose property inference attacks (CCS 2018) and adversarial malware generation (ACSAC 2017, AAAI-W 2018), and the first to use a global property to interpret a DL model without requiring a specific input (FSE 2020). We have worked on testing DL models such as NMT models (DSN 2019, ICSE 2019) and NLP models (COLING 2022). Realizing that such testing does not always lead to a better model, we have recently begun to focus on improving inputs for better robustness (CVPR-W 2022) and accuracy of DL models.


Intelligent Software Testing/Security

I am generally interested in developing more intelligent tools for software engineers and security researchers. We have built tools for converting natural-language specifications to programming languages (EMNLP 2018, AAAI-W 2018), generating input grammars for fuzzing (FSE 2019), clone detection (ASE 2020), diagnosing database performance issues (ICSME 2020), analyzing UI flaky tests (ICSE 2020), mapping website changes (ISSTA 2021), detecting game bugs (FSE 2021, ISSRE 2023), and detecting vulnerabilities (ICSE 2022).



Publications

Towards Improving Mobile Application Security by Enhancing User Perceptions of Application Behaviors - Journal Article
AT-EASE: A Tool for Early and Quick Usability Evaluation of Smartphone Application - Journal Article
Foundation Model Engineering: Engineering Foundation Models Just as Engineering Software 2025 - Journal Article
Judge: Effective State Abstraction for Guiding Automated Web GUI Testing 2025 - Journal Article
TaOPT: Tool-Agnostic Optimization of Parallelized Automated Mobile UI Testing 2025 - Conference Paper
Automated Testing Linguistic Capabilities of NLP Models 2024 - Journal Article
WEFix: Intelligent Automatic Generation of Explicit Waits for Efficient Web End-to-End Flaky Tests 2024 - Conference Paper
MENDNet: Just-in-time Fault Detection and Mitigation in AI Systems with Uncertainty Quantification and Multi-Exit Networks 2024 - Conference Paper

Awards

NSF CAREER Award - NSF [2022]
ACM SIGSOFT Distinguished Paper Award - ACM [2021]

Appointments

Associate Editor
ACM Transactions on Software Engineering and Methodology [–Present]
https://dl.acm.org/journal/tosem/editorial-board
University Advisory Committee on Research
UT Dallas [–Present]
https://senate.utdallas.edu/senate-committees/senate-advisory-committee-on-research-fy2024/
Editorial Board
Software Testing, Verification and Reliability journal [–Present]
https://onlinelibrary.wiley.com/journal/10991689

News Articles

Amazon Nova AI Challenge - Team ASTRO
Team ASTRO (AI Security and Trustworthiness Operations) has been selected as a red-teaming participant in the competition, which runs from November 2024 through July 2025. The challenge focuses on making AI safer by preventing code-generating models from assisting with malicious code or introducing security vulnerabilities. As a red-teaming participant, we will develop dynamic, automated testing techniques to identify potential weaknesses in code-generating models, ultimately contributing to more robust and secure AI systems.
ASTRO operates under the guidance of two distinguished faculty advisors. Dr. Wei Yang, an Associate Professor and NSF CAREER Award recipient, brings extensive expertise in software engineering and AI security. With over 30 publications in flagship software engineering conferences and pioneering work on the reliability and robustness of AI systems, especially AI-based software engineering systems, his research has significant implications for Responsible AI and code generation. Our second advisor, Dr. Xinya Du, is a tenure-track Assistant Professor. His expertise in natural language processing and large language models provides crucial guidance for our team’s approach to AI security.
Students Advance in Amazon Challenge
A University of Texas at Dallas student team is one of 10 from around the world selected to compete in a new Amazon tournament designed to strengthen the security of software developed with the assistance of artificial intelligence.

The Comets are competing in the Amazon Nova AI Challenge as one of five “red teams,” which must find vulnerabilities and flaws in code-generating models developed by five “model developer” teams. The teams were selected from over 90 proposals.

The tournament kicked off in January, and the final round will be held in June. Each team received $250,000 in sponsorship, monthly Amazon Web Services credits and the chance to compete for top prizes. The winning red team and model developer team will receive $250,000 each. Second-place teams will receive $100,000.

“What makes ASTRO particularly unique is our team’s diverse composition and depth of expertise across all academic levels,” said Dr. Wei Yang, associate professor of computer science in the Erik Jonsson School of Engineering and Computer Science and one of the team’s faculty advisors. Dr. Xinya Du, assistant professor of computer science, also serves as an advisor to the team.

PREPARED STUDENTS YIELD BETTER RESEARCH
Yang recently received tenure at UT Dallas as an associate professor of computer science. He is focused primarily on software engineering, a popular field among UT Dallas students. However, despite the program’s exponential growth and strong career prospects, students still may struggle to transition from the classroom to their careers. “We’ve had a tougher job market recently,” Yang said. “I’ve been meeting with students mostly who are about to graduate; sometimes, a bit earlier. Previously, I was helping them to evaluate options between multiple job offers, but that’s not always the case right now.”

“We achieve mutual growth. I have benefited from others who were willing to sit down and spend time with me. The gesture itself means a lot.”
— Dr. Wei Yang, associate professor of computer science


Like other faculty mentors, Yang focuses on students’ professional skills. However, because many are close to graduation, he also guides them through the arduous job-search process.

“Students come to me because they have not yet found their ideal job,” Yang said. “I help them handle stress. Sometimes, students place too much attention on hiding their flaws. I want them to think instead about what they do best and how they can contribute.”

While Yang has a busy workload, he enjoys helping students transition into their careers. “My main principle is to focus on my students,” Yang said. “We achieve mutual growth. I have benefited from others who were willing to sit down and spend time with me. The gesture itself means a lot.”
Computer Scientists Dive into AI with New CAREER Award Projects
Two computer scientists in The University of Texas at Dallas’ Erik Jonsson School of Engineering and Computer Science have received National Science Foundation (NSF) Faculty Early Career Development Program (CAREER) awards to support research to detect online deception and improve automated software analysis.

Dr. Shuang Hao, assistant professor of computer science, will use his grant to research ways to detect deception — fabricated online content generated through artificial intelligence. Dr. Wei Yang, assistant professor of computer science, will develop artificial intelligence techniques to identify and analyze issues in software code.

AI Software Analysis Research Funded by CAREER Award

Dr. Wei Yang, a computer scientist in The University of Texas at Dallas’ Erik Jonsson School of Engineering and Computer Science, was granted a National Science Foundation (NSF) Faculty Early Career Development Program (CAREER) Award to support research to improve automated software analysis.
Yang, who is an assistant professor of computer science, will develop artificial intelligence techniques to identify and analyze issues in software code.
“Wei’s research nicely combines software engineering and cyber security, allowing him to delve into various application domains, from deep learning to the security of open-source software,” said Dr. Ovidiu Daescu, head of the Department of Computer Science and professor of computer science.
NSF CAREER grants are the agency’s most prestigious award for early-career faculty who exemplify the role of teacher-scholar and are likely to become leaders in their fields. Each grant is approximately $500,000 over five years.

Activities

See my Researchr Profile