AI for Cybersecurity Research Lunch
Format: The lunch is held every Wednesday during Spring 2025 in PETR 214 from 12:00 PM to 1:00 PM. If you’d like to give a talk, please contact Ze Sheng with an abstract (zesheng@tamu.edu).
Mailing List: aicybersecurity-research-lunch@lists.tamu.edu
Previous Meetings: View All Past Events
Slides/Papers of Previous Meetings: Here
🍔 Ordering Food 🍔: Coming Soon
Upcoming Schedule
Fuzzing Complex Software with Structured Inputs Using LLMs
Date: 03/05/2025
Speaker: Zhicheng Chen
Time: 12:00pm - 1:00pm
Location: PETR 214
Abstract: Recent advances have explored the use of large language models (LLMs) for fuzz driver generation (e.g., PromptFuzz) and commit-based fuzzing (e.g., WAFLGo). However, both approaches have significant limitations. PromptFuzz fuzzes by combining APIs but cannot effectively target code affected by commit changes, and it does not address the challenge of generating drivers for complex programs such as Nginx. WAFLGo, on the other hand, supports commit-based fuzzing but performs poorly on inputs with strict structural requirements. This talk will analyze the strengths and weaknesses of both approaches and present preliminary experimental results on Nginx, demonstrating how controlling mutation regions can significantly improve the efficiency of fuzzing highly structured inputs.
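The idea of controlling mutation regions can be illustrated with a minimal sketch: restrict random bit flips to designated byte ranges so that the structural parts of an input (keywords, delimiters) survive mutation. This is a hypothetical illustration, not the speaker's implementation; the function name, region encoding, and the HTTP example are assumptions made for the example.

```python
import random

def mutate_in_regions(data: bytes, regions, n_flips=4, seed=None):
    """Flip random bits only inside the given (start, end) byte regions,
    leaving the structural parts of the input untouched.

    regions: list of half-open (start, end) byte ranges allowed to mutate.
    """
    rng = random.Random(seed)
    buf = bytearray(data)
    # Collect every byte offset that is allowed to change.
    allowed = [i for start, end in regions for i in range(start, end)]
    for _ in range(n_flips):
        i = rng.choice(allowed)
        buf[i] ^= 1 << rng.randrange(8)  # flip one random bit
    return bytes(buf)

# Example: keep the HTTP method and version fixed, mutate only the path.
req = b"GET /index.html HTTP/1.1\r\nHost: a\r\n\r\n"
mutated = mutate_in_regions(req, regions=[(4, 15)], seed=0)
assert mutated[:4] == req[:4]      # "GET " preserved
assert mutated[15:] == req[15:]    # everything after the path preserved
```

Because the request line's grammar-relevant tokens are never touched, every mutant still parses far enough to exercise deeper code paths, which is the efficiency gain the abstract alludes to for highly structured inputs.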
Please feel free to join us at 12:00pm every Wednesday. If you want to schedule a talk, email Ze Sheng at zesheng@tamu.edu.
LLMPirate: LLMs for Black-box Hardware IP Piracy
Date: 03/19/2025
Speaker: Matthew DeLorenzo
Time: 12:00pm - 1:00pm
Location: PETR 214
Abstract: The rapid advancement of large language models (LLMs) has made it possible to analyze and generate code nearly instantaneously, leading researchers and companies to integrate LLMs across the hardware design and verification process. However, LLMs can also enable new attack scenarios within hardware development. One such threat, not yet explored, is intellectual property (IP) piracy, in which LLMs are used to rewrite hardware designs to evade piracy detection. To demonstrate this threat, we propose LLMPirate, the first LLM-based technique able to generate pirated variations of circuit designs that evade multiple state-of-the-art piracy detection tools on 100% of tested circuits, and it is even capable of pirating full processor designs.