Evaluating and defending against adversarial attacks on LLM-generated LSTM Models

Abstract
Large Language Models (LLMs) have demonstrated the ability to generate complex models, including Long Short-Term Memory (LSTM) networks for time-series forecasting. While prior research established that LLM-generated LSTMs can achieve performance comparable to manually optimized models, their robustness against adversarial attacks remains unexamined. In this work, we investigate the susceptibility of LLM-generated LSTMs to data poisoning and adversarial perturbations, evaluating their impact on predictive accuracy. We implement three adversarial attacks and assess the effectiveness of corresponding defense strategies. Our findings reveal that while defenses can mitigate performance degradation, LLM-generated models still exhibit vulnerabilities. This study underscores the need for integrating adversarial robustness evaluations into LLM-driven model generation pipelines, particularly in sensitive applications such as financial forecasting.
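The abstract does not name the three attacks implemented in the chapter, but a gradient-based input perturbation such as FGSM is a representative instance of the adversarial perturbations it refers to. The sketch below is illustrative only, not the chapter's code: the `LSTMForecaster` architecture, the `fgsm_perturb` helper, the toy sine-wave data, and the epsilon value are all assumptions chosen for a minimal runnable demonstration in PyTorch.

```python
# Illustrative sketch only: a generic FGSM-style perturbation against a small
# LSTM forecaster. Model size, epsilon, and data are placeholder choices and
# do not reproduce the chapter's experimental setup.
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    """Minimal one-step-ahead LSTM forecaster (assumed architecture)."""
    def __init__(self, input_size=1, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):
        out, _ = self.lstm(x)          # (batch, seq_len, hidden)
        return self.head(out[:, -1])   # predict from the last hidden state

def fgsm_perturb(model, x, y, epsilon=0.05):
    """Fast Gradient Sign Method: one signed-gradient step on the input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.mse_loss(model(x_adv), y)
    loss.backward()
    # Move each input value by epsilon in the direction that increases loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Toy usage on an untrained model: a sine-wave window stands in for the
# normalized financial time series of the chapter's forecasting setting.
model = LSTMForecaster()
t = torch.linspace(0, 6.28, 51)
x = torch.sin(t[:-1]).reshape(1, 50, 1)   # input window of 50 steps
y = torch.sin(t[-1:]).reshape(1, 1)       # next-step target
x_adv = fgsm_perturb(model, x, y)
print("clean pred:", model(x).item(), "adv pred:", model(x_adv).item())
```

A data-poisoning attack, by contrast, would apply such perturbations to the training windows before fitting, rather than to inputs at inference time.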
| Original language | English |
|---|---|
| Title of host publication | Adversarial Example Detection and Mitigation Using Machine Learning |
| Editors | Eshan Nowroozi, Rahim Taheri, Lucas Cordeiro |
| Publisher | Springer Cham |
| Pages | 183-194 |
| Number of pages | 12 |
| Edition | 1st |
| ISBN (Electronic) | 9783031994470 |
| ISBN (Print) | 9783031994463, 9783031994494 |
| DOIs | |
| Publication status | Published - 22 Jan 2026 |