Evaluating and defending against adversarial attacks on LLM-generated LSTM Models

Mani Ghahremani*, Arsenii Podshyvalin, Rahim Taheri

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Chapter (peer-reviewed)

Abstract

Large Language Models (LLMs) have demonstrated the ability to generate complex models, including Long Short-Term Memory (LSTM) networks for time-series forecasting. While prior research has established that LLM-generated LSTMs can achieve performance comparable to manually optimized models, their robustness against adversarial attacks remains unexamined. In this work, we investigate the susceptibility of LLM-generated LSTMs to data poisoning and adversarial perturbations, evaluating their impact on predictive accuracy. We implement three adversarial attacks and assess the effectiveness of corresponding defense strategies. Our findings reveal that while defenses can mitigate performance degradation, LLM-generated models still exhibit vulnerabilities. This study underscores the need for integrating adversarial robustness evaluations into LLM-driven model generation pipelines, particularly in sensitive applications such as financial forecasting.
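The abstract does not name the three attacks that were implemented. As a rough, hedged illustration of the kind of input-space perturbation it refers to, the sketch below applies an FGSM-style gradient perturbation to the input windows of a small PyTorch LSTM forecaster. The model size, epsilon value, and synthetic sine-wave data are assumptions made purely for illustration, not the authors' setup.

```python
# Illustrative sketch (not the chapter's code): an FGSM-style perturbation
# applied to the input windows of a minimal LSTM forecaster. Model size,
# epsilon, and the synthetic data are all assumptions for illustration.
import torch
import torch.nn as nn


class LSTMForecaster(nn.Module):
    """Minimal LSTM that predicts the next value of a univariate series."""

    def __init__(self, hidden_size: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)          # (batch, seq_len, hidden)
        return self.head(out[:, -1])   # forecast from the last time step


def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.05) -> torch.Tensor:
    """Shift x by epsilon in the direction that maximises the MSE loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.mse_loss(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = LSTMForecaster()  # untrained here; a real study would train it first
    # Synthetic sine-wave windows stand in for an actual forecasting dataset.
    series = torch.sin(torch.linspace(0, 20, 400))
    windows = series.unfold(0, 25, 1)                    # sliding windows of length 25
    x, y = windows[:, :24].unsqueeze(-1), windows[:, 24:]
    x_adv = fgsm_perturb(model, x, y)
    clean_err = nn.functional.mse_loss(model(x), y).item()
    adv_err = nn.functional.mse_loss(model(x_adv), y).item()
    print(f"clean MSE: {clean_err:.4f}  adversarial MSE: {adv_err:.4f}")
```

In this setting the "adversarial example" is a slightly shifted input window rather than a perturbed image; the same gradient-sign idea carries over because the forecaster is differentiable with respect to its inputs.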
Original language: English
Title of host publication: Adversarial Example Detection and Mitigation Using Machine Learning
Editors: Eshan Nowroozi, Rahim Taheri, Lucas Cordeiro
Publisher: Springer Cham
Pages: 183-194
Number of pages: 12
Edition: 1st
ISBN (Electronic): 9783031994470
ISBN (Print): 9783031994463, 9783031994494
DOIs
Publication status: Published - 22 Jan 2026
