Vulnerabilities and Defenses: A Monograph on Comprehensive Analysis of Security Attacks on Large Language Models
DOI:
https://doi.org/10.51983/ijiss-2025.IJISS.15.2.54Keywords:
Large Language Models, LLM Security, Data Poisoning, Prompt Injection, Jailbreaking, Model Robustness, Explainability, Defense Mechanism, AI GovernanceAbstract
This research mainly focused on highly developed
natural language processing capabilities, such as large language
models (LLMs), which can generate code and power chatbots,
among many other uses. Their growing use, though, has put
them under many security risks. This work thoroughly
investigates LLM vulnerabilities, including adversarial attacks,
data poisoning, prompt injection, privacy leaking, and model
exploitation via jailbreak. Though there is an increasing corpus
of defensive tactics, most still have limited reach, potency, or
adaptability. The paper lists ideas for the following studies and
emphasizes the requirement for strong, generalizable,
explainable security solutions. Creating uniform evaluation
standards, adaptive defense mechanisms, more transparent
models, automated threat detection, and frameworks for ethical
integration are all part of the approach. Ensuring LLMs calls
for a multidisciplinary strategy that strikes a compromise
between responsible government and technology innovation.
Downloads
Published
How to Cite
Issue
Section
License
Copyright (c) 2025 The Research Publication

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.