From Texts to Shields: Convergence of Large Language Models and Cybersecurity (Tao Li; Ya-Ting Yang; Yunian Pan; Quanyan Zhu): This paper surveys how large language models (LLMs) are converging with cybersecurity tasks, from vulnerability analysis and network/5G security to generative security engineering. It examines both how LLMs can assist defenders (automation, reasoning, security analytics) and how they introduce new risks (trust, transparency, adversarial use). The authors outline socio-technical challenges such as interpretability and human-in-the-loop design, and propose a forward-looking research agenda for secure, effective LLM adoption in cybersecurity.
Adversarial Defense in Cybersecurity: A Systematic Review of GANs for Threat Detection and Mitigation (Tharcisse Ndayipfukamiye; Jianguo Ding; Doreen Sebastian Sarwatt; Adamu Gaston Philipo; Huansheng Ning): This is a large-scale, PRISMA-compliant systematic literature review of how Generative Adversarial Networks (GANs) are used in cybersecurity, both as attack vectors and as defensive tools, covering January 2021 through August 2025. It identifies 185 peer-reviewed studies, develops a four-dimensional taxonomy (defensive function, GAN architecture, cybersecurity domain, adversarial threat model), charts publication trends, assesses the effectiveness of GAN-based defenses, and highlights key gaps (training instability, lack of benchmarks, limited explainability). The authors propose a roadmap for future work.
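To make the defensive pattern concrete, the sketch below shows the common GAN-for-detection idea such reviews catalogue: train a generator/discriminator pair on benign traffic features, then reuse the discriminator's score as an anomaly signal at detection time. This is a minimal illustration of the general technique, not any surveyed system; the synthetic feature vectors, network sizes, training budget, and the 0.5 alert threshold are all assumptions made here for brevity.

```python
# Minimal sketch: train a GAN on "benign" flow features, then use the
# discriminator's score to flag anomalous flows. All data and
# hyperparameters are illustrative assumptions, not from the survey.
import torch
import torch.nn as nn

torch.manual_seed(0)
N_FEATURES = 16  # hypothetical number of per-flow features

# Generator: noise -> synthetic benign-looking flow features
gen = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, N_FEATURES))
# Discriminator: flow features -> probability of being real benign traffic
disc = nn.Sequential(nn.Linear(N_FEATURES, 32), nn.ReLU(),
                     nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

# Stand-in for benign training flows (e.g., normalized NetFlow statistics)
benign = torch.randn(512, N_FEATURES) * 0.5 + 1.0

for step in range(200):
    # Discriminator step: real benign vs. generated samples
    fake = gen(torch.randn(64, 8)).detach()
    real = benign[torch.randint(0, len(benign), (64,))]
    d_loss = (bce(disc(real), torch.ones(64, 1)) +
              bce(disc(fake), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to fool the discriminator
    g_loss = bce(disc(gen(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Detection time: flows the discriminator scores as unlike benign traffic
# are flagged for inspection (the 0.5 cutoff is an assumed threshold).
suspect = torch.randn(4, N_FEATURES) * 2.0 - 3.0  # clearly off-distribution
scores = disc(suspect).detach().squeeze(1)
print([("ALERT" if s < 0.5 else "ok") for s in scores])
```

In practice the reviewed literature often turns to variants such as WGAN or BiGAN precisely because vanilla GAN training like this is unstable, which is one of the gaps the review highlights.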
Securing the AI Frontier: Urgent Ethical and Regulatory Imperatives for AI-Driven Cybersecurity (Vikram Kulothungan): This paper examines the ethical and regulatory challenges that arise when AI is deeply integrated into cybersecurity systems. It traces the history of AI regulation, analyzes current frameworks (for example, the EU AI Act), and discusses ethical dimensions such as bias, transparency, accountability, privacy, and human oversight. It proposes strategies to promote AI literacy, public engagement, and global harmonization of regulatory approaches in the cybersecurity/AI domain.
Neuromorphic Mimicry Attacks Exploiting Brain-Inspired Computing for Covert Cyber Intrusions (Hemanth Ravipati): This paper introduces a new threat class: “Neuromorphic Mimicry Attacks” (NMAs). These attacks target neuromorphic computing systems (brain-inspired chips, spiking neural networks, edge/IoT hardware) by mimicking legitimate neural activity, via synaptic weight tampering and sensory input poisoning, in order to evade detection. The paper provides a theoretical framework and simulation results on a synthetic neuromorphic dataset, and proposes countermeasures (neural-specific anomaly detection, secure synaptic learning protocols).
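As one concrete reading of what “neural-specific anomaly detection” could look like, the sketch below profiles per-neuron firing rates on trusted spike trains and flags detection windows whose rate pattern deviates from that profile. The synthetic spike data, window size, and z-score threshold are illustrative assumptions, not the paper's actual countermeasure.

```python
# Minimal sketch of neural-specific anomaly detection: learn per-neuron
# firing-rate statistics from trusted baseline activity, then flag neurons
# whose windowed rates drift. Data and thresholds are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_NEURONS, T = 32, 2000          # hypothetical chip size and timesteps
WINDOW, THRESHOLD = 200, 4.0     # detection window and z-score cutoff

# Trusted baseline activity: Bernoulli spikes at neuron-specific rates
base_rates = rng.uniform(0.02, 0.2, size=N_NEURONS)
baseline = rng.random((T, N_NEURONS)) < base_rates

# Per-neuron mean and std of windowed firing rates over the baseline
windows = baseline.reshape(T // WINDOW, WINDOW, N_NEURONS).mean(axis=1)
mu, sigma = windows.mean(axis=0), windows.std(axis=0) + 1e-9

def flag_window(spikes):
    """Return indices of neurons whose firing rate is anomalous."""
    rates = spikes.mean(axis=0)              # per-neuron rate in this window
    z = np.abs(rates - mu) / sigma
    return np.flatnonzero(z > THRESHOLD)

# Simulated tampering: an attacker drives a few neurons harder, shifting
# their rates away from the learned profile (a mimicry attack would try
# to stay inside it, which is what makes NMAs hard to catch).
attacked = rng.random((WINDOW, N_NEURONS)) < base_rates
attacked[:, [3, 17]] |= rng.random((WINDOW, 2)) < 0.6

print("anomalous neurons:", flag_window(attacked))
```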
Towards Adaptive AI Governance: Comparative Insights from the U.S., EU, and Asia (Vikram Kulothungan; Deepti Gupta): This article offers a comparative analysis of how the U.S., the European Union, and Asian jurisdictions approach AI governance, innovation, and regulation, especially in the cybersecurity/AI domain. It identifies divergent models (market-driven, risk-based, state-guided), explains the tensions these create for international collaboration, and proposes an “adaptive AI governance” framework blending innovation accelerators, risk oversight, and strategic alignment.
Asymmetry by Design: Boosting Cyber Defenders with Differential Access to AI (Shaun Ee; Chris Covino; Cara Labrador; Christina Krawec; Jam Kraprayoon; Joe O’Brien): This work proposes a strategic framework for cyber defense based on deliberately shaping access to AI capabilities (“differential access”) so that defenders receive prioritized access while adversaries face tighter restrictions. It outlines three approaches (Promote Access, Manage Access, Deny by Default) and gives example schemes showing how defenders might apply each in practice. It argues that as adversaries gain advanced AI, defenders must build architectural and policy asymmetries in their favor.
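The three postures map naturally onto capability gating at an AI service boundary. The sketch below is one hypothetical way a provider might encode them; the tier names mirror the paper's framework, but the policy table, capability names, and vetting logic are invented here purely for illustration.

```python
# Minimal sketch: the three access postures as a capability gate for an
# AI service. Policy entries and vetting inputs are illustrative only.
from enum import Enum

class Posture(Enum):
    PROMOTE = "promote_access"   # broadly available defensive capability
    MANAGE = "manage_access"     # available to vetted defenders only
    DENY = "deny_by_default"     # withheld unless explicitly authorized

# Hypothetical mapping of AI capabilities to postures
POLICY = {
    "log_triage_assistant": Posture.PROMOTE,
    "autonomous_patch_generation": Posture.MANAGE,
    "exploit_synthesis": Posture.DENY,
}

def is_allowed(capability: str, vetted_defender: bool,
               explicit_grant: bool) -> bool:
    """Deny-by-default gate: unknown capabilities are refused outright."""
    posture = POLICY.get(capability, Posture.DENY)
    if posture is Posture.PROMOTE:
        return True
    if posture is Posture.MANAGE:
        return vetted_defender
    return explicit_grant  # DENY tier needs an explicit authorization

print(is_allowed("log_triage_assistant", vetted_defender=False,
                 explicit_grant=False))   # True: promoted to all defenders
print(is_allowed("exploit_synthesis", vetted_defender=True,
                 explicit_grant=False))   # False: denied by default
```

The deny-by-default fallback for unlisted capabilities is the architectural asymmetry the paper argues for: access expands only by deliberate policy decision, not by omission.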