Beating the Bots: How to Stop Automated Mobile App Attacks

Protecting your mobile app and defending its APIs from bots and automated attacks is more important than ever. Learn how modern API protections can help prevent attacks and mitigate bot impact. Start prepping your defenses by registering for our upcoming webinar.

Register Now

#215: AI Regenerating

Another look at CISA and a survey of the landscape

Welcome to another _secpro!

In cybersecurity, there's no such thing as standing still. While standing still might mean "going with the flow" in ordinary life, it means the very opposite when it comes to jousting with the adversary - indeed, standing still means "letting the flow go past you"! That's why we on the _secpro team are always pushing ourselves and our readers to pick up ideas, develop skills, and stay above water in the rushing waves of "the flow"!

That's why this week we are beginning a four-part series that looks into the deeds and needs of a CISA-trained professional - and, more importantly, how you can reach that plateau too. With the help of Hemang Doshi's fantastic book, we're taking the necessary steps to move from IT generalist or junior secpro into the higher echelons of auditing. Sound good? Check out this week's excerpt: Use of AI in Audit Planning.

Check out _secpro premium

If you want more, you know what you need to do: sign up for premium and get access to everything we have on offer. Click the link above to visit our Substack and sign up there!

Cheers!
Austin Miller
Editor-in-Chief

AI-Powered Platform Engineering

Platform engineering is moving fast, and AI is at the center of it. In this five-hour workshop, George Hantzaras will show you how to design golden paths, build smarter developer portals, and bring AI into ops and observability. You'll leave with practical patterns, real examples, and a 90-day roadmap to start implementing right away. Seats are limited!

Reserve your spot today at 30% off

Here's a little meme to keep you going...

Source: Reddit

This week's article

Use of AI in the Audit Process

AI is revolutionizing various industries, including auditing. Traditionally, auditing has been a manual and time-consuming process, requiring auditors to sift through large volumes of data to identify discrepancies and ensure compliance. With the advent of AI, however, the audit process is becoming more efficient, accurate, and insightful. AI can analyze vast amounts of data quickly, identify patterns, and even predict potential risks, making it an invaluable tool in modern auditing.

Read the rest here!
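To give a flavour of what that looks like in practice, here is a minimal sketch of AI-assisted audit triage. This is our own illustration, not an excerpt from Doshi's book: the journal_entries.csv file and its column names are hypothetical, and scikit-learn's IsolationForest stands in for whatever anomaly detector a real audit platform would use.

```python
# Illustrative only: flag unusual journal entries for auditor review.
# The CSV and its "amount" / "hour_posted" columns are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

entries = pd.read_csv("journal_entries.csv")
features = entries[["amount", "hour_posted"]]

# Isolation forests surface outliers cheaply even on large datasets:
# the "analyze vast amounts of data, identify patterns" step above.
model = IsolationForest(contamination=0.01, random_state=42)
entries["anomaly"] = model.fit_predict(features)  # -1 marks an outlier

# Hand only the flagged rows to a human auditor.
print(entries[entries["anomaly"] == -1].head())
```

The division of labour is the point: the model triages millions of rows in seconds, and the auditor spends their judgement on the handful that come back flagged.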
News Bytes

Snowflake-Linked Data Breaches Hit Multiple Firms: Attackers exploited stolen credentials to access customer environments on the Snowflake cloud platform, impacting high-profile companies and exposing large datasets. Investigators warn of ongoing attempts to monetize the stolen data.

Critical Infrastructure Targeted in ‘Volt Typhoon’ Campaign: A sophisticated state-aligned threat group expanded its Volt Typhoon operations, deploying stealthy living-off-the-land techniques to compromise U.S. energy and transportation sectors without triggering standard alerts. For further coverage from May, see here.

Okta Warns of Credential Stuffing Surge Against Admin Portals: Identity management provider Okta reported a sharp spike in automated credential stuffing attacks on its administrator portals, prompting urgent guidance on MFA enforcement and IP allowlisting. (A toy detection sketch follows after this list.)

New macOS Spyware ‘FrostedWeb’ Slips Past Apple’s Security Controls: Researchers detailed a novel macOS spyware strain capable of bypassing Gatekeeper and XProtect, harvesting browser data and keystrokes while maintaining persistence through undocumented APIs.
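Credential stuffing leaves a recognizable shape in authentication logs: many distinct usernames failing from the same source IP. As a toy illustration - our own sketch, not Okta's tooling or guidance, and the event format and threshold are pure assumptions - triage could start like this:

```python
# Toy credential-stuffing triage: many distinct usernames failing from
# a single source IP is the classic stuffing signature. The event shape
# and the threshold below are illustrative assumptions, not Okta's.
from collections import defaultdict

THRESHOLD = 20  # distinct failed usernames per IP before flagging

def flag_stuffing_ips(events):
    """events: iterable of (source_ip, username, success) tuples."""
    failed_users = defaultdict(set)
    for ip, user, success in events:
        if not success:
            failed_users[ip].add(user)
    return {ip: users for ip, users in failed_users.items()
            if len(users) >= THRESHOLD}
```

Flagged IPs then become candidates for blocking or step-up challenges; in practice you would enforce this at the edge and pair it with the MFA and allowlisting controls Okta recommends, rather than relying on after-the-fact log mining.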
This week's academia

Here Comes The AI Worm: Unleashing Zero-click Worms that Target GenAI-Powered Applications: Introduces Morris-II, a self-replicating “AI worm” that exploits RAG/GenAI pipelines by embedding adversarial, self-replicating prompts which cause GenAI apps both to execute malicious payloads and to propagate the prompt to other agents. The paper demonstrates feasibility in controlled environments and proposes a detection/mitigation mechanism (the “Virtual Donkey”) to spot propagation. (Stav Cohen, Ron Bitton, Ben Nassi, and collaborators)

Ransomware 3.0: Self-Composing and LLM-Orchestrated: A proof-of-concept study showing how LLMs can autonomously orchestrate full ransomware campaigns: reconnaissance, synthesis of payloads (code), environment-specific adaptation, exfiltration/encryption, and personalized extortion. The work demonstrates the economic feasibility of LLM-driven ransomware and argues for new behavioral/telemetry defenses. (Md Raz, Meet Udeshi, P. V. Sai Charan, Prashanth Krishnamurthy, Farshad Khorrami, Ramesh Karri)

Multimodal Prompt Injection Attacks: Risks and Defenses: A systematic study of prompt-injection threats when inputs are multimodal (text, images, and other modalities). It identifies new attack vectors that bypass text-only defenses (for example, embedding malicious instructions in images or mixed content) and evaluates mitigation strategies - useful reading for defenders building multimodal LLM apps.

Prompt Injection 2.0: Hybrid AI Threats: Extends prompt-injection analysis to hybrid attacks that combine classical web/vulnerability techniques (XSS, CSRF, etc.) with prompt injection to escape sandboxing and exfiltrate data. The paper analyzes attack chains, demonstrates proofs of concept, and evaluates defensive measures that bridge web security and LLM guardrails.

Revealing a Hidden Class of Task-in-Prompt Adversarial Attacks (PDF): Presents and characterizes Task-in-Prompt (TIP) attacks - adversarial inputs that appear as innocuous tasks but cause LLMs to perform unintended or harmful actions. The paper provides a taxonomy, attack-generation techniques, responsible-disclosure details, and recommended mitigation guidance for model builders and integrators. This paper was presented at ACL and has sparked active discussion in the NLP/AI safety community. (S. Berezin et al.)

A Survey on Model Extraction / Model-Stealing Attacks and Defenses for Large Language Models: A comprehensive survey and taxonomy of model-extraction attacks against deployed LLMs (functionality extraction, training-data extraction, prompt-targeted attacks), plus an overview of defensive techniques (rate limiting, watermarking, API-level defenses). This survey is gaining traction as practitioners scramble to protect proprietary models and user privacy (a toy sketch of the rate-limiting defense follows after this list). (K. Zhao et al.)
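To make one of those defensive categories concrete: model extraction typically needs very high query volumes, which is why rate limiting appears among the survey's defenses. Below is our own generic token-bucket sketch - the capacity and refill numbers are arbitrary assumptions, and none of this code comes from the survey.

```python
# Toy per-API-key token bucket. Extraction-scale query bursts drain the
# bucket and get refused; steady legitimate traffic is unaffected.
# Capacity and refill rate are illustrative assumptions.
import time

class TokenBucket:
    def __init__(self, capacity=60, refill_per_sec=1.0):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self):
        # Top the bucket up in proportion to elapsed time, then spend
        # one token per request if any remain.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets = {}  # one bucket per API key

def check_request(api_key):
    return buckets.setdefault(api_key, TokenBucket()).allow()
```

Rate limiting only raises an attacker's cost, of course, which is why the survey pairs it with watermarking and other API-level defenses.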
Source: Reddit