Vulnerability
SolarWinds, again: Critical RCE bugs reopen old wounds for enterprise security teams
SolarWinds has disclosed six serious vulnerabilities in its Web Help Desk (WHD) software, four of them rated critical.
These flaws let attackers bypass authentication and execute code remotely, posing a severe risk to organizations running the product.
Widespread Impact
WHD is widely used by Fortune 500 companies and government agencies, making timely patching essential.
Past SolarWinds breaches show the potential for massive downstream damage if left unaddressed.
Immediate Action Needed
Security teams are urged to update to Web Help Desk 2026.1 immediately to reduce exposure.
Legacy Software Risks
The recurring vulnerabilities highlight how older software can continue to threaten enterprise security even years after initial breaches.
Regulation
Reports of GDPR violations have risen sharply
Reports of GDPR violations have surged, a sign that many organizations are still struggling to meet data protection requirements.
Regulators across Europe are recording a growing number of noncompliance cases.
Common Causes
Failures often stem from poor data handling, inadequate security measures, and unclear policies.
Companies that store or process personal data without proper controls are most at risk of fines.
Regulatory Pressure
Regulators are stepping up enforcement with larger fines and stricter oversight.
Businesses are being urged to review processes and ensure proper reporting and accountability.
Action Steps for Security Leaders
CISOs and data officers should prioritize audits, employee training, and automated controls to prevent breaches and avoid penalties.
Meeting GDPR standards is becoming more urgent as violations continue to climb.
📺️ Podcast
Why Your Bots Need Better Permissions Than Your Admins
Agentic AI is changing how companies think about security.
Traditional LLMs are like databases, but autonomous agents act like dynamic applications that make decisions in real time.
Understanding the Stack
The AI stack now has three layers: the database (LLM), the application (agent), and the presentation (natural language interface).
Protecting only the data isn't enough; agents must be secured too, because they can act unpredictably.
IAM Challenges
Assigning identity to agents is tricky: treating them either as employees or as workloads carries risks.
Agents need tightly scoped permissions to perform tasks without overreaching.
This "identity chain" problem is still unresolved across the industry.
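The scoping part, at least, can be enforced mechanically today. Here is a minimal Python sketch (all names are hypothetical, not from the episode) of per-agent allow-lists checked before every tool call, so an agent can never invoke a capability outside its role:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """Hypothetical agent identity carrying its scoped tool allow-list."""
    name: str
    allowed_tools: frozenset  # e.g. frozenset({"read_ticket"})

def invoke_tool(agent, tool_name, tools, **kwargs):
    """Gate every tool call against the agent's allow-list; deny by default."""
    if tool_name not in agent.allowed_tools:
        raise PermissionError(f"agent '{agent.name}' is not scoped for '{tool_name}'")
    return tools[tool_name](**kwargs)

# Usage: a help-desk triage agent may read tickets but never delete them.
tools = {
    "read_ticket": lambda ticket_id: f"contents of ticket {ticket_id}",
    "delete_ticket": lambda ticket_id: f"ticket {ticket_id} deleted",
}
triage = AgentIdentity("triage", frozenset({"read_ticket"}))
print(invoke_tool(triage, "read_ticket", tools, ticket_id=42))   # allowed
# invoke_tool(triage, "delete_ticket", tools, ticket_id=42)      # PermissionError
```

The deny-by-default check is the point: an agent's reach is defined by what it is explicitly granted, not by what nobody remembered to forbid.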
Top Risks and Controls
Key risks include rogue actions and agents misusing cloud APIs; controls such as proxy layers can sandbox an agent's capabilities (see the sketch after this section).
Future threats may involve agents manipulating each other in ways that bypass human oversight.
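The proxy idea can be made concrete with a deny-by-default broker. This is a rough sketch under assumed names (the hosts and endpoints are illustrative): agent traffic only reaches cloud APIs through a layer that allow-lists host-and-method pairs, so a rogue agent cannot hit destructive endpoints directly.

```python
from urllib.parse import urlparse

# Allow-listed (host, HTTP method) pairs the agent may reach via the proxy.
ALLOWED = {
    ("storage.internal.example.com", "GET"),
    ("tickets.internal.example.com", "POST"),
}

def proxy_request(method: str, url: str) -> str:
    """Broker an agent's outbound call; refuse anything outside the sandbox."""
    host = urlparse(url).hostname
    if (host, method.upper()) not in ALLOWED:
        raise PermissionError(f"blocked: {method} {url}")
    # A real proxy would forward the request here; we just acknowledge it.
    return f"forwarded: {method} {url}"

print(proxy_request("GET", "https://storage.internal.example.com/reports"))
# proxy_request("DELETE", "https://storage.internal.example.com/reports")  # blocked
```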
Shared Responsibility
Security doesn’t fall on one party. Model providers, infrastructure hosts, developers, and users all share responsibility.
Current regulations are too broad and need to focus on specific high-risk cases.
Trust as a Barrier
The biggest obstacle to adoption is trust. Enterprises need confidence that agent actions won’t cause unintended harm.
Until agents can reliably predict impacts, human oversight remains critical.
Autonomous agents are powerful but risky, and securing them requires new strategies, shared responsibility, and careful identity management.
Shadow IT
Roughly half of employees are using unsanctioned AI tools, and enterprise leaders are major culprits
AI Use Explodes in the Workplace
About half of employees are using AI tools at work without approval.
Even more striking, enterprise leaders are often the ones encouraging this behavior, creating risks across organizations.
Hidden Risks of AI Adoption
Unapproved AI tools can expose sensitive company data and bypass security controls.
Many organizations lack policies or training to manage AI use safely, leaving blind spots that attackers could exploit.
Leadership and Culture
Executives and managers are sending mixed signals by promoting productivity through AI while ignoring security concerns.
This makes it harder for security teams to enforce rules and for employees to understand safe practices.
Steps to Gain Control
Organizations should map AI usage, enforce policies, and educate staff on safe practices.
Building clear governance and auditing AI tools can help prevent data leaks and operational mistakes.
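Mapping usage is the natural first step, and it can start from data most teams already collect. A minimal sketch, assuming a CSV-style web proxy log with `user` and `dest_host` columns (the log format and domain list are illustrative, not tied to any product):

```python
import csv
from collections import Counter

# Illustrative list of public AI-tool domains to inventory.
AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com"}

def map_ai_usage(log_path: str) -> Counter:
    """Count requests per (user, AI domain) pair to surface unsanctioned use."""
    usage = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # expects 'user' and 'dest_host' columns
            if row["dest_host"] in AI_DOMAINS:
                usage[(row["user"], row["dest_host"])] += 1
    return usage

# for (user, host), hits in map_ai_usage("proxy.csv").most_common():
#     print(f"{user} -> {host}: {hits} requests")
```

Even a crude inventory like this turns an invisible problem into a ranked list that policy and training efforts can target.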
Unchecked AI use is growing fast, and without leadership alignment and clear policies, companies risk serious security and compliance issues.
Data Leakage
More than half of former UK employees still have access to company spreadsheets
A recent study shows that over half of former employees in the UK still have access to company spreadsheets.
This exposes sensitive business information long after staff have left, creating security risks.
Why It Happens
Many organizations fail to revoke accounts and permissions when employees depart.
Legacy access is often overlooked in busy IT environments, leaving doors open to misuse or accidental leaks.
The Risk to Companies
Unmonitored access can lead to data theft, errors, or compliance violations.
Attackers could exploit these gaps, or ex-employees might unintentionally expose information if accounts remain active.
Best Practices to Close Gaps
Regular audits of user accounts and automated deprovisioning can prevent lingering access.
Clear exit procedures and role-based access policies help keep sensitive data secure.
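Automated deprovisioning can be as simple as reconciling active accounts against the HR roster on a schedule. A minimal sketch, with a hypothetical `disable` callback standing in for the real directory or IdP call:

```python
def deprovision_leavers(active_accounts: set, hr_roster: set, disable) -> set:
    """Disable every account whose owner no longer appears on the HR roster."""
    stale = active_accounts - hr_roster
    for account in sorted(stale):
        disable(account)  # in practice: directory/IdP call plus an audit log entry
    return stale

# Usage: bob has left the company, so his account gets disabled.
accounts = {"alice", "bob", "carol"}
roster = {"alice", "carol"}
disabled = deprovision_leavers(accounts, roster, lambda a: print(f"disabled {a}"))
assert disabled == {"bob"}
```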
Old accounts are a hidden threat that can compromise company data if not managed proactively.
Internal Threat
Trump’s acting cyber chief uploaded sensitive files into a public version of ChatGPT
AI Use Under Scrutiny
Madhu Gottumukkala, the acting director of CISA, uploaded sensitive but unclassified contracting documents into the public version of ChatGPT last summer.
Multiple automated security alerts flagged the activity, highlighting risks of using public AI tools with government data.
Temporary Permission Granted
Gottumukkala had received special approval to use ChatGPT under DHS controls. The app remained blocked for other employees.
While none of the uploaded files were classified, they were marked “for official use only,” a designation for sensitive government information.
Internal Review and Meetings
CISA and DHS officials launched an internal review to assess potential harm from the uploads.
Gottumukkala met with senior officials including the CIO and general counsel to discuss the incident and proper handling of sensitive materials.
The outcome of the review has not been publicly disclosed.
Risks of Public AI Tools
Material uploaded to ChatGPT is accessible to OpenAI and can influence responses for other users.
DHS now uses internal AI tools like DHSChat, designed to prevent information from leaving federal networks.
Leadership Concerns
Gottumukkala’s tenure at CISA has faced other security-related challenges, including a failed counterintelligence polygraph exam and attempts to remove the agency’s CIO.
These incidents underline the need for strong oversight when using AI tools and handling sensitive data.
Public-Private Oversight Gap
The situation illustrates the tension between emerging AI adoption and traditional federal security protocols.
Even temporary permissions can create exposure if public platforms are involved.
Sensitive AI use demands careful review and enforcement to prevent accidental leaks, even from senior officials.
Stay safe!