
How to Protect Your Midsized Organization from AI-fueled Cyberattacks

Pondurance
April 7, 2025

Artificial intelligence is transforming how midsized organizations operate. Though some companies adopt AI faster than others, most are discovering how new AI tools and applications can deliver value across every aspect of the business. Many organizations use generative AI, which produces content in response to user prompts, to personalize content, automate repetitive tasks, analyze complex data to identify patterns and trends, and improve product design and development.


Progressive companies are embedding a newer type of AI—called agentic AI—into their business processes. Agentic AI systems can act autonomously to achieve a complex goal with limited supervision. Use cases include self-driving cars, smart homes, more intuitive customer chatbots, personalized healthcare, and automated workflow management.


This fourth article in our series on minimizing breach risks discusses the specific and significant risks that AI use poses to your organization. We’ll cover how AI is enabling more deceptive and successful cyberattacks, how to evaluate AI risk, and how to protect your organization against both external threats and internal risks.


How AI Puts Your Data at Risk

AI can increase the likelihood of a data breach in two key ways:

  • Smarter external cyberattacks

  • Your organization’s use of sensitive data in AI tools and applications


Supercharged Cyberattacks

AI makes existing cyber threats much more effective and harder to detect. It speeds up open-source intelligence gathering, so attackers can more easily research individuals or organizations and craft highly targeted attacks. Instead of manually piecing together information, they can use AI to generate better, more convincing phishing messages in seconds.


In particular, AI makes phishing attacks harder to detect. Attackers can use AI to refine the tone of an email, making it friendlier and more persuasive, so it blends in with everyday business communications. Since sales and marketing teams already use AI for personalized outreach, people are becoming desensitized to polished, AI-generated messages—making it easier for phishing emails to slip through unnoticed.


The risk goes even deeper when attackers gain access to a user’s email account. AI can analyze past messages, learn the person’s writing style, and generate responses that perfectly mimic their tone and cadence. The resulting business email compromise (BEC) attacks are difficult to spot because the fraudulent messages feel completely authentic.


And it’s not just email—deepfake voice and video technologies are also improving. The old advice of “just call to verify” won’t always be reliable if attackers can clone voices convincingly. The more AI advances, the harder it becomes to confirm whether someone is who they say they are.


Employee and Organizational Use of AI

When assessing internal risks, think about how you handle sensitive or confidential information in your AI tools. Are you using AI as part of your product or service, or is it strictly for internal purposes? The tools you choose also matter—free, consumer-grade tools such as ChatGPT may use any data you enter to train their models, while paid enterprise versions provide stronger security and administrative controls.


Consider whether you’re using your own data to train large language models (LLMs) to refine prompts and results. If so, be aware that sensitive or proprietary information used in training can be at a higher risk of exposure. Also, think about how your employees interact with AI. Free tools can quickly generate useful outputs like meeting summaries or transcripts, but the prompts they use might contain sensitive employee, patient, customer, or company data. 
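
As a minimal illustration of keeping sensitive details out of prompts, the sketch below shows a simple Python pre-processing step that strips obvious identifiers from text before it is pasted into a third-party AI tool or sent through an API. The regular expressions and the redact helper are hypothetical placeholders, not a substitute for a real data loss prevention (DLP) control.

```python
import re

# Hypothetical patterns for obvious sensitive identifiers; a real deployment
# would rely on a proper DLP tool rather than a hand-rolled regex list.
REDACTION_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED SSN]"),
    (re.compile(r"\b(?:\d[ -]*?){13,16}\b"), "[REDACTED CARD]"),
]

def redact(text: str) -> str:
    """Strip obvious sensitive identifiers before text leaves the organization."""
    for pattern, replacement in REDACTION_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

prompt = "Summarize this note: contact jane.doe@example.com, SSN 123-45-6789."
print(redact(prompt))
# -> "Summarize this note: contact [REDACTED EMAIL], SSN [REDACTED SSN]."
```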


How to Assess AI Risk

Risk is always a factor when adopting a new tool or application, and AI is no different. A recent World Economic Forum report offers a set of risk-based questions to help business leaders determine how and when to adopt AI in their organization.

  • Has the appropriate risk tolerance for AI been established and is it understood by all risk owners? 

  • Are risks weighed against rewards when new AI projects are considered? 

  • Is there an effective process in place to govern and keep track of the deployment of AI projects? 

  • Is there clear understanding of organization-specific vulnerabilities and cyber risks related to the use or adoption of AI technologies? 

  • Is there clarity on which stakeholders need to be involved in assessing and mitigating the cyber risks of AI adoption? 

  • Are there assurance processes in place to ensure that AI deployments are consistent with the organization’s broader policies and legal and regulatory obligations?


At Pondurance, we recommend organizations follow NIST standards for risk management, specifically the Cybersecurity Framework (CSF) 2.0 and NIST SP 800-53. NIST CSF 2.0 includes governance, making it a strong foundation for organizations updating their security policies. NIST SP 800-53 provides a comprehensive set of privacy and security controls. Our MyCyberScorecard platform assesses cybersecurity programs using these standards.




The NIST AI Risk Management Framework and its companion Generative AI Profile (NIST AI 600-1) integrate with existing NIST standards. Organizations can continue to follow these established best practices while evolving their risk management strategies to address emerging AI threats. And since NIST frameworks emphasize traceability—allowing organizations to assess once and report many times—NIST AI 600-1 is a practical choice for managing AI risks efficiently.


Best Practices for AI Adoption

To minimize breach risks when implementing AI tools and applications, consider the following best practices: 

  • Integrate your organization’s AI governance policies with your acceptable use policies to ensure data protection. 

  • Implement cybersecurity and awareness training, updating it to include emerging threats such as AI-fueled attacks. 

  • Establish vendor risk assessment policies to regulate third-party access—including from AI apps and LLMs. Regularly assess third-party risk, not just annually.

  • Continuously update your cybersecurity program—including threat monitoring, incident response, and penetration testing—to account for new AI threats.

  • Mitigate the harm from AI-generated phishing and social engineering attacks by limiting user access and implementing stronger identity and access controls.

  • Compartmentalize user access to reduce the impact of successful attacks; a minimal sketch follows this list. Simply adopting zero trust for internal systems isn’t enough if third-party applications aren’t secured.
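
To illustrate the compartmentalization point above, here is a minimal sketch, assuming a Python service that scopes each identity (including third-party AI integrations) to an explicit allow list of data categories. The role names and categories are hypothetical; real deployments would enforce this in the identity provider or API gateway rather than in application code.

```python
# Minimal sketch of compartmentalized access: each role, including third-party
# AI integrations, is scoped to specific data categories, so one compromised
# identity cannot reach everything. Names below are illustrative placeholders.
ROLE_SCOPES = {
    "marketing_ai_assistant": {"public_content", "campaign_metrics"},
    "hr_analyst": {"employee_records"},
    "support_chatbot": {"knowledge_base"},
}

def can_access(role: str, data_category: str) -> bool:
    """Return True only if the role is explicitly scoped to the category."""
    return data_category in ROLE_SCOPES.get(role, set())

# A compromised chatbot integration still cannot pull employee records.
print(can_access("support_chatbot", "employee_records"))  # False
print(can_access("support_chatbot", "knowledge_base"))    # True
```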


Embrace AI, but Responsibly

AI is a reality in your organization, whether or not you have formally adopted it. Your employees use it, possibly feeding public AI tools with sensitive data—one of many breach risks. Now is the time to implement AI in a way that benefits your organization and end users—and always with security and compliance in mind.


Trust Pondurance to help you mitigate risk from AI-powered threats. We’re the only managed detection and response (MDR) cybersecurity service designed from the ground up to minimize data breach risk for mid-market organizations. We provide our customers with a platform, tools, and an always-on security operations center (SOC), ensuring that sensitive data remains secure—before, during, and after incidents occur.


Check out our new cybersecurity playbook, “A Midsized Organization’s Guide to Reducing Breach Risks in 2025,” crafted to help organizations like yours strengthen their defenses against evolving threats.



