
The Top 10 SaaS Tools Using Your Data for AI Training (2025 List)

By the ClausePatrol Team, Legal & Compliance Experts

The Silent IP Theft

Your agency uses dozens of SaaS tools daily. What you might not know is that many of them have quietly added clauses to their Terms of Service that give them permission to train AI models on your data—including your clients' confidential information.

Why This Matters for Agencies

If you run a development agency, marketing firm, or any client services business, you have a legal and ethical obligation to protect your clients' data. When you upload client code, designs, or strategy documents to a SaaS tool that trains AI on it, you might be:

  • Violating your NDA with the client
  • Exposing trade secrets to competitors
  • Falling out of GDPR/CCPA compliance
  • Opening yourself up to lawsuits

The 2025 List: 10 Popular SaaS Tools with AI Training Clauses

1. Slack

Low Risk

What they do: After May 2024 controversy, Slack clarified they do NOT use customer data to train generative AI. They only use data for non-generative features (emoji suggestions, search).

The policy: "Slack will not use Customer Data to train generative AI models unless Customer provides affirmative opt-in consent."

Opt-out: Generative AI training requires opt-in. For non-generative features, contact [email protected] to opt out.

View live Slack analysis →

2. Adobe Creative Cloud

Low Risk (After Update)

What happened: Vague TOS language sparked a massive backlash in June 2024. Adobe reversed course and clarified its policy.

Current policy (June 24, 2024): "Adobe will not use your Local or Cloud Content to train generative AI" (except Adobe Stock submissions).

Opt-out: Not needed. Updated TOS explicitly excludes AI training on customer content.

View live Adobe analysis →

3. Zoom

Low Risk

What they do: After an August 2023 backlash, Zoom clarified that it does NOT use customer content (audio, video, chat) for AI training; only telemetry and diagnostic data are used.

Current policy: "Zoom does not use audio, video, chat, screen-sharing, attachments or other communications-like Customer Content to train AI models."

Opt-out: Account admins can disable AI Companion features in Account Settings → AI Companion.

View live Zoom analysis →

4. GitHub Copilot

High Risk

What they do: GitHub does NOT train on private repos (Business/Enterprise plans). BUT: a 2025 issue surfaced in which repos that were public even temporarily were cached by Bing and remain accessible through Copilot after being made private again.

The risk: 20,000+ private repos from 16,000 organizations remain exposed through cached "zombie data" in Microsoft Copilot.

Mitigation: Use Business/Enterprise plans. Never make sensitive repos public, even temporarily.
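
To double-check your own exposure, start by auditing which repos in your org are public right now. Here's a minimal sketch using the GitHub REST API; the org name and the GITHUB_TOKEN environment variable are placeholders, and this only shows current visibility, not what crawlers may have cached while a repo was public.

```python
# Minimal sketch: list every public repo in a GitHub org so you can confirm
# nothing sensitive is exposed. Placeholder assumptions: "your-org" and a
# personal access token in the GITHUB_TOKEN environment variable.
import os

import requests

ORG = "your-org"  # replace with your organization slug
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

page = 1
while True:
    resp = requests.get(
        f"https://api.github.com/orgs/{ORG}/repos",
        params={"type": "public", "per_page": 100, "page": page},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    repos = resp.json()
    if not repos:
        break
    for repo in repos:
        # Anything listed here is crawlable -- and may already be cached.
        print(repo["full_name"])
    page += 1
```

Note this catches only current exposure; per the "zombie data" issue above, a repo that was public even briefly may live on in caches.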

View live GitHub analysis →

5. Figma

High Risk

What they do: A class action lawsuit (Nov 2025) accuses Figma of secretly harvesting user designs, layers, text, and images to train AI without consent.

The issue: Free and Pro users were opted IN to AI training by default; only Enterprise and Org users were opted OUT by default.

Opt-out: Admins can disable it in team settings, but many users didn't know they had been opted in.

View live Figma analysis →

6. Notion

Safe

What they do: Strong privacy policy. Notion explicitly does NOT train AI models on workspace content. Zero data retention for Enterprise plans.

Current policy: "Your use of Notion AI does not grant us any right to train ML models on your Customer Data."

Data retention: Zero retention (Enterprise), max 30 days (other plans); data sent to OpenAI for embeddings is not retained.

View live Notion analysis →

7. Dropbox

Medium Risk

What they do: Dropbox says it may use file metadata and interaction data for AI features, but not file content.

The clause: "We use metadata to improve AI-powered search and organization features."

Opt-out: Limited; contact support for enterprise options.

View live Dropbox analysis →

8. Google Workspace

Safe (Paid Plans)

What they do: Google explicitly does NOT use customer data (including prompts) to train Gemini models for paid Workspace customers.

Current policy: "Your data is not used to train Gemini models or for ads targeting. Content is not human reviewed or used for model training."

Exception: Free Gemini Enterprise Starter (after trial) DOES use data for training unless you opt out in settings.

View live Google Workspace analysis →

9. Microsoft 365

Safe

What they do: Microsoft explicitly does NOT use customer data from Microsoft 365 to train foundation LLMs. This applies to both commercial AND personal/family subscriptions.

Current policy (Nov 2024): "Prompts, responses, and Microsoft Graph data aren't used to train foundation LLMs, including those used by Microsoft 365 Copilot."

Note: Microsoft DOES use data from Bing/MSN for AI training, but keeps M365 data separate.

View live Microsoft 365 analysis →

10. Canva

Medium Risk

What they do: Canva's AI features may use your designs to improve templates and suggestions, though they claim private designs are protected.

The clause: "We may use anonymized design data to improve AI recommendations."

Opt-out: Review AI feature settings in your account.

View live Canva analysis →

What Should Agencies Do?

3-Step Action Plan:

  1. Audit your tech stack - List every SaaS tool your team uses to handle client data.
  2. Read the Terms of Service - Check for clauses about AI, ML, or "service improvement" (a DIY keyword scan is sketched after this list).
  3. Automate monitoring - Use ClausePatrol to get alerts when vendors change their terms, or start with the script below.
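
Steps 2 and 3 are easy to prototype yourself. Below is a minimal, illustrative sketch: it downloads each vendor's terms page, flags common AI-training phrases with a crude keyword pattern, and hashes the page so the next run can detect changes. The vendor URLs, state file name, and regex are placeholder assumptions, and hashing raw HTML is noisy in practice; a production monitor normalizes the text before diffing.

```python
# Minimal DIY sketch of steps 2 and 3: scan vendor terms pages for
# AI-training language and detect changes between runs. URLs, the state
# file, and the keyword pattern are illustrative assumptions.
import hashlib
import json
import re
from pathlib import Path

import requests

VENDOR_TOS = {
    "slack": "https://slack.com/terms-of-service",  # illustrative URL
    "figma": "https://www.figma.com/tos/",          # illustrative URL
}

# Crude keyword heuristic for clauses worth a manual read; real ToS
# language varies, so treat hits as prompts for review, not verdicts.
AI_PATTERN = re.compile(
    r"train(?:ing)?\b.{0,40}?\b(?:ai|machine[\s-]learning|models?)"
    r"|service improvement",
    re.IGNORECASE | re.DOTALL,
)

STATE_FILE = Path("tos_hashes.json")
state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}

for vendor, url in VENDOR_TOS.items():
    text = requests.get(url, timeout=30).text
    if AI_PATTERN.search(text):
        print(f"[{vendor}] contains AI/ML-training language; review manually")
    digest = hashlib.sha256(text.encode()).hexdigest()
    if state.get(vendor) not in (None, digest):
        print(f"[{vendor}] terms changed since last run")
    state[vendor] = digest

STATE_FILE.write_text(json.dumps(state, indent=2))
```

Run it on a schedule (cron, CI, or a serverless function) and you have a bare-bones version of step 3; the hard part, which a dedicated service handles for you, is keeping the URL list current and separating meaningful clause changes from cosmetic page edits.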

The Bottom Line

You can't protect your clients if you don't know which tools are exposing their data. The vendors on this list aren't necessarily "bad actors"—many are transparent about their AI training—but ignorance is not an excuse when it comes to client liability.

The best defense is continuous monitoring. Terms change frequently, and what's safe today might be risky tomorrow.


Written by the ClausePatrol Team

Our legal and compliance experts monitor 1000+ SaaS vendors daily to help companies stay compliant with CCPA, GDPR, and state privacy laws.


Track vendor policy changes automatically

ClausePatrol monitors 1000+ SaaS vendors and alerts you when they update their ToS, Privacy Policy, or DPA—especially when they add AI training clauses that could put you out of compliance.

Start monitoring for free →
No credit card required
Setup in 2 minutes