Indirect Liability for Companies Using AI Vendors

Now to the billion-dollar question:

“I’m a company with customers and active contracts. I’m sending customer data to an AI vendor, and that third party has decided to use it for training. Who should be found responsible?”

This is a complex situation with legal and ethical ramifications. There isn’t a simple “guilty” or “not guilty” answer; liability depends on several factors, primarily the contracts, privacy policies, and applicable laws involved. Three parties are in play: your company as the first party, your customers as the second party, and the AI vendor as the third party:

Customers

Customers are not guilty of anything. They are the injured parties whose data has been misused, and they have the right to seek legal remedies against both your company and the AI third party.

AI Third Party

  1. Breach of Contract: If the contract between your company and the AI third party explicitly prohibits the use of customer data for training purposes, the AI company is in clear breach of contract.
  2. Violation of Privacy Laws/Regulations: Most jurisdictions have strict data privacy laws (e.g., the GDPR in Europe, the CCPA/CPRA in California, and similar laws around the world). Using personal data to train AI models, especially without consent, can easily violate these laws if:
    1. Customers weren’t informed of this potential use.
    2. Customers didn’t give explicit consent for their data to be used for this purpose.
    3. The data used is considered sensitive personal information (e.g., health data, financial data, demographic data revealing race, religion, etc.).
    4. The AI company didn’t implement adequate data anonymization or pseudonymization techniques before using the data for training. Even then, if re-identification is possible, they remain potentially liable (see the pseudonymization sketch after this list).
  3. Unfair or Deceptive Practices: Using customer data for a purpose customers didn’t anticipate or agree to could be considered an unfair or deceptive trade practice, exposing the AI company to regulatory action and lawsuits.
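To make sub-item 4 concrete, here is a minimal sketch of pseudonymization before data ever leaves your systems. This is not any particular law’s or vendor’s prescribed method: the field names and the salt handling are illustrative assumptions, and a real deployment would pull the secret from a secrets manager and document a re-identification risk review.

```python
import hashlib
import hmac

# Hypothetical salt; in practice this comes from a secrets manager,
# is never shared with the AI vendor, and is rotated per policy.
PSEUDONYM_SALT = b"replace-with-managed-secret"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed, irreversible token.

    HMAC-SHA256 keeps tokens stable (the same customer always maps to
    the same token) without exposing the raw identifier to the vendor.
    """
    return hmac.new(PSEUDONYM_SALT, value.encode("utf-8"), hashlib.sha256).hexdigest()

def prepare_record_for_vendor(record: dict) -> dict:
    """Pseudonymize direct identifiers before the record leaves our systems."""
    safe = dict(record)
    for field in ("customer_id", "email", "full_name"):  # assumed field names
        if field in safe:
            safe[field] = pseudonymize(safe[field])
    return safe

# Example: the vendor sees stable tokens, not identities.
print(prepare_record_for_vendor(
    {"customer_id": "C-1042", "email": "ana@example.com", "order_total": 99.50}
))
```

Keep in mind that under the GDPR, pseudonymized data that can still be linked back to a person remains personal data, which is exactly why sub-item 4’s re-identification caveat matters.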

Your Company

  1. Negligence in Data Governance: Your company has a responsibility to protect its customers’ data, and a court would consider what a reasonable company in your position would have done to protect it. You could be found liable for contributing to the breach if you were negligent in:
    1. Choosing a reputable AI partner with clear data privacy policies.
    2. Performing due diligence to ensure the AI partner’s data handling practices complied with applicable laws.
    3. Clearly communicating to customers how the AI partner would use their data.
    4. Putting sufficient contractual safeguards in place to prevent unauthorized data use.
    5. Taking basic care to keep customer data from ever reaching the AI vendor when it isn’t needed (see the data-minimization sketch after this list).
  2. Breach of Privacy Policy: If your own privacy policy states that customer data will not be used for AI model training (or any similar purpose) and you then allow it to happen, you are in breach of your own policy, which can lead to lawsuits and regulatory penalties. Vague privacy policies are a common problem; the more specific, the better.
  3. Indirect (Vicarious) Liability: In some cases, you could be held indirectly liable for the AI third party’s actions, especially if the AI company is considered your agent or if you directly benefited from its unauthorized data use.
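As referenced in sub-item 5 above, the cheapest safeguard is often keeping data out of the vendor’s hands entirely. Below is a minimal sketch of an allow-list (deny-by-default) filter applied before any vendor call; the field names and the commented-out vendor client are hypothetical.

```python
# Minimal data-minimization gate: only explicitly allow-listed fields may
# leave for the vendor. Field names here are illustrative assumptions.
ALLOWED_FIELDS = {"ticket_text", "product_sku", "locale"}

def minimize(record: dict) -> dict:
    """Drop every field not on the allow-list (deny by default)."""
    dropped = set(record) - ALLOWED_FIELDS
    if dropped:
        # Audit trail: due diligence is easier to prove if you can show
        # what you withheld, not just what you sent.
        print(f"withheld fields: {sorted(dropped)}")
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def send_to_vendor(record: dict) -> None:
    payload = minimize(record)
    # vendor_client.analyze(payload)  # hypothetical vendor SDK call
    print("sending:", payload)

send_to_vendor({
    "ticket_text": "My invoice is wrong",
    "email": "ana@example.com",   # withheld: not needed for the task
    "product_sku": "SKU-7741",
})
```

Deny-by-default is the safer design choice: a new field added upstream stays out of vendor payloads until someone deliberately allow-lists it.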

Key Factors That Determine Liability:

  1. Contractual Language: What does your contract with the AI company specifically say about data usage, privacy, and security? Is there a data processing agreement (DPA) in place?
  2. Privacy Policies: What does your company’s privacy policy state? How clear and unambiguous is it about data sharing with third parties and the purpose of that sharing?
  3. Consent: Did you obtain informed consent from your customers for their data to be used for AI training purposes? Was that consent explicit and specific?
  4. Anonymization/Pseudonymization: Did the AI company adequately anonymize or pseudonymize the data before using it for training? Even with anonymization, re-identification risks must be considered (see the re-identification check after this list).
  5. Data Minimization: Did the AI company use only the minimum amount of customer data necessary for the training process?
  6. Jurisdiction: Which laws and regulations apply (e.g., GDPR, CCPA, state privacy laws)? This will determine the specific legal standards and penalties.
  7. Due Diligence: What steps did your company take to vet the AI third-party’s data practices?
  8. Communication: How transparent were you with your customers about the data you were sharing with the AI provider and the purpose of that sharing?
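Factor 4’s re-identification caveat can be checked mechanically. The sketch below applies a simple k-anonymity test: it flags combinations of quasi-identifiers (assumed here to be zip code and birth year, with an illustrative threshold) that appear fewer than k times, since those rows are the easiest to re-identify even after names are stripped.

```python
from collections import Counter

# Quasi-identifiers: fields that are not names but can single someone out
# in combination. The field choice and k=5 are illustrative assumptions.
QUASI_IDENTIFIERS = ("zip_code", "birth_year")
K = 5

def risky_groups(records: list[dict], k: int = K) -> list[tuple]:
    """Return quasi-identifier combinations shared by fewer than k records.

    Any combination below k is a re-identification risk: an attacker who
    knows those attributes can narrow an "anonymous" row to a few people.
    """
    counts = Counter(
        tuple(r.get(f) for f in QUASI_IDENTIFIERS) for r in records
    )
    return [combo for combo, n in counts.items() if n < k]

records = [
    {"zip_code": "94110", "birth_year": 1990},
    {"zip_code": "94110", "birth_year": 1990},
    {"zip_code": "02139", "birth_year": 1955},  # unique -> flagged
]
print(risky_groups(records, k=2))  # [('02139', 1955)]
```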

In Summary:

While the AI third party bears primary responsibility for violating contractual terms and privacy laws by training its models on your customer data without consent, your company could also be held liable for negligence, breach of its own privacy policy, or failure to perform adequate due diligence. The exact apportionment of responsibility depends on the specific facts and circumstances of the situation.