
Artificial Intelligence, Real Consequences & Liabilities for Employers

Published by: Westfair Business Journal

By Robert G. Brody and Catherine Y. Bailey

In August 2025, Otter.ai—a widely used provider of artificial intelligence (“AI”) notetaking services—was hit with a class-action lawsuit that could have significant implications for employers nationwide. The case alleges Otter’s tools, “Otter Notetaker” and “OtterPilot,” record virtual meetings without the consent of all participants, including non-users.

While the suit is still in its early stages, it has already set off alarms for organizations that use artificial intelligence tools in daily operations. From meeting transcription software to AI-based hiring systems, employers are increasingly relying on third-party vendors to collect, analyze, and act on sensitive information. However, as recent litigation shows, these tools carry legal risks, especially if used without sufficient oversight or transparency.

This article explores the emerging legal challenges involving AI tools in the workplace and outlines steps employers should take to mitigate exposure.

Lawsuit Against Otter.ai Raises Consent and Privacy Concerns

The Otter.ai case alleges violations of several federal and state laws, including the Electronic Communications Privacy Act (“ECPA”) and the Computer Fraud and Abuse Act (“CFAA”). These laws are designed to protect individuals from unauthorized access to their electronic communications and from misuse of their data. The ECPA prohibits the intentional interception of electronic communications without consent, while the CFAA focuses on unlawful access to protected computers and networks. Violations of these statutes can result in significant civil and criminal penalties.

Otter.ai isn’t the only tech provider facing scrutiny. In another high-profile case, fitness company Peloton was sued in California for allegedly violating the California Invasion of Privacy Act (“CIPA”) by using an AI-powered chat tool developed by a third party (Drift). The complaint alleges the AI tool intercepted and recorded conversations between users and the company without appropriate notice or consent.

California’s privacy laws, including the CIPA, are among the strictest in the country. They have been increasingly cited in lawsuits targeting companies that deploy advanced technologies, like AI-driven chat systems, without appropriate disclosures. As these cases proceed, they will continue to provide valuable insight into the courts’ reactions to emerging technology.

AI in Hiring: Allegations of Bias and Discrimination

Legal risks extend beyond privacy violations. Employers are also facing lawsuits alleging discriminatory outcomes when using AI for employment decisions, particularly in hiring and recruitment.

One such case is Harper v. Sirius XM, filed in August 2025. Plaintiffs allege Sirius XM’s use of an AI-powered applicant tracking system led to unlawful discrimination against job seekers. The suit claims the system disproportionately downgraded qualified applicants based on proxies for race, such as zip code and educational background, reducing the chances of marginalized candidates being hired.

Such practices violate Title VII of the Civil Rights Act, which prohibits employment discrimination based on race, color, religion, sex, or national origin. The case highlights concerns about how AI systems are “trained” and whether their underlying algorithms reinforce historical biases, exacerbating inequality in hiring practices.

Key Risks Employers Must Consider

  1. Proving Consent is Often the Employer’s Responsibility.

Employers may assume AI vendors handle consent requirements, but most vendor agreements place this burden on the employer. In states like California, consent must be obtained from all parties in a conversation. If AI notetaking software records meetings with clients, job applicants, or third parties, the employer must secure individual consent from each participant. Ignorance of the recording is not a valid defense.

  2. Confidentiality and Privacy Could Be Undermined.

Using third-party tools can jeopardize attorney-client privilege and expose confidential business information. Courts often require data owners to take steps to protect confidentiality, and AI tools that store or process sensitive data without clear restrictions may undermine these protections.

  3. AI Bias is a Legal Liability.

Employers may be held accountable for discriminatory outcomes produced by AI hiring tools, even if the bias is unintentional. Claims based on disparate impact, as seen in the Sirius XM case, highlight the need for employers to assess how AI systems make decisions and ensure they do not reinforce bias.

Regulatory Landscape: The Law is Catching Up to AI

New laws are emerging rapidly in response to these concerns, particularly in areas like employment, privacy, and consumer protection. Employers must closely monitor regulatory developments to avoid legal exposure, penalties, and reputational damage.

Illinois has already passed one of the first U.S. laws specifically targeting AI in employment. The Illinois Artificial Intelligence Video Interview Act (“IAIVIA”), effective January 1, 2020, governs the use of AI by Illinois employers to evaluate video interviews of job candidates. The IAIVIA requires employers to inform applicants prior to using AI, provide an explanation of how AI would be applied in the hiring process, and secure the applicant’s written consent. Employers must also restrict video sharing and delete recordings upon request.

Similar legislation is being considered in several other states, and federal action is expected in the coming years. Most significantly, the Colorado Artificial Intelligence Act (“CAIA”) will take effect on June 30, 2026. This Act requires organizations deploying “high-risk” AI systems, such as those used in hiring and employee evaluations, to meet new standards around transparency and accountability. Organizations must disclose AI use, assess potential discriminatory outcomes, and take steps to mitigate these risks. Regular audits and documentation of fairness efforts may also be required.

What Employers Should Do Now

To reduce risk and maintain compliance, employers should consider implementing the following.

  1. Identify and Review all AI Tools in Use.

Conduct a comprehensive inventory of AI tools used across the organization, focusing on those involved in communication, data analysis, and hiring. Evaluate their purpose, functionality, and data handling practices, especially for tools processing sensitive information.

  2. Audit Vendor Contracts.

Review contracts with AI vendors to identify clauses related to data protection, consent, and compliance. Ensure the contract specifies who is responsible for obtaining consent and verify vendors have robust data protection measures in place. Negotiate indemnification clauses to protect your organization in case of vendor non-compliance.

  3. Ensure Transparency.

Clearly communicate the use of AI in processes such as hiring, performance evaluations, or customer service. Provide accessible explanations of how AI works and what data it uses. Offer individuals the opportunity to ask questions or opt out of AI-driven processes where feasible.

  4. Establish Internal Protocols to Monitor AI-Driven Decisions.

Develop a framework for regularly auditing AI-driven decisions to identify and address potential biases. Involve diverse stakeholders in the design and review of AI systems, train employees on ethical AI use, and implement a process for reporting concerns about AI-driven decisions.

  5. Stay Informed About New Regulations.

Assign a team to monitor legislative developments related to AI at the state and federal levels. Subscribe to legal and industry updates, consult with legal counsel, and update internal policies to ensure compliance with new laws.

  6. Monitor the Political Winds.

Politics will heavily influence this issue. Pay attention to the stance of those in power, as their views on AI could lead to increased enforcement or reduced regulation.

Conclusion

The lawsuits against Otter.ai, Peloton, and Sirius XM highlight a growing legal trend: employers are ultimately responsible for how AI is used within their organizations. Even if the technology is provided by a third-party vendor, the risks fall squarely on the employer. As regulators catch up and public scrutiny increases, the use of AI in the workplace must be accompanied by transparency, accountability, and proactive compliance. Employers that take these steps now will be better positioned to harness AI’s benefits, all while avoiding its growing legal pitfalls.

Brody and Associates regularly advises management on compliance with the latest local, state and federal employment laws. If we can be of assistance in this area, please contact us at info@brodyandassociates.com or 203.454.0560.
