Using AI at Work?

The employment landscape is ever changing. Nowhere is that more true than in computing and artificial intelligence, and their continuing impact on employers and employees alike. The proposed “No Robo Bosses” Act (Senate Bill 7) seeks to regulate the use of AI systems in hiring, promoting, disciplining, and terminating workers. In addition, a recent 41-page AI policy report commissioned by California Governor Gavin Newsom could soon lead to regulation affecting AI-based decision-making, hiring processes, and workplace surveillance. The report’s recommendations have already been incorporated into a proposed bill, SB 53, which could introduce new AI-related compliance and disclosure obligations. Here are the portions of the report anticipated to be most impactful.

Protections for AI Whistleblowers

The report suggests implementing enhanced legal protections for employees who expose AI-related risks, meaning businesses could incur new liabilities if they retaliate against workers who report AI-related issues. AI whistleblowers could be safeguarded under expanded labor laws, similar to the protections provided for reporting workplace safety violations. Employers would be required to investigate AI-related complaints or face penalties.

Possible Preventive Actions: Review existing whistleblower policies and revise them to include AI-related concerns. Train relevant staff to handle whistleblower complaints, including any related to AI, appropriately and without retaliation.

Stricter Reporting and Disclosure Requirements

The policy report advocates for mandatory reporting systems obligating companies to disclose any AI-related failures, discrimination, or harm, such as biased hiring decisions or data breaches. Stricter documentation and reporting requirements are expected.

Possible Preventive Actions: Keep detailed documentation of AI-related decisions. Designate a team member to oversee AI compliance.

Transparency Regarding AI Functions

California employers may be required to disclose how the AI models they use operate, what data those models rely on, and the processes involved. These disclosures may also include explaining how AI-driven workplace and hiring decisions were made.

Possible Preventive Actions:

Businesses should create or update policies and procedures governing AI-driven decisions in the workplace, including how affected employees are notified and the grounds for such decisions, and should document each decision carefully. Businesses should also verify that AI vendors maintain documentation regarding their AI sources and models.

Third-Party Audits for AI

Lastly, a key recommendation of the report is the implementation of independent, third-party safety assessments of AI use at each business to prevent potential harm. Businesses utilizing AI for hiring or other workplace decisions may be required to conduct their own formal risk assessments, including retaining a third-party auditor to evaluate the risks or biases inherent in any AI tools.

Possible Preventive Actions:

Businesses that use AI in workplace decisions, including hiring, promotions, performance assessment, or termination, should conduct their own audits to assess compliance with existing laws and regulations. Businesses obtaining AI from outside vendors should confirm that those vendors meet their transparency and risk management standards.

Competent legal representation can greatly expedite and support compliance with each of these requirements and others as they continue to emerge. Any business requiring further guidance should contact MNK Law, APC, at 562.362.6437 or info@mnklawyers.com.
