As the AI hype cycle peaks and CIOs rush to embed generative AI into their businesses, a big question looms: When will regulators crash the party?
Lawmakers worldwide are already drafting new rules for AI; some have even proposed creating an entire commission to oversee AI’s use in everyday life. The potential to get sued is dizzying, enough to make some companies consider pausing to see how Washington and the EU will treat AI.
But what if the most important rules for AI are already on the books, drafted decades ago to manage those other complex, biased, and unpredictable creatures who tend to run amok in business… humans?
At A.Team’s recent AI x Future of Work Summit, a panel of experts from government, business, and the nonprofit sector offered clear guidance on navigating this fast-changing landscape. The panel was led by Keith Sonderling, Commissioner of the Equal Employment Opportunity Commission, and moderated by Krystal Hu, venture capital and startups reporter at Reuters.
The first takeaway? Ignore the D.C. bluster. Most of Capitol Hill’s furor over proposed AI rules and commissions amounts to “a really big distraction,” assured Sonderling, whose agency enforces laws against workplace discrimination in the United States.
“We’re here on the one-year anniversary of ChatGPT being launched,” noted Sonderling, wryly adding, “It’s also the one-year anniversary of most people in Washington caring about AI – or understanding what AI is.”
Most of the current “chaos” of proposed AI regulations, Sonderling predicted, should be interpreted as reactionary bluster, not law in the near term. After all, the current Congress struggles with basic tasks like budget approval, to say nothing of sweeping and complex tech regulation.
But AI ain’t lawless. For American businesses deploying new AI tools, the rulebook remains age-old laws like the Civil Rights Act of 1964. Those enduring principles, which require employers to hire candidates based on their skills rather than personal characteristics like race, sexual orientation, or age, remain the lodestar. AI simply introduces new challenges for how businesses must adapt to uphold them.
When Buying New AI Tools: Kick the Tires
Just as IT departments check that new software complies with a company’s quality and data security standards, AI tools must be both technologically and ethically sound. “Be sure to kick the tires,” said Stephen Malone, VP of Legal, Employment & Corporate Affairs at Fox Corp.
That careful vetting during procurement is crucial for aligning AI use with ethical and legal norms, especially in sensitive areas like recruitment, retention, and fostering diversity in the workplace.
“It’s going to be a back-and-forth process with the vendors,” said Sonderling. “These are not off-the-shelf products.” For example, ask vendors how they’ll help adapt the software as your workforce’s demographics shift or as the skills required for various jobs evolve. Make sure a hiring tool is designed to look for the specific skills and characteristics of the individual jobs in your company, not a cookie-cutter model that may simply “amplify the status quo.”
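One concrete way to kick those tires is to screen a tool’s selection outcomes for disparate impact before relying on it. The sketch below applies the EEOC’s four-fifths rule of thumb, flagging any group whose selection rate falls below 80% of the highest group’s rate. It is a minimal illustration, not a legal test: the group labels, counts, and function names are hypothetical, and a real audit belongs with counsel and qualified statisticians.

```python
# Minimal sketch of a four-fifths (80%) rule check on a hiring tool's outcomes.
# All data below is hypothetical; this is a first-pass screen, not a legal audit.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to its selection rate: selected / applied."""
    return {group: selected / applied
            for group, (selected, applied) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]],
                      threshold: float = 0.8) -> dict[str, dict]:
    """Compare each group's rate to the highest rate; flag ratios below threshold."""
    rates = selection_rates(outcomes)
    benchmark = max(rates.values())  # highest-selecting group sets the benchmark
    return {
        group: {
            "rate": round(rate, 3),
            "impact_ratio": round(rate / benchmark, 3),
            "flag": rate / benchmark < threshold,
        }
        for group, rate in rates.items()
    }

if __name__ == "__main__":
    # Hypothetical (selected, applied) counts per group from one screening round.
    sample = {"group_a": (48, 100), "group_b": (30, 100), "group_c": (45, 90)}
    for group, result in four_fifths_check(sample).items():
        print(group, result)
```

Run against the sample counts, group_b’s impact ratio comes out at 0.6 and gets flagged, which is exactly the kind of finding to take back to the vendor before a tool goes live.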
Your AI Needs DEI (Just Like Your People)
The "black box" nature of AI algorithms means their inner workings aren’t fully transparent to those who employ them. But, of course, neither are the brains of human employees, which can also be prejudiced.
“In the last few years, we've collected over a billion dollars from employers for violating our laws. So, humans haven't been doing it that great, either. That's the reason my agency continues to exist,” Sonderling said.
Just as organizations establish guidelines and practices to guide human behavior and decision-making, AI needs rules, too. Work with vendors to ensure the tools are configured to align with organizational values and legal standards, just as your HR department trains human employees to address potential biases and uphold ethical practices.
Deploying AI tools across an organization requires careful consideration of how they might affect protected groups, especially older workers and those with disabilities, who may need more support adapting to the technology or acquiring the necessary skills. The significant changes AI brings can also trigger anxiety and mental health concerns. It’s crucial that AI implementation, while exciting, is also equitable and doesn’t worsen existing labor force disparities.
“There are going to be classes of workers who need more time,” said Sonderling.
Start Working on These Problems Yesterday
“We’re already in a period of consequences,” said Var Shankar, Executive Director at Responsible AI. “Better to put guardrails in now.”
If legal challenges or regulatory scrutiny come later, evidence that your company made a proactive effort to comply, through documented vetting and guardrails, can be crucial to showing you did your best to abide by the law.
AI tools work best as copilots for smart humans, not as replacements for them, added George Mathew, Managing Director at Insight Partners. Checking that AI tools are upholding the spirit and principles of the laws, whether decades old or new, will be a continuous process. “Remove humans from the loop, and the model collapses. You need machines and humans to work very closely together for the foreseeable future.”