Bringing AI to work? Better have policies in place
Although fewer than half the respondents are using AI now or plan to within a year, more than half of responding companies (56%) do not yet have policies in place.
As artificial intelligence aids like ChatGPT invade the workplace, many employers are asking: Do I need this? Do I even want this? And if I do get it or already have it, should I have a policy in place to guide our usage of it?
The want-it and need-it questions are up to your leadership team. But the answer to the last one is a hard "yes."
To find out whether employers are adopting AI and, if they do, whether they have policies in place, an AI vendor, Conversica, decided to ask around. It surveyed 500 business leaders and owners and learned (by AI?) that 4 in 10 already deploy AI in some form. However, only 6% of the 500 "have established clear guidelines for the use of AI." Conversica found this troubling.
In its white paper, “AI Ethics and Corporate Responsibility,” Conversica basically says the uptake of AI in the workplace is running ahead of guidelines around how to govern its use.
Although fewer than half the respondents are using AI now or plan to within a year, more than half of responding companies (56%) do not yet have policies in place, and another 20% said they are considering adopting one. So it's not that companies with AI, or those about to plunge into those waters, don't realize they need "guardrails" (Conversica's term) to control this new tool. For various reasons, they just haven't gotten around to it yet.
“A majority of respondents recognize the paramount importance of well-defined guidelines for the responsible use of AI within companies, especially those that have already embraced the technology,” Conversica said, noting that “86% of those already adopting AI affirmed that such guidelines are indispensable for both businesses and their leaders.”
Among the entire survey group, however, only 73% agreed that guidelines are indispensable. It was that gap that surprised the surveyors.
“One explanation for the gap in prioritization is that those already employing AI have seen firsthand the challenges arising from implementation, increasing their recognition of the urgency of policy creation,” the white paper stated. “However, this alignment with the principle does not necessarily equate to implementation, as many companies are yet to formalize their policies.”
Questions about AI utilization will inevitably surface, Conversica said, including concerns about accuracy, false data, out-of-date information, legal issues, ethical issues, and more. While this is new terrain for many firms, the best practice would be to research and implement guidelines for use along with the introduction of AI into the workplace.
“Reluctance to proactively adopt best practices for AI adoption could lead to a number of issues, ultimately creating more liability than benefit,” the white paper warned. Among them:
- Security risks
- Regulatory violations
- Poor user experience
- Brand damage
- Outdated technology
“Therefore,” Conversica advised, “robust AI governance is not merely an option but an absolute necessity for any organization seeking to harness the full power of AI responsibly and effectively.”