Banning AI in the office may do more harm than good
"It would be like banning email," says one expert. "These are tools that are going to become part of the way people work.”
Worries about generative artificial intelligence tools and chatbots show no sign of abating, particularly over the potential for inadvertent leaks of confidential information.
A string of companies, including Samsung, Verizon and JPMorgan Chase, has restricted or barred the use of AI chatbots such as ChatGPT and Google’s Bard on work devices, fearing employees might inadvertently share source code or business secrets with these platforms.
“I think it’s shortsighted,” said Nithya Das, former chief operating officer and chief legal officer at Olo, a software company that sells online ordering and delivery programs to restaurants. “You can ban it on company equipment, but you can’t ban it on other devices that employees might be using. It would be like banning email, or Google search, or a reticence to use the Google enterprise suite of products. These are tools that are going to become part of the way people work.”
Das said she uses ChatGPT for routine queries and contract language, and the results have been a huge time-saver. “I just used it to give me a first draft of some terms of service for a generative AI tool. It wasn’t perfect, but it was a good starting point. It saved me a few hours of work and from having to start from scratch,” she said. “A lot of colleagues use it for things like that—terms of service, nondisclosure agreements, privacy policies, and using it to give summaries of routine concepts they want to explain to others.”
‘Wooden ships always have leaks’
Banning new technology is at best a temporary solution to a decades-long problem around data leaks and intellectual property protection, experts said, especially when employees are already either experimenting with generative AI applications on their own time or using them for substantive work.
“In a sense, the horse is already somewhat out of the barn. The odds are that employees already are making use of these latest AI tools and have failed to consider the enormous risks involved,” said global AI expert and Forbes columnist Lance Eliot.
“One thing that is unlikely to work and that can entirely backfire consists of outright banning the use of generative AI. Firms that do this are forsaking the benefits of generative AI. They are also hamstringing the employees who want to save the company time and money by leveraging generative AI. A ban simply says that the top leaders, including the general counsel, are out of touch.”
Tim Pham, general counsel of Patented.ai, said legal chiefs have long grappled with how to safeguard trade secrets and company knowledge.
“I think a lot of the tech companies, in my view, are like the Black Pearl in ‘Pirates of the Caribbean,’” Pham said. “Those wooden ships always have leaks. In terms of IP, there’s a front door through which you should normally send IP outside, but there are always back doors and side doors: employees leaving, or sending out emails by mistake.”
AI tools, he said, “have opened up a new category of leaks. And I think the problem is huge.”
Pham, formerly deputy general counsel and head of IP at Twitter and head of patent prosecution at Google, said that even with training and guidance from security teams, employees can still struggle to identify and distinguish confidential company information.
“At least with one of the companies I worked at, there was a lot of training and we had a pretty robust InfoSec team. We said what you can and can’t do with confidential information and we even labeled certain things as confidential,” he said. “But at a tech company you’re generating so many documents. And especially if the culture of the company is to share information as much as possible, it is extremely difficult for the average employee to know what is confidential information.”
Enlist AI as a security tool
Security software designed to prevent users from entering sensitive information and proprietary source code into chatbots offers one solution. Patented.ai created a plug-in called LLM Shield, named after the large language models that serve as the foundational technology behind AI chatbots, to recognize keywords and warn users if they’re about to inadvertently enter something confidential.
“If you are a developer or sales or marketing person and you don’t know what’s confidential, this provides that co-pilot,” Pham said. “It trains itself on all the information that you tell it is confidential. It sees that and can flag it for you.”
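To make the concept concrete, here is a minimal sketch of a keyword-based pre-send filter of the kind Pham describes. The term list, function name and matching logic are illustrative assumptions, not LLM Shield’s actual implementation:

```python
import re

# Hypothetical terms an organization has labeled confidential. A tool like
# LLM Shield learns these from company data; this hard-coded list is purely
# illustrative.
CONFIDENTIAL_TERMS = ["Project Nightingale", "internal-api-key", "acquisition target"]

def flag_confidential(prompt: str) -> list[str]:
    """Return any labeled confidential terms found in text about to be sent to a chatbot."""
    return [term for term in CONFIDENTIAL_TERMS
            if re.search(re.escape(term), prompt, re.IGNORECASE)]

hits = flag_confidential("Summarize the Project Nightingale launch plan.")
if hits:
    print(f"Warning: prompt appears to contain confidential material: {hits}")
else:
    print("No flagged terms found; OK to send.")
```

A production tool would go well beyond literal string matching, but the warn-before-send pattern is the same.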
Pham said that for companies such software tools “may help move the risk needle to something that is more acceptable.”
He added, “I look at the risk needle in terms of IP leaks and sensitive information. The introduction of LLMs has just changed this. The entropy provided by that is just crazy, just moved the needle way up, and I think there are certain things we can do to move it back.”
OpenAI, which released its public version of ChatGPT in November 2022, is also trying to move the needle back, rolling out new privacy features that allow users to delete their chat history and opt out of sharing their data for training purposes.
Companies that use OpenAI’s application programming interface (API) can opt in to sharing their data to train the tool’s algorithm. Deputy General Counsel Che Chang said the plan is to bring that feature to an enterprise-worthy version of ChatGPT, called ChatGPT Business.
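For readers unfamiliar with the API, a call looks roughly like this with the openai Python library of that era. The key and prompt are placeholders, and the training opt-in Chang describes is an account-level setting, not a parameter on the request:

```python
import openai

openai.api_key = "sk-..."  # placeholder; load from an environment variable in practice

# Per OpenAI's stated policy, data sent through the API is not used to train
# its models unless the customer explicitly opts in.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Draft a short summary of our standard NDA terms."}],
)
print(response["choices"][0]["message"]["content"])
```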
“I think the number-one request is how GCs can get comfortable with their employees putting information into ChatGPT,” Chang said.
“They want to allow the use of our technology in a productive way, but they want to understand how to develop a safe policy that protects their company,” he said. “Recently, we announced ChatGPT Business, which we think will be a good middle ground and can address these concerns. We plan to make ChatGPT Business available in the coming months.”
Das said companies should make sure whatever technology vendor they use has the right security guardrails in place.
“Part of this is making sure that as a company you’re using either the right paid model, like ChatGPT or Google’s Bard product, and educating employees about the risks of using a free tool or a plug-in that may not have the same security provisions,” she said.
Educating employees on how to use these tools safely can be a challenge, Das said, “but it’s always one that can be overcome. If we rewind in time just 10 years ago, employees weren’t as familiar with the concept of phishing or not clicking on links. Education increases as the tech becomes more prevalent.”
She said tech law firms, along with networking groups such as Tech GC and the Women’s General Counsel Network, are starting to dig into how legal chiefs can advise their companies on adopting generative AI tools while mitigating risk.
While a ban can be an easy legal fix, it’s unsustainable as a business strategy, she said, because if a company doesn’t embrace this technology, its competitors surely will.
“As an executive, I would wonder what the impact on innovation, engagement and talent is if I’m at a company that takes such a firm position instead of treating employees like adults,” Das said. “The really strong GCs will figure out how to use this technology in a safe and responsible manner to increase growth.”