ChatGPT really is a marvelous technology – an artificial intelligence designed to answer just about any question a person might ask it – and yet, somehow, it leaves CISOs and compliance officers with even more questions.
For example, how should companies govern the use of ChatGPT (or any of the other next-generation AI applications rushing onto the market these days) within their own organizations? How are you supposed to guard against new risks posed by others using “weaponized AI” against you? How do you monitor the risks of vendors in your supply chain using AI? Exactly what are those risks, anyway?
Right now, nobody quite knows.
Clearly AI will change the business world; a technology so powerful and easy to use can't help but reshape corporate operations, risks and governance in profound ways. It's also clear that CISOs (and other risk assurance professionals) will play a crucial role in guiding their organizations through those challenges.
Beyond that, however, the answers to the questions mentioned above (and many, many more) are still anyone’s guess – and in most cases, the “correct” answer will vary from one company to the next. At this juncture, CISOs simply need to be prepared to find those answers as we move forward into this brave new world.
How so? By asking yourself and your company several more questions.
Do we have the right oversight structures in place?
The fundamental challenge with AI is governance. From the highest levels, your company needs to devise a system that manages how AI is studied, developed and used within the enterprise.
For example, does the board want to embrace AI swiftly and fully, to explore new products and markets? If so, the board should designate a risk or technology committee of some kind to receive regular reports about how the company is using AI.
On the other hand, if the board wants to be cautious with AI and its potential to upend your business objectives, then perhaps it could make do with reports about AI only as necessary, while an in-house risk committee explores AI's risks and opportunities.
Whatever path you choose, senior management and the board must establish some sort of governance over AI's use and development. Otherwise, employees will proceed on their own – and the risks will only proliferate from there.
Do we have the right policies in place?
Once governance principles for AI are in place, the next, more granular step is to translate them into precise policies and procedures that employees and third parties can follow.
For example, if senior management has decided it has big ambitions for using generative AI (say, to automate interactions with customers), you might then follow up with policies that spell out how specific business units can try integrating AI into their operations. If you hail from financial services or some other highly regulated industry, you might want policies that place tight limits on rolling out AI until dedicated teams test those AI systems for security and compliance risks. (Numerous Wall Street banks have already done precisely that.)
This is also where you can start thinking about vendor-related issues more substantively. Do you want vendors to disclose whether they use AI when processing data or transactions on your behalf? Do you want to require a security assessment before purchasing AI systems from a vendor? Those issues will require policies. You'll need to work closely with the procurement team (or whoever is authorized to buy IT services for your enterprise) to be sure those policies are understood and integrated into their operations.
Can we manage AI-enabled work on a routine basis?
This is the people part of the puzzle: have you defined the necessary roles and responsibilities to put these lofty ideas into practice?
For example, if you want to assess the security risks of an AI solution, someone will have to do that. Do you have the right IT audit expertise in-house, or will you need to rely on outsourced help? If you want to use generative AI to develop software code for new products (yes, ChatGPT can do that), someone will need to test that code once it's written. Do you have the right talent for that work? (Especially if you've laid off half your coders because ChatGPT is now writing the code.)
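To make that last point concrete, here is a purely illustrative sketch in Python – the mask_account_number helper is an invented example, not from any real product – of the kind of unit test a human reviewer might write before accepting AI-generated code into a codebase:

```python
# Hypothetical sketch: a reviewer's unit tests for an AI-generated helper.
# "mask_account_number" is an invented example function, not a real library call.

def mask_account_number(account: str) -> str:
    """AI-generated helper (illustrative): keep the last 4 digits, mask the rest."""
    digits = "".join(ch for ch in account if ch.isdigit())
    return "*" * max(len(digits) - 4, 0) + digits[-4:]

def test_masks_all_but_last_four():
    assert mask_account_number("1234-5678-9012") == "********9012"

def test_short_input_is_not_over_masked():
    # An edge case a generated snippet can easily get wrong.
    assert mask_account_number("42") == "42"

if __name__ == "__main__":
    test_masks_all_but_last_four()
    test_short_input_is_not_over_masked()
    print("All checks passed.")
```

The specific test matters less than the staffing question it raises: someone on the team still has to know the expected behavior well enough to write and interpret checks like these.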
This part of the AI puzzle could prove especially challenging because you'll be designing new workflows in potentially far-reaching ways. CISOs will need to consult closely with the internal audit teams performing risk assessments and with the operational teams that can tell you what is or isn't possible.
‘ChatGPT, do we need to panic?’
No, not at all. Fundamentally, artificial intelligence is just another new technology – akin to the rise of cloud-based services in the 2010s, mobile devices in the 2000s, or the internet back in the 1990s. It raises a host of security, operational, and compliance issues we haven’t considered yet, but CISOs do have the tools to work through those issues and find answers that fit your company.
You'll need to rely on risk management frameworks (NIST and other groups have already started developing them for AI), and strengthen capabilities such as policy management, risk assessment, monitoring, and training. You'll also need support from the board, senior management, and colleagues across the enterprise as you all try to stay focused on the right priorities and work toward a common vision.
Then again, hasn’t that always been necessary for corporate success? Maybe the issues ChatGPT brings to the fore aren’t so new after all.
Final words
ChatGPT will unquestionably change the compliance landscape. Staying ahead of the changes and maintaining an agile program requires a comprehensive software solution. To learn more about how NAVEX can help: