Gen AI Security Check: 15 Key Questions for Vendors

Author:
Pete Humes, Head of Content

If you are adding generative AI solutions to your contact center tech stack, there is no such thing as “an overabundance of caution.”


When evaluating new software solutions with Gen AI, think of yourself as a bouncer.


You’ve seen those big, burly guys outside the hottest night spots, right? They stand blank-faced, arms crossed, and accept zero nonsense.


You want to be like them.


But instead of keeping riffraff out of the VIP areas, you’re defending your company and customers from mishandled information, data breaches and AI hallucinations.


A recent study by IBM’s Institute for Business Value showed that 94% of C-suite leaders consider the security of AI solutions a top priority. But it also reported that 69% put innovation ahead of that same security.


And while 47% saw Gen AI as a tool to “help improve the time to detect and respond to cyber threats,” the exact same percentage were “concerned that adopting Gen AI in operations will lead to new kinds of attacks.”


We're in uncharted territory when it comes to AI.


That’s why a bit of extra caution goes a long way.


As a “bouncer,” you can’t wave solutions through the front door just because Gen AI is famous and everybody’s heard of it. You have a responsibility to serve and protect… and that starts with asking lots of questions.

15 Key Questions CCaaS Leaders Need to Ask Gen AI Vendors

Evaluating software solutions has always required a fair share of due diligence. But generative AI adds fresh anxiety to the already complex CCaaS tech landscape.


It is easy to get seduced by the bold promises of AI. Contact center leaders are under constant pressure to improve metrics and deliver results.


Few can resist the idea of virtual assistants that work all day and night at lightning speed. But it’s critical to understand how they can put customer information and company data at risk.


First, start with these three internal questions...


What specific tasks or issues will Gen AI address?


Set objectives that align with business goals. Will you use AI for customer service automation, content creation or decision support? Treat Gen AI like a new employee. Give it a job description with responsibilities and expectations.


How will Gen AI integrate with our existing systems?


Will your solution connect seamlessly? Or will it need a workaround? Consider APIs, data exchange protocols, and the modification of existing workflows.
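
To make that question concrete, here is a minimal sketch of one common integration touchpoint: sending a conversation transcript to a Gen AI vendor over an authenticated HTTPS API. The endpoint, payload fields, and response shape below are hypothetical placeholders, not any particular vendor’s API.

# Minimal integration sketch (Python). The endpoint URL, payload fields,
# and response shape are hypothetical placeholders.
import os
import requests

VENDOR_API_URL = "https://api.genai-vendor.example/v1/summarize"  # placeholder
API_KEY = os.environ["GENAI_API_KEY"]  # keep credentials out of source code

def summarize_interaction(transcript: str) -> str:
    """Send a contact center transcript to the vendor and return the summary."""
    response = requests.post(
        VENDOR_API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": transcript, "task": "summarize"},
        timeout=10,  # fail fast rather than stalling an agent's workflow
    )
    response.raise_for_status()  # surface auth, quota, or outage errors
    return response.json().get("summary", "")

Even a sketch this small raises the questions that matter: where the API key lives, exactly what data leaves your network, and what happens to the workflow when the vendor is slow or unavailable.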


How will we train and provide support for staff using Gen AI?


Ensure your teams understand how to use the technology. This should include discussions about compliance and security.


Once you understand how AI fits into your business goals, it’s time to build a list of questions for AI vendors.


Your list may vary, but here’s a solid dozen to get started:

Ensure Data Protection and Privacy Compliance

Understand how Gen AI handles and processes sensitive customer information, including personal and financial data. Compliance with regulations such as GDPR, CCPA, HIPAA (if dealing with health-related information), and other relevant data protection laws is crucial.


How do you ensure compliance with global data protection regulations (e.g., GDPR, CCPA, HIPAA)?



Can you detail the data encryption methods used during data transmission and at rest?



What policies and procedures do you have in place for data access, and how do you ensure that only authorized personnel can access sensitive information?
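
Whatever answers a vendor gives, you can also reduce exposure on your own side by masking obvious personal data before it ever leaves your systems. The sketch below is a deliberately simple illustration using regular expressions; production deployments typically rely on dedicated PII-detection tooling, and these patterns will not catch every format.

# Illustrative-only PII masking before a transcript is sent to an external
# Gen AI service. These patterns are intentionally simple.
import re

EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE_RE = re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def redact(transcript: str) -> str:
    """Mask common PII patterns so they never reach the vendor."""
    transcript = EMAIL_RE.sub("[EMAIL]", transcript)
    transcript = PHONE_RE.sub("[PHONE]", transcript)
    transcript = CARD_RE.sub("[CARD]", transcript)
    return transcript

print(redact("Reach me at jane.doe@example.com or 555-123-4567."))

Treat client-side masking like this as a complement to the vendor’s encryption and access controls, not a substitute for them.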


Minimize Data Handling and Storage Risks

The risk of data breaches is a significant concern when integrating Gen AI into your contact center operations. It’s important to understand what security measures are in place to protect against unauthorized access, hacking, or other cyber threats.


Where is the data stored, and what are the physical and logical security measures in place at these locations?



Do you utilize public, private, or hybrid cloud services for data storage, and how do you ensure their security?



What policies and procedures do you have in place for data retention and deletion?


Take Control of AI Model Training and Maintenance

Model training and maintenance are crucial for ensuring that customer interactions remain efficient, personalized, and effective. Regular training updates keep AI models relevant and accurate as customer behaviors and preferences change.


How do you ensure the security and privacy of data used in training your AI models?



What measures are in place to prevent biased outcomes from your AI models?


How do you manage updates and maintenance for AI models to ensure they remain effective and secure over time?


Maintain AI Transparency and Explainability

AI isn’t perfect. Mistakes will be made, and you’ll need to understand why. Get clarity on how the AI arrives at its decisions, particularly for sensitive customer interactions. The ability to audit and explain AI decisions is crucial for accountability, troubleshooting, and compliance.


Can your AI system provide explanations for its decisions or outputs?



What level of detail do the explanations include, and are they accessible to non-technical stakeholders?



How do you ensure the AI system remains explainable as it evolves with further training and updates?
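
One practical way to prepare for those conversations is to decide what an audit trail would need to capture on your side. The sketch below shows a hypothetical audit record for each AI-assisted decision; the field names are illustrative, and the explanation field assumes the vendor exposes some form of rationale.

# Hypothetical audit record for each AI-assisted decision, so outputs can be
# reviewed and explained after the fact. Field names are illustrative.
import json
import uuid
from datetime import datetime, timezone

def audit_record(model_version: str, prompt: str, output: str, explanation: str) -> str:
    """Build a JSON audit entry for one AI decision."""
    return json.dumps({
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "prompt": prompt,
        "output": output,
        "explanation": explanation,  # vendor-supplied rationale, if available
    })

if __name__ == "__main__":
    print(audit_record(
        model_version="vendor-model-2024-06",
        prompt="Summarize call 1287",
        output="Customer asked about a refund.",
        explanation="Summary based on the final three turns of the call.",
    ))

Even if a vendor’s explanations turn out to be thin, a record like this ties every output to a model version and timestamp, which is where most troubleshooting starts.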


Strive for Innovation, Commit to Security

The integration of Gen AI into CCaaS environments requires a meticulous approach to security and compliance. Like vigilant bouncers guarding the gates to exclusive venues, CCaaS leaders must rigorously evaluate potential Gen AI solutions to protect data integrity and customer privacy.


Given the rapid evolution of technology and the novel threats that come with it, you need to stand firm. Stay tough. Accept zero nonsense.


Because as we venture further into the era of Gen AI, the burden of responsibility lies with CCaaS leaders. It’s up to you to ask the right questions and educate yourself (and your staff) to ensure that the drive for business innovation does not outpace the commitment to information security.



This article first appeared on ICMI.