For every organization, there is a delicate balance to strike between innovation and risk, one that informs every interaction among your employees, customers, key stakeholders, and supply chain. We’d be hard-pressed to name a more seismic innovation than the explosion of artificial intelligence (AI), and particularly generative AI (GAI), with its ability to dynamically generate highly realistic, relevant outputs. Scaling alongside this innovation are the risks, whether they are preexisting risks such as the proliferation of disinformation or emerging risks such as AI “hallucinations,” the leakage of sensitive information, or inference attacks.
The AI genie is out of the bottle. There’s no going back to the days before large language models (LLMs), such as OpenAI’s ChatGPT and Google’s Bard, broke into mainstream public consciousness. The question we now face is: how do we move forward responsibly? The best answer is to proactively build a focus on governance, risk mitigation, and compliance into your organization’s AI ethics model.
Once your organization has determined GAI will add value to the business, you can take concrete steps to ensure ethical GAI use, protect sensitive information, and maintain regulatory compliance.
Building an AI Corporate Responsibility Framework
Taking ownership of how your organization leverages GAI to create value is critical. In so many instances, the worst unintended outcomes of technological innovation can be avoided in the design and planning phases but are incredibly difficult to counteract once products come to market.
GAI is no different. Proactively building an ethos of wise governance into your organization’s AI ethics model can mitigate many of the most harmful risks, from security threats to data leakage to employees not adhering to your established guidelines.
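One of the leakage risks named above can be addressed with a concrete control: screening prompts for sensitive data before they ever reach an external GAI service. The sketch below is a minimal illustration of that idea, not a production data-loss-prevention tool; the pattern names and regular expressions are hypothetical examples that an organization would replace with its own data classifications.

```python
import re

# Hypothetical patterns; tune to your organization's sensitive data classes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str):
    """Return (allowed, findings): flag prompts that contain sensitive data
    before they are sent to an external GAI service."""
    findings = [name for name, pat in SENSITIVE_PATTERNS.items()
                if pat.search(prompt)]
    return (len(findings) == 0, findings)
```

A screen like this can sit in a gateway between employees and approved GAI tools, turning a written guideline ("don't paste customer data into chatbots") into an enforced one.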
Preventing Security Threats
How exactly does GAI put your organization at increased cyber risk? GAI lowers barriers to entry for threat actors through enhanced spear phishing, deepfakes, and audio impersonation.
Malicious actors, even novices, can use GAI services to generate phishing campaigns, malware, and other malicious code, despite the protections some GAI services have implemented against these threats. What steps can your organization take to prevent them?
Transparency and Understanding AI
It’s impossible to fully explain an AI model’s outputs if you don’t have any insight into how it interprets information or makes decisions. Transparency, or the extent to which a user can understand an AI’s inner workings, makes the use of GAI safer and more reliable. Transparency will also help secure employee buy-in regarding your AI ethics framework and explain any guidelines and constraints you put in place.
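One practical transparency measure, suggested here as an illustration rather than a prescription, is to route every GAI call through an audit wrapper that records the prompt, the model used, and the output, so results can later be traced and explained. The function and model name below are hypothetical.

```python
import time
from typing import Callable

def audited_generate(model_fn: Callable[[str], str], prompt: str,
                     audit_log: list, model_name: str = "internal-llm") -> str:
    """Call a GAI model through a wrapper that records what was asked,
    when, and what came back, so outputs can later be traced and explained."""
    output = model_fn(prompt)
    audit_log.append({
        "timestamp": time.time(),
        "model": model_name,   # hypothetical model identifier
        "prompt": prompt,
        "output": output,
    })
    return output
```

An audit trail like this gives your governance committee something concrete to review and helps employees see that guidelines are observable, not aspirational.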
As your organization seeks to improve transparency around GAI use, focus on areas such as how models are trained, what data they rely on, and how their outputs are reviewed; these efforts will in turn fortify your corporate responsibility framework.
Governance, Risk, and Compliance
As GAI becomes more pervasive, compliance with existing regulations and laws becomes increasingly important. Organizations must navigate the complex regulatory landscape to ensure their AI systems adhere to industry-specific guidelines, consumer protection laws, and data privacy regulations like the General Data Protection Regulation (GDPR).
Here are some key steps for fostering innovation while maintaining compliance with legal and ethical requirements:
- Include GAI usage in your mapping of organizational compliance requirements. Visibility is key, as is a comprehensive understanding of where and how AI systems are being leveraged within your organization.
- Establish an ethical governance committee to support decision-making at both the business and technical levels. Create a process for individuals to bring potential new uses of GAI/AI to this committee, which should also review current uses on a regular cadence to ensure they meet expectations and support your security and compliance postures.
- Understand how the usage of AI impacts existing compliance requirements for your organization, e.g., the Equal Credit Opportunity Act (ECOA), the Fair Housing Act (FHA), the Federal Food, Drug, and Cosmetic Act (FD&C Act), and so on.
- While AI-specific compliance statutes are lagging far behind the technology’s exponential advance, start thinking now about reporting to boards and governing bodies.
- Update your policies and procedures to reflect your organizational stance on GAI use, and understand how that use affects your organization’s resiliency efforts and business continuity.
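The first two steps above, mapping GAI usage and routing it through a governance committee, imply keeping a living inventory of use cases. The sketch below shows one possible shape for such a registry; the field names, data categories, and the GDPR check are illustrative assumptions, not a complete compliance model.

```python
from dataclasses import dataclass, field

@dataclass
class GAIUseCase:
    name: str
    owner: str
    data_categories: list                             # e.g. ["customer_pii", "public"]
    regulations: list = field(default_factory=list)   # e.g. ["GDPR", "ECOA"]
    committee_approved: bool = False

def compliance_gaps(registry):
    """Flag use cases that handle personal data without a GDPR mapping,
    or that have not yet been reviewed by the governance committee."""
    gaps = []
    for uc in registry:
        if "customer_pii" in uc.data_categories and "GDPR" not in uc.regulations:
            gaps.append((uc.name, "personal data without GDPR mapping"))
        if not uc.committee_approved:
            gaps.append((uc.name, "pending committee review"))
    return gaps
```

Even a simple registry like this gives boards and governing bodies something auditable to report against while AI-specific statutes catch up.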
Striking the Balance
GAI has the potential to revolutionize various aspects of society, but it is vital to address the governance, risk, and compliance concerns associated with this technology. Ethical implications, privacy protection, bias and fairness, and regulatory compliance must be at the forefront of discussions and decision-making processes. By implementing a robust corporate responsibility framework, your organization can strike the balance between innovation and risk, harnessing the power of GAI while neutralizing the most dangerous risks to your organization.