By Shayla Treadwell, Ph.D.
ECS Vice President of Governance, Risk, and Compliance
and William Rankin
ECS Director of Governance, Risk, and Compliance

For every organization, there is a delicate balance to strike between innovation and risk, one that informs every interaction between your employees, customers, key stakeholders, and the supply chain. We’d be hard-pressed to name a more seismic innovation than the explosion of artificial intelligence (AI), and particularly generative AI (GAI), with its ability to dynamically generate highly realistic, relevant outputs. Scaling alongside this innovation are the risks, whether preexisting ones such as the proliferation of disinformation or emerging ones such as AI “hallucinations,” the leakage of sensitive information, and inference attacks.

The AI genie is out of the bottle. There’s no going back to the days before large language models (LLMs), such as OpenAI’s ChatGPT and Google’s Bard, broke into mainstream public consciousness. The question we now face is, how do we move forward responsibly? The best answer: proactively building into your organization’s AI ethics model a focus on governance, risk mitigation, and compliance.

Once your organization has determined GAI will add value to the business, you can take concrete steps to ensure ethical GAI use, protect sensitive information, and maintain regulatory compliance.

Building an AI Corporate Responsibility Framework

Taking ownership of how your organization leverages GAI to create value is critical. In so many instances, the worst unintended outcomes of technological innovation can be avoided in the design and planning phases but are incredibly difficult to counteract once products come to market.

GAI is no different. Proactively building an ethos of wise governance into your organization’s AI ethics model can mitigate many of the most harmful risks, from security threats to data leakage to employees not adhering to your established guidelines.

Preventing Security Threats

How exactly does GAI put your organization at increased cyber risk? GAI can lower barriers to entry for threat actors through enhanced spear phishing, deepfakes, and audio impersonations.

Malicious actors — even novice individuals — can use GAI services to generate phishing campaigns, malware, and other malicious code, despite some GAI services implementing protections against these threats. What steps can your organization take to prevent these security threats? Here’s a short list:

- Most importantly, revisit cybersecurity best practices around access management, asset management, and detection. Consider deploying AI-driven detection tools capable of identifying and blocking malicious GAI content and activity, as well as AI-powered security solutions that analyze network traffic and user behavior to detect anomalies and potential threats. Develop an internal regime to test your systems and applications for vulnerabilities that GAI could exploit. Share information about emerging threats with trusted partners and industry peers.
- Automate software updates and patching, and put data loss prevention controls in place. Such controls could include data anonymization, encryption, and secure storage. Employ clear consent mechanisms, data minimization, and purpose limitation principles to protect individual privacy rights and prevent potential data breaches.
- Develop an “innovation sandbox” that supports various parts of the organization and ensures a safe environment for experimentation.
- Understand the “criticality” of GAI to your business. Is it now a critical service for the success and future of your business?
- Understand the type of information to be provided to a GAI service, and ensure this is reflected in your data flow mappings.
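The data loss prevention controls described above can be illustrated with a minimal sketch: redacting sensitive identifiers from a prompt before it leaves the organization for an external GAI service. The patterns and function names below are hypothetical assumptions for illustration, not a production-grade DLP rule set.

```python
import re

# Hypothetical patterns for a few common sensitive identifiers; a real
# DLP deployment would use a vetted, far broader rule set.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with labeled placeholders
    before the prompt is sent to an external GAI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
# → Contact [EMAIL REDACTED], SSN [SSN REDACTED].
```

A redaction gateway like this pairs naturally with the data flow mappings above: once you know which data categories feed a GAI service, you know which patterns the gateway must catch.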

Transparency and Understanding AI

It’s impossible to fully explain an AI model’s outputs if you don’t have any insight into how it interprets information or makes decisions. Transparency, or the extent to which a user can understand an AI’s inner workings, makes the use of GAI safer and more reliable. Transparency will also help secure employee buy-in regarding your AI ethics framework and explain any guidelines and constraints you put in place.

Here are some areas to focus on as your organization seeks to improve transparency around GAI use, which in turn will fortify your corporate responsibility framework:

- By striving for transparency, you can enhance the explainability and interpretability of your AI models, enabling stakeholders, including end users and regulatory bodies, to understand why and how a particular output or decision was reached. It allows for critical evaluation and the identification of biases, errors, and other shortcomings in the system.
- Transparency is crucial for assessing and mitigating bias in AI models. Without understanding the underlying mechanisms, it is difficult to identify and rectify biases present in the training data or the decision-making algorithms. Ensure the technology builds in robust test and evaluation processes, including human-in-the-loop feedback.
- Transparency plays a pivotal role in ensuring ethical and legal compliance in AI applications. As AI increasingly impacts areas such as healthcare, finance, and justice, it is essential to understand how AI models generate actionable information. This understanding enables organizations to comply with regulatory frameworks, industry standards, and ethical guidelines, avoiding potential legal issues and reputational damage.
- Transparency empowers users by providing insight into how AI models affect their experiences and decisions. Users can make informed choices and exercise control over their interactions with AI systems. Transparency helps users understand how their data is used, how decisions are made, and the implications of AI-generated information, enabling them to assert their rights and preferences.
- Transparent AI models facilitate continuous improvement and safety enhancements. With transparency, organizations can analyze the behavior of AI systems, identify weaknesses, and address potential risks or biases. This iterative approach to model refinement ensures that AI systems evolve to deliver more accurate, reliable, and secure information over time.
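One practical way to support the transparency and accountability goals above is to keep an auditable record of every GAI interaction, so reviewers can later reconstruct which model produced which output from which prompt. The in-memory audit log below is a minimal sketch under assumed names; a real deployment would persist records to durable, access-controlled storage.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """One reviewable entry in a GAI usage audit trail."""
    model: str
    prompt: str
    output: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AuditLog:
    def __init__(self):
        self.records = []

    def record(self, model: str, prompt: str, output: str) -> AuditRecord:
        entry = AuditRecord(model, prompt, output)
        self.records.append(entry)
        return entry

    def export(self) -> str:
        # Serialized trail for reviewers, regulators, or
        # human-in-the-loop feedback processes.
        return json.dumps([asdict(r) for r in self.records], indent=2)

log = AuditLog()
log.record("example-model-v1", "Summarize Q3 risks", "…model output…")
print(log.export())
```

An audit trail like this is what makes the “critical evaluation” and bias identification described above possible after the fact, rather than only at design time.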

Governance, Risk, and Compliance

As GAI becomes more pervasive, compliance with existing regulations and laws becomes increasingly important. Organizations must navigate the complex regulatory landscape to ensure their AI systems adhere to industry-specific guidelines, consumer protection laws, and data privacy regulations like the General Data Protection Regulation (GDPR).

Here are some key steps for fostering innovation while maintaining compliance with legal and ethical requirements:

  1. Include organizational usage of GAI in mapping of organizational compliance requirements. Visibility is key, as is a comprehensive understanding of where and how AI systems are being leveraged within your organization.
  2. Establish an ethical governance committee to support decision-making at a business and technical level. Establish a process for individuals to interact with this committee for new potential uses of GAI/AI. This committee should also review current uses of GAI with a regular cadence to ensure those uses are meeting expectations and support security and compliance postures.
  3. Understand how the usage of AI impacts existing compliance requirements for your organization, e.g., the Equal Credit Opportunity Act (ECOA), the Fair Housing Act (FHA), the Federal Food, Drug, and Cosmetic Act (FD&C Act), and so on.
  4. While AI-specific compliance statutes are lagging far behind the technology’s exponential advance, start thinking now about reporting to boards and governing bodies.
  5. Update your policies and procedures on your organizational stance regarding the use of GAI and understand how that use impacts your organization’s resiliency efforts and business continuity.
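Steps 1 through 3 above can be sketched as a simple internal registry that maps each GAI use case to the compliance requirements it triggers. The use-case names, data categories, and the single GDPR rule below are hypothetical assumptions for illustration; a real mapping would be maintained with counsel and cover your full regulatory landscape.

```python
from dataclasses import dataclass, field

@dataclass
class GaiUseCase:
    """One registered use of GAI within the organization."""
    name: str
    business_unit: str
    data_categories: list          # e.g. "customer PII", "public"
    regulations: list = field(default_factory=list)

REGISTRY = []

def register(use_case: GaiUseCase) -> None:
    # Illustrative rule: any use case touching personal data
    # falls under GDPR. Real mappings would be far richer.
    if "customer PII" in use_case.data_categories:
        use_case.regulations.append("GDPR")
    REGISTRY.append(use_case)

def uses_needing_review(regulation: str) -> list:
    """Support the governance committee's regular review cadence."""
    return [u.name for u in REGISTRY if regulation in u.regulations]

register(GaiUseCase("support-chat-summaries", "Customer Care",
                    ["customer PII"]))
register(GaiUseCase("marketing-copy-drafts", "Marketing", ["public"]))
print(uses_needing_review("GDPR"))  # → ['support-chat-summaries']
```

Even a lightweight registry like this gives the ethical governance committee the visibility step 1 calls for, and a ready-made agenda for its recurring reviews.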

Striking the Balance

GAI has the potential to revolutionize various aspects of society, but it is vital to address the governance, risk, and compliance concerns associated with this technology. Ethical implications, privacy protection, bias and fairness, and regulatory compliance must be at the forefront of discussions and decision-making processes. By implementing a robust corporate responsibility framework, your organization can strike the balance between innovation and risk, harnessing the power of GAI while neutralizing the most dangerous risks to your organization.


© 2023 ECS. All Rights Reserved.