Navigating a Safer, More Secure, More Trustworthy AI Paradigm
The rapid evolution of AI has transformed industries ranging from healthcare to finance and forced a shift in government priorities. Acknowledging the immense potential and associated risks of AI, the United States government has taken a momentous step. On October 30, 2023, President Biden issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
This executive order (EO) addresses accountability for organizations developing and deploying AI, not only for generative AI (GAI) systems but for all systems that leverage AI. The EO likely signals a shift in the market, as its prescribed actions will impact all sectors of the AI economy, from start-ups to mature organizations. It will require rigorous evaluation of how organizations use AI and the extent to which AI products are leveraged through third parties.
Let’s review eight key principles and priorities any organization developing or deploying AI should take from this groundbreaking EO. Then, learn how ECS can help guide your organization to more responsible AI use.
For organizations developing and deploying AI systems, we recommend starting with the NIST AI Risk Management Framework. Looking ahead, guidance will likely not end there; in the meantime, the EO provides eight principles and priorities that organizations are expected to follow.
Eight Principles and Priorities From the AI Executive Order
The EO lists eight guiding principles and priorities to advance and govern the development and use of AI:
Global Collaboration for More Ethical AI
In its concluding sections, the EO emphasizes U.S. leadership in shaping global AI endeavors, urging government officials to engage in international cooperation and multi-stakeholder collaboration. The administration underscores the significance of voluntary commitments made by American technology firms, advocating the establishment of a robust international framework for effectively managing AI-related risks and capitalizing on AI's benefits. The EO directs the Secretaries of Commerce and State to collaborate with international partners on setting global technical standards, with a report outlining a plan for global engagement to be delivered within 270 days.
Finally, the EO underscores the government’s commitment not only to harnessing the potential of AI but also to leading the way in addressing its challenges, fostering a collaborative, global environment for the responsible development and deployment of AI technologies.
Achieving Responsible AI Use With ECS
As the legal landscape around AI continues to take shape, it’s urgent for organizations to prioritize responsible and ethical AI use with an understanding of both the benefits and the potential risks. Even if we understand the goals of the EO, however, it’s not always clear how to turn those goals into action.
ECS’ experts know how to incorporate governance, risk mitigation, and compliance into your organization’s AI ethics model. With a robust AI corporate responsibility framework in place, your organization will be better positioned to leverage AI ethically, protect sensitive information, and maintain regulatory compliance.