I respond therefore I am? The case for corporate AI governance - Yvonne E. Hyland

!mpact
5 min read · Jun 8, 2023


Given the opportunities and challenges that generative AI presents to corporations, nothing is more important than making AI policy and governance a fundamental, core responsibility of the C-Suite and the Board.

Artificial Intelligence (AI) has been around since the 1950s, when Elvis Presley was singing Heartbreak Hotel! Why the frantic fascination with AI now? Is it because the recent introduction of generative large language models such as GPT-4, the model behind ChatGPT, and the tsunami of commercial applications built on them, is forcing corporations to pay attention to the impact on their bottom line? Yes! And they had better pay attention if they want to stay competitive.

However, with great and powerful AI comes great responsibility.

Politicians in the United States are ramping up their rhetoric, driven by fear and hype combined with the accelerating adoption of AI. The real question is how quickly Congress will act with meaningful regulation. The Blueprint for an AI Bill of Rights was a well-intentioned first step, but it comes without actionable regulation.

There now appears to be bipartisan support for artificial intelligence regulation: Sen. Josh Hawley (R-Mo.) and Sen. Richard Blumenthal (D-Conn.) are aligned on an AI initiative. With the current lack of US federal regulation, state lawmakers are starting to take matters into their own hands, for example with Utah's new law protecting minors online and California's Age-Appropriate Design Code Act. State regulators are imposing industry-specific AI rules as well; Colorado, for example, is proposing regulations covering insurers' use of big data and AI.

Leading the way is the European Union (EU), which has been moving toward substantive regulation since the 2010s and is now advancing the EU AI Act. The EU's General Data Protection Regulation (GDPR), its data protection law, took effect in 2018. That foundational regulation was essential to enact, given that data is the oil that fuels the AI engine.

AI regulation discussions are under way across the globe. In Latin America, governments are planning to meet in October 2023, at a gathering organized by UNESCO and the Development Bank of Latin America, to discuss regional measures for AI ethics and governance.

Globally, there is a shared desire and mission to use artificial intelligence to advance the UN 2030 Sustainable Development Goals.

There is also an interesting twist on accelerating global AI policy: the head of Norway's sovereign wealth fund, the world's largest, has said it will set ethical guidelines on AI for the roughly 9,000 companies it invests in, many of them big technology companies (Norway's $1.4tn wealth fund calls for state regulation of AI, Financial Times).

AI ethics is predicated on the philosophical models of legitimacy, stability, and justice. In a pluralistic society with differing views, public administrations are often best placed to provide legitimacy and stability for AI rules and guidelines that are broadly aligned with societal goals. In the initial absence of government regulation and standards, corporations must define their own approach to AI policy, and they will need to monitor and recalibrate that policy dynamically over the next few years as the pace of innovation accelerates.

Corporate senior leadership and the Board of Directors will become responsible for setting the overall AI governance strategy and guidelines, and for monitoring outcomes.

For the last several years, corporations have been using versions of AI to automate tasks. They are now moving from automation to augmentation: using AI to assist and augment employees so that they are more productive in their work.

On the plus side of the AI equation, AI-enabled applications already exist to generate media content, summarize meetings, improve supply chain efficiency, improve manufacturing operations, and enhance the customer experience. Even more risk-averse industries, such as banking, are moving quickly to turn AI to their competitive advantage; Citi US Personal Banking, for example, is turning to AI to personalize service and "delight" customers.

On the negative side of the equation, we see AI discriminating against minorities in hiring processes, denying healthcare and loans to underserved communities, contributing to wrongful criminal charges, and damaging reputations.

The fundamental challenge of AI today lies with societal and corporate governance. Both the data used to train AI and the biases of the people writing the underlying code need to be governed.

The use of a corporation's own data versus third-party data must be judicious, with close attention paid to data provenance when open-source data is used to train models. AI-produced "hallucinations" are enabled by the data a model consumes, correlates, and serves back. According to Phil Spencer, CEO of Xbox at Microsoft, "…we have confused the ability to talk with the ability to think…[AI] large language models are very good at talking."

Given the above, corporations need to decide when to use generative AI and when not to! They will need to be transparent and clear about their use cases, intent, and outcomes.

Nothing is more important than making AI policy and governance a fundamental, core responsibility of the C-Suite and the Board.

AI literacy is essential at every level of every organization; in one way or another, everyone will interact with generative AI. Every corporation needs meaningful policies that outline, by use case, the allowed and denied uses of data streams and AI models. These policies and procedures need to be reviewed and updated continually, given the pace of AI innovation, and they must cover in-house developed systems as well as third-party systems, applications, and platforms with embedded AI.
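To make that idea concrete, here is a minimal sketch of what a machine-readable, per-use-case AI policy register could look like. It is an illustration only, written in Python under assumed names: the use case, data-stream labels, and model names below are hypothetical, not any particular corporation's policy or a reference implementation.

```python
# Hypothetical sketch: a per-use-case AI policy register.
# All use cases, data streams, and model names are illustrative assumptions.
from dataclasses import dataclass
from datetime import date


@dataclass
class AIUsePolicy:
    """One entry in a corporate AI policy register, keyed by use case."""
    use_case: str
    allowed_data_streams: set[str]   # data streams this use case may consume
    denied_data_streams: set[str]    # data streams explicitly off limits
    allowed_models: set[str]         # approved in-house or third-party models
    last_reviewed: date              # policies need regular re-review

    def permits(self, data_stream: str, model: str) -> bool:
        """True only if both the data stream and the model are approved."""
        return (
            data_stream in self.allowed_data_streams
            and data_stream not in self.denied_data_streams
            and model in self.allowed_models
        )


# Illustrative entry only.
policy_register = [
    AIUsePolicy(
        use_case="customer-support summarization",
        allowed_data_streams={"support-tickets"},
        denied_data_streams={"payment-records"},
        allowed_models={"in-house-summarizer", "vendor-llm-enterprise"},
        last_reviewed=date(2023, 6, 1),
    ),
]

if __name__ == "__main__":
    policy = policy_register[0]
    print(policy.permits("support-tickets", "vendor-llm-enterprise"))  # True
    print(policy.permits("payment-records", "vendor-llm-enterprise"))  # False
```

Keeping such a register in code or configuration, rather than only in a policy document, makes it easier to review, version, and audit as use cases and approved models change.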

Substantial business ROI can be achieved by using generative AI models. But strong corporate governance must also be in place to ensure that the use of AI is not to the detriment of employees, customers, or partners, or of the corporation's bottom line and reputation.

Don’t hold your breath for global AI rules


Yvonne E. Hyland is a people-centric, solutions-driven executive with 30 years of experience in international enterprise technology leadership. A pragmatic innovator and former intrapreneur, Yvonne improves and optimizes businesses with the power of technology.

Connect with Yvonne on LinkedIn.

!mpact Magazine is a platform where people with a vision can share their ideas and insights.