A.I. Adoption Without Literacy Is a Governance Risk

06.03.2026


As companies accelerate A.I. deployment, bottom-up governance is the missing link in responsible adoption.

As companies race to embed A.I. into their operations, the governance debate has stalled in the wrong place. Regulators deliberate over mandates, policymakers debate guardrails and developers argue over technical controls. These questions are important, but they overlook the most immediate driver of responsible A.I. governance: the people using these systems every day. Without investing in workforce capability, organizations risk embedding harm into their operations and finding themselves liable when things go wrong.


A.I. adoption is not waiting for governance to catch up

Companies are integrating A.I. tools wherever they can to capture efficiency and revenue gains, with or without oversight frameworks in place. Recent news from the U.K. illustrates this tension between governance and innovation. In the same week that the Treasury Committee warned that the financial sector’s ad hoc adoption of A.I. risked causing “serious harm” to society and the economy, Lloyds Banking Group announced that A.I. adoption increased its 2025 revenue by £50 million ($66.8 million).

The governance risk, then, is not only that A.I. is advancing quickly. It also stems from the fact that A.I. is being embedded into workplaces where employees are not equipped to understand its limitations, failure modes or compliance implications. That gap is where new…

© Observer