Human Oversight of AI Systems: What Norway's Wealth Fund Gets Right (and Where the Gaps Hide)

Last week, Norway's $2.1 trillion sovereign wealth fund showed what human oversight of AI systems looks like when an organisation actually commits to it. AI analyses information across 7,000 portfolio companies; humans retain every trading and investment decision. That distinction matters for your AIGP exam more than most candidates realise.

Human Oversight of AI Systems in Practice

Norges Bank Investment Management (NBIM) uses large language models to screen companies for ESG risks, scan emerging-market coverage in local languages and simulate contract negotiations. Around half of its roughly 700 staff now code their own AI tools. The principle, as the fund's head of machine learning told Reuters, is straightforward: better human decisions through AI-supported analysis.

This maps directly to the IAPP's Body of Knowledge (BoK) for the AIGP exam; the BoK defines every topic candidates are tested on. Domain IV.A asks you to evaluate the factors relevant to deploying an AI system. Domain IV.C covers governing that deployment in practice. Norway's fund offers a textbook illustration of both.

Where the AI Act and NIST Diverge

Article 14 of the EU AI Act requires that high-risk AI systems allow effective oversight by humans during use. The fund's approach satisfies this in spirit: oversight measures match the risk level, and the humans involved have the competence and authority to intervene. Nobody delegates a final decision to the model.

The NIST AI Risk Management Framework covers similar ground through its Govern and Map functions but frames it differently. Govern addresses organisational policies and roles for AI oversight across the lifecycle. Map focuses on understanding context and identifying risks before deployment. Candidates should be able to distinguish these two frameworks and explain where they overlap. The AIGP exam can test that distinction directly.

The Governance Gap Nobody Planned For

Here is where it gets more interesting for the exam. Half of NBIM's staff build their own AI tools using a single large language model. That is not a procurement scenario; no vendor contract governs what each employee creates. The question that follows is one the AIGP exam can and does ask: what governance applies when internal staff build AI tools informally?

Traditional AI governance policies assume the organisation procures or commissions a system. They define review procedures, assign risk ownership and require documentation before deployment. None of that naturally applies when a portfolio analyst spends an afternoon building a research tool at their desk. The governance framework was not designed for this; it assumed a boundary between builder and user that no longer exists.
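To make the gap concrete, here is a minimal sketch of one possible fix: a lightweight registry that captures even informally built tools at creation time. The fields, risk tiers, and review intervals below are illustrative assumptions, not drawn from NBIM's practice or the BoK.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from enum import Enum

class RiskTier(Enum):
    RESEARCH_ONLY = "research-only"        # informs a human, never acts
    DECISION_SUPPORT = "decision-support"  # feeds into a human decision
    AUTOMATED = "automated"                # acts without per-item approval

@dataclass
class InternalToolRecord:
    """Minimal registry entry so an informally built tool still has
    an owner, a stated purpose, and a scheduled review."""
    name: str
    builder: str
    purpose: str
    risk_tier: RiskTier
    created: date = field(default_factory=date.today)

    def next_review(self) -> date:
        # Illustrative policy: riskier tiers come up for review sooner.
        days = {
            RiskTier.RESEARCH_ONLY: 180,
            RiskTier.DECISION_SUPPORT: 90,
            RiskTier.AUTOMATED: 30,
        }[self.risk_tier]
        return self.created + timedelta(days=days)

# The analyst's afternoon project still gets an owner and a review date.
tool = InternalToolRecord(
    name="em-coverage-scanner",
    builder="portfolio.analyst",
    purpose="Scan emerging-market news coverage in local languages",
    risk_tier=RiskTier.DECISION_SUPPORT,
)
print(tool.next_review())
```

The point is not the specific fields; it is that governance attaches at the moment of creation rather than at procurement, which is exactly the boundary the traditional framework misses.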

Human-in-the-Loop Versus Human-on-the-Loop

The fund operates human-in-the-loop for high-stakes investment decisions; a human reviews AI output before any action follows. For the hundreds of employee-built tools used in daily research, the arrangement sits closer to human-on-the-loop: tools operate with periodic review rather than decision-by-decision approval.
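A minimal sketch of that distinction follows; the function and field names are illustrative, not NBIM's actual systems. In human-in-the-loop, the human is a gate before execution; in human-on-the-loop, execution proceeds and the human audits samples after the fact.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """An AI-generated suggestion awaiting a decision."""
    ticker: str
    action: str       # e.g. "reduce exposure"
    rationale: str

def execute(rec: Recommendation) -> None:
    print(f"Executed: {rec.action} on {rec.ticker}")

def queue_for_audit(rec: Recommendation) -> None:
    print(f"Queued for human audit: {rec.ticker}")

def human_in_the_loop(rec: Recommendation, human_approves: bool) -> None:
    """High-stakes path: nothing executes without explicit human
    approval -- the human is a gate before the action."""
    if human_approves:
        execute(rec)
    else:
        print(f"Rejected by reviewer: {rec.ticker}")

def human_on_the_loop(recs: list[Recommendation], sample_every: int = 10) -> None:
    """Lower-stakes path: the tool acts on its own, while a human
    audits a periodic sample afterwards -- oversight, not a gate."""
    for i, rec in enumerate(recs):
        execute(rec)
        if i % sample_every == 0:
            queue_for_audit(rec)

# Usage: the same recommendation can flow through either oversight level.
rec = Recommendation("EXAMPLE", "reduce exposure", "elevated ESG risk flagged")
human_in_the_loop(rec, human_approves=True)
human_on_the_loop([rec] * 3, sample_every=2)
```

The structural point is exam-relevant: the AI output is identical in both paths; what changes is whether human judgment precedes or follows the action, and that choice should track the risk level of the decision.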

Candidates often assume that oversight of AI means a human approves every output. The exam tests whether you understand the spectrum from full automation to full human control, and which level fits which risk context. Domain I.B covers establishing organisational expectations for AI governance. Domain I.C addresses the policies that apply across the AI lifecycle. When employees build their own tools, both domains apply simultaneously.

Why AI Oversight Gaps Appear on Your Exam

The exam can present a scenario where an organisation maintains strong human oversight for its primary AI deployments but weak governance over employee-built tools. Candidates who only think about externally procured systems will miss the internal tooling gap entirely.

There is also a design-versus-deployment distinction to watch. Domain III.A covers governing the design and build of AI systems; Domain IV.C covers governing deployment and use. When a staff member builds a tool and then uses it in their own workflow, both domains apply at once. That overlap is where exam questions tend to sit.

The Practical Lesson

Norway's wealth fund gets the big things right: AI supports analysis, humans hold authority, and the organisation is open about its approach. The governance gap sits between that high-level commitment and the hundreds of tools staff create daily. For AIGP candidates, the takeaway is concrete. Know the spectrum of human oversight. Know where internal tooling creates ungoverned risk. Know which BoK domains cover each scenario.

If you want to test how well you spot these governance gaps in exam-style scenarios, try the free AIGP assessment at 22academy.com/study.
