Proven AI Vendor Governance Traps
The AIGP exam does not just ask "what is third-party risk?" It asks you to apply AI vendor governance to relationships that go wrong. Earlier this year, one did, and the case reads like an exam scenario brought to life.
What Happened Between the Pentagon and Its AI Provider
In February 2026, the US Department of Defense terminated its $200 million contract with Anthropic after the AI company refused to remove safety restrictions on autonomous weapons and mass surveillance use cases. The Pentagon subsequently designated Anthropic a "supply-chain risk", a label normally reserved for foreign adversaries. The designation bars all defence contractors from conducting commercial activity with the company.
The dispute centred on contract language. The Pentagon required all AI vendors to offer their models for "all lawful purposes" in classified settings. Anthropic's contract carried two specific exclusions: no fully autonomous weapons and no domestic mass surveillance. When the company declined to remove those restrictions, the relationship collapsed within weeks.
What the AIGP Body of Knowledge Says About AI Vendor Governance
The IAPP's Body of Knowledge (BoK) for the AIGP exam, which defines every topic candidates are tested on, maps this scenario across several domains. If you are sitting the AIGP exam, you need to see where these pieces connect.
Policies and Contracts for Third-Party Risk (I.C.3)
Domain I.C.3 requires candidates to understand how to create, update and implement policies, assessments and contracts to manage third-party risk. This covers procurement, supply chain and acceptable use terms. The Pentagon case is a direct test of this competency: both parties had contractual terms, but those terms contained unresolved tensions that neither side's governance process caught early enough.
Evaluating Vendor Agreement Terms (IV.B.2)
Domain IV.B.2 asks candidates to identify and evaluate the terms and risks in a vendor or licensing agreement. On the exam, you will not be asked to recite contract law. You will be asked to spot where an agreement creates ambiguity, single-vendor dependency or unaddressed restrictions that could derail deployment.
The Developer-Provider-Deployer Distinction (I.B.5)
Domain I.B.5 tests whether you understand the governance differences among AI developers, providers, deployers and users. In this case, Anthropic is the provider; the Pentagon is the deployer. Their obligations differ, and the exam expects you to know where one ends and the other begins. The EU AI Act draws this distinction explicitly in its deployer obligations, and the NIST AI Risk Management Framework treats third-party AI as part of the deployer's risk surface.
How the Exam Would Frame This Scenario
Suppose the exam gives you a scenario: a government deployer has contracted an AI provider for classified operations. The provider's acceptable use policy excludes certain applications. The deployer's operational requirements expand beyond those restrictions. What should the AI governance professional have done?
Before the Contract Was Signed
The governance professional should have conducted a use-case assessment against the provider's acceptable use policy. If any intended application fell outside the provider's terms, that conflict needed resolution before signing, not after deployment began. Domain I.C.3 is explicit: contracts must address acceptable use, and AI vendor governance cannot be an afterthought bolted onto procurement.
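The pre-signature check described above is, at its core, a comparison between the deployer's intended applications and the provider's exclusions. A minimal sketch of that comparison, where the policy entries and use-case names are hypothetical illustrations rather than real contract terms:

```python
# Hypothetical sketch of a pre-signature use-case assessment.
# The exclusions and use-case names below are illustrative, not
# actual terms from any vendor's acceptable use policy.

EXCLUDED_USES = {
    "fully_autonomous_weapons",
    "domestic_mass_surveillance",
}

def assess_use_cases(intended_uses):
    """Return the intended applications that conflict with the
    provider's acceptable use policy; an empty set means no
    conflict was found and the contract can proceed to signing."""
    return set(intended_uses) & EXCLUDED_USES

# A deployer planning these three applications would surface one
# conflict before signing rather than after deployment begins.
conflicts = assess_use_cases([
    "intelligence_analysis",
    "logistics_planning",
    "domestic_mass_surveillance",
])
```

The point of the sketch is the timing, not the tooling: the comparison is trivial once both lists exist in writing, which is why Domain I.C.3 treats documenting them as a contracting prerequisite.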
During the Vendor Relationship
Continuous monitoring of the vendor relationship is a governance function, not a procurement function. The AIGP BoK allocates up to 25 exam questions to Domain IV, which covers deployment governance. IV.C.7 specifically addresses creating policies and controls to deactivate or localise an AI system when regulatory requirements or performance issues demand it. An exit strategy should have existed from the start.
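An IV.C.7-style exit strategy amounts to documenting, in advance, which conditions activate the deactivation or localisation plan. A minimal sketch under assumed conditions, where the trigger names are illustrative and not drawn from the BoK:

```python
# Hypothetical sketch of exit-strategy triggers for an AI vendor
# relationship (IV.C.7-style deactivation controls). The trigger
# names are illustrative assumptions, not BoK language.

from dataclasses import dataclass

@dataclass
class VendorStatus:
    regulatory_breach: bool = False
    performance_below_sla: bool = False
    acceptable_use_conflict: bool = False

def exit_triggers(status: VendorStatus) -> list:
    """List the monitored conditions that would activate the
    documented deactivation or localisation plan for this vendor."""
    triggers = []
    if status.regulatory_breach:
        triggers.append("regulatory breach")
    if status.performance_below_sla:
        triggers.append("performance below SLA")
    if status.acceptable_use_conflict:
        triggers.append("acceptable use conflict")
    return triggers
```

In the Pentagon scenario, an acceptable-use conflict was a foreseeable trigger from day one; the governance failure was that no documented plan existed for it.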
When the Relationship Broke Down
The Pentagon's supply-chain risk designation forced every defence contractor to sever ties with the company. That cascading impact illustrates why IV.B.3 tests candidates on the risks unique to deploying proprietary AI: increased obligations and higher potential liability when you depend on a single provider. The fact that other providers moved quickly to fill the gap only underscores how avoidable the disruption was with proper vendor diversification.
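The single-provider dependency that IV.B.3 warns about can be made visible with a simple concentration check across an organisation's AI deployments. A sketch, with hypothetical capability and provider names:

```python
# Hypothetical sketch of a vendor concentration check: flag AI
# capabilities that depend on exactly one provider. Capability and
# provider names are illustrative.

from collections import defaultdict

def single_provider_capabilities(deployments):
    """Given (capability, provider) pairs, return the capabilities
    served by only one provider -- the concentration risk that a
    sudden vendor exit would leave unserved."""
    providers = defaultdict(set)
    for capability, provider in deployments:
        providers[capability].add(provider)
    return {cap for cap, provs in providers.items() if len(provs) == 1}
```

Running such a check periodically, not just at procurement, is what turns vendor diversification from a slogan into a monitored control.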
AI Vendor Governance Is Not a Procurement Problem
The mistake most candidates make is treating AI vendor governance as a purchasing decision. The AIGP exam treats it as a governance question with strategic, legal and operational dimensions. Procurement selects the vendor. Governance sets the conditions under which the vendor relationship operates, monitors compliance with those conditions and plans for the possibility that the relationship ends abruptly.
The Pentagon case is useful precisely because it combines multiple BoK domains into a single real-world example. Contractual ambiguity (I.C.3), vendor agreement evaluation (IV.B.2), provider-deployer distinctions (I.B.5), exit planning (IV.C.7) and proprietary model risk (IV.B.3) all intersect in one scenario. The exam will test your ability to hold those threads together.
If you are preparing for the AIGP, use this case as a study exercise. Map it to the BoK yourself and see which domains you reach for first. A structured approach to AI vendor governance is exactly what the exam rewards. Start with the free AIGP resources at 22academy.com/study and test whether your knowledge holds up under exam conditions.