
The bank has cut access to Claude AI tools in Hong Kong after adopting a strict interpretation of its contract with Anthropic, while keeping other major AI systems active across its internal platform
The development is driven by a policy decision inside Goldman Sachs, one of the world's largest investment banks, to restrict access to a specific artificial intelligence system developed by Anthropic for employees operating in Hong Kong.
Goldman Sachs has removed access to Anthropic’s Claude AI models for its bankers based in Hong Kong, according to people familiar with the situation cited in multiple reports.
Employees previously used Claude through the bank’s internal AI platform, where it was embedded alongside other large language models used for research, document drafting, and workflow automation.
That access was withdrawn in recent weeks.
What is confirmed is that the decision stems from Goldman Sachs adopting a strict interpretation of its contractual arrangement with Anthropic after consulting the AI company.
Under that interpretation, the bank concluded that its Hong Kong-based employees should not use Anthropic products at all, effectively excluding Claude from that jurisdiction while leaving other AI tools operational.
Importantly, the restriction does not extend across Goldman Sachs’ entire AI ecosystem.
Internal access to other widely used models, including systems developed by OpenAI and Google, remains in place.
This selective removal indicates that the decision is not a broad AI rollback, but a targeted compliance and contractual adjustment tied specifically to one vendor.
Anthropic has indicated that its Claude models were never formally designated as supported in Hong Kong, though the tool had been accessible internally through Goldman’s systems prior to the restriction.
Goldman Sachs has not publicly elaborated on its reasoning, consistent with its broader practice of limiting disclosure on internal technology controls.
The change takes place against a backdrop of accelerating AI integration across global investment banking.
Goldman Sachs has been actively working with Anthropic in other contexts to develop AI-powered agents designed to automate internal processes such as trade documentation, compliance workflows, and client onboarding.
That parallel collaboration underscores that the restriction is not a withdrawal from the technology itself, but a geographically constrained policy boundary.
The Hong Kong dimension is structurally sensitive.
While mainland China prohibits many U.S.-developed AI tools, Hong Kong has historically operated under a more flexible regime, leaving access decisions largely to private companies rather than government restriction.
Goldman’s move therefore reflects internal risk and compliance interpretation rather than a direct regulatory ban in the territory.
The practical consequence for Goldman’s Hong Kong operations is a narrower set of approved AI tools for internal use.
While day-to-day workflows continue to incorporate AI assistance through alternative systems, employees lose access to one of the more widely used competing models in enterprise deployment, reinforcing how fragmented AI adoption has become even within a single global institution.
The decision also highlights a broader emerging reality in financial services: access to frontier AI models is increasingly governed not just by capability or cost, but by contract terms, jurisdictional interpretation, and internal risk policy.
In practice, that means identical tools may be available in one office and restricted in another under the same corporate structure.
Goldman Sachs’ adjustment therefore represents a narrowing of operational AI flexibility in one region while maintaining continued expansion of AI use elsewhere, reinforcing that enterprise adoption of generative AI is being shaped as much by legal architecture as by technological performance.














































