April 28 (Reuters) – Goldman Sachs has barred its bankers in Hong Kong from using Anthropic’s AI models, the Financial Times reported on Tuesday, citing people familiar with the matter.
Employees of the U.S. bank were unable to access Claude models as of a few weeks ago, the newspaper added, citing four sources.
While AI services such as ChatGPT and Claude, built by U.S. firms, are blocked in mainland China, Hong Kong has largely remained outside those controls, with usage restrictions set by the U.S. companies themselves.
Anthropic’s spokesperson told the FT that its Claude models had never been officially “supported” in Hong Kong but declined to comment further.
Goldman’s move followed a consultation with Anthropic, after which the U.S. bank took a strict interpretation of its contract with the company and concluded that its employees in Hong Kong should not be able to use any Anthropic products, the report said.
The decision did not extend to contracts with other AI vendors such as OpenAI, the newspaper added.
Goldman Sachs and Anthropic did not immediately respond to Reuters’ requests for comment.
Goldman Sachs’ chief information officer Marco Argenti said in February that the bank was working with Anthropic to develop AI-powered agents aimed at automating a widening range of internal functions.
(Reporting by Fabiola Arámburo in Mexico City; Editing by Tasim Zahid and Stephen Coates)