US Financial Services Committee leaders want ‘regulatory sandboxes’ for AI
United States Financial Services Committee (FSC) leaders responded to a US Treasury request for feedback on the regulation of artificial intelligence in a letter published on Aug. 16.
The letter, signed by the committee’s Republican leadership, calls for what amounts to a light-touch approach to regulation. “A one-size-fits-all approach will only stifle competition among financial institutions,” write the signatories, adding that “regulators must evaluate each institution's use of AI technology on a case-by-case basis.”
AI sandbox
The committee appeared bullish on the use of generative AI — which includes services and products such as OpenAI’s ChatGPT and Anthropic’s Claude — in the financial services sector. It highlighted the potential for these technologies to provide greater access to financial services, increasing both adoption and inclusion.
It also strongly recommended an organic approach to creating new regulations and laws. In describing a “regulatory sandbox” for AI, the FSC appears to advocate largely preserving the status quo, applying existing rules to new challenges as they arise.
Per the document:
“Regulators, Congress, and the Department of the Treasury should be judicious with respect to regulating AI, recognizing the regulations, rules, guidance, and laws currently in place that address the use of technologies by financial institutions.”
Data privacy
When it comes to consumer privacy and data protection, the committee’s views appear to be at odds with its otherwise hands-off approach to regulation. Under the status quo, companies such as OpenAI, Google and xAI have, so far, been allowed to collect human-generated data for the purpose of training AI systems.
The FSC’s letter, however, says that US consumers “should be allowed to terminate the collection of their data or request its deletion.” If such a regulation were to become the law of the land, it could devastate the primary business model of some of the largest US AI firms.
Due to the way AI systems such as ChatGPT are trained, “terminating the collection” of user data would likely be an ineffective measure when applied to existing technology. The deeper issue is deletion: once user data has been absorbed into a model’s weights during training, it may not be possible to “delete” it from the pre-trained system.
Related: San Fran city attorney sues sites that ‘undress’ women with AI