Freeman Quoted on Data Privacy Concerns in Training LLMs
Bloomberg Law
Partner D. Reed Freeman was quoted on how the massive amount of data artificial intelligence (AI) models need is potentially undermining one of privacy advocates’ key goals: data minimization.
Reed said that the root of the conflict is that large language models (LLMs) have an insatiable hunger for data to train on.
“You need an extraordinary amount of data to develop a large language model that’s going to be used in a generative AI capacity,” he said. “And if a state comes along with a law that says you can only collect and use data for the purpose for which the consumer gave it to you, guess what? You can’t train.”
Data protection regulations around the world follow the principle that consumer privacy is best protected when entities collect, process, and retain only the data needed for a specific purpose, and only for a limited time.
However, Reed said that AI developers remain wary of regulations that could slow technological advances. He advocated a flexible approach to regulating AI that targets specific harms, such as improper use in hiring or credit decisions, rather than abstract privacy harms tied to data collection for training.
Read the full article here.