California Attorney General Rob Bonta has issued a legal advisory, putting AI firms on notice about activities that may not be legal.
California is at the epicenter of much of the AI development in the U.S., with Silicon Valley serving as home to many of the leading AI firms. As a result, those firms fall within the jurisdiction of California, which has some of the strictest privacy laws in the country.
The legal advisory acknowledges the good that AI can be used to accomplish.
AI systems are at the forefront of the technology industry, and hold great potential to achieve scientific breakthroughs, boost economic growth, and benefit consumers. As home to the world’s leading technology companies and many of the most compelling recent developments in AI, California has a vested interest in the development and growth of AI tools. The AGO encourages the responsible use of AI in ways that are safe, ethical, and consistent with human dignity to help solve urgent challenges, increase efficiencies, and unlock access to information—consistent with state and federal law.
The advisory then goes on to describe the challenges AI systems pose, and the potential threats they may bring.
AI systems are proliferating at an exponential rate and already affect nearly all aspects of everyday life. Businesses are using AI systems to evaluate consumers’ credit risk and guide loan decisions, screen tenants for rentals, and target consumers with ads and offers. AI systems are also used in the workplace to guide employment decisions, in educational settings to provide new learning systems, and in healthcare settings to inform medical diagnoses. But many consumers are not aware of when and how AI systems are used in their lives or by institutions that they rely on. Moreover, AI systems are novel and complex, and their inner workings are often not understood by developers and entities that use AI, let alone consumers. The rapid deployment of such tools has resulted in situations where AI tools have generated false information or biased and discriminatory results, often while being represented as neutral and free from human bias.
The AG’s office outlines a number of laws that govern AI use, including the state’s Unfair Competition Law, False Advertising Law, several competition laws, a number of civil rights laws, and the state’s election misinformation prevention laws.
The advisory also delves into California’s data protection laws and the role they play in AI development and use cases.
AI developers and users that collect and use Californians’ personal information must comply with CCPA’s protections for consumers, including by ensuring that their collection, use, retention, and sharing of consumer personal information is reasonably necessary and proportionate to achieve the purposes for which the personal information was collected and processed. (Id. § 1798.100.) Businesses are prohibited from processing personal information for non-disclosed purposes, and even the collection, use, retention, and sharing of personal information for disclosed purposes must be compatible with the context in which the personal information was collected. (Ibid.) AI developers and users should also be aware that using personal information for research is also subject to several requirements and limitations. (Id. § 1798.140(ab).) A new bill signed into law in September 2024 confirms that the protections for personal information in the CCPA apply to personal information in AI systems that are capable of outputting personal information. (Civ. Code, § 1798.140, added by AB 1008, Stats. 2024, ch. 804.) A second bill expands the definition of sensitive personal information to include “neural data.” (Civ. Code, § 1798.140, added by SB 1223, Stats. 2024, ch. 887.)
The California Invasion of Privacy Act (CIPA) may also impact AI training data, inputs, or outputs. CIPA restricts recording or listening to private electronic communication, including wiretapping, eavesdropping on or recording communications without the consent of all parties, and recording or intercepting cellular communications without the consent of all parties. (Pen. Code, § 630 et seq.) CIPA also prohibits use of systems that examine or record voice prints to determine the truth or falsity of statements without consent. (Id. § 637.3.) Developers and users should ensure that their AI systems, or any data used by the system, do not violate CIPA.
California law contains heightened protection for particular types of consumer data, including education and healthcare data that may be processed or used by AI systems. The Student Online Personal Information Protection Act (SOPIPA) broadly prohibits education technology service providers from selling student data, engaging in targeted advertising using student data, and amassing profiles about students, except for specified school purposes. (Bus. & Prof. Code, § 22584 et seq.) SOPIPA applies to services and apps used primarily for “K-12 school purposes.” This includes services and apps for home or remote instruction, as well as those intended for use at a public or private school. Developers and users should ensure any educational AI systems comply with SOPIPA, even if they are marketed directly to consumers.
The advisory also cites the state’s Confidentiality of Medical Information Act (CMIA) which governs how patient data is used, as well as the required disclosures before that data can be shared with outside companies.
The AG’s notice concludes by emphasizing the need for AI companies to remain vigilant about the various laws and regulations that may impact their work.
Beyond the laws and regulations discussed in this advisory, other California laws—including tort, public nuisance, environmental and business regulation, and criminal law—apply equally to AI systems and to conduct and business activities that involve the use of AI. Conduct that is illegal if engaged in without the involvement of AI is equally unlawful if AI is involved, and the fact that AI is involved is not a defense to liability under any law.
This overview is not intended to be exhaustive. Entities that develop or use AI have a duty to ensure that they understand and are in compliance with all state, federal, and local laws that may apply to them or their activities. That is particularly so when AI is used or developed for applications that could carry a potential risk of harm to people, organizations, physical or virtual infrastructure, or the environment.
Conclusion
The AG’s notice serves as a warning shot to AI firms, emphasizing that they are not above existing law just because they are creating industry-defining technology.
Many legal issues surrounding AI are currently being decided in the court system, although some experts fear AI companies are moving so fast that any legal decisions clarifying the legality of their actions may come too late to have any appreciable effect.
California, at least, appears to be taking a tougher stance, putting firms on notice that they must adhere to existing law, or face the consequences.
from WebProNews https://ift.tt/figu4XN