AI lobbying spikes 185% as calls for regulation surge


Artificial intelligence-related lobbying reached new heights in 2023, with more than 450 organizations participating — up 185% from the 158 organizations that lobbied on AI the year before, according to federal lobbying disclosures analyzed by OpenSecrets on behalf of CNBC.

The spike in AI lobbying comes amid growing calls for AI regulation and the Biden administration’s push to begin codifying those rules. Companies that began lobbying in 2023 to have a say in how regulation might affect their businesses include TikTok owner ByteDance, Tesla, Spotify, Shopify, Pinterest, Samsung, Palantir, Nvidia, Dropbox, Instacart, DoorDash, Anthropic and OpenAI.

The hundreds of organizations that lobbied on AI in 2023 ran the gamut from Big Tech and AI startups to pharmaceuticals, insurance, finance, academia, telecommunications and more. Until 2017, the number of organizations reporting AI lobbying stayed in the single digits, per the analysis. The practice grew steadily in the years since, then exploded in 2023.

More than 330 organizations that lobbied on AI last year had not done the same in 2022. The data showed a range of industries as new entrants to AI lobbying: chip companies such as AMD and TSMC, venture firms such as Andreessen Horowitz, biopharmaceutical companies such as AstraZeneca, conglomerates such as Disney, and AI training data companies such as Appen.

Organizations that reported lobbying on AI issues in 2023 also typically lobby the government on a range of other issues. In total, they reported spending more than $957 million lobbying the federal government in 2023 on issues including, but not limited to, AI, according to OpenSecrets.

In October, President Joe Biden issued an executive order on AI, the U.S. government’s first action of its kind, requiring new safety assessments, equity and civil rights guidance and research on AI’s impact on the labor market. The order tasked the U.S. Department of Commerce’s National Institute of Standards and Technology, or NIST, with developing guidelines for evaluating certain AI models, including testing environments for them, and with sharing responsibility for developing “consensus-based standards” for AI.

After the executive order’s unveiling, lawmakers, industry groups, civil rights organizations, labor unions and others rushed to dig into the 111-page document, taking note of its priorities, specific deadlines and the wide-ranging implications of the landmark action.

One core debate has centered on the question of AI fairness. Many civil society leaders told CNBC in November that the order does not go far enough in recognizing and addressing real-world harms that stem from AI models, especially those affecting marginalized communities. But they said it’s a meaningful step.

Since December, NIST has been collecting public comments from businesses and individuals about how best to shape these rules; the public comment period ends Friday. In its request for information, NIST specifically asked respondents to weigh in on developing responsible AI standards, testing AI systems for vulnerabilities, managing the risks of generative AI, and helping to reduce the risk of “synthetic content,” which includes misinformation and deepfakes.

CNBC’s Mary Catherine Wellons and Megan Cassella contributed reporting.

Article originally published by www.cnbc.com.