Why Sound Regulations Are Key to Establishing Trust in AI Adoption


By Bill Rowan, Vice President of Public Sector, Splunk

Artificial intelligence, machine learning and large language models (AI/ML/LLMs) have taken the world by storm. Just over a year ago, the launch of several free, public-facing language models captured the imagination of people around the world and helped them envision a future where generative AI (GenAI) is ubiquitous. Now, 14 months later, dozens of GenAI tools are available to businesses, consumers and content creators, and people are growing more wary about the implications that the accelerated adoption and growth of AI/ML/LLMs hold for our lives, businesses and society as a whole.

It’s important to note that AI is no ordinary technology. While email, the cloud, mobile devices and other recent innovations changed how we work and interact with each other, AI/ML/LLMs have the potential to change nearly every aspect of our lives, including how we think. That potential is rightfully raising concerns among organizations implementing the technology about the trust, reliability and privacy of AI tools.

These concerns need to be addressed, on a global scale and as quickly as possible, before the next generation of GenAI tools is unleashed.

Accelerated growth is a necessity and a concern

The pace of AI's development and adoption has been phenomenal. According to a recent survey conducted by Foundry on behalf of Splunk, nearly every public and private organization is currently implementing or planning to implement AI in its workflows. Every one of the more than 200 IT leaders surveyed at public organizations and private businesses indicated that the technology is at least on their radar.

These leaders are overseeing the implementation of AI in their organizations to increase productivity, empower innovation, enhance goods and services, and improve customer experiences. Nearly all respondents, however, have concerns. Almost half of those in the public sector cited insufficient trust as the leading obstacle to expanding AI use in their organizations, while in the private sector, system reliability was the main concern. IT leaders across both public and private organizations also cited data privacy and security issues as leading roadblocks to further adoption.

A concerted effort to rein in AI/ML/LLMs

Public and private organizations are already keeping an eye on the risk AI poses to their business. While cybercriminals increasingly use AI to launch more sophisticated attacks at enormous scale, security teams are fighting fire with fire: nearly 80% of organizations reported that they were already addressing cybersecurity priorities with AI, including AI-powered cybersecurity tools that monitor for abnormal behavior, collect threat intelligence and help organizations put incident response plans in place that minimize downtime.
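To make "monitoring for abnormal behavior" concrete, here is a minimal sketch in Python. It is purely illustrative and assumes nothing about Splunk's products: the flag_anomalies function, the simple z-score baseline and the sample data are all hypothetical, and real AI-driven security tools use far richer models than this.

```python
# Illustrative sketch: flagging abnormal behavior against a statistical
# baseline. Real AI-powered security tools use far more sophisticated
# models; this only shows the underlying idea.
from statistics import mean, stdev

def flag_anomalies(event_counts, threshold=2.0):
    """Return indices of hourly event counts that deviate from the baseline.

    event_counts: hourly counts of some activity, e.g. failed logins.
    threshold: how many standard deviations from the mean counts as abnormal.
    """
    mu = mean(event_counts)
    sigma = stdev(event_counts)
    if sigma == 0:  # perfectly flat baseline; nothing stands out
        return []
    return [i for i, count in enumerate(event_counts)
            if abs(count - mu) / sigma > threshold]

# Example: a quiet baseline with one burst of failed logins in hour 5.
counts = [4, 6, 5, 7, 5, 180, 6, 5]
print(flag_anomalies(counts))  # -> [5]
```

In production, a flagged hour like this would feed an alerting or incident response workflow rather than a print statement; the point is only that "abnormal" is defined relative to a learned baseline of normal activity.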

However, despite this progress, nearly all respondents in the Splunk survey agreed that new guidelines and an ethical framework are necessary as adoption of these tools continues to grow. Not surprisingly, IT leaders favor an approach that applies global ethical principles rather than relying on individual nation-states to regulate the use of AI within their borders.

Sound regulations needed quickly

The accelerated adoption of AI is generating both excitement and concern among stakeholders. While nearly every public and private organization is using these new tools to increase productivity, drive innovation, enhance goods and services, and improve experiences, C-suite leaders have questions about how safe, reliable and secure the tools really are. As a result, IT leaders are calling for a clear body of principles and rules specific to AI technology.

President Biden’s recent Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence is a step in the right direction. But because an executive order can only get us so far, the U.S. needs sound new legislation as soon as possible to create AI-specific guardrails.

Bill Rowan is Vice President of Public Sector, Splunk.

The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.
