Artificial Intelligence is a double-edged sword for cybersecurity. For example, a key talking point at the RSA Conference 2023, as cited on techtarget.com, was the multifaceted impact of OpenAI’s GPT-4 on cybersecurity. The conference’s speakers explored the potential duality of ChatGPT’s use in cybersecurity, forecasting a surge in code reuse and attacks.
While some industry experts noted AI’s positive applications in enhancing cybersecurity measures, it’s undeniable that the language model has introduced novel cybersecurity threats, including disinformation generation and the design of social engineering attacks.
Weighing the Potential of ChatGPT in Cybersecurity
Notably, generative AIs like ChatGPT introduce a unique cybersecurity threat, especially through AI-powered phishing scams. A report from Harvard Business Review suggests that IT teams need tools to detect AI-created emails, training for employees on cybersecurity prevention skills, and government oversight for AI usage in cybersecurity.
There’s a potential for hackers to exploit ChatGPT to generate hacking code, a novel cybersecurity threat. As such, the need for cybersecurity professionals to acquire AI skills and stay updated with the technology is more vital than ever.
Scammers are using AI to pose as prospective partners, friends, or even government agents, making fraud detection more challenging. Experts warn of an escalating problem as fraudulent messages become nearly indistinguishable from genuine ones and urge individuals to stay vigilant.
Navigating Cybercrime 3.0
Haywood Talcove, CEO of LexisNexis Risk Solutions, labels this new age of fraud as “crime 3.0,” in which advanced technologies have the potential to bypass conventional security measures, as quoted on Yahoo Finance. In fact, recent data from the Federal Trade Commission suggests that fraud increased by 19% in 2022, totaling roughly $8.8 billion in losses, per the same source.
However, Kathy Stokes, AARP’s director of Fraud Watch Network, points out that the real scope of fraud is likely much larger, as many online scams go unreported. She emphasizes that while generative AI has increased the sophistication of scams, it’s not just older adults who are targeted; younger individuals are also falling prey to fraud.
What Does the Future Hold?
Software developers are urged to create generative AI specifically for human-operated Security Operations Centers (SOCs), and there are calls for stricter regulations regarding AI usage. The Biden administration’s recently released “Blueprint for an AI Bill of Rights” becomes even more crucial in the wake of ChatGPT’s launch.
ETFs to Gain
Whether for better or worse, the emergence of advanced generative AI could catalyze a boost in cybersecurity stocks. Investors are likely to keep an eye on ETFs such as the ETFMG Prime Cyber Security Fund HACK, First Trust NASDAQ Cybersecurity ETF CIBR, iShares Cybersecurity & Tech ETF IHAK, and WisdomTree Cybersecurity Fund WCBR.
(Disclaimer: This article has been written with the assistance of Generative AI. However, the author has reviewed, revised, supplemented, and rewritten parts of this content to ensure its originality and the precision of the incorporated information.)
The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.
Image and article originally from www.nasdaq.com.