
Marijan Hassan - Tech Journalist

How OpenAI is using user prompts to identify and deter bad actors


In a new report analyzing emerging trends in AI and cybersecurity, OpenAI revealed that bad actors have been using ChatGPT prompts to research vulnerabilities and design malicious campaigns. At the same time, the AI startup has been leveraging those prompts to gain crucial insights into the systems being targeted and the tools cybercriminals are testing.



OpenAI said it had disrupted 20 covert influence campaigns and networks that sought to use AI to spread discord or compromise systems. "These cases allow us to begin identifying the most common ways in which threat actors use AI to increase their efficiency or productivity," OpenAI explained.


Case 1: SweetSpecter’s attack

One case detailed in the report involved a suspected China-based adversary known as SweetSpecter, which launched a spear-phishing campaign targeting OpenAI and various government entities.


The group posed as a ChatGPT user seeking help with platform issues and then attached a malware-laden file to their emails. If opened, the attachment would have deployed malware called SugarGh0st RAT, which could give SweetSpecter control of the target machine, allowing it to execute commands, capture screenshots, and exfiltrate data.


Fortunately, OpenAI’s spam filter caught the malicious emails before they reached employees. OpenAI says it traced the attack back to SweetSpecter’s ChatGPT prompts, which included requests for:


"Themes that government department employees would find interesting"

"Good names for attachments to avoid being blocked"


SweetSpecter also asked ChatGPT about "vulnerabilities" in various apps and "for help finding ways to exploit infrastructure belonging to a prominent car manufacturer," OpenAI said.


Case 2: CyberAv3ngers

Another notable case involved CyberAv3ngers, a group suspected to be linked to the Iranian armed forces and known for its attacks on critical infrastructure in the U.S., Israel, and Ireland. By monitoring the group’s ChatGPT activity, OpenAI was able to identify additional technologies and software that CyberAv3ngers might exploit in future attacks, including vulnerabilities in water, energy, and manufacturing systems.


OpenAI’s efforts also uncovered new activity from an Iranian threat actor group, STORM-0817. The group appeared to be using AI tools for the first time to enhance its reconnaissance and exploit-development capabilities.


One of their ChatGPT prompts, for example, sought help with debugging code designed to scrape Instagram profiles, which OpenAI confirmed was being tested on an Iranian journalist critical of the government. By tracking these prompts, OpenAI was able to identify and disrupt STORM-0817’s efforts before they became fully operational.


AI and cybersecurity in a new era

OpenAI’s report shines a light on the double-edged nature of AI in cybersecurity. While bad actors are using AI tools like ChatGPT to enhance their attacks, those same tools are giving defenders unprecedented visibility into attackers’ tactics and strategies. This has allowed OpenAI to proactively disrupt cyber campaigns and alert the relevant authorities before the threats fully materialize. The approach could serve as a model for how AI companies can mitigate the risks posed by cybercriminals leveraging these powerful tools.
