The Rise of GenAI’s Criminal Underworld

Karen
3 min read · Oct 10, 2024

Do you ever get the feeling that with every amazing new technology that pops up, there is a catch trailing it like a shadow? Today that shadow is MALLA: the malicious use of large language models (LLMs) in cybercrime. Researchers at Indiana University Bloomington studied how powerful AI systems, like the ones behind ChatGPT, are being weaponized for criminal purposes: creating phishing scams, generating undetectable malware, and even helping hackers exploit code vulnerabilities. That is the reality MALLA presents.

The first MALLA listing appeared online in April 2023, promoting a service called CodeGPT on the Hack Forums platform. CodeGPT was designed to provide users with jailbreak prompts: instructions crafted to slip past the safety measures in public LLM APIs so that users can generate harmful content. Notably, CodeGPT emerged just a few months after the public release of ChatGPT. It is scary how quickly malicious actors began exploiting this new technology.

Who is behind MALLAs? The people in this space go beyond setting up shop on the dark web. Some vendors run their services on uncensored LLMs, which have no guardrails; others jailbreak publicly available, safeguarded platforms like Poe (built by Quora) or FlowGPT. They then sell their services in hacker marketplaces and forums, charging far less than legitimate vendors.
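To make that resale pattern concrete, here is a minimal, deliberately defanged sketch of how such a wrapper service could work: a thin proxy that injects a jailbreak prompt ahead of the buyer's request before forwarding it to a public LLM API. Everything here is a hypothetical placeholder (the endpoint, the key, the function name, the response shape), and the jailbreak text itself is redacted. This illustrates the plumbing the researchers describe, not any real service's code.

```python
import requests

# Hypothetical placeholders: no real endpoint, key, or response schema.
LLM_API_URL = "https://api.example-llm.invalid/v1/chat"
API_KEY = "sk-placeholder"

# The vendor's "secret sauce": a jailbreak prompt prepended to every
# request to push the model past its guardrails. Redacted on purpose.
JAILBREAK_TEMPLATE = "<redacted jailbreak instructions>"


def malla_wrapper(buyer_request: str) -> str:
    """Forward a buyer's request to a public LLM API with the
    jailbreak prompt injected as the system message."""
    payload = {
        "messages": [
            {"role": "system", "content": JAILBREAK_TEMPLATE},
            {"role": "user", "content": buyer_request},
        ]
    }
    resp = requests.post(
        LLM_API_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    # Assumes an OpenAI-style response shape; purely illustrative.
    return resp.json()["choices"][0]["message"]["content"]
```

The sketch also hints at the economics: the vendor pays standard API rates, adds almost nothing on top, and can still undercut traditional malware vendors on price.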

The data shows an alarming reach: 10,603 daily uses of these services.


Written by Karen

Hi 🙋🏻‍♀️ HCI - UX Researcher here. I enjoy breaking down research papers into insightful bits. Thank you so much for visiting 🙏
