Jailbreaking, cyberattacks, and ethical red lines: ChatGPT’s risks, and how a human-in-the-loop helps

Below, we share our assessment of some anticipated risks of Large Language Models (LLMs) like ChatGPT: the "jailbreaking" of content filters, and weaponisation through copycat technology and cyberattacks. We then explain why, in light of these risks, a human-in-the-loop (HITL) remains necessary.