Samuel Nellessen

How do we make superhuman LLM agents fail-proof under adversarial stress? Samuel G. Nellessen is an AI Safety Researcher at KachmanLab using RL to autonomously discover LLM failure modes. An ARENA 5.0 alumnus, he develops verifiable automated jailbreaking frameworks to test model robustness in tool-augmented environments.
