Home About

Paul Turner

AI Security & Cybersecurity | Adversarial ML | Secure AI Systems

🔬

Quick Facts

Based in [Your Location]
Open to remote & on-site
Available from [Month Year]
Clearance-eligible (US citizen)

Currently

Building AI security portfolio
Publishing technical notes
Studying adversarial ML

Interested In

LLM security engineering
AI red teaming roles
AI safety research
MLSecOps / platform security

Background

I'm an engineer working at the intersection of machine learning and cybersecurity — building tools, studying attack techniques, and documenting what I learn.

My background spans computer science, secure systems design, and hands-on ML engineering. I came to AI security because I saw a clear gap: most security teams don't understand AI deeply enough to secure it, and most AI engineers don't understand security deeply enough to build it safely. That gap is where serious work needs to happen — and where I intend to operate.

I build in public: every project, writeup, and experiment is open on GitHub. I believe the best way to develop expertise quickly is to work on real problems, document everything, and publish relentlessly. This portfolio is the evidence.


Mission

mission.txt
AI systems are being deployed into critical infrastructure, healthcare,
finance, and national security — largely without the security rigor we
apply to other critical software.

My mission is to close that gap:
→ Build the defenses that protect AI systems from adversarial attack.
→ Develop the evaluation frameworks that make safety measurable.
→ Work at organisations doing the most important work in this space.

This portfolio is the evidence: practical work built and documented in the open.

Technical Skills

// Core AI Security

LLM Security & Prompt Defense75%
Adversarial ML60%
AI Threat Modeling70%
AI Red Teaming50%

// Engineering Stack

Python (ML/Security)85%
PyTorch / ML Frameworks70%
LangChain / LangGraph65%
Docker / CI/CD / MLOps70%

// Security & Infrastructure

Security Engineering75%
MITRE ATT&CK / ATLAS65%

Tools & Technologies

LLM & AI Frameworks

OpenAI APIAnthropic Claude LangChainLangGraph LlamaIndexHugging Face OllamavLLM

ML / Adversarial ML

PyTorchART (IBM) FoolboxCleverHans scikit-learnW&B JupyterNumPy

Security Tooling

PyRITGarak TrivyBandit Safety CLISemgrep Burp SuiteMITRE ATLAS

Infrastructure & MLOps

DockerGitHub Actions SOPSVault FastAPIPostgreSQL ChromaDBRedis

SIEM & Data

Elastic SIEMSplunk PandasStreamlit Grafana

Frameworks & Standards

MITRE ATT&CKMITRE ATLAS OWASP LLM Top 10NIST AI RMF STRIDE

How I Work

Build in Public

Every project, experiment, and failure is documented and published. Transparency builds trust, and the process is often more valuable than the outcome.

Research → Build → Break

Understand the theory. Implement the system. Then attack it. The cycle of building and breaking produces real security knowledge — not just awareness.

Depth Over Breadth

I'd rather fully understand three attack techniques than superficially know twenty. Employers can tell the difference. So can adversaries.

Security is a System Property

Security isn't a feature you bolt on. It emerges from design decisions made throughout the system. I design for security from the first line of code.

Get In Touch

I'm actively building toward AI security roles and open to conversations about collaboration, research, or opportunities. Reach out any time.

Let's Connect

If you're working on or interested in LLM security, adversarial ML, AI red teaming, or secure AI system design — feel free to reach out.