About
AI Security & Cybersecurity | Adversarial ML | Secure AI Systems
I'm an engineer working at the intersection of machine learning and cybersecurity — building tools, studying attack techniques, and documenting what I learn.
My background spans computer science, secure systems design, and hands-on ML engineering. I came to AI security because I saw a clear gap: most security teams don't understand AI deeply enough to secure it, and most AI engineers don't understand security deeply enough to build it safely. That gap is where serious work needs to happen — and where I intend to operate.
I build in public: every project, writeup, and experiment is open on GitHub. I believe the best way to develop expertise quickly is to work on real problems, document everything, and publish relentlessly. This portfolio is the evidence.
Tech Stack
LLM & AI Frameworks
ML / Adversarial ML
Security Tooling
Infrastructure & MLOps
SIEM & Data
Frameworks & Standards
Principles
Build in Public
Every project, experiment, and failure is documented and published. Transparency builds trust, and the process is often more valuable than the outcome.
Research → Build → Break
Understand the theory. Implement the system. Then attack it. The cycle of building and breaking produces real security knowledge — not just awareness.
Depth Over Breadth
I'd rather fully understand three attack techniques than superficially know twenty. Employers can tell the difference. So can adversaries.
Security is a System Property
Security isn't a feature you bolt on. It emerges from design decisions made throughout the system. I design for security from the first line of code.
Contact
I'm actively building toward AI security roles and open to conversations about collaboration, research, or opportunities. Reach out any time.
GitHub
github.com/paulturner
LinkedIn
linkedin.com/in/paulturner
Twitter / X
@paulturnerdev
Email
paul@example.com
Let's Connect
If you're working on or interested in LLM security, adversarial ML, AI red teaming, or secure AI system design — feel free to reach out.