VIRAAJI MOTHUKURI

> AI RESEARCHER // PHD CANDIDATE // EX-VP DATA SCIENCE @ JPMC

// 01 Research

Research Focus

Causal foundations, security, and safety of AI systems.

01

Causal AI & Agent Safety Verification

Applying causal inference to verify and audit AI agent behavior. Developing methods for causal structure discovery in learned representations, temporal causal analysis of black-box agents, and identifiability frameworks that bridge causal reasoning with deployed AI systems.

02

Agentic AI Security

Analyzing security threats in autonomous AI agents that plan, use tools, and interact with external systems. Research spans vulnerability discovery in multi-agent coordination, attack surface analysis for tool-using agents, and causal approaches to detecting adversarial manipulation of agent behavior.

03

LLM Security & Alignment

Developing frameworks for evaluating and hardening Large Language Models against adversarial attacks, alignment faking, and deceptive behavior. Focus on automated vulnerability detection, jailbreak resilience, and methods for verifying that safety training holds under distribution shift.

04

World Models & Safety Verification

Investigating how learned world models can serve as safety mechanisms for AI agents and how their causal structure can be extracted and verified. Exploring adversarial robustness of world models and their role in predicting consequences before agent action execution.

05

Post-Quantum Cryptography & AI

Researching the intersection of quantum computing threats and AI-driven security solutions. Focus on AI-powered cryptographic inventory discovery, risk assessment for quantum-vulnerable systems, and automated migration strategies to NIST-standardized post-quantum algorithms.

// 02 About

About

I am a Computer Science PhD candidate at Kennesaw State University, working at the intersection of causal inference, AI security, and safety. My research focuses on developing principled methods to understand, verify, and secure AI systems, from causal structure discovery in learned agent representations to adversarial robustness of large language models.

With several years of industry experience, including my role as Data Scientist Lead and Vice President at JPMorgan Chase, I bring a perspective that bridges foundational research with real-world deployment. My work at JPMC on insider trading detection using machine learning resulted in a granted US patent. My research has been recognized with multiple awards, including the IEEE Internet of Things Journal Best Paper Award (2025), the IEEE Blockchain Best Paper Award (2024), and the FGCS Best Paper Award (2022).

My current research spans causal AI for agent safety verification, agentic AI security, LLM alignment and robustness, world models as safety mechanisms, and the security implications of quantum computing for deployed AI infrastructure.

// 03 Safety & Alignment

// ADVISORY: AI SAFETY, ALIGNMENT & SECURITY

AI Safety & Alignment Research

[DECEPTIVE ALIGNMENT] [CAUSAL VERIFICATION]

Alignment Verification & Deceptive Behavior

  • Developing methods to identify when AI systems exhibit strategic deception or alignment faking behaviors
  • Applying causal inference to verify whether safety training produces genuine behavioral change or shallow compliance
  • Creating frameworks to test AI behavior consistency across different contexts and prompting strategies
  • Investigating alignment faking as a capability-dependent phenomenon in large language models

[AGENTIC SECURITY] [ADVERSARIAL ROBUSTNESS]

Agentic AI & LLM Security

  • Analyzing security vulnerabilities in autonomous AI agents that use tools, access data, and interact with external systems
  • Developing robust defenses against prompt injection and jailbreak attacks in LLMs
  • Developing temporal causal discovery methods for extracting objectives and predicting vulnerabilities in black-box agents
  • Ensuring integrity and security of AI model pipelines from training to deployment
  • Investigating world model manipulation as a novel attack vector against planning agents

AI SECURITY RESEARCH HUB

Access our collection of AI security research, vulnerability assessments, and defensive techniques.


KEY RESEARCH CONTRIBUTIONS

  • Developed novel techniques for detecting alignment faking in LLMs with 94% accuracy
  • Created automated vulnerability scanning tools for production AI systems
  • Published frameworks for quantifying AI system trustworthiness and reliability
  • Established benchmarks for evaluating robustness against adversarial attacks
  • Contributed to industry standards for secure AI deployment in critical infrastructure

// 04 Experience

Professional Experience

AUG 2023 — PRESENT Kennesaw State University

Research Assistant (Doctoral Candidate)

Leading research on AI security, causal inference for agent safety, and LLM alignment. Developing causal structure discovery methods for verifying learned agent representations, creating automated security testing frameworks, and investigating world models as safety mechanisms for autonomous agents.

JUL 2021 — AUG 2023 JPMorgan Chase

Data Scientist Lead, Vice President

Led award-winning, patented work on trade surveillance. Integrated news, market, and trade data to identify suspicious trading activity. Architected ML pipelines on AWS with MLOps practices and applied NLP to insider trading detection.

AUG 2019 — JUL 2021 Kennesaw State University

Research Assistant

Conducted research on federated learning, blockchain integration, and ML model quantization. Published multiple papers on the security and privacy of federated learning and worked with frameworks such as PySyft and TensorFlow Federated.

OCT 2016 — AUG 2019 JPMorgan Chase

Data Specialist

Supported the Emerging Payments division and the ChasePay app. Managed the lifecycle and reconciliation of user data across multiple databases, automated routine tasks, and developed processes for knowledge transfer.

// 05 Publications

Publications

Selected work organized by publication year.

[2] V Mothukuri (2025)

"GRP-071 Next-Generation DAPPs Development with Self-Service AI Agents"


[3] V Mothukuri et al. (2025)

"AgentFL: AI-Orchestrated Agents for Federated Learning"

IEEE International Conference on Distributed Computing Systems (ICDCS)

[7] V Mothukuri, RM Parizi, S Pouriyeh, A Mashhadi (2022)

"CloudFL: A Zero-Touch Federated Learning Framework for Privacy-Aware Sensor Cloud"

17th International Conference on Availability, Reliability and Security

CITED: 4

[14] V Mothukuri (2021)

"Federated Learning for Secure Sensor Cloud"


// 06 Skills

System Specs

AI & ML: Causal Inference, AI Safety & Alignment, LLM Security, Agentic AI, World Models, Federated Learning, Blockchain Security
Programming: Python, Java, Go, C/Pro*C, Shell Scripting, JavaScript
Frameworks: PyTorch, TensorFlow, Keras, scikit-learn, Docker, Kubernetes
Cloud & Security: AWS, Google Cloud, Post-Quantum Cryptography, Hyperledger Fabric, Cybersecurity, IoT Security

// 07 Awards

Awards & Recognition

[2025] IEEE Internet of Things Journal Best Paper Award
[2024] IEEE Blockchain Best Paper Award
[2022] FGCS Best Paper Award
[JPMC] American Financial Technology Award — Best Compliance Initiative
[JPMC] Fintech Futures Banking Tech Award — Best Use of RegTech
[KSU] Best PhD Student — Kennesaw State University
[JPMC] Shining Star Award — JPMorgan Chase