AI Security Researcher

Viraaji Mothukuri

AI Security Researcher | PhD Candidate | Former VP as a Data Scientist at JP Morgan Chase

Advancing security frameworks and risk assessment methodologies for Large Language Models & decentralized AI.

Viraaji Mothukuri
PhD Candidate, Computer Science Kennesaw State University

Selected Focus Areas

  • Jailbreak resilience & prompt security
  • Adversarial robustness in production
  • Privacy leakage & membership inference defenses
  • Model backdoor detection & integrity

Research Snapshot

Key Work in Focus

A quick view of current AI security workstreams.

LLM Security & Risk Assessment

Developing comprehensive frameworks for evaluating and mitigating security vulnerabilities in Large Language Models.

Smart Contract Security Auditing

AI-powered tools for automated smart contract analysis using annotated control flow graphs to find vulnerabilities.

AI Safety & Robustness

Methods for reliability, anomaly detection, and robust deployment strategies for AI systems in regulated industries.

About

About Me

Background and research journey in AI security.

I am a Computer Science PhD candidate at Kennesaw State University, specializing in advancing security frameworks and risk assessment methodologies for Large Language Models and Generative AI systems. My research focuses on developing innovative approaches to AI security, with particular emphasis on automated vulnerability detection and robust deployment strategies.

With several years of industry experience, including my role as Vice President and Data Scientist Lead at JP Morgan Chase, I bring a unique perspective that bridges cutting-edge research with practical applications. My work has been recognized with multiple awards, including the IEEE Blockchain Best Paper Award 2024 and the FGCS Best Paper Award 2022.

My current research encompasses creating systematic methodologies for quantifying and mitigating potential risks in AI systems, implementing machine learning algorithms for anomaly detection, and developing real-time monitoring solutions for deployed LLM systems.

Research

Current Research

Focused on security, robustness, and trustworthy AI systems.

LLM Security & Risk Assessment

Developing comprehensive frameworks for evaluating and mitigating security vulnerabilities in Large Language Models. Focus on automated vulnerability detection, adversarial attack resistance, and risk scoring methodologies for production LLM systems.

Smart Contract Security Auditing

Creating AI-powered tools for automated smart contract security analysis. Leveraging LLMs with annotated control flow graphs to identify vulnerabilities, security patterns, and potential exploits in blockchain-based applications.

AI Safety & Robustness

Developing methods for ensuring AI system reliability and safety in critical applications. Focus on anomaly detection, model interpretability, and creating robust deployment strategies for AI systems in regulated industries.

AI Alignment & Security

AI Alignment & Security Research

Examining deception, alignment faking, and defensive methods.

AI Alignment Faking & Deceptive Behavior

Recent research reveals concerning behaviors in advanced AI systems, particularly in alignment faking where models pretend to follow safety guidelines while potentially harboring different objectives. My research investigates:

  • Deceptive Alignment Detection: Developing methods to identify when AI systems are exhibiting strategic deception or alignment faking behaviors.
  • Goal Misrepresentation: Analyzing cases where LLMs misrepresent their true objectives to bypass safety measures.
  • Behavioral Consistency Testing: Creating frameworks to test AI behavior consistency across different contexts and prompting strategies.
  • Sandbagging Detection: Identifying when models deliberately underperform to avoid detection or additional safety measures.

Current AI Security Research Focus

My ongoing research addresses critical security challenges in modern AI systems, with emphasis on:

  • Jailbreak Resilience: Developing robust defenses against prompt injection and jailbreak attacks in LLMs.
  • Adversarial Robustness: Creating methods to detect and mitigate adversarial examples in production systems.
  • Privacy Leakage Prevention: Implementing techniques to prevent training data extraction and membership inference attacks.
  • Model Backdoor Detection: Identifying and removing hidden triggers and backdoors in pre-trained models.
  • Supply Chain Security: Ensuring integrity and security of AI model pipelines from training to deployment.

AI Security Research Hub

Access our comprehensive collection of AI security research, vulnerability assessments, and defensive techniques through our interactive platform.

Explore Research Platform

Key Research Contributions

  • Developed novel techniques for detecting alignment faking in LLMs with 94% accuracy.
  • Created automated vulnerability scanning tools for production AI systems.
  • Published frameworks for quantifying AI system trustworthiness and reliability.
  • Established benchmarks for evaluating robustness against adversarial attacks.
  • Contributed to industry standards for secure AI deployment in critical infrastructure.

Experience

Professional Experience

Research leadership across academia and finance.

August 2023 - Present

Kennesaw State University

Research Assistant (Doctoral Candidate)

Leading research on AI Security and LLM Risk Scoring. Developing systematic methodologies for quantifying and mitigating risks in AI systems, creating automated security testing frameworks, and designing statistical models for risk prediction and anomaly detection.

July 2021 - August 2023

JP Morgan Chase

Data Scientist Lead, Vice President

Led award-winning and patent-pending work on trade surveillance. Integrated news, market, and trade data to identify suspicious trading activity. Architected ML pipelines on AWS cloud with MLOps implementation and applied NLP for insider trading detection.

August 2019 - July 2021

Kennesaw State University

Research Assistant

Conducted research on Federated Learning, Blockchain integration, and ML model quantization. Published multiple papers on security and privacy of federated learning, worked with frameworks like PySyft and TensorFlow Federated.

October 2016 - August 2019

JP Morgan Chase

Senior Associate

Supported the Emerging Payments division and ChasePay app. Managed lifecycle and reconciliation of user data across multiple databases, automated mundane tasks, and developed innovative strategies for knowledge transfer.

Publications

Publications

Selected work organized by publication year.

GRP-071 Next-Generation DAPPs Development with Self-Service AI Agents

V Mothukuri

2025

Cloudfl: a zero-touch federated learning framework for privacy-aware sensor cloud

V Mothukuri, RM Parizi, S Pouriyeh, A Mashhadi

17th International Conference on Availability, Reliability and Security | Citations: 4

Federated Learning for Secure Sensor Cloud

V Mothukuri

2021

Skills

Technical Skills

Tools, languages, and platforms used in my work.

AI & ML

AI Security LLM Risk Scoring Federated Learning Blockchain Hyperledger Fabric

Programming

Python Java Go C/Pro C Shell Scripting JavaScript

Frameworks & Tools

PyTorch TensorFlow Keras scikit-learn Docker Kubernetes

Cloud & Security

AWS Google Cloud Blockchain Hyperledger Fabric Cybersecurity IoT Security

Awards

Awards & Recognition

Honors highlighting research and industry impact.

IEEE Internet of Things Journal Best Paper Award

2025

IEEE Blockchain Best Paper Award

2024

FGCS Best Paper Award

2022

American Financial Technology Award

Best Compliance Initiative

Fintech Futures Banking Tech Award

Best Use of RegTech

Best PhD Student

Kennesaw State University

Shining Star Award

JPMorgan Chase