APWG eCrime 2025 Training Sessions

Expert trainers from APWG member institutions and research correspondents assemble on Monday between 12:00 and 5:30 PM for sessions on agentic systems for web security, practical API integration with the eCrimeX data clearinghouse, AI phishing defenses, and threat modeling for anti-abuse.

3:00-5:30 PM / Room Assignment: Britannia  

How to Build Agentic Systems to Automate Web Security

Mohamed Nabeel, Palo Alto Networks

 

AI agents are revolutionizing security operations by automating complex workflows and taking actions autonomously. Cybersecurity analysts and researchers spend considerable time and effort manually analyzing web pages that existing detectors cannot resolve, in order to assess whether they are malicious or phishing. 

 

What if we could use AI agents to assess such pages autonomously? In this workshop, starting from the preliminaries of generative AI, we show how to build an agentic AI system using the LangGraph framework. The audience will be introduced to the foundational concepts of LLMs, prompting, and LLM agents. Diving deeper, we will explore popular agent planning patterns such as reflection and ReAct (Reasoning and Acting), agent tool calling with MCP (Model Context Protocol), agent communication via A2A (Agent-to-Agent), agent evals (evaluations), and securing agents. 
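
To make the ReAct pattern concrete ahead of the session, here is a minimal illustrative sketch of the reason-act loop in plain Python; call_llm and tools are hypothetical stand-ins for an LLM client and a tool registry, not part of the workshop materials:

    # Minimal ReAct (Reasoning + Acting) loop, schematic only.
    # call_llm and tools are hypothetical stand-ins for a real LLM
    # client and a registry of callable tools.
    def react_loop(task, call_llm, tools, max_steps=5):
        transcript = f"Task: {task}\n"
        for _ in range(max_steps):
            # Reason: ask the model for its next thought and action.
            step = call_llm(transcript + "Next action as 'tool: arg', or 'FINISH: answer':")
            if step.startswith("FINISH"):
                return step.partition(":")[2].strip()
            tool_name, _, arg = step.partition(":")
            # Act: run the chosen tool, then feed the observation back.
            observation = tools[tool_name.strip()](arg.strip())
            transcript += f"{step}\nObservation: {observation}\n"
        return "no answer within the step budget"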

 

Equipped with these concepts, we will dive into building a practical, secure agentic system with LangGraph. We will share experience and lessons learned from building several agentic systems for the security domain. The knowledge gained in this session can be applied to a wide variety of cybersecurity tasks, such as threat hunting, cyber threat intelligence, and vulnerability analysis.
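
As a preview of the hands-on portion, the following is a minimal sketch of a two-node LangGraph graph for page triage, using the StateGraph API from recent LangGraph releases; the fetch_page and classify_page bodies are placeholder assumptions standing in for the real logic covered in the workshop:

    # Minimal LangGraph sketch: fetch a page, then classify it.
    # The node bodies are hypothetical placeholders.
    from typing import TypedDict
    from langgraph.graph import StateGraph, START, END

    class TriageState(TypedDict):
        url: str
        html: str
        verdict: str

    def fetch_page(state: TriageState) -> dict:
        # Placeholder: fetch and render the page at state["url"].
        return {"html": "<html>...</html>"}

    def classify_page(state: TriageState) -> dict:
        # Placeholder: prompt an LLM over state["html"] for a verdict.
        return {"verdict": "benign"}

    graph = StateGraph(TriageState)
    graph.add_node("fetch", fetch_page)
    graph.add_node("classify", classify_page)
    graph.add_edge(START, "fetch")
    graph.add_edge("fetch", "classify")
    graph.add_edge("classify", END)

    app = graph.compile()
    print(app.invoke({"url": "https://example.test"})["verdict"])

Each node returns only the state keys it updates, and LangGraph merges them into the shared state as execution moves along the edges.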

 

 

3:00-5:30 PM / Room Assignment: Cambria  

Practical API Integration: Connecting Applications to the eCrimeX Data Clearinghouse

Carlos Ramirez, APWG Engineering

 

This session introduces developers, data analysts, and technical researchers to the fundamentals of integrating with the eCrimex API. We’ll walk through how to authenticate, query, and interact with the platform’s data endpoints to retrieve and update information programmatically.

 

Attendees will learn:

• How the API is structured (endpoints, methods, authentication, response formats)

• How to make test calls using tools like Postman or cURL

• Example workflows for pulling and submitting data (a data-pull sketch follows this list)

• Common pitfalls and best practices for efficient API use
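
For orientation, here is a hedged Python sketch of what a programmatic pull might look like; the base URL, endpoint path, and bearer-token header are illustrative assumptions rather than the documented eCrimeX API, which the session will cover:

    # Illustrative only: base URL, path, and auth scheme are assumptions,
    # not the documented eCrimeX API.
    import requests

    API_BASE = "https://api.example.invalid/v1"  # hypothetical base URL
    TOKEN = "YOUR_API_TOKEN"                     # issued per account

    def fetch_reports(limit=25):
        # Query a hypothetical paginated listing endpoint.
        resp = requests.get(
            f"{API_BASE}/reports",
            headers={"Authorization": f"Bearer {TOKEN}"},
            params={"limit": limit},
            timeout=30,
        )
        resp.raise_for_status()  # surface HTTP errors early
        return resp.json()

    if __name__ == "__main__":
        for report in fetch_reports(limit=5):
            print(report)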

 

The session is designed for technical users who want to automate tasks, build integrations, or analyze platform data directly via the API. Participants will leave with working examples, API documentation pointers, and a clearer understanding of how to leverage the system’s capabilities in their own applications or research.

 


 

12:00-2:30 PM / Room Assignment: Britannia

Can LLMs Outsmart Phishers? A Reality Check on AI Defenses

Aaron Escamilla, NetSTAR / ALPS System Integration Co., LTD

 

As artificial intelligence reshapes the cyber threat landscape, defenders face a critical question: to what extent can large language models (LLMs) be trusted to detect and respond to phishing and social engineering attacks? This session presents a data-driven examination of the boundaries of AI sensitivity, discrimination, and safe autonomy in cyber defense.

 

Using curated datasets drawn from recognized and open-source lists, leading models including GPT‑4, Claude, and Perplexity were evaluated on their ability to classify and explain phishing websites. The models achieved accuracies ranging from 72 to 95 percent in detecting clear lures and brand impersonation pages, but dropped below 50 percent when faced with cloaked, obfuscated, or dynamically rendered content. 
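
For context on how such numbers are typically produced, here is a schematic evaluation-harness sketch; ask_model is a hypothetical wrapper around whichever LLM is under test, and no claim is made about the session's actual pipeline:

    # Schematic accuracy measurement; ask_model is a hypothetical
    # wrapper around the LLM under test.
    def evaluate(samples, ask_model):
        # samples: (page_content, label) pairs, label in {"phishing", "benign"}
        correct = 0
        for page, label in samples:
            prompt = f"Answer 'phishing' or 'benign' for this page:\n{page}"
            verdict = ask_model(prompt).strip().lower()
            correct += int(verdict == label)
        return correct / len(samples)  # accuracy over the dataset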

 

These inconsistencies highlight a broader truth: while AI demonstrates strong surface‑level understanding, it lacks the contextual reasoning necessary for reliable autonomous security decisions. The session explores the technical and ethical boundaries of AI as a cyber defense collaborator, identifying when automation strengthens protection, when it creates new vulnerabilities, and how continued human oversight remains vital to prevent unintended disruption.

 

Additionally, the session explores AI pipeline necessities and challenges, including model hallucination, the complexities of fine-tuning and pruning, prompt curation, and context engineering.

 

Through live demonstrations and comparative analysis, participants will gain a grounded understanding of the current capabilities and blind spots of LLMs in phishing detection, how bias and hallucination manifest in cybersecurity contexts, and strategies for maintaining the right balance between machine assistance and human judgment in digital defense operations.

 

In addition, drawing on comparative evaluations across multiple AI frameworks and curated phishing datasets, the research reveals clear strengths in detecting explicit lures and brand impersonation, but persistent weaknesses in ambiguous or dynamically obfuscated attacks. Patterns of false confidence and excessive caution expose how hallucination and bias can distort automated decisions in high‑stakes defensive workflows.

 

Through the lens of operational safety and governance, the session examines where AI‑driven automation can enhance protection, where it introduces new failure modes, and how much independent decision‑making can be responsibly authorized in digital security environments. 

 

Participants will gain a clear understanding of the trade‑off between automation and human oversight, along with practical guidance for integrating AI assistance into investigative, triage, and response workflows without compromising reliability or control.

 

 

12:00-2:30 PM / Room Assignment: Cambria  

Modeling for Anti-abuse: Threats, Risks, and Solutions

Laurin Weissinger, UC Berkeley

 

This training workshop will familiarize attendees with threat modeling, an approach for identifying, understanding, and addressing threats to systems or assets, with a focus on imagining, analyzing, and evaluating relevant abuse scenarios.

 

Threat modeling is used by global technology players and is broadly useful to analysts and practitioners, who benefit from the technical and non-technical understanding it produces; our focus, however, will be on applying this lens to understand how abuse works and how to build anti-abuse defenses.

 

The session will start with a presentation covering threat-modeling basics, but the focus will be on conducting a hands-on abuse-case threat modeling exercise: attendees will work through all the important steps of the process, allowing them to adapt and use the approach according to their needs.

 

Process steps examined include (a schematic ranking sketch follows the list): 

 

  • Asset identification and analysis
  • Dependencies interrogation
  • Isolation of entry points and exit points (e.g. interfaces)
  • Definition of trust levels and permissions
  • Data flow and architecture (process diagrams and analysis)
  • Determination of threats, ranking of threats, and prescription of countermeasures/mitigations
  • Resource prioritization
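
As a schematic example of the threat determination and ranking step, here is a small illustrative sketch in Python; the fields, scales, and likelihood-times-impact score are assumptions chosen for illustration, not a methodology prescribed by the session:

    # Illustrative threat-ranking sketch; the fields and the
    # likelihood-times-impact score are assumptions.
    from dataclasses import dataclass

    @dataclass
    class Threat:
        name: str
        entry_point: str  # interface where the abuse enters
        likelihood: int   # 1 (rare) .. 5 (frequent)
        impact: int       # 1 (minor) .. 5 (severe)

        @property
        def risk(self):
            return self.likelihood * self.impact

    threats = [
        Threat("Credential stuffing", "login API", likelihood=5, impact=4),
        Threat("Fake-listing spam", "listing form", likelihood=4, impact=3),
        Threat("Takeover via password reset", "reset flow", likelihood=2, impact=5),
    ]

    # Rank threats to direct countermeasures and resources.
    for t in sorted(threats, key=lambda t: t.risk, reverse=True):
        print(f"{t.risk:>2}  {t.name} (via {t.entry_point})")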

 

In this session, the training objective is to model intended use versus abuse, and to show how to design around that distinction to maximize enterprise cyber integrity.