Smart Honeypot Network with Autonomous Deception

Team Members

Noah Raj

Course Instructor

Pramod Gupta

Lead Team Member Affiliation

Computer Science

Abstract

Cybersecurity systems traditionally rely on static defensive strategies such as firewalls, intrusion-detection systems, and conventional honeypots that passively record unauthorized access attempts. However, modern cyber threats are increasingly adaptive, automated, and capable of modifying behavior based on the environment encountered. This creates a growing mismatch between dynamic offensive strategies and defensive tools that remain fixed in both structure and response. The problem this project investigates is how a defensive network can engage with an attacker as an intelligence-gathering tool rather than simply blocking intrusion, specifically through interactive deception that adapts to attacker behavior.

This work explores the development of a smart honeypot network capable of autonomous deception. The system is composed of several simulated environments, including SSH, HTTP, and malware-capture honeypots, deployed within virtual machines to ensure a secure and isolated testing environment. The deception engine was implemented in Python and functions as the central coordinator of adaptive system behavior. It monitors attacker interactions through captured log data and modifies honeypot behavior in real time, for example by altering available ports, presenting fabricated file structures, and dynamically changing the operating system identity and service banners. This creates the illusion of a responsive live system while remaining entirely artificial.
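
A minimal sketch of what such a coordinating engine could look like in Python is shown below. The log schema, event types, and adaptation rules are illustrative assumptions for this abstract, not the project's actual implementation.

    # Illustrative sketch of a deception-engine loop (assumed structure;
    # the real engine's log format and adaptation actions are not specified here).
    import json
    import random
    import time

    FAKE_BANNERS = [
        "SSH-2.0-OpenSSH_7.4",           # presents as an older CentOS host
        "SSH-2.0-OpenSSH_8.9p1 Ubuntu",  # presents as a recent Ubuntu host
    ]

    class DeceptionEngine:
        def __init__(self, log_path):
            self.log_path = log_path
            self._offset = 0  # read position in the shared honeypot log
            self.state = {"banner": FAKE_BANNERS[0], "open_ports": {22, 80}}

        def read_new_events(self):
            """Yield newline-delimited JSON events appended since the last poll."""
            with open(self.log_path) as f:
                f.seek(self._offset)
                for line in f:
                    yield json.loads(line)
                self._offset = f.tell()

        def adapt(self, event):
            """Adjust the facade based on what the attacker just did."""
            if event.get("type") == "port_scan":
                # Expose an extra fake service to invite deeper interaction.
                self.state["open_ports"].add(random.choice([21, 3306, 8080]))
            elif event.get("type") == "login_success":
                # Rotate the OS fingerprint presented to subsequent sessions.
                self.state["banner"] = random.choice(FAKE_BANNERS)

        def run(self, poll_seconds=5):
            while True:
                for event in self.read_new_events():
                    self.adapt(event)
                time.sleep(poll_seconds)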

To evaluate system performance, simulated attacks were conducted using standard penetration tools for reconnaissance, brute-force credential cracking, and controlled exploit and payload testing. All interactions remained confined to a lab environment to ensure ethical and safe experimentation. During these simulations, all commands, timing intervals, session durations, and interaction patterns were logged and analyzed. These logs formed the feature set used for attacker classification.
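
One plausible way to turn those session logs into classifier features is sketched below; the session fields used here (commands, timestamps, auth_failures) are assumed names for illustration rather than the project's actual log schema.

    # Hypothetical feature extraction from one logged attacker session.
    from statistics import mean

    def extract_features(session):
        """Convert a logged session (dict) into numeric features."""
        times = session["timestamps"]   # seconds since session start
        cmds = session["commands"]      # ordered list of command strings
        gaps = [b - a for a, b in zip(times, times[1:])] or [0.0]
        return {
            "session_duration": times[-1] - times[0] if len(times) > 1 else 0.0,
            "command_count": len(cmds),
            "unique_command_ratio": len(set(cmds)) / max(len(cmds), 1),
            "mean_gap_seconds": mean(gaps),      # fast, regular gaps suggest automation
            "auth_failures": session.get("auth_failures", 0),
        }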

A key innovation in this project is the integration of an AI-based behavioral classifier. Using extracted features such as command sequencing, probing depth, authentication patterns, and exploration style, the classifier estimates attacker skill level along a defined spectrum. Based on this classification, the deception engine automatically adjusts its response strategy. For example, a less sophisticated attacker may be presented with simpler fake responses, while a more advanced attacker may be guided deeper into fabricated service layers or more convincing system illusions. This results in prolonged engagement and richer intelligence collection.
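
The abstract does not name the classifier, so the sketch below uses a random-forest model from scikit-learn as one plausible choice; the skill labels, feature vectors, and strategy mapping are invented for illustration.

    # Sketch of a skill-level classifier and the resulting deception posture.
    from sklearn.ensemble import RandomForestClassifier

    SKILL_LEVELS = ["script_kiddie", "intermediate", "advanced"]

    def train_classifier(feature_rows, labels):
        """feature_rows: list of numeric feature lists; labels: indices into SKILL_LEVELS."""
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(feature_rows, labels)
        return clf

    def choose_strategy(skill_level):
        """Map an estimated skill level to a deception posture."""
        return {
            "script_kiddie": "serve a shallow fake shell with canned errors",
            "intermediate": "expose a fabricated file tree and decoy credentials",
            "advanced": "route into deeper simulated service layers",
        }[skill_level]

    def classify_and_respond(clf, feature_row):
        """Predict a skill level for one session and pick a response strategy."""
        level = SKILL_LEVELS[int(clf.predict([feature_row])[0])]
        return level, choose_strategy(level)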

This project contributes to research in cyber deception and adversarial modeling by demonstrating that a defensive system does not need to merely prevent access; it can actively transform intrusion activity into actionable intelligence. By safely prototyping this approach in a controlled environment, the work lays a foundation for future deployment in real-world network settings, where authentic threat actor behavior can be captured and analyzed with proper safeguards.
