Quinnipiac Scholarship Trends: Computer Science Publications

AI Articles in Computer Science Published at Quinnipiac University, 2024-25

1) Chataut, R., Nankya, M., Akl, R.
6G Networks and the AI Revolution—Exploring Technologies, Applications, and Emerging Challenges
(2024) Sensors, 24 (6), art. no. 1888.

Abstract
In the rapidly evolving landscape of wireless communication, each successive generation of networks has achieved significant technological leaps, profoundly transforming the way we connect and interact. From the analog simplicity of 1G to the digital prowess of 5G, the journey of mobile networks has been marked by constant innovation and escalating demands for faster, more reliable, and more efficient communication systems. As 5G becomes a global reality, laying the foundation for an interconnected world, the quest for even more advanced networks leads us to the threshold of the sixth-generation (6G) era. This paper presents a hierarchical exploration of 6G networks, poised at the forefront of the next revolution in wireless technology. This study delves into the technological advancements that underpin the need for 6G, examining its key features, benefits, and enabling technologies. We dissect the intricacies of cutting-edge innovations like terahertz communication, ultra-massive MIMO, artificial intelligence (AI), machine learning (ML), quantum communication, and reconfigurable intelligent surfaces. Through a meticulous analysis, we evaluate the strengths, weaknesses, and state-of-the-art research in these areas, offering a wider view of the current progress and potential applications of 6G networks. Central to our discussion is the transformative role of AI in shaping the future of 6G networks. By integrating AI and ML, 6G networks are expected to offer unprecedented capabilities, from enhanced mobile broadband to groundbreaking applications in areas like smart cities and autonomous systems. This integration heralds a new era of intelligent, self-optimizing networks that promise to redefine the parameters of connectivity and digital interaction. We also address critical challenges in the deployment of 6G, from technological hurdles to regulatory concerns, providing a holistic assessment of potential barriers. 
By highlighting the interplay between 6G and AI technologies, this study maps out the current landscape and lights the path forward in this rapidly evolving domain. This paper aims to be a cornerstone resource, providing essential insights, addressing unresolved research questions, and stimulating further investigation into the multifaceted realm of 6G networks. © 2024 by the authors.

 

2) Chataut, R., Gyawali, P.K., Usman, Y.
Can AI Keep You Safe? A Study of Large Language Models for Phishing Detection
(2024) 2024 IEEE 14th Annual Computing and Communication Workshop and Conference, CCWC 2024, pp. 548-554. 

Abstract
Phishing attacks continue to be a pervasive challenge in cybersecurity, with threat actors constantly developing new strategies to penetrate email inboxes and compromise sensitive data. In this study, we investigate the effectiveness of Large Language Models (LLMs) in the crucial task of phishing email detection. With the growing sophistication of these attacks, we assess the performance of three distinct LLMs: GPT-3.5, GPT-4, and a customized ChatGPT, against a carefully curated dataset containing both phishing and legitimate emails. Our research reveals the proficiency of LLMs in identifying phishing emails, with each model showing varying levels of success. The paper outlines the strengths and limitations of GPT-3.5, GPT-4, and the custom ChatGPT, illuminating their respective suitability for practical applications in email security. These results underscore the potential of LLMs in effectively identifying phishing emails and their significant implications for enhancing cybersecurity measures and safeguarding users from the risks of online fraud. © 2024 IEEE.
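The paper does not publish the prompts used with GPT-3.5, GPT-4, and the customized ChatGPT, but the core of LLM-based phishing detection is a classification prompt over the email's content. The sketch below is purely illustrative; the wording and function name are ours, not the authors'.

```python
def build_phishing_prompt(subject: str, body: str) -> str:
    """Format an email into a hypothetical phishing-classification prompt.

    This wording is an illustration only; the study's actual prompts
    are not published in the abstract.
    """
    return (
        "You are an email security assistant. Classify the email below as "
        "PHISHING or LEGITIMATE, and briefly justify your answer.\n\n"
        f"Subject: {subject}\n"
        f"Body: {body}\n\n"
        "Answer with exactly one label on the first line."
    )

prompt = build_phishing_prompt(
    "Urgent: verify your account",
    "Click this link within 24 hours or your account will be suspended.",
)
print(prompt.splitlines()[0])
```

In practice this string would be sent to the model's chat API and the first line of the reply parsed as the label; the comparative results in the paper come from exactly this kind of per-email querying across the three models.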

 

3) Chataut, R., Usman, Y., Rahman, C.M.A., Gyawali, S., Gyawali, P.K.
Enhancing Phishing Detection with AI: A Novel Dataset and Comprehensive Analysis Using Machine Learning and Large Language Models
(2024) 2024 IEEE 15th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference, UEMCON 2024, pp. 226-232. 

Abstract
Phishing emails are a significant threat to organizations, with over 90% of cyber attacks starting from a malicious email. Despite built-in security measures, relying solely on these defenses can leave organizations vulnerable to cybercriminals who exploit human nature and the lack of tight security. Phishing emails, designed to deceive recipients into disclosing personal and financial information, represent a significant cybersecurity challenge. This paper introduces a comprehensive dataset curated explicitly for detecting phishing emails, featuring a collection of authentic and phishing emails. The dataset includes a broad spectrum of phishing techniques, such as sophisticated social engineering tactics, impersonation of reputable entities, and the use of urgent or threatening language to manipulate recipients. Phishing emails were collected to cover various scenarios, including financial fraud, account verification, and malware dissemination attempts. Our analysis involves a range of classical machine learning models alongside exploratory analysis with LLMs. The performance of these models was rigorously evaluated to furnish a comparative analysis of their detection capabilities. The dataset, one of the largest of its kind, offers a significant resource for researchers and cybersecurity professionals aiming to advance phishing detection methods. The dataset used in this research is publicly available, enabling further exploration and replication of the findings by the research community [1]. © 2024 IEEE.

 

4) Chataut, R., Upadhyay, A., Usman, Y., Nankya, M., Gyawali, P.K.
Spam No More: A Cross-Model Analysis of Machine Learning Techniques and Large Language Model Efficacies
(2024) Proceedings of the 8th Cyber Security in Networking Conference: AI for Cybersecurity, CSNet 2024, pp. 116-122. 

Abstract
With the increasing sophistication of phishing scams, financial fraud, and malicious cyber-attacks, the need for effective spam detection mechanisms to safeguard users is more critical than ever. In this paper, we present a comprehensive evaluation of traditional machine learning models and Large Language Models (LLMs) in the context of spam detection. By assessing a variety of traditional ML models such as Support Vector Machines (SVM), Logistic Regression, Random Forest, Naive Bayes, K-Nearest Neighbors (KNN), and XGBoost on several performance metrics, we establish a baseline of effectiveness for spam identification tasks. We extend our analysis to include LLMs, specifically ChatGPT 3.5, Perplexity AI, and our own customized fine-tuned GPT model, referred to as TextGPT. Our findings show that while traditional ML models are effective, LLMs demonstrate exceptional potential in enhancing spam detection. Through a rigorous comparative analysis, this study highlights the strengths of both traditional and advanced approaches, showcasing the promising application of LLMs in improving spam detection processes. © 2024 IEEE.
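Among the classical baselines the paper evaluates, Naive Bayes is simple enough to sketch from scratch. The following minimal multinomial Naive Bayes spam classifier with add-one smoothing is our own illustration of the technique on a toy corpus; the function names and data are not from the paper.

```python
import math
from collections import Counter

def train_nb(docs, labels):
    """Count word frequencies per class for multinomial Naive Bayes."""
    word_counts = {c: Counter() for c in set(labels)}
    class_totals = Counter(labels)
    for text, label in zip(docs, labels):
        word_counts[label].update(text.lower().split())
    vocab = {w for c in word_counts for w in word_counts[c]}
    return word_counts, class_totals, vocab

def predict_nb(model, text):
    """Pick the class maximizing log prior + smoothed log likelihoods."""
    word_counts, class_totals, vocab = model
    n_docs = sum(class_totals.values())
    best, best_score = None, float("-inf")
    for c in class_totals:
        score = math.log(class_totals[c] / n_docs)
        total = sum(word_counts[c].values())
        for w in text.lower().split():
            if w in vocab:  # add-one (Laplace) smoothing
                score += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = c, score
    return best

train_docs = [
    "win free money now",
    "claim your free prize now",
    "meeting agenda attached for review",
    "please review the project report",
]
train_labels = ["spam", "spam", "ham", "ham"]
model = train_nb(train_docs, train_labels)
print(predict_nb(model, "free prize money"))       # spam-like tokens dominate
print(predict_nb(model, "project meeting review"))
```

The SVM, Random Forest, KNN, and XGBoost baselines in the paper follow the same train/predict pattern but over vectorized features (e.g. TF-IDF) rather than raw word counts.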

 

5) Gürpinar, T.
Towards web 4.0: frameworks for autonomous AI agents and decentralized enterprise coordination
(2025) Frontiers in Blockchain, 8, art. no. 1591907.

Abstract
The rise of Web 4.0 marks a shift toward decentralized, autonomous AI-driven ecosystems, where intelligent agents interact, transact, and self-govern across digital and physical environments. This paper presents a layered framework outlining the infrastructural, behavioral, and governance dimensions required for enabling autonomous AI agents in decentralized ecosystems. It also explores how enterprises can strategically adopt Web 4.0 applications while mitigating risks related to decentralization and AI coordination. A conceptual approach is adopted, synthesizing research on blockchain-enabled AI, decentralized governance, and autonomous agent interactions. The paper introduces a six-layer framework visualizing key dimensions for Web 4.0 adoption, alongside a framework focusing on enterprise integration guidelines. The study identifies six essential dimensions – spanning infrastructure, trust, and governance – that collectively enable Web 4.0. AI agents require decentralized coordination, transparent behavioral norms, and scalable governance structures to operate autonomously and ethically. Enterprises adopting Web 4.0 must address challenges in data privacy, AI training, multi-agent interaction, and governance. The findings highlight that successful enterprise adoption will depend on trust mechanisms, regulatory alignment, and scalable AI deployment models that balance autonomy with accountability. Copyright © 2025 Gürpinar.

 

6) Hogrefe, J., Cruz, E., Jaiswal, C., Riofrio, J.
AITracker: A neural network designed for efficient and affordable eye tracking
(2024) 2024 IEEE 15th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference, UEMCON 2024, pp. 100-107. 

Abstract
Eye-tracking technology has long been a cornerstone in both academic and research fields, offering insights into behavior, cognition, and visual perception. However, its accessibility is hindered by the high costs and proprietary nature of existing methodologies. To address these issues, we present AITracker, an open-source application that works using only a standard webcam, leveraging a deep-learning model to provide accurate eye-tracking capabilities. Our solution offers flexibility and adaptability, enabling users to customize individual parameters in the software to meet their specific needs. Through robust data collection and neural network training, AITracker achieves fast response times with a high degree of accuracy, enabling gaze-tracking in up to eight directions, as well as blink detection. To better understand the impact of this technology in the context of existing solutions, this paper compares AITracker's multi-layered neural network to various other prevalent eye-tracking methodologies. To that end, this paper also notes certain limitations that inhibit the software, including an undersized dataset and problematic distribution. Additionally, we explore various application scenarios, including hardware integration for assistive technology, hands-free gaming interfaces, advertising research, and attention monitoring in education. Moreover, feedback gathered from different users highlights the effectiveness and impact of AITracker across a diverse array of contexts. © 2024 IEEE.

7) Nankya, M., Mugisa, A., Usman, Y., Upadhyay, A., Chataut, R.
Security and Privacy in E-Health Systems: A Review of AI and Machine Learning Techniques
(2024) IEEE Access.

Abstract
The adoption of electronic health (e-health) systems has transformed healthcare delivery by harnessing digital technologies to enhance patient care, optimize operations, and improve health outcomes. This paper provides a comprehensive overview of the current state of e-health systems, tracing their evolution from traditional paper-based records to advanced Electronic Health Record Systems (EHRs) and examining the diverse components and applications that support healthcare providers and patients. A key focus is on the emerging trends in AI-driven cybersecurity for e-health, which are essential for protecting sensitive health data. AI's capabilities in continuous monitoring, advanced pattern recognition, real-time threat response, predictive analytics, and scalability fundamentally change the security landscape of e-health systems. The paper discusses how AI strengthens data security through techniques like anomaly detection, automated countermeasures, and adaptive learning algorithms, enhancing the efficiency and accuracy of threat detection and response. Furthermore, the paper delves into future directions and research opportunities in AI-driven cybersecurity for e-health. These include the development of advanced threat detection systems that adapt through continuous learning, quantum-resistant encryption to safeguard against future threats, and privacy-preserving AI techniques that protect patient confidentiality while ensuring data remains useful for analysis. The importance of automating regulatory compliance, securing data interoperability via blockchain, and prioritizing ethical AI development are also highlighted as critical research areas. By emphasizing innovative security solutions, collaborative efforts, ongoing research, and ethical practices, the e-health sector can build resilient and secure healthcare infrastructures, ultimately enhancing patient care and health outcomes. © 2013 IEEE.
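The anomaly-detection techniques the review surveys range widely in sophistication. As a deliberately simple illustration of the underlying idea, a z-score rule flags observations far from the historical mean; the data, threshold, and function name below are our own example, not drawn from the review.

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=3.0):
    """Return indices of points more than `threshold` sample standard
    deviations from the mean. A toy stand-in for the richer anomaly
    detectors used in real e-health monitoring."""
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mu) > threshold * sigma]

# Hypothetical login counts per hour, with one suspicious spike
counts = [12, 11, 13, 12, 10, 11, 95, 12, 13, 11]
print(zscore_anomalies(counts, threshold=2.0))  # flags the spike at index 6
```

Production systems replace this static rule with adaptive models that learn normal behavior over time, which is precisely the "adaptive learning algorithms" direction the paper highlights.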

 

8) Nichols, T., Zemlanicky, J., Luo, Z., Li, Q., Zheng, J.
Image-based PDF Malware Detection Using Pre-trained Deep Neural Networks
(2024) 12th International Symposium on Digital Forensics and Security, ISDFS 2024.

Abstract
PDF is a popular document file format with a flexible file structure that can embed diverse types of content, including images and JavaScript code. However, these features make it a favored vehicle for malware attackers. In this paper, we propose an image-based PDF malware detection method that utilizes pre-trained deep neural networks (DNNs). Specifically, we convert PDF files into fixed-size grayscale images using an image visualization technique. These images are then fed into pre-trained DNN models to classify them as benign or malicious. We investigated four classical pre-trained DNN models in our study. We evaluated the performance of the proposed method using the publicly available Contagio PDF malware dataset. Our results demonstrate that MobileNetv3 achieves the best detection performance with an accuracy of 0.9969 and exhibits low computational complexity, making it a promising solution for image-based PDF malware detection. © 2024 IEEE.
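The byte-to-image visualization step described above can be sketched in a few lines: raw file bytes become pixels, truncated or zero-padded to a fixed grid that a pre-trained DNN can consume. The 64x64 size and naming below are our illustration, not necessarily the paper's exact settings.

```python
def bytes_to_grayscale(data: bytes, side: int = 64):
    """Truncate or zero-pad a byte stream to side*side values, then
    reshape it row by row into a 2-D grayscale 'image' (one byte = one
    pixel, 0-255). Size and names are illustrative assumptions."""
    pixels = list(data[:side * side])
    pixels += [0] * (side * side - len(pixels))
    return [pixels[r * side:(r + 1) * side] for r in range(side)]

# Any byte stream works; for a real PDF, pass open(path, "rb").read()
img = bytes_to_grayscale(bytes(range(256)) * 16)
print(len(img), len(img[0]))  # 64 64
```

The resulting grid would then be resized/normalized to the input shape a pre-trained model such as MobileNetV3 expects before classification as benign or malicious.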

 

9) Przegalinska, A., Triantoro, T., Kovbasiuk, A., Ciechanowski, L., Freeman, R.B., Sowa, K.
Collaborative AI in the workplace: Enhancing organizational performance through resource-based and task-technology fit perspectives
(2025) International Journal of Information Management, 81, art. no. 102853.

Abstract
This research examines how artificial intelligence, human capabilities, and task types influence organizational outcomes. By leveraging the frameworks of the Resource-Based View and Task Technology Fit theories, we executed two distinct studies to assess the effectiveness of a generative AI tool in aiding task performance across a spectrum of task complexities and creative demands. The initial study tested the utility of generative AI across diverse tasks and the significance of AI-related skills enhancement. The subsequent study explored interactions between humans and AI, analyzing emotional tone, sentence structure, and word choice. Our results indicate that incorporating AI can significantly improve organizational task performance in areas such as automation, support, creative endeavors, and innovation processes. We also observed that generative AI generally presents more positive sentiment, utilizes simpler language, and has a narrower vocabulary than human counterparts. These insights contribute to a broader understanding of AI's strengths and weaknesses in organizational settings and guide the strategic implementation of AI systems. © 2024 The Authors
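One crude way to quantify the "narrower vocabulary" finding is a type-token ratio: distinct words divided by total words. The metric and toy sentences below are our illustration; the study's actual linguistic measures are not specified in the abstract.

```python
def type_token_ratio(text: str) -> float:
    """Distinct words over total words: a rough breadth-of-vocabulary
    measure (illustrative only; not the paper's metric)."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

ai_like = "this is a great idea and this is a great plan"
human_like = "frankly the proposal mixes ambition with several unresolved risks"
print(round(type_token_ratio(ai_like), 2))     # repeated words lower the ratio
print(round(type_token_ratio(human_like), 2))  # all-distinct words give 1.0
```

A lower ratio for the AI-like text is consistent with the paper's observation that generative AI uses simpler language and a narrower vocabulary than human counterparts.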

 

10) Przegalinska, A., Triantoro, T.
Converging Minds: The Creative Potential of Collaborative AI
(2024) Converging Minds: The Creative Potential of Collaborative AI, pp. 1-158. 

Abstract
This groundbreaking book explores the power of collaborative AI in amplifying human creativity and expertise. Written by two seasoned experts in data analytics, AI, and machine learning, the book offers a comprehensive overview of the creative process behind AI-powered content generation. It takes the reader through a unique collaborative process between human authors and various AI-based topic experts, created, prompted, and fine-tuned by the authors. This book features a comprehensive list of prompts that readers can use to create their own ChatGPT-powered topic experts. By following these expertly crafted prompts, individuals and businesses alike can harness the power of AI, tailoring it to their specific needs and fostering a fruitful collaboration between humans and machines. With real-world use cases and deep insights into the foundations of generative AI, the book showcases how humans and machines can work together to achieve better business outcomes and tackle complex challenges. The book also covers the social and ethical implications of collaborative AI and how it may impact the future of work and employment. Through reading the book, readers will gain a deep understanding of the latest advancements in AI and how they can shape our world. Converging Minds: The Creative Potential of Collaborative AI is essential reading for anyone interested in the transformative potential of AI-powered content generation and human-AI collaboration. It will appeal to data scientists, machine learning architects, prompt engineers, general computer scientists, and engineers in the fields of generative AI and deep learning. Chapter 1 of this book is freely available as a downloadable Open Access PDF at https://www.taylorfrancis.com under a Creative Commons Attribution-No Derivatives (CC-BY-ND) 4.0 license. © 2024 Aleksandra Przegalinska and Tamilla Triantoro.

 

11) Usman, Y., Gyawali, P.K., Gyawali, S., Chataut, R.
The Dark Side of AI: Large Language Models as Tools for Cyber Attacks on Vehicle Systems
(2024) 2024 IEEE 15th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference, UEMCON 2024, pp. 169-175. 

Abstract
The rapid evolution of autonomous vehicles (AVs) presents significant opportunities for enhancing transportation safety and efficiency. However, with increasing connectivity and complex electronic systems, AVs also become vulnerable to cyberattacks. This paper investigates cybersecurity challenges in the realm of AVs, highlighting the role of artificial intelligence (AI), specifically Large Language Models (LLMs), in exploiting vulnerabilities. We analyze various attack vectors, including Controller Area Network (CAN) manipulation, Bluetooth vulnerabilities, and Key Fob hacking, emphasizing the need for proactive cybersecurity measures. Recent incidents, such as the remote compromise of various vehicle models, underscore the urgent need for robust security solutions in the automotive industry. By leveraging LLMs, attackers can craft sophisticated cyber threats targeting AVs, posing risks to both safety and privacy. We introduce HackerGPT, a customized LLM tailored for cyber attack generation, and demonstrate attacks on virtual CAN networks, Bluetooth systems, and Key Fobs. While our experiments reveal successful compromises in certain vehicle models, limitations exist, particularly in vehicles with advanced encryption and robust signal transmission protocols. Nevertheless, this research underscores the broader need for increased awareness and proactive security measures in the automotive sector. Our findings aim to contribute significantly to the ongoing discourse on automotive cybersecurity, offering actionable insights for manufacturers and cybersecurity professionals to safeguard the future of mobility. © 2024 IEEE.