Chataut, R., Nankya, M., & Akl, R. (2024). 6G networks and the AI revolution—Exploring technologies, applications, and emerging challenges. Sensors, 24(6). doi:10.3390/s24061888
In the rapidly evolving landscape of wireless communication, each successive generation of networks has achieved significant technological leaps, profoundly transforming the way we connect and interact. From the analog simplicity of 1G to the digital prowess of 5G, the journey of mobile networks has been marked by constant innovation and escalating demands for faster, more reliable, and more efficient communication systems. As 5G becomes a global reality, laying the foundation for an interconnected world, the quest for even more advanced networks leads us to the threshold of the sixth-generation (6G) era. This paper presents a hierarchical exploration of 6G networks, poised at the forefront of the next revolution in wireless technology. This study delves into the technological advancements that underpin the need for 6G, examining its key features, benefits, and enabling technologies. We dissect the intricacies of cutting-edge innovations like terahertz communication, ultra-massive MIMO, artificial intelligence (AI), machine learning (ML), quantum communication, and reconfigurable intelligent surfaces. Through a meticulous analysis, we evaluate the strengths, weaknesses, and state-of-the-art research in these areas, offering a broad view of the current progress and potential applications of 6G networks. Central to our discussion is the transformative role of AI in shaping the future of 6G networks. By integrating AI and ML, 6G networks are expected to offer unprecedented capabilities, from enhanced mobile broadband to groundbreaking applications in areas like smart cities and autonomous systems. This integration heralds a new era of intelligent, self-optimizing networks that promise to redefine the parameters of connectivity and digital interaction. We also address critical challenges in the deployment of 6G, from technological hurdles to regulatory concerns, providing a holistic assessment of potential barriers.
By highlighting the interplay between 6G and AI technologies, this study maps out the current landscape and illuminates the path forward in this rapidly evolving domain. This paper aims to be a cornerstone resource, providing essential insights, addressing unresolved research questions, and stimulating further investigation into the multifaceted realm of 6G networks. © 2024 by the authors.
Chataut, R., Upadhyay, A., Usman, Y., Nankya, M., & Gyawali, P. K. (2024). Spam no more: A cross-model analysis of machine learning techniques and large language model efficacies. Paper presented at the 2024 Cyber Security in Networking Conference (CSNet), 116–122. doi:10.1109/CSNet64211.2024.10851763
With the increasing sophistication of phishing scams, financial fraud, and malicious cyber-attacks, the need for effective spam detection mechanisms to safeguard users is more critical than ever. In this paper, we present a comprehensive evaluation of traditional machine learning models and Large Language Models (LLMs) in the context of spam detection. By assessing a variety of traditional ML models such as Support Vector Machines (SVM), Logistic Regression, Random Forest, Naive Bayes, K-Nearest Neighbors (KNN), and XGBoost on several performance metrics, we establish a baseline of effectiveness for spam identification tasks. We extend our analysis to include LLMs, specifically ChatGPT 3.5, Perplexity AI, and our own customized fine-tuned GPT model, referred to as TextGPT. Our findings show that while traditional ML models are effective, LLMs demonstrate exceptional potential in enhancing spam detection. Through a rigorous comparative analysis, this study highlights the strengths of both traditional and advanced approaches, showcasing the promising application of LLMs in improving spam detection processes. © 2024 IEEE.
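As an illustration of the traditional-ML baseline this comparison starts from, the sketch below trains a tiny multinomial Naive Bayes spam filter from scratch. The corpus, vocabulary handling, and function names are invented for this example; they are not the paper's code or dataset.

```python
import math
from collections import Counter

def train(docs):
    # docs: (text, label) pairs with label in {"spam", "ham"}
    counts = {"spam": Counter(), "ham": Counter()}
    priors = Counter()
    for text, label in docs:
        priors[label] += 1
        counts[label].update(text.lower().split())
    vocab = set(counts["spam"]) | set(counts["ham"])
    return counts, priors, vocab

def predict(text, counts, priors, vocab):
    total = sum(priors.values())
    scores = {}
    for label in counts:
        score = math.log(priors[label] / total)
        n = sum(counts[label].values())
        for w in text.lower().split():
            # Laplace smoothing over the shared vocabulary
            score += math.log((counts[label][w] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

corpus = [  # invented toy corpus, not the paper's dataset
    ("win a free prize now", "spam"),
    ("claim your free reward", "spam"),
    ("meeting moved to tuesday", "ham"),
    ("see you at lunch tomorrow", "ham"),
]
model = train(corpus)
print(predict("free prize inside", *model))  # → spam
```

The evaluated SVM, Random Forest, KNN, and XGBoost baselines follow the same fit-then-score pattern over vectorized text, typically via a library rather than hand-rolled code.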
Hoang, D., Errahmouni, H., Chen, H., Rachuri, S., Mannan, N., ElKharboutly, R., . . . Imani, F. (2024). Hierarchical representation and interpretable learning for accelerated quality monitoring in machining process. CIRP Journal of Manufacturing Science and Technology, 50, 198–212. doi:10.1016/j.cirpj.2024.02.010
While modern 5-axis computer numerical control (CNC) systems offer enhanced design flexibility and reduced production time, the dimensional accuracy of the workpiece is significantly compromised by geometric errors, thermal deformations, cutting forces, tool wear, and fixture-related factors. In-situ sensing, in conjunction with machine learning (ML), has recently been implemented on edge devices to synchronously acquire and agilely analyze high-frequency and multifaceted data for the prediction of workpiece quality. However, limited edge computational resources and lack of interpretability in ML models obscure the understanding of key quality-influencing signals. This research introduces InterpHD, a novel graph-based hyperdimensional computing framework that not only assesses workpiece quality in 5-axis CNC on edge, but also characterizes key signals vital for evaluating the quality from in-situ multichannel data. Specifically, a hierarchical graph structure is designed to represent the relationship between channels (e.g., spindle rotation, three linear axes movements, and the rotary A and C axes), parameters (e.g., torque, current, power, and tool speed), and the workpiece dimensional accuracy. Additionally, memory refinement, separability, and parameter significance are proposed to assess the interpretability of the framework. Experimental results on a hybrid 5-axis LASERTEC 65 DED CNC machine indicate that InterpHD not only achieves a 90.7% F1-Score in characterizing a 25.4 mm counterbore feature deviation but also surpasses other ML models with an F1-Score margin of up to 73.0%. The interpretability of the framework reveals that load and torque have 12 times greater impact than power and velocity feed forward for the characterization of geometrical dimensions. InterpHD offers the potential to facilitate causal discovery and provide insights into the relationships between process parameters and part quality in manufacturing. © 2024 CIRP
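The hyperdimensional-computing idea behind frameworks like InterpHD can be sketched in a few lines: channels and parameters each get a random bipolar hypervector, a channel-parameter pair is bound by elementwise multiplication, and a machining record bundles its pairs by majority vote. This is a generic HDC illustration, not the InterpHD implementation; the channel and parameter names merely echo the abstract.

```python
import random

DIM = 2000
random.seed(0)

def hv():
    # random bipolar hypervector
    return [random.choice((-1, 1)) for _ in range(DIM)]

def bind(a, b):
    # associate a channel with a parameter (elementwise multiply)
    return [x * y for x, y in zip(a, b)]

def bundle(vectors):
    # superimpose several bound pairs by majority vote per dimension
    return [1 if sum(xs) > 0 else -1 for xs in zip(*vectors)]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b)) / DIM

channels = {name: hv() for name in ("spindle", "x_axis", "a_axis")}
params = {name: hv() for name in ("torque", "current", "power")}

record = bundle([
    bind(channels["spindle"], params["torque"]),
    bind(channels["x_axis"], params["current"]),
    bind(channels["a_axis"], params["power"]),
])

# A pair bundled into the record stays recoverable by similarity,
# while an unbound combination is near-orthogonal:
print(cosine(record, bind(channels["spindle"], params["torque"])))
print(cosine(record, bind(channels["spindle"], params["power"])))
```

This recoverability of individual channel-parameter pairs from a bundled record is what makes the representation interpretable: similarity scores indicate which signals contribute to a quality prediction.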
Lang, G., Triantoro, T., & Sharp, J. H. (2024). Large language models as AI-powered educational assistants: Comparing GPT-4 and Gemini for writing teaching cases. Journal of Information Systems Education, 35(3), 390–407. doi:10.62273/YCIJ6454
This study explores the potential of large language models (LLMs), specifically GPT-4 and Gemini, in generating teaching cases for information systems courses. A unique prompt for writing three different types of teaching cases (a descriptive case, a normative case, and a project-based case) on the same IS topic (i.e., the introduction of blockchain technology in an insurance company) was developed and submitted to each LLM. The generated teaching cases from each LLM were assessed using subjective content evaluation measures such as relevance and accuracy, complexity and depth, structure and coherence, and creativity, as well as objective readability measures such as the Automated Readability Index, Coleman-Liau Index, Flesch-Kincaid Grade Level, Gunning Fog Index, Linsear Write Index, and SMOG Index. The findings suggest that while both LLMs perform well on objective measures, GPT-4 outperforms Gemini on subjective measures, indicating a superior ability to create content that is more relevant, complex, structured, coherent, and creative. This research provides initial empirical evidence and highlights the promise of LLMs in enhancing IS education while also acknowledging the need for careful proofreading and further research to optimize their use. © 2024 by the Information Systems & Computing Academic Professionals, Inc. (ISCAP).
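To make one of the study's objective measures concrete, the sketch below computes the Flesch-Kincaid Grade Level using its standard formula, 0.39·(words/sentences) + 11.8·(syllables/words) − 15.59. The vowel-group syllable counter is a crude heuristic of my own; dedicated readability tools use dictionaries and more careful rules.

```python
import re

def count_syllables(word):
    # crude heuristic: one syllable per run of consecutive vowels
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

# Trivial one-syllable text can score below grade 0.
print(round(flesch_kincaid_grade("The cat sat on the mat."), 2))
```

The other indices the study uses (ARI, Coleman-Liau, Gunning Fog, SMOG, Linsear Write) are similar linear combinations of word, sentence, character, or syllable counts.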
Nankya, M., Mugisa, A., Usman, Y., Upadhyay, A., & Chataut, R. (2024). Security and privacy in E-health systems: A review of AI and machine learning techniques. IEEE Access. doi:10.1109/ACCESS.2024.3469215
The adoption of electronic health (e-health) systems has transformed healthcare delivery by harnessing digital technologies to enhance patient care, optimize operations, and improve health outcomes. This paper provides a comprehensive overview of the current state of e-health systems, tracing their evolution from traditional paper-based records to advanced Electronic Health Record Systems (EHRs) and examining the diverse components and applications that support healthcare providers and patients. A key focus is on the emerging trends in AI-driven cybersecurity for e-health, which are essential for protecting sensitive health data. AI's capabilities in continuous monitoring, advanced pattern recognition, real-time threat response, predictive analytics, and scalability fundamentally change the security landscape of e-health systems. The paper discusses how AI strengthens data security through techniques like anomaly detection, automated countermeasures, and adaptive learning algorithms, enhancing the efficiency and accuracy of threat detection and response. Furthermore, the paper delves into future directions and research opportunities in AI-driven cybersecurity for e-health. These include the development of advanced threat detection systems that adapt through continuous learning, quantum-resistant encryption to safeguard against future threats, and privacy-preserving AI techniques that protect patient confidentiality while ensuring data remains useful for analysis. The importance of automating regulatory compliance, securing data interoperability via blockchain, and prioritizing ethical AI development are also highlighted as critical research areas. By emphasizing innovative security solutions, collaborative efforts, ongoing research, and ethical practices, the e-health sector can build resilient and secure healthcare infrastructures, ultimately enhancing patient care and health outcomes. © 2024 IEEE.
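As a minimal illustration of the anomaly-detection idea the review surveys, the sketch below flags outliers in a stream of record-access counts by z-score. The scenario and threshold are invented for illustration; real e-health deployments use far richer features and learned models.

```python
import statistics

def zscore_anomalies(counts, threshold=3.0):
    # flag indices whose count deviates from the mean by > threshold
    # standard deviations; `or 1.0` guards a constant stream
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts) or 1.0
    return [i for i, c in enumerate(counts)
            if abs(c - mean) / stdev > threshold]

# Hourly record-access counts for one account; the final spike could
# indicate a bulk export attempt worth investigating.
accesses = [12, 9, 11, 10, 13, 8, 10, 11, 9, 12, 10, 250]
print(zscore_anomalies(accesses))  # flags the index of the spike
```

Production systems would feed such flags into the automated countermeasures and adaptive learning loops the paper describes, rather than acting on a single statistic.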
Nichols, T., Zemlanicky, J., Luo, Z., Li, Q., & Zheng, J. (2024). Image-based PDF malware detection using pre-trained deep neural networks. Paper presented at the International Symposium on Digital Forensics and Security (ISDFS). doi:10.1109/ISDFS60797.2024.10527343
PDF is a popular document file format with a flexible file structure that can embed diverse types of content, including images and JavaScript code. However, these features make it a favored vehicle for malware attackers. In this paper, we propose an image-based PDF malware detection method that utilizes pre-trained deep neural networks (DNNs). Specifically, we convert PDF files into fixed-size grayscale images using an image visualization technique. These images are then fed into pre-trained DNN models to classify them as benign or malicious. We investigated four classical pre-trained DNN models in our study. We evaluated the performance of the proposed method using the publicly available Contagio PDF malware dataset. Our results demonstrate that MobileNetv3 achieves the best detection performance with an accuracy of 0.9969 and exhibits low computational complexity, making it a promising solution for image-based PDF malware detection. © 2024 IEEE.
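The visualization step described above (raw file bytes rendered as a fixed-size grayscale image) can be sketched with nearest-neighbor resampling. This is a stdlib-only illustration, not the paper's exact preprocessing; in practice the image would be resized with proper interpolation and fed to a pretrained CNN such as MobileNetV3.

```python
def bytes_to_grayscale(data, size=64):
    # Resample raw file bytes onto a size x size grid of 0-255
    # intensities by nearest-neighbor indexing, so every file
    # yields the same fixed-size "image" regardless of length.
    if not data:
        data = b"\x00"
    step = len(data) / (size * size)
    return [[data[min(int((r * size + c) * step), len(data) - 1)]
             for c in range(size)]
            for r in range(size)]

# In practice: img = bytes_to_grayscale(open("suspect.pdf", "rb").read())
img = bytes_to_grayscale(bytes(range(256)), size=16)
print(img[0][0], img[15][15])  # corners map to the first and last bytes
```

Because byte patterns of malicious PDFs (embedded JavaScript, object streams) produce distinctive textures, a CNN trained on natural images can transfer to classifying these grids as benign or malicious.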
Przegalinska, A., & Triantoro, T. (2024). Converging minds: The creative potential of collaborative AI (pp. 1–158). doi:10.1201/9781032656618
This groundbreaking book explores the power of collaborative AI in amplifying human creativity and expertise. Written by two seasoned experts in data analytics, AI, and machine learning, the book offers a comprehensive overview of the creative process behind AI-powered content generation. It takes the reader through a unique collaborative process between human authors and various AI-based topic experts, created, prompted, and fine-tuned by the authors. This book features a comprehensive list of prompts that readers can use to create their own ChatGPT-powered topic experts. By following these expertly crafted prompts, individuals and businesses alike can harness the power of AI, tailoring it to their specific needs and fostering a fruitful collaboration between humans and machines. With real-world use cases and deep insights into the foundations of generative AI, the book showcases how humans and machines can work together to achieve better business outcomes and tackle complex challenges. The book also covers the social and ethical implications of collaborative AI and how it may impact the future of work and employment. Through reading the book, readers will gain a deep understanding of the latest advancements in AI and how they can shape our world. Converging Minds: The Creative Potential of Collaborative AI is essential reading for anyone interested in the transformative potential of AI-powered content generation and human-AI collaboration. It will appeal to data scientists, machine learning architects, prompt engineers, general computer scientists, and engineers in the fields of generative AI and deep learning. Chapter 1 of this book is freely available as a downloadable Open Access PDF at https://www.taylorfrancis.com under a Creative Commons Attribution-NoDerivatives (CC BY-ND) 4.0 license. © 2024 Aleksandra Przegalinska and Tamilla Triantoro.
Yan, Y., Yang, Y., Ma, Y., Reed, K., Li, S., Pei, S., . . . Lin, R. (2025). Machine learning for 2D material–based devices. Materials Science and Engineering R: Reports, 166. doi:10.1016/j.mser.2025.101085
Two-dimensional (2D) materials have emerged as a cornerstone for next-generation electronics, offering unprecedented opportunities for device miniaturization, energy-efficient computing, and novel functional applications. Their atomic-scale thickness, coupled with exceptional electrical, mechanical, and optical properties, makes them highly promising for applications ranging from ultra-scaled transistors to neuromorphic and quantum devices. However, optimizing these materials for device fabrication remains a complex and resource-intensive challenge due to the vast parameter space involved in their synthesis, processing, and integration. Machine learning (ML), a pivotal aspect of artificial intelligence (AI), has emerged as a powerful tool to accelerate the development of 2D material–based electronics by extracting insights from large experimental datasets and automating decision-making in high-throughput experimentation. This review highlights the critical role of ML in advancing 2D material research, focusing on growth optimization through material selection and morphology control, characterization for quality assessment, and device design through fabrication parameter optimization and performance prediction. This work aims to provide a comprehensive overview of the synergistic relationship between ML and 2D materials, outlining current advancements, challenges, and future prospects in AI-assisted material and device engineering. © 2025 Elsevier B.V.