Tushar Gupta and Shirin Bhambhani, Senior Member, IEEE, USA
The exponential growth of the Internet of Things (IoT) has introduced significant challenges in managing scalability, availability, and efficiency. With billions of interconnected devices generating vast amounts of data, traditional frameworks struggle to handle the complex requirements of modern IoT applications. Addressing these challenges is crucial to fully leverage the potential of IoT in various domains, including smart homes, healthcare, and industrial automation. This paper proposes a novel framework for integrating IoT applications with cloud environments to achieve scalability and high availability. The framework is structured into four layers: the device layer leverages Message Queuing Telemetry Transport (MQTT) as a low-overhead communication protocol; the edge gateway layer aggregates data using lightweight Kubernetes; the ingestion and cloud infrastructure layer employs Kafka and Apache Spark for data ingestion and transformation; and the data processing and analytics layer builds on the cloud infrastructure layer to feed the transformed data into databases for visualization. By leveraging cloud environments, this solution enhances scalability, availability, and overall system robustness. This paper also explores the challenges involved in implementing such an architecture and provides insights into future advancements in IoT-cloud integration.
Internet of Things (IoT), IoT architecture, Cloud Computing.
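To make the device layer concrete, the sketch below builds the kind of compact JSON payload an MQTT client might publish upstream. The topic layout and field names are hypothetical illustrations, not taken from the paper; MQTT's 2-byte fixed header is what makes it the low-overhead choice at this layer.

```python
import json

def make_mqtt_message(device_id, metric, value):
    """Build a compact JSON payload for a device-layer MQTT publish.

    Hypothetical topic layout: iot/<device_id>/<metric>.
    The payload itself is kept minimal because MQTT adds only a
    2-byte fixed header per message.
    """
    topic = f"iot/{device_id}/{metric}"
    payload = json.dumps({"device": device_id, "metric": metric, "value": value})
    return topic, payload

# A real client (e.g. paho-mqtt) would pass topic and payload to publish().
topic, payload = make_mqtt_message("sensor-42", "temperature", 21.5)
```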
Tessa E Andersen, Ayanna Marie Avalos, Gaby G. Dagher, and Min Long, Department of Computer Science, Boise State University, Brigham Young University, California State University, Fresno
Large Language Models (LLMs) have seen increased use across various applications throughout the world, providing more accurate and reliable models for Artificial Intelligence (AI) systems. While powerful, LLMs do not always produce accurate or up-to-date information. Retrieval Augmented Generation (RAG) is one of the solutions that has emerged to help LLMs give more recent and accurate responses. While RAG has been successful in reducing hallucinations within LLMs, it remains susceptible to inaccurate and maliciously manipulated data. In this paper, we present Distributed-RAG (D-RAG), a novel blockchain-based framework designed to increase the integrity of the RAG system. D-RAG addresses the risks of malicious data by replacing RAG’s traditionally centralized database with communities, each consisting of a database and a permissioned blockchain. The communities are based on different subjects, each containing experts in the field who verify data through a privacy-preserving consensus protocol before it is added to the database. A Retrieval Blockchain is also designed to communicate between the multiple communities. The miners on this Retrieval Blockchain are responsible for retrieving documents from the database for each query and ranking them using an LLM. These rankings are agreed upon, with the top documents being provided to the LLM with the query to generate a response. D-RAG increases the integrity and security of RAG-incorporated LLMs.
Blockchain, RAG, LLM, Privacy-Preserving.
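The abstract says the miners' per-query rankings "are agreed upon" without specifying the rule. One simple, order-based way such agreement could be reached is a Borda count over the miners' rankings; the sketch below illustrates that idea with hypothetical document IDs (the paper's actual consensus mechanism may differ).

```python
from collections import defaultdict

def borda_aggregate(rankings, top_k=3):
    """Combine per-miner document rankings into one consensus order.

    Each miner's ranking awards n points to its top document,
    n-1 to the next, and so on; documents are then sorted by
    total points and the top_k are returned for the LLM prompt.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for pos, doc in enumerate(ranking):
            scores[doc] += n - pos  # higher rank -> more points
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Three miners rank the same three retrieved documents differently.
miners = [["d1", "d2", "d3"], ["d2", "d1", "d3"], ["d1", "d3", "d2"]]
consensus = borda_aggregate(miners)
```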
Valentin Colliard, Alain Peres, and Vincent Corruble, Sorbonne Université, CNRS, LIP6, Thales LAS France
In this paper, we introduce two deep reinforcement learning approaches designed to tackle the challenges of air defense systems. StarCraft II has been used as a game environment to create attack scenarios in which the agent must learn to defend its assets and points of interest against aerial units. By estimating a value for each weapon-target pair, our agent proves robust across multiple scenarios, distinguishing and prioritizing targets in order to protect its units. These two methods, one using multi-layer perceptrons and the other using the attention mechanism, are compared with rule-based algorithms. Through empirical evaluation, we validate their efficacy in achieving resilient defense strategies across diverse and dynamic environments.
Deep Reinforcement Learning, Weapon-Target Assignment, Simulation
Xiaowei Shao1, Mariko Shibasaki2 and Ryosuke Shibasaki1,2, 1Department of Engineering, Reitaku University, Kashiwa, Japan, 2LocationMind, Tokyo, Japan
This paper explores the integration of artificial intelligence (AI) into software engineering. It examines how AI can be effectively incorporated throughout the software development lifecycle, encompassing phases like requirement analysis, system design, code generation, testing, and deployment. It highlights the potential benefits of AI-driven software development, such as increased development efficiency, improved software quality, and enhanced performance. The discussion extends to addressing the substantial challenges that accompany the integration of AI within software development frameworks. These include the limitations of current AI technology in achieving complete automation of large software projects, the need to ensure the accuracy and reliability of AI-generated code, complex task decomposition and verification, multi-agent collaboration, external knowledge utilization, and AI integration within project management workflows. This paper concludes by discussing the future directions in AI-driven software development.
Artificial Intelligence, Software Engineering, Multi-agent.
Vincent Froom
Artificial Intelligence (AI) and Quantum Computing (QC) represent two of the most transformative technologies of the 21st century. This paper explores the integration of AI and QC, motivated by the potential to overcome the computational limitations of classical systems in solving complex problems. Using a hybrid approach that combines quantum-enhanced algorithms with traditional AI techniques, the paper examines advancements in areas such as optimization, cryptography, and machine learning. Key findings highlight how quantum systems can accelerate AI training, improve model precision, and unlock solutions to previously intractable problems. The study also addresses current challenges, including hardware limitations, algorithmic inefficiencies, and ethical considerations. The implications of this research are far-reaching, with potential applications in healthcare, cybersecurity, climate modeling, and beyond, signaling a new frontier in computational learning and innovation.
Shivam Sharma1, Shahram Latifi2, Pushkin Kachroo3, Dept. of Electrical & Computer Engineering, University of Nevada, Las Vegas, Las Vegas, USA
This paper introduces a novel reinforcement learning framework for portfolio optimization that leverages the complex statistical properties of financial markets through fractional Brownian motion (fBM). Unlike traditional methods that rely on memoryless or mean-reverting processes, our approach captures the long-range dependencies, persistence, and anti-persistence observed in empirical asset returns. Central to this framework is a meta-controller that dynamically calibrates the underlying Hurst parameter, enabling the trading agent to switch adaptively among specialized strategies trained for different market regimes. By integrating non-Markovian dynamics into the simulation environment and employing a hierarchical control structure, our method allows the agent to learn more robust and context-aware policies. Empirical evaluations demonstrate that agents operating under this adaptive, fBM-driven paradigm achieve near-optimal performance in fluctuating market conditions, underscoring the model’s potential to better mirror real-world complexity and enhance decision-making in financial applications.
Reinforcement learning, fractional Brownian motion, portfolio optimization, stochastic processes, financial modeling.
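As a minimal sketch of the simulation environment's core ingredient, fBM paths with a given Hurst parameter H can be sampled exactly via a Cholesky factorization of the covariance kernel cov(B_t, B_s) = 0.5(t^{2H} + s^{2H} - |t-s|^{2H}); H > 0.5 yields persistent (trending) paths, H < 0.5 anti-persistent ones. This is a standard textbook method, not necessarily the one the paper uses, and is only practical for short paths.

```python
import numpy as np

def simulate_fbm(n, hurst, T=1.0, seed=0):
    """Sample one fractional Brownian motion path on n grid points
    via the exact Cholesky method (O(n^3), fine for small n)."""
    t = np.linspace(T / n, T, n)
    two_h = 2 * hurst
    cov = 0.5 * (t[:, None] ** two_h + t[None, :] ** two_h
                 - np.abs(t[:, None] - t[None, :]) ** two_h)
    L = np.linalg.cholesky(cov)  # cov is positive definite for distinct t > 0
    rng = np.random.default_rng(seed)
    return L @ rng.standard_normal(n)

# A persistent path a trading agent might face under a trending regime.
path = simulate_fbm(128, hurst=0.7)
```

A meta-controller as described in the abstract would re-estimate H online and switch among policies trained on paths like these.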
Hams Alsirhani1 and Salma M. Elhag2, 1King Abdulaziz and His Companions Foundation for Giftedness and Creativity “Mawhiba”, Riyadh, Saudi Arabia, 2Department of Information Systems, Abdul-Aziz University, Jeddah, Saudi Arabia
Researchers have recognized the potential of employing soft robots in minimally invasive surgeries (MIS), which could significantly reduce the side effects associated with traditional surgical methods and enhance patient outcomes. However, the extent to which this potential is realized depends on the level of autonomy achieved by the soft robots. Lower levels of autonomy necessitate increased hands-on involvement during MIS, potentially compromising the robots’ ability to perform procedures with consistent accuracy. Consequently, achieving high levels of accuracy in the autonomy of soft robots remains a significant challenge. The autonomy of these robots is influenced by various factors, including their capacity to accurately classify their surroundings, particularly anatomical structures, which is crucial for effective decision-making. To address the challenge posed by our research question—How can the application of Deep Neural Networks (DNNs) and Deep Reinforcement Learning (DRL) improve the autonomy of soft robots in performing complex tasks in MIS, particularly in the precise classification of anatomical structures and decision-making?—we conducted an in-depth literature review and investigation into the integration of DNNs and DRL within this context. Our study employed a modelling and simulation methodology to evaluate and quantify the benefits of incorporating these advanced AI techniques. Through this approach, we measured the effects of DNNs and DRL on enhancing the autonomy of soft robots, particularly in their ability to perform complex tasks in MIS with improved precision and decision-making capabilities. This work represents a step toward optimizing robotic autonomy in surgical environments, potentially leading to more efficient and accurate outcomes in minimally invasive procedures.
Deep Reinforcement Learning, Deep Neural Networks, Minimally Invasive Surgery & Robot-Assisted Surgery.
Dirk Friedenberger, Lukas Pirl, Arne Boockmeyer, and Andreas Polze, Hasso Plattner Institute, University of Potsdam, Potsdam, Germany
Model-based systems engineering can be one of the key enablers for developing increasingly complex systems. Although the topic is actively developed in academia and industry, a holistic approach that is based on openly available tools and that considers all phases of the development life cycle is yet to be established. Addressing this, we propose a new approach in ontology-based systems engineering. We introduce micro models for modeling and use transformation to simplify the usability of the models. The micro models are loosely coupled, domain-specific, and result in an overall composite model. This decomposition of the overall model reduces complexity, offers reusability of models, and allows fast iterations when designing systems. Using the transformations, assets, such as source code, documentation, or simulations, can be created from the models. The prospects for automation can support agile development processes. To ensure accessibility, we have based our work on open source resources only. As an evaluation, we have used our approach to verify, develop, test, and simulate the Train Dispatcher in the Cloud (ZLiC), a cloud-based approach to digitalize the German Zugleitbetrieb. The prototype is being used to derive the requirements for a productive system.
Model-based systems engineering, ontology-based systems engineering, railway.
Madhushi D. W. Dias1, Dulan S. Dias2, and Michael O'Dea3, 1School of Computing, Ulster University, Belfast, United Kingdom, 2School of Mathematics and Physics, Queen’s University Belfast, United Kingdom, 3Department of Computer Science, University of York, York, United Kingdom
This research evaluates the influence of economic indicators and property attributes on housing market valuations in Northern Ireland (NI) using machine learning models. A comprehensive dataset was constructed from multiple sources, integrating property characteristics and economic indicators, and analyzed using various machine learning techniques. The Gradient Boosting Machine (GBM) and Distributed Random Forest (DRF) models demonstrated high predictive accuracy, with RMSE values of 37,051.21 and 37,471.5 on the test set, respectively. The stacked ensemble model achieved superior performance with an RMSE of 37,384.09 and an R² of 0.795. Post-modelling calibration further enhanced the accuracy, reducing the RMSE to 18,613.59 and achieving an R² of 1.0 on the test set. Key economic indicators such as the Bank of England bank rate, GDP of the UK, CPIH rate, and unemployment rate in NI were identified as influential factors. This study provides valuable insights for real estate stakeholders, potentially influencing pricing strategies, investment decisions, and policy formulations, and contributes to the fields of data science and real estate economics.
Housing Market Valuation, Economic Indicators, Machine Learning, Northern Ireland, Gradient Boosting Machine (GBM), Distributed Random Forest (DRF), Stacked Ensemble Model, Predictive Modelling, Real Estate Economics, Data Science.
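For reference, the two headline metrics in the abstract (RMSE and R²) have the standard definitions sketched below; the values reported above were computed on the study's held-out test set, and this snippet only illustrates the formulas, not the study's pipeline.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error: typical prediction error in price units."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    """Coefficient of determination: fraction of price variance explained
    (1.0 is a perfect fit; can be negative for a poor model)."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1 - ss_res / ss_tot)
```

An RMSE of 37,384.09 thus means the stacked ensemble's typical valuation error was about £37k before calibration.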
Xia Li, Allen Kim, Department of Software Engineering and Game Design and Development, Kennesaw State University, Marietta, USA
Classifying Non-Functional Requirements (NFRs) in the software development life cycle is critical. Inspired by the theory of transfer learning, researchers apply powerful pre-trained models for NFR classification. However, full fine-tuning by updating all parameters of the pre-trained models is often impractical due to the huge number of parameters involved (e.g., 175 billion trainable parameters in GPT-3). In this paper, we apply the Low-Rank Adaptation (LoRA) fine-tuning approach to NFR classification based on prompt-based learning to investigate its impact. The experiments show that LoRA can significantly reduce the execution cost (up to a 68% reduction) without much loss of classification effectiveness (only a 2%-3% decrease). The results show that LoRA can be practical in more complicated classification cases with larger datasets and pre-trained models.
Non-functional requirements classification, low-rank adaptation (LoRA), pre-trained models, fine-tuning
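The parameter saving behind LoRA can be seen in a few lines: instead of updating a frozen d×d weight W, one trains a rank-r update scaled by alpha/r, so only the two small factors A and B are trainable. The dimensions below (d=768, r=8) are illustrative defaults, not the paper's configuration.

```python
import numpy as np

d, r, alpha = 768, 8, 16          # hidden size, LoRA rank, scaling factor
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))   # frozen pre-trained weight (not updated)
A = rng.standard_normal((r, d)) * 0.01
B = np.zeros((d, r))              # B starts at zero, so training begins at W

# Effective weight used in the forward pass: W + (alpha/r) * B @ A
W_eff = W + (alpha / r) * (B @ A)

full_params = W.size              # what full fine-tuning would update
lora_params = A.size + B.size     # what LoRA actually trains
```

With these shapes LoRA trains roughly 2% of the parameters that full fine-tuning would touch, which is where the execution-cost reduction comes from.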
Emmanuel Udekwe and Chux Gervase Iwu, Economic and Management Sciences, University of the Western Cape, South Africa
The responsiveness of Human Resource Information Systems (HRIS) as alternative enablers that help sectors achieve competitiveness is significant. The need for HR performance in the healthcare sector has prompted considerable research, and the advantages of HRIS in healthcare continue to attract HR practitioners and researchers. However, it remains undetermined how HRIS could influence health workforce sustainability in the sector. For this reason, this study highlights the factors that limit the influence of HRIS in the healthcare sector of the Western Cape (WC) of South Africa (SA). Data collected from four public healthcare facilities in the WC were subjected to mixed-model research methods involving qualitative and quantitative scrutiny. Purposively selected employees participated in the study: forty-one interviews were conducted and forty-six questionnaires were collated. The study identified poor organisational structures, resistance to change, a lack of upgraded HRIS and Automated Information Systems (ISs), and deficiencies in knowledge and awareness of HRIS alongside manual HR practices, among other factors. Ethics approvals were granted by the public healthcare management of SA and the affiliated institution. The study concludes with procedures that strengthen HRIS through knowledge, awareness, improvement and automation of technology and infrastructure, and availability of funds. These conclusions are significant in addressing the limitations associated with workforce sustainability in the healthcare sector through HRIS. Further recommendations are provided in the study.
HRIS, Human Resource Information System, Manual HR, Healthcare Sector, Workforce Sustenance.
Hector Benitez Ventura1 and Yashraj Patel2, 1Michigan Medicine, University of Michigan, Ann Arbor MI, USA, 2Department of Computer Science, University of Michigan, Ann Arbor MI, USA
Introduction: Nursing is currently burdened by excessive documentation tasks, reducing the time available for direct patient care. This white paper explores the potential of ambient AI to alleviate these burdens through technology designed to integrate seamlessly into nursing workflows.
Ambient AI in Healthcare: Ambient AI refers to background technology that helps streamline healthcare processes, such as documentation, without disrupting clinical workflows. Developing a nursing-specific ambient AI involves choosing either an off-the-shelf or proprietary AI model, adding agent capabilities, fine-tuning it using relevant data, and validating its performance through internal testing, prospective validation, and compliance with regulatory standards.
EHR Infrastructure and Interoperability: Integrating ambient AI requires compatibility with electronic health record (EHR) systems. Enhanced interoperability between EHRs using technologies like FHIR APIs allows smoother data exchange, enabling AI to provide insights directly within clinical workflows.
AI Procurement Process: Implementing AI solutions in healthcare involves a structured procurement process, including budget cycle considerations, a bidding process, compliance checks, pilot programs, and contract negotiations to ensure that AI solutions meet clinical, regulatory, and operational needs.
Workflow Implementation: Successful implementation of AI technology in nursing requires prospective validation, careful integration into existing workflows, ongoing monitoring, and continuous updates. This process ensures that the solution is adopted effectively, supports clinical needs, and evolves over time.
Future of Nursing Ambient AI: The future of ambient AI in nursing is promising, focusing on reducing the documentation burden, improving clinical workflows, and advancing patient-centered care. However, only the most effective and well-implemented solutions will be able to meet the evolving challenges of healthcare delivery.
Artificial Intelligence, Healthcare Informatics, Ambient Technology, Nursing Workflows.
Rory Zhang1, Ang Li2, 1Chino Hills High School, 16150 Pomona Rincon Rd, Chino Hills, CA 91709, 2California State University Long Beach, 1250 Bellflower Blvd, Long Beach, CA 90840
This paper presents the design and implementation of an interactive eye-training game system aimed at improving visual acuity in children with amblyopia. The system leverages Unity for game development, Firebase for real-time data storage, and C# for scripting core functionalities. It features three gamified therapeutic activities—Mole Game, Gun Game, and Cup Game—each designed to enhance specific aspects of visual coordination and tracking. A dynamic difficulty adjustment mechanism, powered by player performance metrics, ensures personalized and engaging gameplay. The program also incorporates a comprehensive score management system with real-time UI updates and progress tracking. To validate the system, an experimental study compared adaptive and static difficulty models, highlighting the effectiveness of dynamic scaling in maintaining engagement and accelerating therapeutic outcomes. This research demonstrates the potential of gamified solutions in modern amblyopia therapy, addressing traditional challenges of adherence and motivation.
3D Modeling, Amblyopia, Machine Learning.
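The dynamic difficulty adjustment the abstract describes can be sketched as a simple feedback loop on a player-performance metric; the function below is a hypothetical illustration (the target hit rate, step size, and dead band are not from the paper, and the actual system is implemented in C#/Unity).

```python
def adjust_difficulty(level, hit_rate, target=0.7, step=0.1, band=0.05):
    """One step of a dynamic difficulty loop.

    Raise difficulty when the player's hit rate exceeds the target band,
    lower it when they fall below, and leave it alone inside the band;
    the level is clamped to [0, 1].
    """
    if hit_rate > target + band:
        level += step
    elif hit_rate < target - band:
        level -= step
    return min(1.0, max(0.0, level))
```

Keeping players near a fixed success rate is what sustains the engagement the study compared against a static-difficulty baseline.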
Ayla Zhang1, 2 and Jake Y. Chen1, 2, 1AlphaMind Club, 2Systems Pharmacology AI Research Center, School of Medicine, The University of Alabama at Birmingham
Pancreatic adenocarcinoma (PAAD) is an aggressive cancer with limited treatment options and a poor prognosis. This study introduces an innovative framework to prioritize therapeutic targets by integrating bioinformatics and AI-driven evaluations. Using PAGER, we identified 252 candidate genes and their associated pathways. ChatGPT-4 was employed to systematically evaluate these pathways across seven categories, including relevance to PAAD, druggability, and biomarker availability. This approach emphasized biologically significant pathways over solely genetic mutation prevalence. PAAD-related pathways demonstrated significantly higher scores than unrelated pathways (t(489) = -12.06, p < 0.00001, Hedges' g = 1.24), highlighting their relevance. Our results underscore the utility of AI in accelerating drug discovery while identifying novel, biologically validated therapeutic targets. Future research should focus on experimental validation and multi-omics integration to further refine this framework for clinical applications.
Pancreatic adenocarcinoma, drug target validation, ChatGPT-4 evaluation framework, oncology drug development.
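The effect size reported above, Hedges' g, is Cohen's d with a small-sample bias correction; the sketch below shows the standard formula on toy data (the inputs are illustrative, not the study's pathway scores).

```python
import numpy as np

def hedges_g(x, y):
    """Hedges' g between two samples: standardized mean difference
    (Cohen's d with pooled SD) times the small-sample correction
    1 - 3 / (4*(nx + ny) - 9)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                        / (nx + ny - 2))
    d = (np.mean(x) - np.mean(y)) / pooled_sd
    return d * (1 - 3 / (4 * (nx + ny) - 9))
```

A |g| of 1.24, as reported, is conventionally a large effect, i.e. the PAAD-related and unrelated pathway score distributions overlap little.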
Emily Chang and Nada Basit, Department of Computer Science, School of Engineering and Applied Science, University of Virginia, Charlottesville, Virginia, United States of America
Low-resource languages lack the data necessary to train a model outright, making cross-lingual transfer a potential solution. Even under this methodology, data scarcity remains an issue. We seek to identify a lower limit on the amount of data required to perform cross-lingual transfer learning, namely the smallest vocabulary size needed to create a sentence embedding space. Using various widely spoken languages as proxies for low-resource languages, we discover that the relationship between a sentence embedding's vocabulary size and performance is logarithmic, with performance plateauing at a vocabulary size of 25,000. However, this relationship cannot be replicated across all languages, and most low-resource languages lack this level of documentation. In establishing this lower bound, we can better assess whether a low-resource language has enough documentation to support the creation of a sentence embedding and language model.
Cross-lingual Transfer, Low-Resource Languages, Sentence Embedding Space.
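A logarithmic size-performance relationship, as the abstract describes, can be detected by fitting a line to performance against log(vocabulary size); the sketch below does exactly that on synthetic scores whose coefficients are illustrative, not the paper's measurements.

```python
import numpy as np

# Hypothetical scores following a logarithmic trend in vocabulary size
# (score = a * ln(vocab) + b with illustrative a = 0.08, b = 0.1).
vocab = np.array([1_000, 2_000, 5_000, 10_000, 25_000, 50_000])
score = 0.08 * np.log(vocab) + 0.1

# A straight-line fit in log-x space recovers the logarithmic law;
# diminishing slope per added token is what produces the plateau
# the paper observes around a 25,000-word vocabulary.
a, b = np.polyfit(np.log(vocab), score, 1)
```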
Pavel Cherkashin, Mindrock Institute, Los Altos, California, USA
This paper presents the Self-Optimization Principle (SOP) as a comprehensive framework for understanding complex systems across disciplines, including biology, artificial intelligence, materials science, and social sciences. Rooted in the foundational axioms of fractality, balance, evolution, and tolerance, the SOP elucidates the mechanisms that drive system emergence, adaptability, and resilience. By integrating the paradox of equivalence and examples from plasma-organic systems, we highlight the universality of these principles in linking diverse domains. Case studies demonstrate SOP’s applications in AI development, advanced material design, and social dynamics, offering actionable insights for innovation and sustainability.
Self-Optimization, Fractality, Balance, Evolution, Tolerance, Paradox of Equivalence, Plasma-Organic Systems, Complex Systems, Interdisciplinary Research.