Future Internet

Journal Description


Future Internet is an international, peer-reviewed, open access journal on internet technologies and the information society, published monthly online by MDPI.  
Impact Factor: 3.6 (2024); 5-Year Impact Factor: 3.5 (2024)

Latest Articles

43 pages, 5836 KB  
Review
Defending the Distributed Skies: A Comprehensive Literature Review of the Arena of Multi-Cloud Environment
by Labib Hasan Bayzid, Tonny Shekha Kar, Mohammad Tariqul Islam, Md. Shabiul Islam and Firoz Ahmed
Future Internet 2025, 17(12), 548; https://doi.org/10.3390/fi17120548 (registering DOI) - 28 Nov 2025
Abstract
The rapid implementation of multi-cloud architectures, i.e., the integration of services from multiple cloud providers, gives organizations enhanced flexibility, resilience, and vendor independence. However, the multi-cloud model presents complicated security challenges due to diverse platforms, fragmented governance, and an expanded attack surface. This paper presents a comprehensive literature review of the multi-cloud environment arena, focusing on the analysis of threats, vulnerabilities, cost optimization, mitigation strategies, and research trends. It covers a broad range of risks, including data breaches, insider threats, API exploitation, configuration errors, and emerging multi-vector attacks, as well as the cumulative complexity of aligning policies, managing identities, and ensuring compliance across diverse providers. The review analyzes existing and proposed defence mechanisms, spanning cryptographic techniques, fuzzy-logic decision frameworks, AI- and ML-driven detection systems, as well as integrated Identity and Access Management (IAM) systems. Analysis of relevant literature reveals a progression from foundational encryption systems toward more sophisticated, policy-driven, and collaboration-capable security frameworks. Additionally, the paper identifies significant research gaps in real-world validation, cost optimization, and unified governance models. This research departs from prior work by integrating multiple perspectives rather than limiting its scope to a single area such as security, defence, or cost optimization. It also provides new researchers with comprehensive background information on cloud architecture within a single article.

39 pages, 1506 KB  
Article
Permissionless Blockchain Recent Trends, Privacy Concerns, Potential Solutions and Secure Development Lifecycle
by Talgar Bayan, Adnan Yazici and Richard Banach
Future Internet 2025, 17(12), 547; https://doi.org/10.3390/fi17120547 - 28 Nov 2025
Abstract
Permissionless blockchains have evolved beyond cryptocurrency into foundations for Web3 applications, decentralized finance (DeFi), and digital asset ownership, yet this rapid expansion has intensified privacy vulnerabilities. This study provides a comprehensive review of recent trends, emerging privacy threats, and mitigation strategies in permissionless blockchain ecosystems. We examine six developments reshaping the landscape: meme coin proliferation on high-throughput networks, real-world asset tokenization linking on-chain activity to regulated identities, perpetual derivatives exposing trading strategies, institutional adoption concentrating holdings under regulatory oversight, prediction markets creating permanent records of beliefs, and blockchain–AI integration enabling both privacy-preserving analytics and advanced deanonymization. Through forensic analysis of documented incidents, we analyze seven critical privacy threats grounded in verifiable 2024–2025 transaction data: dust attacks, private key management failures, transaction linking, remote procedure call exposure, maximal extractable value extraction, signature hijacking, and smart contract vulnerabilities. Blockchain exploits reached $2.36 billion in 2024 and $2.47 billion in the first half of 2025, with over 80% attributed to compromised private keys and signature vulnerabilities. We evaluate privacy-enhancing technologies, including zero-knowledge proofs, ring signatures, and stealth addresses, identifying the gap between academic proposals and production deployment. We further propose a Secure Development Lifecycle framework incorporating measurable security controls validated against incident data. This work bridges the disconnect between privacy research and industrial practice by synthesizing current trends, documenting real-world threats with forensic evidence, and providing actionable insights for both researchers advancing privacy-preserving techniques and developers building secure blockchain applications.
(This article belongs to the Special Issue Security and Privacy in Blockchains and the IoT—3rd Edition)

26 pages, 1122 KB  
Article
Emotional Sequencing as a Marker of Manipulation in Social Media Disinformation
by Renatha Souza Vieira and Álvaro Figueira
Future Internet 2025, 17(12), 546; https://doi.org/10.3390/fi17120546 - 28 Nov 2025
Abstract
The proliferation of disinformation on social media platforms poses a significant challenge to the reliability of online information ecosystems and the protection of public discourse. This study investigates the role of emotional sequences in detecting intentionally misleading messages disseminated on social networks. To this end, we apply a methodological pipeline that combines semantic segmentation, automatic emotion recognition, and sequential pattern mining. Emotional sequences are extracted at the subsentence level, preserving each message’s temporal order of emotional cues. Comparative analyses reveal that disinformation messages exhibit a higher prevalence of negative emotions, particularly fear, anger, and sadness, interspersed with neutral segments. Moreover, false messages frequently employ complex emotional progressions—alternating between high-intensity negative emotions and emotionally neutral passages—designed to capture attention and maximize engagement. In contrast, messages from reliable sources tend to follow simpler, more linear emotional trajectories, with a greater prevalence of positive emotions such as joy. Our dataset encompasses multiple categories of disinformation, enabling a fine-grained analysis of how emotional sequencing varies across different types of misleading content. Furthermore, we validate our approach by comparing it against a publicly available disinformation dataset, demonstrating the generalizability of our findings. The results highlight the importance of analyzing temporal emotional patterns to distinguish disinformation from verified content, reinforcing the value of integrating emotional sequences into machine learning pipelines to enhance disinformation detection. This work contributes to the growing body of research emphasizing the relationship between emotional manipulation and the virality of misleading content online.
(This article belongs to the Special Issue Information Communication Technologies and Social Media)
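The pipeline this abstract describes, subsentence segmentation followed by emotion labeling and sequential pattern mining, can be sketched in a few lines. The comma/period segmenter and keyword lexicon below are simplified stand-ins for the paper's semantic segmentation and trained emotion-recognition model, which the abstract does not detail:

```python
from collections import Counter

# Toy lexicon standing in for a trained emotion-recognition model.
EMOTION_CUES = {
    "fear": {"danger", "threat", "afraid"},
    "anger": {"outrage", "furious", "scandal"},
    "joy": {"happy", "celebrate", "wonderful"},
}

def classify_segment(segment: str) -> str:
    """Label one subsentence-level segment with an emotion."""
    words = set(segment.lower().split())
    for emotion, cues in EMOTION_CUES.items():
        if words & cues:
            return emotion
    return "neutral"

def emotional_sequence(message: str) -> list:
    """Segment a message (here: naively, on commas and periods) and
    return the ordered sequence of emotion labels."""
    parts = message.replace(".", ",").split(",")
    return [classify_segment(p) for p in parts if p.strip()]

def transition_counts(seq: list) -> Counter:
    """Count adjacent emotion transitions, the raw material for
    sequential pattern mining over many messages."""
    return Counter(zip(seq, seq[1:]))

msg = "There is danger everywhere. Officials stay silent, an outrage. Share this now."
seq = emotional_sequence(msg)
# -> ['fear', 'neutral', 'anger', 'neutral']: negative emotions
#    interspersed with neutral segments, the pattern the paper flags.
```

Aggregating `transition_counts` per class over a corpus is what would surface the complex negative/neutral alternations the paper reports for disinformation.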

49 pages, 1583 KB  
Review
Federated Learning for Smart Cities: A Thematic Review of Challenges and Approaches
by Laila Alterkawi and Fadi K. Dib
Future Internet 2025, 17(12), 545; https://doi.org/10.3390/fi17120545 - 28 Nov 2025
Abstract
Federated Learning (FL) offers a promising way to train machine learning models collaboratively on decentralized edge devices, addressing key privacy, communication, and regulatory challenges in smart city environments. This survey adopts a narrative approach, guided by systematic review principles such as PRISMA and Kitchenham, to synthesize current FL research in urban contexts. Unlike prior domain-focused surveys, this work introduces a challenge-oriented taxonomy and integrates an explicit analysis of reproducibility, including datasets and deployment artifacts, to assess real-world readiness. The review begins by examining how FL supports the privacy-preserving analysis of environmental and mobility data. It then explores strategies for resource optimization, including load balancing, model compression, and hierarchical aggregation. Applications in anomaly and event detection across power grids, water infrastructure, and surveillance systems are also discussed. In the energy sector, the survey emphasizes the role of FL in demand forecasting, renewable integration, and sustainable logistics. Particular attention is given to security issues, including defenses against poisoning attacks, Byzantine faults, and inference threats. The study identifies ongoing challenges such as data heterogeneity, scalability, resource limitations at the edge, privacy–utility trade-offs, and lack of standardization. Finally, it outlines a structured roadmap to guide the development of reliable, scalable, and sustainable FL solutions for smart cities.
(This article belongs to the Special Issue Distributed Machine Learning and Federated Edge Computing for IoT)

21 pages, 481 KB  
Article
Transformer-Based Intrusion Detection for Post-5G and 6G Telecommunication Networks Using Dynamic Semantic Embedding
by Haonan Yan, Xin Pang, Shaopeng Zhou and Honghui Fan
Future Internet 2025, 17(12), 544; https://doi.org/10.3390/fi17120544 - 27 Nov 2025
Abstract
Post-5G and 6G telecommunication infrastructures face critical information security challenges due to increasing network complexity and sophisticated cyberattacks. Traditional intrusion detection systems based on statistical traffic analysis struggle to identify advanced threats that exploit semantic-level vulnerabilities in modern communication protocols. This paper proposes a Transformer-based intrusion detection system specifically designed for post-5G and 6G networks. Our approach integrates three key innovations: first, a comprehensive feature extraction method capturing both semantic content characteristics and communication behavior patterns; second, a dynamic semantic embedding mechanism that adaptively adjusts positional encoding based on semantic context changes; and third, a Transformer-based classifier with multi-head attention mechanisms to model long-range dependencies in attack sequences. Extensive experiments on the CICIDS2017 and UNSW-NB15 datasets demonstrate superior performance compared to LSTM, GRU, and CNN baselines across multiple evaluation metrics. Robustness testing and cross-dataset validation confirm strong generalization capability, making the system suitable for deployment in heterogeneous post-5G and 6G telecommunication environments.
(This article belongs to the Special Issue Information Security in Telecommunication Systems)

24 pages, 1336 KB  
Systematic Review
BERT-Based Approaches for Web Service Selection and Recommendation: A Systematic Review with a Focus on QoS Prediction
by Vijayalakshmi Mahanra Rao, R Kanesaraj Ramasamy and Md Shohel Sayeed
Future Internet 2025, 17(12), 543; https://doi.org/10.3390/fi17120543 - 27 Nov 2025
Abstract
Effective web service selection and recommendation are critical for ensuring high-quality performance in distributed and service-oriented systems. Recent research has increasingly explored the use of BERT (Bidirectional Encoder Representations from Transformers) to enhance semantic understanding of service descriptions, user requirements, and Quality of Service (QoS) prediction. This systematic review examines the application of BERT-based models in QoS-aware web service selection and recommendation. A structured database search was conducted across IEEE, ACM, ScienceDirect, and Google Scholar covering studies published between 2020 and 2024, resulting in twenty-five eligible articles based on predefined inclusion criteria and PRISMA screening. The review shows that BERT improves semantic representation and mitigates cold-start and sparsity issues, contributing to better service ranking and QoS prediction accuracy. However, challenges persist, including limited availability of benchmark datasets, high computational overhead, and limited interpretability of model decisions. The review identifies five key research gaps and outlines future directions, including domain-specific pre-training, hybrid semantic–numerical models, multi-modal QoS reasoning, and lightweight transformer architectures for deployment in dynamic and resource-constrained environments. These findings highlight the potential of BERT to support more intelligent, adaptive, and scalable web service management.

28 pages, 13653 KB  
Article
Computation Offloading in Space–Air–Ground Integrated Networks for Diverse Task Requirements with Integrated Reliability Mechanisms
by Yitian Chen and Yinghua Tong
Future Internet 2025, 17(12), 542; https://doi.org/10.3390/fi17120542 - 27 Nov 2025
Abstract
The sixth-generation (6G) system has been attracting increasing attention from both industry and academia, with the space–air–ground integrated network (SAGIN) identified as one of its key applications. This study investigates a SAGIN framework tailored for deployment in remote areas. To address the differing needs of users with emergency and routine tasks, an offloading strategy is proposed that enables direct offloading for emergency tasks and optimized UAV-assisted offloading for routine tasks. Additionally, considering the limited satellite coverage duration, a reliability mechanism for task offloading is designed. The study formulates a task offloading optimization problem aimed at maximizing the completion rate of routine tasks—while reducing their energy consumption and latency—under the premise of guaranteeing the completion of emergency task offloading. The problem is modeled as a Markov Decision Process (MDP). To solve it, a D-MAPPO reinforcement learning algorithm is proposed, which integrates the Dirichlet distribution with the Multi-Agent Proximal Policy Optimization (MAPPO) framework. Simulation results show that, compared with the MAPPO and PPO algorithms, the delay is reduced by 38% and 31%, respectively, while the energy consumption is reduced by 7% and 48%, respectively.
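A Dirichlet action head suits this kind of offloading problem because a sample is a point on the probability simplex, directly usable as a split of a task across targets. A minimal standard-library sketch (Dirichlet via normalized Gamma draws); the three targets and the fixed concentration parameters are illustrative assumptions, since in D-MAPPO the actor network would output them:

```python
import random

def sample_dirichlet(alphas):
    """Draw from Dirichlet(alphas) by normalizing independent
    Gamma(alpha_i, 1) samples -- the standard construction."""
    draws = [random.gammavariate(a, 1.0) for a in alphas]
    total = sum(draws)
    return [d / total for d in draws]

# Hypothetical offloading action: fractions of a routine task sent to
# (local device, UAV relay, satellite).
random.seed(0)
local, uav, satellite = sample_dirichlet([2.0, 5.0, 1.0])
assert abs((local + uav + satellite) - 1.0) < 1e-9  # always a valid split
```

Because every sample already sums to one, the policy needs no projection or clipping step when it emits offloading ratios.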

16 pages, 1392 KB  
Systematic Review
Artificial Intelligence-Enabled Facial Expression Analysis for Mental Health Assessment in Older Adults: A Systematic Review and Research Agenda
by Fernando M. Runzer-Colmenares, Nelson Luis Cahuapaza-Gutierrez, Cielo Cinthya Calderon-Hernandez and Christian Loret de Mola
Future Internet 2025, 17(12), 541; https://doi.org/10.3390/fi17120541 - 26 Nov 2025
Abstract
Facial expression analysis using artificial intelligence (AI) represents an emerging approach for assessing mental health, particularly in neurocognitive disorders. This study encompassed observational investigations that assessed facial expressions in individuals aged 60 years and above. A comprehensive literature search was carried out across PubMed, Scopus, EMBASE, and Web of Science. Risk of bias and study quality were assessed using the QUADAS-2 and CLAIM tools. Descriptive analysis and meta-analysis of proportions were performed using STATA version 19. The pooled effect size (ES) was calculated using a random-effects model (DerSimonian–Laird method), and results were presented with corresponding 95% confidence intervals (CI). Six studies were analyzed, comprising a total of 433 participants aged over 60 years, representing diverse AI applications in the detection of neurocognitive disorders. The disorders evaluated included mild cognitive impairment (MCI) (37.4%), dementia (29.3%), and Alzheimer’s disease (AD) (33.3%). Most studies (83.3%) used video-based facial recordings analyzed through deep learning algorithms and emotion recognition models. The pooled meta-analysis demonstrated that AI-based facial recognition algorithms achieved a high overall detection accuracy in older adults (ES = 0.84; 95% CI: 0.77–0.91), with the best performance observed in Alzheimer’s disease (ES = 0.93; 95% CI: 0.89–0.97). AI-based facial analysis demonstrates high, robust, and non-invasive accuracy for the early and differential detection of neurocognitive disorders, including MCI, dementia-related conditions, and AD, in older adults.

31 pages, 1069 KB  
Systematic Review
The Challenge of Dynamic Environments in Regard to RSSI-Based Indoor Wi-Fi Positioning—A Systematic Review
by Zi Yang Chia, Pey Yun Goh, Lee Yeng Ong and Shing Chiang Tan
Future Internet 2025, 17(12), 540; https://doi.org/10.3390/fi17120540 - 25 Nov 2025
Abstract
Among indoor positioning technologies, Wi-Fi fingerprinting using the Received Signal Strength Indicator (RSSI) is the most convenient and cost-effective method for indoor positioning. Instability and interference in wireless signal transmission cause significant variations in the RSSI, especially in a dynamic environment (DE). These factors hamper the accuracy of fingerprint-based indoor positioning systems (IPSs), as these systems may struggle to reliably match observed signal patterns with stored fingerprints. Thus, ensuring positioning accuracy is critically important when designing and implementing Wi-Fi IPSs. Currently, there is a lack of surveys that provide a detailed and systematic analysis of the impact of DEs on the accuracy and reliability of Wi-Fi indoor positioning. This systematic literature review (SLR) was conducted to examine three aspects of Wi-Fi indoor positioning based on the RSSI: the impact of a DE on indoor positioning accuracy, the importance of constructing radio maps for indoor localization, and the role of machine learning (ML)/deep learning (DL) models in predicting indoor position with minimal error despite the DE. This review was conducted according to a structured and well-defined methodology to search for and filter relevant studies on Wi-Fi indoor positioning using the RSSI. Through this systematic process, 128 papers (2018–2024) were identified as relevant and then extracted and thoroughly analyzed to effectively answer the specified research questions. Additionally, this review highlights gaps in existing research, suggests directions for future studies, and provides practical recommendations for enhancing Wi-Fi-based indoor positioning in DEs.

30 pages, 3129 KB  
Article
Research on a Blockchain Adaptive Differential Privacy Mechanism for Medical Data Protection
by Wang Feier and Guo Rongzuo
Future Internet 2025, 17(12), 539; https://doi.org/10.3390/fi17120539 - 25 Nov 2025
Abstract
To address the issues of privacy-utility imbalance, insufficient incentives, and lack of verifiable computation in current medical data sharing, this paper proposes a blockchain-based fair verification and adaptive differential privacy mechanism. The mechanism adopts an integrated design that systematically tackles three core challenges: privacy protection, fair incentives, and verifiability. Instead of using a traditional fixed privacy budget allocation, it introduces a reputation-aware adaptive strategy that dynamically adjusts the privacy budget based on the contributors’ historical behavior and data quality, thereby improving aggregation performance under the same privacy constraints. Meanwhile, a fair incentive verification layer is established via smart contracts to quantify and confirm data contributions on-chain, automatically executing reciprocal rewards and mitigating the trust and motivation deficiencies in collaboration. To ensure enforceable privacy guarantees, the mechanism integrates lightweight zero-knowledge proof (zk-SNARK) technology to publicly verify off-chain differential privacy computations, proving correctness without revealing private data and achieving auditable privacy protection. Experimental results on multiple real-world medical datasets demonstrate that the proposed mechanism significantly improves analytical accuracy and fairness in budget allocation compared with baseline approaches, while maintaining controllable system overhead. The innovation lies in the organic integration of adaptive differential privacy, blockchain, fair incentives, and zero-knowledge proofs, establishing a trustworthy, efficient, and fair framework for medical data sharing.
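The abstract does not give the allocation rule, so the sketch below assumes one plausible reading: split a total privacy budget in proportion to contributor reputation, so higher-quality data receives a larger epsilon and therefore less noise. The Laplace mechanism with scale sensitivity/epsilon is the standard differential-privacy construction:

```python
import math
import random

def allocate_budgets(reputations, total_eps):
    """Reputation-proportional split of a total privacy budget
    (an assumed rule; the paper's exact strategy may differ)."""
    s = sum(reputations)
    return [total_eps * r / s for r in reputations]

def laplace_noise(scale):
    """Sample Laplace(0, scale) via inverse-CDF of a uniform draw."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def privatize(value, eps, sensitivity=1.0):
    """Laplace mechanism: noise scale = sensitivity / epsilon."""
    return value + laplace_noise(sensitivity / eps)

random.seed(7)
budgets = allocate_budgets([0.9, 0.5, 0.1], total_eps=1.5)
# Higher reputation -> larger epsilon share -> less noise on that contribution.
noisy = [privatize(100.0, eps) for eps in budgets]
```

The point of the adaptive split is visible in the noise scales: the lowest-reputation contributor gets epsilon 0.1 and thus roughly nine times the noise of the highest-reputation one.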

20 pages, 13789 KB  
Article
Design of an Improved IoT-Based PV-Powered Soil Remote Monitoring System with Low Data Acquisition Failure Rate
by Fuqiang Li, Zhe Li, Lisai Gao and Chen Peng
Future Internet 2025, 17(12), 538; https://doi.org/10.3390/fi17120538 - 25 Nov 2025
Abstract
To enable remote and automatic monitoring of farmland soil information, this paper develops a soil monitoring system based on the Internet of Things (IoT), which mainly involves the development of a gateway server node, wireless sensor nodes, a remote monitoring platform, and photovoltaic (PV) modules. The Raspberry Pi 5-based gateway server periodically sends data acquisition commands to wireless sensor nodes via LoRa, receives soil data returned by sensor nodes, and stores them in a MySQL database. Using a remote monitoring platform, Internet users can monitor real-time and historical soil data stored in the database. The STM32F103C8T6-based wireless sensor node receives data acquisition commands from the gateway server, uses soil temperature and humidity sensors as well as a pH sensor to collect soil status, and then sends sensor data back to the gateway server via LoRa. The system is powered by both PV energy and batteries, which enhances its endurance. Experimental results show that the designed system works well in remotely monitoring soil information. Using the proposed query attempt dynamic adjustment (QADA) method, the wireless sensor node dynamically adjusts the number of query attempts, which reduces the data acquisition failure rate from 21–25% to no more than 0.33%. Using the obtained qualitative relationship that the data acquisition delay varies inversely with the LoRa transfer rate, the data acquisition delay can be reduced to less than 67 ms.
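The reported failure-rate drop follows from simple probability: if one LoRa query fails independently with probability p in the 0.21-0.25 range, then k attempts all fail with probability p^k, and p = 0.23 with k = 4 already gives about 0.28%, in line with the paper's reported 0.33% ceiling. A small simulation of a fixed retry cap (the actual QADA method adjusts this cap dynamically from observed conditions, which is not modeled here):

```python
import random

FAIL_PROB = 0.23  # assumed per-attempt failure rate, within the reported 21-25%

def query_sensor() -> bool:
    """One simulated LoRa query round trip; True means data received."""
    return random.random() > FAIL_PROB

def acquire(max_attempts: int) -> bool:
    """Retry the query up to max_attempts times before giving up."""
    return any(query_sensor() for _ in range(max_attempts))

random.seed(1)
trials = 10_000
failures = sum(not acquire(max_attempts=4) for _ in range(trials))
failure_rate = failures / trials
# Expected near 0.23**4 (about 0.28%), versus roughly 23% with one attempt.
```

Note `any()` short-circuits, so a node stops querying as soon as one attempt succeeds; extra attempts only cost airtime on the failing tail.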

43 pages, 1321 KB  
Review
Survey of Intra-Node GPU Interconnection in Scale-Up Network: Challenges, Status, Insights, and Future Directions
by Xiaoyong Song, Danyuan Zhou, Kai Li, Jiayuan Chen, Hao Zhang, Xiaoguang Zhang and Xuxia Zhong
Future Internet 2025, 17(12), 537; https://doi.org/10.3390/fi17120537 - 24 Nov 2025
Abstract
Nowadays, driven by the exponential growth of parameters and training data of AI applications and Large Language Models, a single GPU is no longer sufficient in terms of computing power and storage capacity. Building high-performance multi-GPU systems or a GPU cluster via vertical scaling (scale-up) has thus become an effective approach to break the bottleneck and has further emerged as a key research focus. Given that traditional inter-GPU communication technologies fail to meet the requirement of GPU interconnection in vertical scaling, a variety of high-performance inter-GPU communication protocols tailored for the scale-up domain have been proposed recently. Notably, due to the emerging nature of these demands and technologies, academic research in this field remains scarce, with limited deep participation from the academic community. Inspired by this trend, this article identifies the challenges and requirements of a scale-up network, analyzes the bottlenecks of traditional technologies like PCIe in a scale-up network, and surveys the emerging scale-up targeted technologies, including NVLink, OISA, UALink, SUE, and other X-Links. Then, an in-depth comparison and discussion is conducted, and we express our insights in protocol design and related technologies. We also highlight that existing emerging protocols and technologies still face limitations, with certain technical mechanisms requiring further exploration. Finally, this article presents future research directions and opportunities. As the first review article fully focusing on intra-node GPU interconnection in a scale-up network, this article aims to provide valuable insights and guidance for future research in this emerging field, and we hope to establish a foundation that will inspire and direct subsequent studies.

33 pages, 708 KB  
Review
A Literature Review of Personalized Large Language Models for Email Generation and Automation
by Rodrigo Novelo, Rodrigo Rocha Silva and Jorge Bernardino
Future Internet 2025, 17(12), 536; https://doi.org/10.3390/fi17120536 - 24 Nov 2025
Abstract
In 2024, a total of 361 billion emails were sent and received by businesses and consumers each day. Email remains the preferred method of communication for work-related matters, with knowledge workers spending two to five hours a day managing their inboxes. The advent of Large Language Models (LLMs) has introduced new possibilities for personalized email automation, offering context-aware and stylistically adaptive responses. However, achieving effective personalization introduces technical, ethical, and security challenges. This survey presents a systematic review of 32 papers published between 2021 and 2025, identified using the PRISMA methodology across Google Scholar, IEEE Xplore, and the ACM Digital Library. Our analysis reveals that state-of-the-art email assistants integrate retrieval-augmented generation (RAG) and parameter-efficient fine-tuning (PEFT) with feedback-driven refinement, supported by user-centric interfaces and privacy-aware architectures. Nevertheless, these advances also expose systems to new risks such as Trojan plugins and adversarial prompt injections, highlighting the importance of integrated security frameworks. This review provides a structured approach to advancing personalized LLM-based email systems, identifying persistent research gaps in adaptive learning, benchmark development, and ethical design. This work is intended to guide researchers and developers who are looking to create secure, efficient, and human-aligned communication assistants.

19 pages, 4893 KB  
Article
LLMs in Staging: An Orchestrated LLM Workflow for Structured Augmentation with Fact Scoring
by Giuseppe Trimigno, Gianfranco Lombardo, Michele Tomaiuolo, Stefano Cagnoni and Agostino Poggi
Future Internet 2025, 17(12), 535; https://doi.org/10.3390/fi17120535 - 24 Nov 2025
Abstract
Retrieval-augmented generation (RAG) enriches prompts with external knowledge, but it often relies on additional infrastructure that may be impractical in resource-constrained or offline settings. In addition, updating the internal knowledge of a language model through retraining is costly and inflexible. To address these limitations, we propose an explainable and structured prompt augmentation pipeline that enhances inputs using pre-trained models and rule-based extractors, without requiring external sources. We describe this approach as an orchestrated LLM workflow: a structured sequence in which lightweight LLM modules assume specialized roles. Specifically, (1) an extractor module identifies factual triples from input prompts by combining dependency parsing with a rule-based extraction algorithm; (2) a scorer module, based on a generic lightweight LLM, evaluates the importance of each triple via its self-attention patterns, leveraging internal beliefs to promote explainability and trustworthy cooperation with the downstream model; (3) a performer module processes the augmented prompt for downstream tasks in supervised fine-tuning or zero-shot settings. Much like in a theater staging, each module operates transparently behind the scenes to support and elevate the performer’s final output. We evaluate this approach across multiple performer architectures (encoder-only, encoder-decoder, and decoder-only) and NLP tasks (multiple-choice QA, open-book QA, and summarization). Our results show that this structured augmentation with scored facts yields consistent improvements compared to baseline prompting: up to a 28.78% accuracy improvement for multiple-choice QA, up to a 9.42% BLEURT improvement for open-book QA, and up to a 18.14% ROUGE-L improvement for summarization. By decoupling knowledge scoring from task execution, our method provides a practical, interpretable, and low-cost alternative to RAG in static or knowledge-limited environments.
42 pages, 3449 KB  
Article
Blockchain–AI–Geolocation Integrated Architecture for Mobile Identity and OTP Verification
by Gajasin Gamage Damith Sulochana and Dilshan Indraraj De Silva
Future Internet 2025, 17(12), 534; https://doi.org/10.3390/fi17120534 - 23 Nov 2025
Abstract
One-Time Passwords (OTPs) are a core component of multi-factor authentication in banking, e-commerce, and digital platforms. However, conventional delivery channels such as SMS and email are increasingly vulnerable to SIM-swap fraud, phishing, spoofing, and session hijacking. This study proposes an end-to-end mobile authentication architecture that integrates a permissioned Hyperledger Fabric blockchain for tamper-evident identity management, an AI-driven risk engine for behavioral and SIM-swap anomaly detection, Zero-Knowledge Proofs (ZKPs) for privacy-preserving verification, and geolocation-bound OTP validation for contextual assurance. Hyperledger Fabric is selected for its permissioned governance, configurable endorsement policies, and deterministic chaincode execution, which together support regulatory compliance and high throughput without the overhead of cryptocurrency. The system is implemented as a set of modular microservices that combine encrypted off-chain storage with on-chain hash references and smart-contract–enforced policies for geofencing and privacy protection. Experimental results show sub-0.5 s total verification latency (including ZKP overhead), approximately 850 transactions per second throughput under an OR-endorsement policy, and an F1-score of 0.88 for SIM-swap detection. Collectively, these findings demonstrate a scalable, privacy-centric, and interoperable solution that strengthens OTP-based authentication while preserving user confidentiality, operational transparency, and regulatory compliance across mobile network operators.Full article
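The geolocation-bound OTP validation step can be sketched with standard primitives. This is a minimal illustration, not the paper's implementation: the real system anchors identity on Hyperledger Fabric and uses ZKPs, whereas here we only show the contextual check "OTP matches AND the device lies inside a registered geofence", assuming a shared secret and a circular geofence.

```python
# Sketch of geolocation-bound OTP validation (illustrative assumptions:
# a pre-shared secret, a circular geofence, and an RFC 6238-style OTP).
import hmac, hashlib, struct, math

def totp(secret: bytes, timestep: int, digits: int = 6) -> str:
    """RFC 6238-style OTP for a given time-step counter."""
    msg = struct.pack(">Q", timestep)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def verify(secret, submitted, timestep, device_loc, fence_center, radius_km):
    """Accept only if the OTP matches AND the device is inside the geofence."""
    otp_ok = hmac.compare_digest(totp(secret, timestep), submitted)
    in_fence = haversine_km(*device_loc, *fence_center) <= radius_km
    return otp_ok and in_fence
```

In the paper's architecture the geofence policy itself would be enforced by smart contracts rather than application code as shown here.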
(This article belongs to the Special Issue Advances in Wireless and Mobile Networking—2nd Edition)
23 pages, 1937 KB  
Article
RIS-Assisted Joint Communication, Sensing, and Multi-Tier Computing Systems
by Yunzhe Wang and Minzheng Li
Future Internet 2025, 17(12), 533; https://doi.org/10.3390/fi17120533 - 23 Nov 2025
Abstract
This paper investigates the application of Reconfigurable Intelligent Surfaces (RIS) in Joint Communication, Sensing, and Multi-tier Computing (JCSMC). An RIS-assisted JCSMC framework is proposed, wherein a full-duplex multi-antenna Base Station (BS) is employed to sense targets and provide edge computation services to User Equipment (UE). To enhance computational efficiency, a Multi-Tier Computing (MTC) architecture is adopted, enabling joint processing of computing tasks through the deployment of both the BS and the Cloud Servers (CS). Meanwhile, this paper studies the potential advantages of RIS in the proposed framework: RIS can improve the efficiency of resource sharing between the sensing and computing functions and thereby maximize the computation offloading capability. This study aims to maximize the computation rate by jointly optimizing the BS transmission beamformer, RIS reflection coefficients, and computational resource allocation. The ensuing non-convex optimization problems are addressed using an alternating optimization algorithm based on Block Coordinate Ascent (BCA) for the partial offloading mode, which ensures convergence to a local optimum; the proposed joint design algorithms are then extended to the scenario with imperfect Self-Interference Cancellation. The effectiveness of the proposed algorithm was confirmed by analyzing and contrasting the simulation results with the benchmark schemes. The simulation results show that, when BS resources are limited, utilizing the MTC architecture can significantly improve the computation rate. In addition, the proposed RIS-assisted JCSMC framework is superior to other benchmark schemes in balancing resource utilization between different functions, achieving superior computing power while maintaining sensing quality. Full article
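The Block Coordinate Ascent pattern used above can be illustrated at toy scale: split the variables into blocks, then repeatedly maximize over one block while holding the others fixed, which yields a monotonically non-decreasing objective. The concave objective below is a stand-in for demonstration only, not the paper's beamforming/RIS problem.

```python
# Toy Block Coordinate Ascent (BCA): each block's subproblem is solved in
# closed form (set the partial derivative to zero), mirroring how BCA
# reduces a joint non-convex design to tractable per-block updates.

def bca(iters: int = 50):
    """Alternately maximize f(x, y) = -(x-1)^2 - (y-2)^2 + x*y per block.
    Returns the final iterate and the objective history per sweep."""
    x, y = 0.0, 0.0
    history = []
    for _ in range(iters):
        x = 1.0 + y / 2.0          # argmax over x with y fixed
        y = 2.0 + x / 2.0          # argmax over y with x fixed
        history.append(-(x - 1) ** 2 - (y - 2) ** 2 + x * y)
    return x, y, history
```

For this objective the sweeps converge to the stationary point (x, y) = (8/3, 10/3), and the recorded objective never decreases, which is the convergence-to-a-local-optimum guarantee the abstract invokes.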
25 pages, 9168 KB  
Article
A Resilient Deep Learning Framework for Mobile Malware Detection: From Architecture to Deployment
by Aysha Alfaw, Mohsen Rouached and Aymen Akremi
Future Internet 2025, 17(12), 532; https://doi.org/10.3390/fi17120532 - 21 Nov 2025
Abstract
Mobile devices are frequent targets of malware due to the large volume of sensitive personal, financial, and corporate data they process. Traditional static, dynamic, and hybrid analysis methods are increasingly insufficient against evolving threats. This paper proposes a resilient deep learning framework for Android malware detection, integrating multiple models and a CPU-aware selection algorithm to balance accuracy and efficiency on mobile devices. Two benchmark datasets (i.e., the Android Malware Dataset for Machine Learning and CIC-InvesAndMal2019) were used to evaluate five deep learning models: DNN, CNN, RNN, LSTM, and CNN-LSTM. The results show that CNN-LSTM achieves the highest detection accuracy of 97.4% on CIC-InvesAndMal2019, while CNN delivers a strong accuracy of 98.07% with the lowest CPU usage (5.2%) on the Android Malware Dataset, making it the most practical for on-device deployment. The framework is implemented as an Android application using TensorFlow Lite, providing near-real-time malware detection with an inference time of under 150 ms and memory usage below 50 MB. These findings confirm the effectiveness of deep learning for mobile malware detection and demonstrate the feasibility of deploying resilient detection systems on resource-constrained devices. Full article
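A CPU-aware selection rule of the kind the framework describes can be sketched as a simple constrained choice: among candidate detectors, prefer the most accurate one whose measured CPU usage fits the device's current budget. The model names echo the paper, but the accuracy/CPU numbers below are illustrative placeholders and the policy itself is our assumption, not the paper's exact algorithm.

```python
# Hypothetical CPU-aware model selection sketch. The profile numbers are
# placeholders, NOT the paper's measurements.
MODELS = {
    "CNN":      {"accuracy": 0.96, "cpu": 5.0},
    "LSTM":     {"accuracy": 0.97, "cpu": 9.0},
    "CNN-LSTM": {"accuracy": 0.98, "cpu": 14.0},
}

def select_model(models: dict, cpu_budget: float) -> str:
    """Pick the most accurate model whose CPU usage fits the budget;
    if none fits, fall back to the lightest available model."""
    feasible = [n for n, m in models.items() if m["cpu"] <= cpu_budget]
    if feasible:
        return max(feasible, key=lambda n: models[n]["accuracy"])
    return min(models, key=lambda n: models[n]["cpu"])
```

Under this rule a generous budget selects the heavier, more accurate model, while a tight budget degrades gracefully to the cheapest detector rather than refusing to run.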
(This article belongs to the Special Issue Cybersecurity in the Age of AI, IoT, and Edge Computing)
21 pages, 8038 KB  
Article
Semantic Data Federated Query Optimization Based on Decomposition of Block-Level Subqueries
by Yuan Yao and Yang Zhang
Future Internet 2025, 17(11), 531; https://doi.org/10.3390/fi17110531 - 20 Nov 2025
Abstract
The digital age and the rise of Internet of Things technology have led to an explosion of data, including vast amounts of semantic data. In the context of large-scale semantic data graphs, centralized storage struggles to meet query efficiency requirements. This has led to a shift towards distributed semantic data systems. In federated semantic data systems, ensuring both query efficiency and comprehensive results is challenging because of data independence and privacy constraints. To address this, we propose a query processing framework featuring a block-level star decomposition method for generating efficient query plans, augmented by auxiliary indexes to guarantee the completeness of the results. A specialized FEDERATEDAND BY keyword is introduced for federated environments, and a partition-based parallel assembly method accelerates result integration. Our approach demonstrably improves query efficiency and is analyzed for its potential application in energy systems. Full article
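The star-decomposition idea can be illustrated at toy scale: triple patterns that share a subject variable form one "star", and each star can be shipped to a federation member as a block-level subquery. The patterns and variable names below are illustrative, and this sketch omits the paper's auxiliary indexes and parallel assembly.

```python
# Sketch of star-shaped subquery decomposition for federated
# SPARQL-style queries: group triple patterns by subject variable.
from collections import defaultdict

def star_decompose(triple_patterns):
    """Group (subject, predicate, object) patterns by subject variable.
    Each resulting group is one star-shaped block-level subquery."""
    stars = defaultdict(list)
    for s, p, o in triple_patterns:
        stars[s].append((s, p, o))
    return dict(stars)
```

Evaluating each star as a unit reduces the number of remote round trips compared with shipping triple patterns one by one, which is the efficiency lever behind block-level decomposition.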
(This article belongs to the Special Issue Internet of Things Technology and Service Computing)
21 pages, 653 KB  
Article
A Stateful Extension to P4THLS for Advanced Telemetry and Flow Control
by Mostafa Abbasmollaei, Tarek Ould-Bachir and Yvon Savaria
Future Internet 2025, 17(11), 530; https://doi.org/10.3390/fi17110530 - 20 Nov 2025
Abstract
Programmable data planes are increasingly essential for enabling In-band Network Telemetry (INT), fine-grained monitoring, and congestion-aware packet processing. Although the P4 language provides a high-level abstraction to describe such behaviors, implementing them efficiently on FPGA-based platforms remains challenging due to hardware constraints and limited compiler support. Building on the P4THLS framework, which leverages HLS for FPGA data-plane programmability, this paper extends the approach by introducing support for P4-style stateful objects and a structured metadata propagation mechanism throughout the processing pipeline. These extensions enrich pipeline logic with real-time context and flow-level state, thereby facilitating advanced applications while preserving programmability. The generated codebase remains extensible and customizable, allowing developers to adapt the design to various scenarios. We implement two representative use cases to demonstrate the effectiveness of the approach: an INT-enabled forwarding engine that embeds hop-by-hop telemetry into packets and a congestion-aware switch that dynamically adapts to queue conditions. Evaluation of an AMD Alveo U280 FPGA implementation reveals that incorporating INT support adds roughly 900 LUTs and 1000 Flip-Flops relative to the baseline switch. Furthermore, the proposed meter maintains rate measurement errors below 3% at 700 Mbps and achieves up to a 5× reduction in LUT and a 2× reduction in Flip-Flop usage compared to existing FPGA-based stateful designs, substantially expanding the applicability of P4THLS for complex and performance-critical network functions. Full article
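A meter is one of the classic P4 stateful objects the extension targets. The toy single-rate token bucket below sketches the behavior such an object provides (refill tokens at the configured rate, then mark each packet); it is a behavioral illustration in software, not the paper's hardware design, and the parameters are illustrative.

```python
# Toy single-rate token-bucket meter, sketching the kind of per-flow
# stateful object (a P4-style meter) added to the P4THLS pipeline.

class TokenBucketMeter:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps        # refill rate, bytes per second
        self.burst = burst_bytes    # bucket capacity, bytes
        self.tokens = burst_bytes   # bucket starts full
        self.last_ts = 0.0

    def color(self, ts: float, pkt_bytes: int) -> str:
        """Refill tokens for the elapsed time, then mark the packet
        green (conforming) or red (exceeding the configured rate)."""
        self.tokens = min(self.burst,
                          self.tokens + (ts - self.last_ts) * self.rate)
        self.last_ts = ts
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return "green"
        return "red"
```

On an FPGA the same state (tokens, last timestamp) lives in on-chip registers updated at line rate, which is why the LUT/Flip-Flop cost of the stateful extension is the figure of merit the abstract reports.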
(This article belongs to the Special Issue Key Enabling Technologies for Beyond 5G Networks—2nd Edition)
38 pages, 4889 KB  
Article
Top-K Feature Selection for IoT Intrusion Detection: Contributions of XGBoost, LightGBM, and Random Forest
by Brou Médard Kouassi, Abou Bakary Ballo, Kacoutchy Jean Ayikpa, Diarra Mamadou and Minfonga Zié Jérôme Coulibaly
Future Internet 2025, 17(11), 529; https://doi.org/10.3390/fi17110529 - 19 Nov 2025
Abstract
The rapid growth of the Internet of Things (IoT) has created vast networks of interconnected devices that are increasingly exposed to cyberattacks. Ensuring the security of such distributed systems requires efficient and adaptive intrusion detection mechanisms. However, conventional methods face limitations in processing large and complex feature spaces. To address this issue, this study proposes an optimized intrusion detection approach based on Top-K feature selection combined with ensemble learning models, evaluated on the CICIoMT2024 dataset. Three algorithms, XGBoost, LightGBM, and Random Forest, were trained and tested on IoT datasets using three feature configurations: Top-10, Top-15, and the complete feature set. The results show that the Random Forest model provides the best balance between accuracy and computational efficiency, achieving 91.7% accuracy and an F1-score of 93% with the Top-10 subset while reducing processing time by 35%. These findings demonstrate that the Top-K selection strategy enhances the interpretability and performance of IDSs in IoT environments. Future work will extend this framework to real-time adaptive detection and edge computing integration for large-scale IoT deployments.Full article
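The Top-K selection step itself is simple to sketch: rank features by an importance score (e.g. a trained Random Forest's feature importances) and keep only the K best columns before training the detector. The feature names and scores below are illustrative, not drawn from CICIoMT2024.

```python
# Minimal sketch of Top-K feature selection for an IoT IDS pipeline.
# Importance scores would come from a fitted ensemble model; here they
# are placeholder values for illustration.

def top_k_features(importances: dict, k: int) -> list:
    """Return the k feature names with the highest importance scores."""
    return sorted(importances, key=importances.get, reverse=True)[:k]

def project(rows, names, keep):
    """Project dataset rows (lists aligned with `names`) onto the
    selected `keep` columns, preserving their order in `keep`."""
    idx = [names.index(f) for f in keep]
    return [[r[i] for i in idx] for r in rows]
```

Shrinking the feature space this way is what buys the reported reduction in processing time: the classifier trains and predicts on 10 columns instead of the full set, at a modest cost in information.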
(This article belongs to the Special Issue Machine Learning and Internet of Things in Industry 4.0)
Topics

Topic in Education Sciences, Future Internet, Information, Sustainability
Advances in Online and Distance Learning
Topic Editors: Neil Gordon, Han Reichgelt
Deadline: 31 December 2025

Topic in Applied Sciences, Electronics, Future Internet, IoT, Technologies, Inventions, Sensors, Vehicles
Next-Generation IoT and Smart Systems for Communication and Sensing
Topic Editors: Dinh-Thuan Do, Vitor Fialho, Luis Pires, Francisco Rego, Ricardo Santos, Vasco Velez
Deadline: 31 January 2026

Topic in Entropy, Future Internet, Healthcare, Sensors, Data
Communications Challenges in Health and Well-Being, 2nd Edition
Topic Editors: Dragana Bajic, Konstantinos Katzis, Gordana Gardasevic
Deadline: 28 February 2026

Topic in AI, Computers, Education Sciences, Societies, Future Internet, Technologies
AI Trends in Teacher and Student Training
Topic Editors: José Fernández-Cerero, Marta Montenegro-Rueda
Deadline: 11 March 2026

Special Issues

Special Issue in Future Internet
Artificial Intelligence for Smart Healthcare: Methods, Applications, and Challenges
Guest Editors: Xiuyi Fan, Si Yong Yeo, Siyuan Liu
Deadline: 30 November 2025

Special Issue in Future Internet
Convergence of IoT, Edge and Cloud Systems
Guest Editors: Dandan Li, Li Duan
Deadline: 30 November 2025

Special Issue in Future Internet
Information Security in Telecommunication Systems
Guest Editors: Xuyang Jing, Xian Li, Cong Wang
Deadline: 20 December 2025

Special Issue in Future Internet
The Future Internet of Medical Things, 3rd Edition
Guest Editor: Matthew Pediaditis
Deadline: 31 December 2025

Topical Collections

Topical Collection in Future Internet
Information Systems Security
Collection Editor: Luis Javier Garcia Villalba

Topical Collection in Future Internet
Innovative People-Centered Solutions Applied to Industries, Cities and Societies
Collection Editors: Dino Giuli, Filipe Portela

Topical Collection in Future Internet
Featured Reviews of Future Internet Research
Collection Editor: Dino Giuli

Topical Collection in Future Internet
Machine Learning Approaches for User Identity
Collection Editors: Kaushik Roy, Mustafa Atay, Ajita Rattani
Future Internet, EISSN 1999-5903, Published by MDPI
© 1996-2025 MDPI (Basel, Switzerland) unless otherwise stated