Effective hybrid video denoising and blending framework for Internet of Remote Things (IoRT) environments

The Internet of Remote Things (IoRT) has emerged as a transformative paradigm, merging IoT capabilities with remote technologies. IoRT environments, featuring interconnected sensors and robots, face challenges such as sensor noise and low-light conditions that compromise video stream quality. This paper proposes a Hybrid Video Denoising and Blending Framework to address these shortcomings in IoRT video data. Leveraging spatial- and temporal-domain denoising techniques, the framework effectively removes noise while preserving crucial details. Advanced blending algorithms enable seamless fusion of data from multiple sources, enhancing decision-making in real-world scenarios. The framework adopts a dynamic weighted-averaging approach and an optimal sensor selection mechanism to intelligently choose informative data sources, improving the quality of the blended output. Extensive experiments on a diverse IoRT dataset demonstrate the framework's superiority over state-of-the-art techniques, offering significant gains in video quality, noise reduction, and data fusion accuracy. Applications such as surveillance, autonomous systems, and industrial automation can benefit from the framework's ability to provide clearer, more reliable visual information. In conclusion, this research introduces a pioneering approach to mitigating video noise and enhancing data fusion in IoRT, showing promising results and paving the way for further research on the integration of remote technologies and IoT.


Introduction
In an era where interconnected devices are reshaping industries and our daily lives, the Internet of Things (IoT) has emerged as a transformative force. IoT technologies have extended their influence beyond urban landscapes, finding applications in the most remote and challenging environments. This expansion has given rise to the concept of the Internet of Remote Things (IoRT), a subset of IoT tailored to environments where direct human interaction is limited, dangerous, or even impossible. IoRT brings with it a new set of possibilities and complexities, particularly when it comes to processing video data, a cornerstone of modern information exchange.

The unveiling of IoRT and its expansive implications
The term "Internet of Remote Things (IoRT)" refers to the integration of remote or distant devices, sensors, and systems into the broader framework of the Internet of Things (IoT). While the traditional IoT focuses on connecting devices within a localized environment, IoRT extends this concept to include objects and technologies that operate in remote or distant locations. The key distinguishing factor is the geographical separation of the devices, which can be situated in areas that are challenging to access or are not part of the immediate physical vicinity.
The concept of the Internet of Remote Things (IoRT) gains significance as our technological landscape evolves, pushing the boundaries of what is achievable with connectivity. In the realm of healthcare, IoRT can be envisioned through the deployment of medical sensors and devices in remote patient monitoring. Patients residing in distant locations or those with limited access to healthcare facilities can benefit from continuous monitoring of vital signs, enabling healthcare professionals to remotely assess and respond to their health conditions in real time. This not only enhances patient care but also contributes to the early detection of health issues.
IoRT is not limited to terrestrial applications; it extends into the exploration of space. In space missions, where human presence is limited or non-existent, IoRT can be instrumental in gathering data and controlling devices on distant planets or celestial bodies. Robotic explorers equipped with IoRT capabilities can navigate and perform tasks in extraterrestrial environments, relaying crucial information back to Earth for analysis and decision-making. The energy sector is another domain where IoRT can revolutionize operations. In remote areas with energy infrastructure, such as offshore wind farms or isolated power stations, IoRT enables efficient monitoring and management of equipment. Sensors and smart devices can detect anomalies, optimize energy production, and enhance maintenance procedures, ultimately ensuring the reliability and sustainability of energy sources in challenging environments. One notable aspect of IoRT is its ability to bridge geographical gaps and overcome logistical challenges. By incorporating remote devices into the IoT ecosystem, industries can enhance efficiency, optimize resource utilization, and gather valuable insights from previously inaccessible locations. For instance, in agriculture, IoRT might involve deploying sensors in remote fields to monitor soil conditions, weather patterns, and crop health, enabling farmers to make data-driven decisions even in distant agricultural landscapes.
In essence, the Internet of Remote Things extends the reach and impact of the IoT paradigm, bringing connectivity and intelligence to remote environments where traditional connectivity solutions might be impractical or unfeasible. As technology continues to advance, IoRT holds the promise of unlocking new possibilities for remote monitoring, automation, and control across a diverse range of industries and applications. As IoRT continues to evolve, it brings forth a paradigm shift in how we perceive and interact with technology across vast distances. The ability to connect and control devices in remote locations not only opens new avenues for exploration and industry but also fosters a more interconnected and intelligent world, where the benefits of technology can reach even the most distant corners of our planet and beyond.

Navigating challenges in video data quality within IoRT
In Internet of Remote Things (IoRT) environments, several challenges pose significant hurdles to the seamless operation and advancement of robotic technologies. One such challenge is the constraint on bandwidth. The interconnected nature of robotic devices requires a substantial amount of data exchange for real-time communication and coordination. Limited bandwidth can lead to delays in data transmission, impacting the responsiveness of robotic systems. This constraint is particularly critical in scenarios where split-second decisions are vital, such as in autonomous vehicles or emergency response robots. The implication is a potential compromise in the efficiency and reliability of IoRT applications, potentially hindering their widespread adoption in time-sensitive domains.
In the realm of the Internet of Remote Things (IoRT), challenges abound, with noisy data posing a significant hurdle due to environmental factors such as interference and sensor malfunctions that can distort crucial sensor information. Overcoming these challenges is essential to ensure the reliability and safety of robotic technologies. Furthermore, security concerns escalate with the increased integration of robots across various industries, where cyber-attacks and unauthorized access could compromise robotic systems, especially in critical applications like healthcare. Interoperability issues also emerge as a hurdle in IoRT, as the lack of standardized communication protocols may lead to fragmentation, limiting the scalability and adaptability of robotic technologies across diverse applications. In parallel, real-time responsiveness is critical in IoRT scenarios, where any latency or video quality degradation risks delays and degraded system efficiency. Managing data transmission and bandwidth becomes challenging in interconnected environments, necessitating efficient compression algorithms and networking solutions. High-quality video streams demand substantial bandwidth, a key factor for secure transmission and storage, particularly in human-robot interaction applications. The clarity of visuals in Human-Robot Interaction (HRI) becomes pivotal for effective communication, underscoring the significance of video quality in seamlessly integrating robotic technologies. Addressing these multifaceted challenges is imperative for unlocking the full potential of IoRT devices and fostering enhanced safety, efficiency, and user acceptance.

Paving the way for advanced video processing in IoRT
Denoising techniques play a crucial role in enhancing the quality of images by reducing unwanted noise, and several algorithms have been developed to address this challenge. On the blending front, techniques such as Alpha Blending provide a fundamental approach to combining images, assigning weights to pixels to control the intensity of each image's contribution. Widely used in image overlays and compositing, Alpha Blending is efficient and straightforward. While its simplicity is an advantage, performance evaluations highlight potential challenges in achieving seamless transitions, particularly in scenarios where image characteristics significantly differ.
Poisson Blending focuses on achieving seamless transitions between images through solving a Poisson equation, essential for image stitching and panorama creation. Multi-resolution Blending, blending images at different resolutions, is valuable in virtual reality and HDR imaging, ensuring efficient and visually appealing results. Exposure Fusion excels in HDR photography, blending multiple exposures for an extended dynamic range. Gradient Domain Blending minimizes gradient differences for seamless transitions in image stitching and compositing. Traditional video processing methods fall short in IoRT, prompting the proposal of a hybrid framework designed to tackle challenges like noise and quality degradation. The forthcoming sections delve into the methodology, experiments, and results showcasing the potential of this hybrid framework to transform IoRT video data processing.

Related works
This survey's primary aim is to explore the domain of video processing within IoRT environments, underlining the necessity to enhance video quality for better functionality and performance of remote systems. As IoRT continues to burgeon, comprehending and ameliorating video processing techniques become essential. Several key research questions steer this study, including inquiries into the optimization of video denoising and blending techniques for IoRT, challenges and solutions in video processing in remote IoRT settings, and the impact of different video processing techniques on bandwidth and storage in IoRT applications.

Background and concepts
IoRT is a sophisticated framework that extends the connectivity capabilities of the Internet of Things (IoT) to devices and systems in remote, often isolated locations, enabling them to transmit and receive data for monitoring, control, and automation purposes [1]. This technology is particularly crucial in areas where traditional network infrastructure is either non-existent or impractical to deploy.
The architecture of IoRT is a complex, multi-layered system comprising several key components: (1) Sensors/Actuators: At the foundation of the IoRT are remote sensors and actuators. Sensors collect various types of data from the environment, such as temperature, pressure, or images, while actuators perform actions based on the processed data, like adjusting a thermostat or activating a pump. (2) Connectivity: This layer involves the communication networks that connect these remote devices to the internet. (3) Data Processing: Data transmitted by sensors is processed either at the edge (near or on the device) or in the cloud. (4) User Interface: The top layer is where humans interact with the IoRT system, often through dashboards, mobile apps, or web applications, allowing users to monitor data, receive alerts, and perform manual overrides [2][3][4][5][6].
In the realm of video processing, techniques like video denoising and blending are essential in IoRT. Video denoising [7] is the process of removing noise or graininess from video footage, which is common in data transmitted over long distances or through suboptimal networks. This process helps in enhancing the clarity and overall quality of the video [8][9][10]. Video blending involves combining video data from multiple sources or frames to produce a single, high-quality output [11,12].
However, video processing in IoRT presents significant challenges. Limited bandwidth means that transmitting high-quality video is difficult, often necessitating local processing and smart compression techniques. High latency and intermittent connectivity can delay the transmission of crucial real-time video data [13], making quick decision-making challenging. Furthermore, the need for real-time processing in potentially unstable environments requires robust [14], fault-tolerant systems and advanced algorithms that can make intelligent decisions locally [15]. Addressing these challenges necessitates innovative solutions in video processing technology, network infrastructure, and data analytics, making this an exciting and rapidly evolving field of research and development.

Video denoising techniques
Traditional video denoising techniques primarily revolve around temporal and spatial filtering.
(1) Temporal Filtering: This technique leverages the information from successive frames in a video sequence. By analyzing the differences and similarities across these frames, temporal filtering aims to reduce noise that varies between frames while preserving the actual motion and details within the scene [16]. However, in the context of IoRT, the effectiveness of temporal filtering can be limited due to high latency in data transmission, which disrupts the sequence of real-time video frames [17]. (2) Spatial Filtering: Spatial filtering, on the other hand, focuses on reducing noise within a single frame rather than across a sequence [18]. It involves techniques like Gaussian blurring or median filtering, which work by analyzing the pixels around a target pixel and recalculating the target pixel's value based on its neighbours [19]. While this can be effective for static noise, it often results in a loss of detail, leading to blurred images. In IoRT, the challenge intensifies due to the variable quality of transmitted images and the need for real-time processing, often leading to either over-smoothed or still noisy results [20].
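The temporal-filtering idea can be sketched in a few lines: averaging a short window of successive frames suppresses noise that varies from frame to frame. This is a minimal illustration, not the paper's implementation; the function name and the uniform-weight default are assumptions.

```python
import numpy as np

def temporal_filter(frames, weights=None):
    """Temporal filtering sketch: a weighted average over a short window
    of successive frames suppresses noise that varies frame to frame."""
    stack = np.stack(frames).astype(float)
    if weights is None:
        weights = np.full(len(frames), 1.0 / len(frames))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    # Average along the time axis; zero-mean noise largely cancels out
    return np.tensordot(weights, stack, axes=1)

# Static scene corrupted by independent noise in each of 8 frames
rng = np.random.default_rng(0)
scene = np.full((4, 4), 100.0)
noisy = [scene + rng.normal(0.0, 10.0, scene.shape) for _ in range(8)]
denoised = temporal_filter(noisy)
```

Averaging N frames reduces the standard deviation of independent zero-mean noise by a factor of roughly √N, which is exactly why high latency in IoRT hurts: the window of usable successive frames shrinks.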
Deep learning models, especially Convolutional Neural Networks (CNNs) [21] and autoencoders, have shown significant promise in distinguishing between true image content and noise [22], providing superior denoising results with better detail preservation. While ML and DL offer substantial improvements in video denoising quality, their practical application in IoRT is contingent on the availability of suitable computational resources and the specific requirements of the IoRT application in question. As technology advances, we anticipate more efficient models and edge computing solutions that could make these advanced denoising techniques more accessible and suitable for IoRT environments [23][24][25][26].

Video blending techniques
To seamlessly merge video streams in the Internet of Remote Things (IoRT), essential blending techniques include alpha blending, which combines videos using transparency factors for smooth transitions but may lead to ghosting or double exposures with diverse or fast-moving scenes. Pyramid blending, a more advanced method, breaks down images into frequency layers and blends them sequentially, offering a sophisticated approach to creating cohesive video streams [27]. These methods play a crucial role in ensuring fluid and artifact-free transitions for IoRT applications. Maintaining consistency across video streams is complex, given the potential for varying environmental conditions and camera settings [28]. Moreover, ensuring quality is a significant hurdle, as network issues like limited bandwidth and high latency are common in IoRT scenarios, potentially disrupting the real-time transmission and processing of video data [29]. In IoRT, addressing these constraints involves innovative solutions such as utilizing edge computing to process video data near the source, reducing latency and bandwidth usage. Additionally, advanced algorithms can dynamically adjust blending parameters in response to network conditions and video content in real time. Leveraging machine learning models to predict and compensate for network delays offers the potential for smoother video streams in IoRT applications, enhancing the efficiency of remote operations [29,30].
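As a concrete illustration, alpha blending of two frames is just a per-pixel convex combination; a minimal sketch, with the function name chosen here for illustration:

```python
import numpy as np

def alpha_blend(foreground, background, alpha):
    """Alpha blending: each output pixel is a convex combination of the
    two inputs, weighted by the transparency factor alpha in [0, 1]."""
    fg = np.asarray(foreground, dtype=float)
    bg = np.asarray(background, dtype=float)
    return alpha * fg + (1.0 - alpha) * bg

a = np.full((2, 2), 200.0)
b = np.full((2, 2), 100.0)
mixed = alpha_blend(a, b, 0.25)  # 25% of a, 75% of b
```

The ghosting mentioned above arises precisely because this combination is purely per-pixel: when the two streams show a fast-moving object at different positions, both positions survive at partial opacity.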

Hybrid frameworks for video processing
In the context of the Internet of Remote Things (IoRT), a progressive approach involves combining video denoising and blending to enhance the quality of video data, especially in remote environments with unstable networks. Emerging frameworks integrate both processes, offering sequential, parallel, or integrated processing approaches, with sequential processing being simpler but potentially challenging in real-time applications due to latency [31][32][33][34][35]. Simultaneously, the integration of AI, particularly deep learning, is revolutionizing video processing frameworks by training models to identify and remove noise while seamlessly blending video streams in a unified approach [36]. When coupled with edge computing in IoRT, these AI models enable more efficient video processing near the data source, effectively reducing latency. This synergy of AI and edge computing holds significant promise for enhancing video quality, thereby making remote monitoring and automation systems in IoRT more reliable, efficient, and high-quality.

Challenges and solutions in IoRT video processing
The Internet of Remote Things (IoRT) brings forth a unique set of challenges, primarily due to the remote environments in which it operates, often characterized by connectivity issues, limited power, scarce computational resources, and heightened concerns around data security and privacy [37][38][39].
(1) Connectivity Fluctuations: Remote areas often suffer from unstable internet connections or, in some cases, rely on satellite communication, which can be both slow and expensive. (2) Limited Power and Computational Resources: Devices in remote locations often have restricted access to power, relying on batteries or renewable sources. Additionally, the computational capacity of these devices is often limited, making it challenging to process complex algorithms locally. (3) Data Security: The transmission of data over long distances, potentially over unsecured or public networks, increases the risk of interception or unauthorized access.
(4) Privacy Concerns: Many IoRT applications collect sensitive information. Ensuring this data is handled and stored securely is paramount to maintaining user trust and compliance with privacy laws.
Existing solutions have sought to address these challenges with varying degrees of success:
• Edge Computing: By processing data closer to where it is generated, edge computing addresses several of these issues. It reduces the need for constant connectivity and the amount of data that needs to be transmitted, conserving bandwidth.
• Data Encryption and Secure Protocols: The use of end-to-end encryption and secure communication protocols like TLS/SSL can significantly enhance data security. However, these methods can also increase the computational load and require a stable connection to maintain a continuous security handshake between devices.
• Privacy-Preserving Algorithms: Techniques like federated learning and differential privacy enable analysis without requiring access to raw data, helping mitigate privacy concerns. However, they can be complex to implement and may not be suitable for all types of analysis.
• Energy-Efficient Hardware and Algorithms: The development of low-power hardware and energy-efficient algorithms has been crucial for power-constrained IoRT devices. However, there is often a trade-off between power efficiency and computational capacity.
Recent advances in video denoising, blending, and the Internet of Remote Things (IoRT) are driven by breakthroughs in artificial intelligence, computer vision, and robotics. Deep learning, utilizing convolutional and recurrent neural networks, enhances video denoising, preserving fine details while eliminating noise effectively. In blending technologies, applications from augmented reality to video editing benefit from deep learning approaches, such as generative adversarial networks, improving the quality of blended content across various industries like entertainment and virtual collaboration. In IoRT, the integration of robotics and the internet facilitates seamless collaboration among robots, enhancing efficiency across industries. Real-time data exchange allows robotic systems to adapt to dynamic environments, fostering the development of smart factories and autonomous vehicles. The convergence of video processing and IoRT is promising, as integrating denoising and blending into robotic vision enhances perception, enabling informed decision-making and improved interaction. Considering scalability, solutions must be tailored to the specific requirements of each IoRT application and environment.

Proposed scheme
The framework combines adaptive thresholding and Fourier Transform-based filtering for denoising and employs a weighted-average approach for blending, optimizing the visual quality, energy consumption, and latency in various IoRT settings. In IoRT environments, the transmission of high-quality video is crucial for various applications, including surveillance, remote monitoring, and telemedicine. The challenges posed by noisy environments, limited bandwidth, and resource constraints necessitate the development of sophisticated video processing techniques. The proposed framework addresses these challenges by integrating advanced denoising and blending methodologies, ensuring seamless video transmission with minimal resource utilization and latency.
In manufacturing and Industry 4.0, a hybrid framework seamlessly integrates traditional and collaborative robots, utilizing AI algorithms for predictive maintenance and quality control with real-time sensor data. For autonomous vehicles, the hybrid system combines edge computing for immediate decision-making by onboard AI with cloud-based analytics for long-term traffic pattern identification and route optimization. In healthcare, the hybrid approach integrates robotic assistants using local processing for immediate patient interaction and cloud-based AI for complex diagnostics, enhancing patient care. In smart agriculture, the hybrid framework optimizes precision farming through real-time data processing on drones and ground-based robots, coupled with cloud-based analysis for sustainable practices. Warehouse operations benefit from the hybrid framework, combining AGVs and robotic arms with on-board AI for immediate navigation and cloud-based analytics for long-term efficiency improvements in inventory management and order fulfilment.

Effective hybrid video denoising and blending framework
The framework employs a hybrid denoising approach that combines adaptive thresholding and Fourier Transform-based filtering. The denoising process is mathematically represented by a comprehensive equation, considering parameters such as the blending factor, frame weights, and the standard deviation of the Gaussian filter. The adaptive threshold is meticulously calculated to optimize the denoising process, considering the local characteristics of each video frame. The methodology amalgamates advanced denoising and blending techniques, aiming to elevate the quality of experience, optimize energy consumption, and minimize data transmission latency in diverse IoRT settings, as shown in Figure 1.
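A minimal sketch of the two denoising ingredients named above, assuming a simple hard low-pass mask in the Fourier domain and a global-statistics threshold; the framework's actual parameterization (per-window statistics, tuned constants) is richer than this illustration:

```python
import numpy as np

def fourier_lowpass(frame, keep_fraction=0.3):
    """Fourier-domain filtering: zero out high-frequency coefficients,
    which carry most of the fine-grained noise energy."""
    spectrum = np.fft.fftshift(np.fft.fft2(frame))
    h, w = frame.shape
    kh, kw = int(h * keep_fraction / 2), int(w * keep_fraction / 2)
    mask = np.zeros((h, w), dtype=bool)
    mask[h // 2 - kh : h // 2 + kh + 1, w // 2 - kw : w // 2 + kw + 1] = True
    spectrum[~mask] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))

def adaptive_threshold_denoise(frame, beta=1.0):
    """Adaptive step (simplified): pixels deviating from the filtered
    frame by more than beta times the frame's intensity spread are
    treated as noise and replaced by their filtered values."""
    smooth = fourier_lowpass(frame)
    out = frame.astype(float).copy()
    outliers = np.abs(frame - smooth) > beta * frame.std()
    out[outliers] = smooth[outliers]
    return out

rng = np.random.default_rng(1)
clean = np.outer(np.sin(np.linspace(0.0, np.pi, 32)), np.ones(32)) * 100.0
noisy = clean + rng.normal(0.0, 15.0, clean.shape)
denoised = adaptive_threshold_denoise(noisy)
```

The low-pass step exploits the fact that natural video content concentrates in low spatial frequencies while sensor noise is broadband; the thresholding step then limits how far any pixel is pulled from its observed value.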

(i) Median Filter for Denoising
The median filter algorithm is a non-linear digital filtering technique used primarily for noise reduction in images and videos. This process involves iterating over each pixel in an image or a frame of a video and, for each pixel, examining a surrounding window of neighbouring pixels, the size of which is defined by a predefined windowSize (e.g. 3 × 3, 5 × 5). The pixel values in this window are sorted numerically, and the median value (the middle pixel in the sorted list) is computed. The original pixel is then replaced with this median value in the output image or frame, effectively reducing noise while preserving edges within the image.
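The windowed-median procedure described above can be written directly as a reference implementation (border handling by replicating edge pixels is a choice made here for illustration; production code would use an optimized library routine):

```python
import numpy as np

def median_filter(frame, window_size=3):
    """Replace each pixel with the median of its window_size x window_size
    neighbourhood; borders are handled by replicating edge pixels."""
    pad = window_size // 2
    padded = np.pad(frame, pad, mode="edge")
    out = np.empty_like(frame, dtype=float)
    h, w = frame.shape
    for i in range(h):
        for j in range(w):
            window = padded[i : i + window_size, j : j + window_size]
            out[i, j] = np.median(window)  # middle of the sorted window
    return out

# Impulse ("salt") noise is the classic use case for the median filter
frame = np.full((5, 5), 50.0)
frame[2, 2] = 255.0  # single corrupted pixel
cleaned = median_filter(frame)
```

Because the impulse pixel is a minority in every 3 × 3 window it touches, the median removes it completely while a mean filter would have smeared it into its neighbours.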

(ii) Cuckoo Search for Optimization
The Cuckoo Search Optimization algorithm for video tailors a nature-inspired method based on cuckoo bird behaviour to optimize video parameters. Initially, a population of "nests" representing potential video processing parameters is generated. As iterations progress, each "nest" undergoes adjustments based on Levy flights to find better parameter combinations. A random nest can be replaced if a new combination proves superior. To introduce randomness and escape local optima, with a certain probability, the worst-performing nests are abandoned and replaced with new random parameter sets. The algorithm evaluates each nest's quality by applying its parameters to the video and measuring specific metrics, like clarity or compression efficiency. The process iterates until predefined conditions (like a maximum number of generations) are met, ultimately returning the best video parameters discovered.
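The loop just described can be sketched as follows. The objective here is a toy sphere function standing in for a real video-quality cost, and the step sizes, bounds, and population size are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def cuckoo_search(fitness, dim, n_nests=15, pa=0.25, iters=100, seed=0):
    """Cuckoo Search sketch: nests hold candidate parameter vectors.
    Heavy-tailed (Levy-like) steps propose new solutions; a proposal
    replaces a randomly chosen nest if it scores better, and the worst
    fraction pa of nests is abandoned each generation."""
    rng = np.random.default_rng(seed)
    low, high = -5.0, 5.0
    nests = rng.uniform(low, high, (n_nests, dim))
    scores = np.array([fitness(n) for n in nests])
    for _ in range(iters):
        best = nests[np.argmin(scores)]
        for i in range(n_nests):
            # Heavy-tailed step biased toward the current best nest
            step = 0.1 * rng.standard_cauchy(dim)
            candidate = np.clip(nests[i] + step * (best - nests[i])
                                + 0.05 * rng.standard_normal(dim), low, high)
            j = rng.integers(n_nests)  # compare against a random nest
            if fitness(candidate) < scores[j]:
                nests[j], scores[j] = candidate, fitness(candidate)
        # Abandon the worst-performing fraction pa of nests
        n_drop = max(1, int(pa * n_nests))
        worst = np.argsort(scores)[-n_drop:]
        nests[worst] = rng.uniform(low, high, (n_drop, dim))
        scores[worst] = [fitness(n) for n in nests[worst]]
    return nests[np.argmin(scores)]

# Toy objective standing in for a video-quality cost (sphere function)
best = cuckoo_search(lambda x: float(np.sum(x ** 2)), dim=3)
best_score = float(np.sum(best ** 2))
```

Note that abandonment only replaces the worst nests, so the best solution found so far is never lost between generations.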

(iii) Proposed Hybrid Maximum Likelihood Estimation (MLE) - Maximum Dynamic Range (MDR) for Blending
The Hybrid MLE-MDR Blending algorithm combines the statistical robustness of Maximum Likelihood Estimation (MLE) with the dynamic range enhancement of Maximum Dynamic Range (MDR) to blend multiple images. For each pixel location across the input images, the algorithm calculates a weighted mean based on a specified weighting function and the noise level (sigma). This mean is then adjusted using the MDR method, which factors in the difference between the highest and lowest pixel values from the input images, ensuring the final blended image retains good contrast and brightness. The blended pixel value is a mix of these MLE and MDR calculations, resulting in a harmonized image that incorporates the best attributes from the input images.
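A sketch of how such a blend might be computed, assuming inverse-variance weights for the MLE part and a per-pixel max-min spread for the MDR part; the `mix` factor and the exact MDR formulation are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

def mle_mdr_blend(frames, sigmas, mix=0.8):
    """Hybrid MLE-MDR blending sketch. MLE part: inverse-variance
    weighted mean across frames (less noisy frames count more).
    MDR part: a contrast term built from the per-pixel max-min spread.
    `mix` balances the two contributions."""
    stack = np.stack(frames).astype(float)
    sigmas = np.asarray(sigmas, dtype=float)
    weights = 1.0 / sigmas ** 2
    weights = weights / weights.sum()
    mle = np.tensordot(weights, stack, axes=1)      # weighted mean per pixel
    spread = stack.max(axis=0) - stack.min(axis=0)  # dynamic-range term
    mdr = stack.min(axis=0) + spread                # == per-pixel maximum
    return mix * mle + (1.0 - mix) * mdr

frames = [np.full((2, 2), 90.0), np.full((2, 2), 110.0)]
blended = mle_mdr_blend(frames, sigmas=[1.0, 1.0], mix=1.0)  # pure MLE mean
```

With equal sigmas and `mix=1.0` the result is the plain per-pixel mean; lowering `mix` pushes the output toward the brightest observation, which is the contrast-preserving behaviour the MDR term is meant to provide.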

MLE- and MDR-based video frame blending with Cuckoo Search optimization
The framework is rigorously evaluated in a simulated IoRT environment, comparing the Quality of Experience (QoE), energy consumption, and latency with existing methodologies. The optimization process involves fine-tuning various parameters to achieve the optimal trade-off between video quality, energy consumption, and latency, aligning with the specific requirements and constraints of IoRT environments. The denoising step is represented by the equation:

F_denoised(x, y) = α · Σ_{i=1}^{N} w_i · (G_σ * F_i)(x, y) + (1 − α) · F_current(x, y)

This equation reduces noise in video frames by combining a weighted average of N Gaussian-filtered frames with the current frame, where α is the blending factor, w_i are the weights for each frame, and σ is the standard deviation of the Gaussian filter G_σ. Additionally, the adaptive threshold is calculated using the equation:

T(x, y) = β · μ_N(x, y) + γ · σ_N(x, y)

which optimizes the denoising process by considering the local characteristics of each video frame: μ_N and σ_N are the local mean and standard deviation over a window of size N, and β and γ are constants. For blending video frames, a weighted-average methodology is adopted, represented by the equation:

V_blended(x, y) = Σ_{k=1}^{K} w_k · V_k(x, y), with Σ_{k=1}^{K} w_k = 1

This ensures a seamless and effective combination of the K input video frames V_k, with weights w_k, optimizing the visual quality of the resultant video. In the realm of IoRT environments, addressing data transmission and latency is equally crucial. The total latency accumulates, over the M hops of the network, each hop's propagation time T_{p_i} plus the transmission time of the D_i bits it carries, while the energy consumption scales with the transmitted data volume and its entropy H(X); together these quantities provide a comprehensive assessment of performance and user experience. The Quality of Experience (QoE) is a paramount metric in this research, assessed using a composite metric that amalgamates the Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index (SSIM), and the total latency:

QoE = w_1 · PSNR + w_2 · SSIM − w_3 · T_total

where the weights w_1, w_2, and w_3 balance video quality against transmission efficiency. The hybrid video denoising and blending framework's effectiveness is rigorously evaluated in a simulated IoRT environment, comparing QoE, energy consumption, and latency with existing methodologies across diverse scenarios to validate its robustness and adaptability.
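A composite QoE of this shape can be computed as below; the PSNR normalization and the weight values are illustrative assumptions rather than the paper's calibrated parameters:

```python
def qoe_score(psnr_db, ssim, latency_s, w=(0.5, 0.4, 0.1)):
    """Illustrative composite QoE: rewards PSNR (normalized against a
    nominal 50 dB ceiling) and SSIM, and penalizes total latency."""
    w1, w2, w3 = w
    return w1 * (psnr_db / 50.0) + w2 * ssim - w3 * latency_s

# Better quality at lower latency should yield a higher QoE
good = qoe_score(40.0, 0.95, 0.1)
poor = qoe_score(30.0, 0.90, 0.5)
```

The latency term enters with a negative sign so that two configurations with identical visual quality are ranked by transmission efficiency, which is the trade-off the optimization process tunes.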

Enhanced frame interpolation
F_interp(x, y, t) = λ · F_prev(x, y, t) + (1 − λ) · F_next(x, y, t) + ζ · ∇²F_avg(x, y)

This equation represents an enhanced frame interpolation method. Here, F_interp is the interpolated frame at a given position (x, y) and time t. It is calculated from the previous frame, F_prev, and the next frame, F_next, weighted by a factor λ, where ∇² is the Laplacian operator applied to the average frame, F_avg, and ζ is a constant.
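Under this reading, the interpolation can be sketched as follows; the discrete Laplacian stencil and the edge handling are implementation choices made here for illustration:

```python
import numpy as np

def interpolate_frame(f_prev, f_next, lam=0.5, zeta=0.1):
    """Frame interpolation sketch: a convex combination of the previous
    and next frames plus a Laplacian term on their average, weighted by
    the constant zeta."""
    f_prev = np.asarray(f_prev, dtype=float)
    f_next = np.asarray(f_next, dtype=float)
    f_avg = 0.5 * (f_prev + f_next)
    # Discrete 2-D Laplacian via shifted differences (edges replicated)
    p = np.pad(f_avg, 1, mode="edge")
    lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * f_avg
    return lam * f_prev + (1 - lam) * f_next + zeta * lap

a = np.full((3, 3), 10.0)
b = np.full((3, 3), 20.0)
mid = interpolate_frame(a, b)  # flat frames: the Laplacian term vanishes
```

For flat inputs the Laplacian term is zero and the result is the plain λ-weighted mixture; on textured frames the ζ term perturbs the interpolation according to local curvature.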

Dynamic resource allocation
R_alloc(t) = (η · U_total(t) + ξ · B_avail(t)) / (θ + ρ · D_pending(t))

This equation models dynamic resource allocation over time. R_alloc(t) is the resource allocated at time t, calculated from the total utility, U_total(t), and the available bandwidth, B_avail(t), weighted by the constants η and ξ respectively. The denominator comprises a constant θ and the pending data, D_pending(t), weighted by ρ.
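Read literally, the allocation rule can be computed as follows; the constant values used here are placeholders, not tuned parameters:

```python
def allocate_resources(u_total, b_avail, d_pending,
                       eta=0.6, xi=0.4, theta=1.0, rho=0.5):
    """Dynamic resource allocation: utility and available bandwidth in
    the numerator, a constant plus weighted pending data in the
    denominator, so a growing backlog throttles the allocation."""
    return (eta * u_total + xi * b_avail) / (theta + rho * d_pending)

# More pending data at equal utility/bandwidth lowers the allocation
r_light = allocate_resources(u_total=10.0, b_avail=5.0, d_pending=0.0)
r_heavy = allocate_resources(u_total=10.0, b_avail=5.0, d_pending=8.0)
```

The constant θ keeps the denominator bounded away from zero even with an empty queue, so the allocation never diverges.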

Optimized data compression
C_opt(t) = (α · S_in(t)) / (β · N_noise(t) + ε) · Q_level(t)

This equation defines optimized data compression. C_opt(t) is the optimized compression level at time t, calculated from the input signal, S_in(t), and the noise level, N_noise(t), with α and β as weighting constants; ε is a small constant that avoids division by zero, and Q_level(t) represents the quality level at time t.
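Under that reading, the compression rule becomes the following; the constants and the multiplicative role of the quality level are assumptions made for illustration:

```python
def optimized_compression(s_in, n_noise, q_level,
                          alpha=1.0, beta=2.0, eps=1e-6):
    """Optimized compression level: signal over weighted noise (eps
    guards the division), scaled by the current quality level."""
    return (alpha * s_in) / (beta * n_noise + eps) * q_level

# Noisier input drives the chosen compression level down
c_clean = optimized_compression(s_in=8.0, n_noise=0.5, q_level=1.0)
c_noisy = optimized_compression(s_in=8.0, n_noise=4.0, q_level=1.0)
```

The signal-over-noise structure means heavily corrupted input is not worth encoding at a high level, which matches the bandwidth constraints discussed for IoRT links.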

Performance evaluation
The proposed work explores video processing and analysis, emphasizing the importance of video quality and classification accuracy. It provides a thorough evaluation of denoising filters and classification methods, presenting a comprehensive comparison in Table 1.
The assessment considers processing times per frame for different video sequences, including sports activities like football, cycling, golf, and tennis, using the UCF101 action recognition dataset. With 13,000 annotated video clips covering diverse human actions, this dataset proves suitable for training models to understand and classify temporal patterns in various real-world applications.
Figure 2 presents a bar graph of the denoising times for each video sequence across the different filters; the x-axis represents the video sequences, while the y-axis indicates the time in seconds. The proposed method consistently shows the lowest processing time, highlighting its efficiency. This visual representation makes it immediately apparent that the proposed method is significantly faster than traditional methods.
Table 2 delves into the performance metrics of the different denoising filters. Metrics such as Peak Signal-to-Noise Ratio (PSNR), Mean Squared Error (MSE), Mean Absolute Error (MAE), and Structural Similarity Index (SSIM) are used to evaluate the quality of the denoised frames. In image denoising, these metrics play a crucial role in evaluating filter performance. PSNR quantifies the fidelity of denoised images by comparing them to noise-free originals, with higher values indicating better performance. SSIM assesses structural resemblance, offering a perceptual quality measure ranging from −1 to 1. MAE calculates the average absolute difference between pixel values, with lower values indicating more accurate denoising. MSE, despite its popularity, may not always align with human perception. Together, these metrics provide a comprehensive framework for evaluating denoising filters in terms of noise reduction, structural fidelity, and overall pixel-wise accuracy.
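The first three of these metrics are straightforward to compute from a noise-free reference frame (SSIM is more involved and typically taken from an image-quality library, so it is omitted from this sketch):

```python
import numpy as np

def mse(ref, test):
    """Mean Squared Error between a reference and a test frame."""
    diff = np.asarray(ref, float) - np.asarray(test, float)
    return float(np.mean(diff ** 2))

def mae(ref, test):
    """Mean Absolute Error between a reference and a test frame."""
    diff = np.asarray(ref, float) - np.asarray(test, float)
    return float(np.mean(np.abs(diff)))

def psnr(ref, test, peak=255.0):
    """PSNR in dB relative to the noise-free reference; higher is better."""
    m = mse(ref, test)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

ref = np.full((4, 4), 128.0)
noisy = ref + 4.0          # uniform offset: MSE = 16, MAE = 4
score = psnr(ref, noisy)   # 10 * log10(255^2 / 16) ≈ 36.1 dB
```

Because PSNR is a log-scaled inverse of MSE, the two always rank denoisers identically; MAE and SSIM can disagree with them, which is why tables like Table 2 report all four.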
Table 3 presents a comparative analysis of various video classification methods in terms of accuracy, F1 score, precision, and recall. Methods such as LSTM, LRCN, and the proposed technique are evaluated, with the proposed method achieving the highest scores across all metrics. Figure 4 displays the accuracy percentages of the different methods as a bar chart, in which the proposed method's bar is the tallest, emphasizing its superior performance. Table 4 offers insights into the sensitivity and specificity of each method; these metrics capture the true positive rate (sensitivity) and true negative rate (specificity). Figure 5 plots each method as a point by its sensitivity and specificity values; the proposed method lies closest to the top-right corner, indicating optimal performance.
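All of the classification metrics reported in Tables 3 and 4 derive from the four confusion-matrix counts. The helper below is an illustrative sketch of those definitions; the counts in the usage example are hypothetical and are not taken from this study's experiments.

```python
def classification_metrics(tp, fp, tn, fn):
    """Derive standard classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # recall, i.e. true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "precision": precision,
            "recall": sensitivity, "specificity": specificity, "f1": f1}

# Hypothetical counts for illustration only
m = classification_metrics(tp=90, fp=7, tn=95, fn=8)
print({k: round(v, 3) for k, v in m.items()})
```

Plotting each method's (specificity, sensitivity) pair, as Figure 5 does, makes the trade-off explicit: a method near the top-right corner achieves high rates on both positives and negatives simultaneously.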
The denoising method proposed in this study consistently surpasses other filters in processing time across diverse video sequences, as evident in Figure 2. It also excels in quality enhancement, attaining superior Peak Signal-to-Noise Ratio (PSNR) values compared to alternative denoising filters, as depicted in Figure 3. The proposed video classification approach likewise outperforms its counterparts, achieving the highest accuracy, F1 score, precision, and recall, as detailed in Table 3.

The proposed IoRT framework has transformative potential across a range of domains. In smart manufacturing, it can optimize production lines for increased efficiency, reduced downtime, and enhanced quality control. In autonomous vehicles, integrating the framework could improve traffic management, reducing congestion and improving safety through real-time communication. In healthcare, it enables real-time monitoring and diagnostics, enhancing patient care and supporting more proactive medical intervention. In smart cities, it optimizes urban services, improves energy usage, and enhances public safety. Precision agriculture benefits from real-time monitoring, optimized resource usage, and more sustainable farming practices. The logistics sector can leverage the framework for seamless collaboration among autonomous drones, robotic warehouses, and IoT-enabled systems, resulting in faster order fulfilment and improved supply-chain efficiency. Environmental monitoring and disaster response are enhanced by integrating the framework into robotic systems, enabling earlier detection and more effective response strategies. In smart homes, the framework supports enhanced automation, security, and personalized user experiences, contributing to a more comfortable and efficient living environment. Overall, it holds promise for fostering intelligent decision-making and improving system efficiency across diverse real-world scenarios.

Conclusion and future work
The proposed Hybrid MLE-MDR framework represents significant progress in addressing video data integrity challenges in remote environments. It exhibits notable improvements in video quality, noise reduction, and fusion precision compared to existing methods, with statistical evidence reinforcing its impact on decision-making processes. The proposed video classification method demonstrates superior accuracy (93.47%), F1 score (0.937), precision (0.931), and recall (0.934). While the study suggests focusing on real-time processing and latency reduction, statistical assessment is recommended to quantify these improvements. Future IoRT research may explore advanced algorithms for real-time processing, adaptive video enhancement, and operation under extreme conditions, prioritizing scalability, resource efficiency, and security.

Table 1 .
Processing time for noisy video sequences.

Table 3 .
F1 score, precision, and recall comparison for video classification.

Table 4 .
Sensitivity and specificity comparison for video classification methods.