
Content-Aware Video Coding

Source: Military Airspace - The UAV video problem: using streaming video with unmanned aerial vehicles


Good-quality video sent from the drone's camera to the pilot and ground crew is critical to the success of every mission. Over the course of a single mission, the bandwidth available for communication between the UAV and its crew can vary significantly and unpredictably due to environmental or physical obstacles as well as electronic countermeasures (ECM). Imagery must be encapsulated and transmitted to a ground station over a relatively low-bandwidth link with minimal latency and a high level of detail, which can be challenging.

A protocol that allows for automated quality adjustment under limited bandwidth conditions is greatly beneficial. Additionally, the capability to remotely control bandwidth requirements or adjust video-processing parameters in response to varying bandwidth conditions enables basic functionality to be maintained even under non-ideal circumstances.

Low bitrate UAV Video Streaming

The ultimate challenge in video encoding is finding the perfect balance between perceptual quality and compression efficiency. For UAVs (Unmanned Aerial Vehicles) communicating with ground crews, available bandwidth can fluctuate unpredictably.

Factors such as terrain, environmental interference, distance from base stations or relays, network congestion, and electronic countermeasures all contribute to these fluctuations. As a result, bandwidth limitations often force the use of aggressive compression techniques, which can significantly compromise video quality. 


LUCEA Context-Aware Encoding takes advantage of the fact that compression efficiency is closely tied to the complexity of the content. By utilizing AI to analyze and understand the video, the system optimizes data transmission, particularly in bandwidth-limited environments.


The system dynamically adjusts encoding parameters in real time, responding to the content's complexity. This intelligent adaptation ensures optimal compression performance while preserving the perceived quality in key areas of the video. As a result, simpler content segments benefit from bitrate and bandwidth savings, while more complex scenes experience enhanced visual quality. 
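To make the idea concrete, here is a minimal sketch of complexity-driven bit allocation; the `variance` measure and the `allocate_bits` helper are illustrative assumptions, not LUCEA's actual algorithm:

```python
# Illustrative complexity-driven bit allocation: segments with higher
# pixel variance (more detail) receive a larger share of a fixed bit
# budget, so flat regions cost fewer bits.

def variance(block):
    mean = sum(block) / len(block)
    return sum((p - mean) ** 2 for p in block) / len(block)

def allocate_bits(blocks, total_bits):
    weights = [variance(b) + 1e-6 for b in blocks]  # avoid all-zero weights
    total = sum(weights)
    return [total_bits * w / total for w in weights]

flat = [128] * 16      # uniform block: near-zero complexity
busy = [0, 255] * 8    # high-contrast block: high complexity
bits = allocate_bits([flat, busy], total_bits=1000)
# nearly the entire budget goes to the busy block
```

A production encoder would use a far richer complexity model (motion, texture, saliency), but the proportional-allocation principle is the same.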

Better resolution at low bandwidth

The qualitative transformation of video signal representation is driven by the sensitivity of the human eye and image Ground Resolved Distance (GRD). This method allows for the removal of minimally perceptible details in less important areas while enhancing the perceptually significant regions of the image.


Each frame is divided into segments based on its visual perception value, as well as the type and extent of any image errors. Compression artifacts are managed differently depending on the perception value of the segment they belong to. Additionally, the visibility of details is optimized, with perception-based coding determining which details should be enhanced and which areas can have details suppressed.
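The principle can be sketched with a hypothetical mapping from a segment's perceptual value to a quantization step; `quant_step` and its parameters are our own illustration, not the product's internals:

```python
# Perception-based quantization sketch: important segments get a fine
# quantization step (small reconstruction error), background segments
# a coarse one (larger error, fewer bits).

def quant_step(perception, q_min=2, q_max=32):
    # perception 1.0 -> finest step (q_min); 0.0 -> coarsest (q_max)
    return q_max - perception * (q_max - q_min)

def quantize(pixels, step):
    return [round(p / step) * step for p in pixels]

pixels = [10, 57, 130, 201]
roi = quantize(pixels, quant_step(0.9))         # high perceptual value
background = quantize(pixels, quant_step(0.1))  # low perceptual value
roi_err = sum(abs(a - b) for a, b in zip(pixels, roi))
bg_err = sum(abs(a - b) for a, b in zip(pixels, background))
# roi_err is much smaller than bg_err
```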


This approach is based on perceptual focusing, which emphasizes the information most relevant to human visual perception. By selectively removing or reducing the visibility of less significant data—such as redundant information or details that are imperceptible to the human eye—we ensure that only the essential data needed to maintain high perceived quality in critical regions of the image is preserved.

Image noise in UAV footage can arise from various factors, including rapid drone movement, camera instability, zooming, high ISO settings in low-light conditions, sensor limitations, and environmental factors such as weather.


To counteract this, LUCEA employs advanced techniques such as spatial and temporal filtering, chroma and luma enhancement, resolution recovery, and vibration mitigation. These methods work together to enhance the visual quality of the video, making it cleaner and clearer before the compression process begins.
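As a rough illustration of the temporal-filtering idea (only one of the techniques listed, and far simpler than a production denoiser), a per-pixel running average across frames suppresses random noise while keeping stable scene content:

```python
# Minimal temporal filter: blend each incoming frame with the filtered
# history (per-pixel exponential moving average). Random per-frame
# noise cancels out; the stable scene value is preserved.

def temporal_filter(frames, alpha=0.3):
    smoothed = list(frames[0])
    out = [list(smoothed)]
    for frame in frames[1:]:
        smoothed = [alpha * p + (1 - alpha) * s
                    for p, s in zip(frame, smoothed)]
        out.append(list(smoothed))
    return out

# A static pixel of value 100 corrupted by alternating +/-8 noise:
noisy = [[100 + n] for n in (8, -8, 8, -8, 8, -8)]
clean = temporal_filter(noisy)
# the filtered value converges toward 100 with much smaller swings
```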

 

As a result, LUCEA ensures graceful degradation of perceived quality when more aggressive encoding settings are applied, without introducing undesirable artifacts such as blockiness, ringing, or blurring.

ECM disruption mitigation

Electronic Countermeasures (ECM) can cause noise, jamming, and signal degradation, disrupting video transmission and making the footage difficult to interpret. The interference from ECM can introduce visual artifacts such as color distortion, static, ghosting, pixelation, blurring, and a general loss of image clarity.


When ECM devices transmit noise or signals on frequencies shared with communication systems, they can effectively "saturate" the available bandwidth. This results in a significant reduction of the usable spectrum, severely affecting legitimate communications.


LUCEA offers an effective solution to ECM-induced disruptions by employing a method that restructures the video signal, prioritizing strong edges and region boundaries, which are critical for accurate image interpretation. By enhancing these key features and minimizing less prominent textures in the background, the resulting images become more resistant to distortions of any kind.
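A toy version of edge-prioritized restructuring might look like the following; the 1-D scanline model and the gradient threshold are illustrative assumptions, not LUCEA's method:

```python
# Edge-prioritized restructuring sketch: samples near strong edges
# (large local gradient) are kept at full fidelity; weak background
# texture is flattened toward the local mean, making it cheaper to
# code and less sensitive to interference.

def restructure(scanline, threshold=20):
    out = list(scanline)
    for i in range(1, len(scanline) - 1):
        gradient = abs(scanline[i + 1] - scanline[i - 1])
        if gradient < threshold:  # weak texture: flatten
            out[i] = (scanline[i - 1] + scanline[i] + scanline[i + 1]) // 3
    return out

# The strong edge (50 -> 200) survives; mild ripples are smoothed out:
line = [50, 52, 48, 51, 200, 202, 198, 201]
out = restructure(line)
```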


This approach proves especially effective in high-compression or moderate-quality scenarios, where extreme compression ratios and resilience to static interference are crucial for maintaining video integrity.


Enhanced Detection Capabilities

Enhancing images taken in low light with optical cameras or in the dark with infrared (IR) cameras on UAVs (unmanned aerial vehicles) offers several significant benefits, especially in applications like surveillance, search and rescue, mapping, or inspections. In low-light conditions, accurate navigation can become difficult. Enhanced images help operators maintain a better understanding of their surroundings, improving both navigation accuracy and the safety of the UAV during flight.

 

In environments with poor lighting, optical cameras struggle to capture clear, usable images. Enhancing images in such conditions ensures that the camera can gather more detail, reducing noise and brightening the image without overexposing it. This is crucial for capturing important information when natural light is insufficient.
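Gamma correction is one standard way to lift shadows without overexposing highlights; the source does not specify LUCEA's enhancement pipeline, so this is only a generic sketch of the principle:

```python
# Low-light enhancement sketch via gamma correction: gamma < 1 lifts
# dark pixels strongly while bright pixels change far less, so shadows
# brighten without clipping the highlights.

def gamma_correct(pixels, gamma=0.5, max_val=255):
    return [round(max_val * (p / max_val) ** gamma) for p in pixels]

dark = [10, 40, 90, 250]
bright = gamma_correct(dark)
# shadows are lifted much more than the near-white pixel
```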


In total darkness, optical cameras fail, but IR cameras can capture heat signatures of objects. Enhancing IR images allows for clearer distinctions between objects, making it easier to detect humans, animals, or equipment in dark or concealed environments.

 

Improved Identification in Zero-Light 

In the critical and time-sensitive environment of search and rescue (SAR) operations, every second counts. UAVs equipped with infrared (IR) cameras and advanced AI people detection algorithms provide a powerful tool to enhance the effectiveness of rescue teams, particularly in challenging conditions such as darkness, dense forests, or disaster-stricken areas. Here's why AI-driven IR people detection is a game changer for SAR missions.


Search and rescue missions often take place in environments where visibility is extremely limited—at night, in smoke-filled areas, or in dark forests. Traditional optical cameras struggle in such conditions, but IR cameras excel in detecting heat signatures, which are present even in complete darkness.

 

The AI people detector identifies the unique thermal signature of humans (or animals) by analyzing the infrared data captured by the camera, allowing rescue teams to spot potential survivors or targets that would otherwise be invisible to the naked eye. This enables SAR teams to operate effectively around the clock, regardless of the time of day or light conditions.
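The detector itself is a trained AI model; as a stand-in that shows the underlying idea, the sketch below flags contiguous runs of above-threshold temperatures in a 1-D thermal scan (the threshold and data are made up):

```python
# Toy thermal detection: report (start, end) index pairs of contiguous
# runs whose temperature exceeds a human-body-like threshold.

def detect_hot_regions(scan, threshold=30.0):
    regions, start = [], None
    for i, t in enumerate(scan):
        if t >= threshold and start is None:
            start = i
        elif t < threshold and start is not None:
            regions.append((start, i - 1))
            start = None
    if start is not None:
        regions.append((start, len(scan) - 1))
    return regions

# Ambient ~15 C with one warm signature around indices 3-5:
scan = [15.2, 14.8, 16.0, 35.5, 36.1, 34.9, 15.5, 15.1]
print(detect_hot_regions(scan))  # [(3, 5)]
```

A real detector classifies 2-D blobs by shape and temperature profile rather than a fixed threshold, but the output (candidate regions for the operator) is analogous.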

Video stabilization

Video stabilization is crucial in UAVs (unmanned aerial vehicles) or environments with inherent vibrations. 

 

UAVs, especially drones, generate a lot of vibration due to the movement of motors and propellers. These vibrations can cause camera shakes and unwanted jitters in the footage, making the video shaky and hard to watch. Video stabilization helps smooth out these disturbances, ensuring that the footage remains steady and clear.

 

Stabilized video without vibrations is crucial for AI detection, particularly in applications like UAVs (drones), surveillance, search and rescue, and other high-precision tasks. In scenarios like search and rescue or surveillance, where AI detection is used to track moving objects or people, video stability is vital. When the video is shaky, the AI's ability to lock onto and follow moving targets diminishes. AI systems often rely on visual cues such as edges, shapes, and movement patterns to track objects; vibrations distort these cues, leading to loss of the target or false tracking.
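A minimal sketch of the motion-estimation step behind digital stabilization (illustrative only; real stabilizers work in 2-D with sub-pixel accuracy): find the shift between two scanlines that minimizes the sum of absolute differences, then shift the frame back to cancel the jitter.

```python
# Jitter estimation by exhaustive SAD search: the shift s at which
# cur[i + s] best matches ref[i] tells us how far the frame moved;
# compensating by that shift re-aligns it.

def estimate_shift(ref, cur, max_shift=3):
    best_shift, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        err = sum(abs(ref[i] - cur[i + s])
                  for i in range(max_shift, len(ref) - max_shift))
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

ref = [0, 0, 10, 50, 90, 100, 100, 100, 0, 0]
cur = [0] + ref[:-1]              # same line, jittered right by one pixel
shift = estimate_shift(ref, cur)  # 1: sampling cur one pixel ahead re-aligns it
```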


LUCEA addresses the growing need for video compression that surpasses the capabilities of traditional encoding methods.

The algorithm’s potential spans both cloud and edge computing environments, where bandwidth limitations are a key concern. LUCEA delivers significant bandwidth savings, allowing for the transmission of higher-quality videos over low-bandwidth links. It also supports lower transmission frequencies, enabling seamless integration into frequency hopping systems.

Experimental results demonstrate that LUCEA effectively reduces bitrate requirements through its unique, graceful image degradation method, all while maintaining superior video quality. This breakthrough paves the way for more efficient, high-quality video transmission in a wide range of modern technological contexts.
