
YU News

AI Breakthrough Ensures Flawless Video Streaming in Any Network Condition

Hang Yu, above, lead author of the study and a student in the Katz School's M.S. in Artificial Intelligence, will present his research, which he co-authored with Dr. David Li, program director of the M.S. in Data Analytics and Visualization, in January at the 2025 IEEE Conference.

By Dave DeFusco

Imagine a future where streaming videos, video calls or surveillance footage look flawless, no matter the network conditions or quality variations. That’s the goal of a recent study, “A Dual-Path Deep Learning Framework for Video Quality Assessment: Integrating Multi-Speed Processing and Correlation-Based Loss Functions,” which will be presented at the 2025 IEEE Conference in January by researchers in the Katz School’s Graduate Department of Computer Science and Engineering.

In a world where digital content is king, assessing the quality of video is critical. Traditional methods, which rely on manual tweaks and frame-level analysis, fall short when dealing with today’s complex, real-world video challenges. Enter artificial intelligence. By using deep learning techniques, Katz School researchers have created systems that analyze vast amounts of data to identify subtle distortions, ensuring that every pixel and frame contributes to the best possible viewing experience.

But the challenges in Video Quality Assessment (VQA) are vast. Balancing sharp detail with broader motion context is tough. For example, models that focus too much on fine details might miss the bigger picture of motion and context in a scene. Conversely, systems that emphasize motion can overlook critical details in fast-moving or complex videos. Tackling these trade-offs is key to advancing video quality. 

The Katz School researchers behind this new VQA framework are using the innovative SlowFast model architecture. Think of it as a dual-speed processor for video analysis. The “slow” pathway captures detailed information by analyzing video at a lower frame rate, focusing on fine-grained features. Meanwhile, the “fast” pathway runs at full speed, zooming out to see the bigger picture, such as overall motion and flow. Together, these pathways offer a powerful combination, ensuring both fine details and large-scale context are accounted for.
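
To make the idea concrete, here is a minimal sketch of a dual-speed front end in PyTorch. The class name, channel widths and the 4x frame-sampling ratio are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class SlowFastSketch(nn.Module):
    """Illustrative dual-path backbone: the slow path sees few frames
    with wide channels for fine spatial detail; the fast path sees
    every frame with narrow channels for motion. The channel sizes
    and stride ratio (alpha=4) are assumptions, not the paper's."""

    def __init__(self, alpha: int = 4):
        super().__init__()
        self.alpha = alpha  # fast path samples alpha x more frames
        # Slow path: wide channels, sparse temporal sampling
        self.slow = nn.Conv3d(3, 64, kernel_size=(1, 7, 7),
                              stride=(1, 2, 2), padding=(0, 3, 3))
        # Fast path: narrow channels, full frame rate
        self.fast = nn.Conv3d(3, 8, kernel_size=(5, 7, 7),
                              stride=(1, 2, 2), padding=(2, 3, 3))

    def forward(self, video: torch.Tensor):
        # video: (batch, channels, frames, height, width)
        slow_in = video[:, :, ::self.alpha]  # temporally subsampled
        return self.slow(slow_in), self.fast(video)

slow_feats, fast_feats = SlowFastSketch()(torch.randn(2, 3, 32, 224, 224))
```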

The research team didn’t stop at the SlowFast approach. They developed additional tools to refine how the system evaluates video; the first of these is sketched in code after the list:

  1. PatchEmbed3D: This breaks video frames into 3D patches, enabling the system to understand both spatial and temporal dynamics. 
  2. WindowAttention3D: By zooming in on specific sections of a video, this tool ensures that local details don’t get lost in the shuffle. 
  3. Semantic Transformation and Global Position Indexing: These features help the system maintain spatial and temporal consistency. 
  4. Cross Attention and Patch Merging: These improve how the dual-speed pathways communicate and reduce complexity without losing accuracy. 
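
As a rough illustration of the first tool above, the sketch below shows a PyTorch-style 3D patch embedding; the patch size (2, 4, 4), the embedding dimension of 96 and the tensor shapes are assumptions for the example, not the paper’s settings.

```python
import torch
import torch.nn as nn

class PatchEmbed3D(nn.Module):
    """Sketch of 3D patch embedding: a strided Conv3d carves the clip
    into non-overlapping (time, height, width) patches and projects
    each to an embedding vector, capturing spatial and temporal
    structure jointly."""

    def __init__(self, patch=(2, 4, 4), in_ch=3, dim=96):
        super().__init__()
        self.proj = nn.Conv3d(in_ch, dim, kernel_size=patch, stride=patch)

    def forward(self, video):
        # video: (batch, 3, frames, H, W) -> (batch, dim, T', H', W')
        x = self.proj(video)
        # Flatten space-time into a token sequence for attention layers
        return x.flatten(2).transpose(1, 2)  # (batch, tokens, dim)

tokens = PatchEmbed3D()(torch.randn(1, 3, 16, 224, 224))
print(tokens.shape)  # (1, 25088, 96): 8 * 56 * 56 space-time patches
```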

To train the system, the team paired two complementary objectives: a PLCC Loss, which keeps the model’s predicted scores numerically accurate, and a Rank Loss, which preserves the correct quality ordering among videos. A dynamic learning rate adjustment, called cosine annealing, ensured the model learned efficiently and accurately. 
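
A minimal sketch of what such training objectives can look like in PyTorch, assuming standard formulations of the two losses; the exact formulas, the 0.5 weighting between them and the placeholder model are illustrative, while CosineAnnealingLR is PyTorch’s built-in cosine-annealing scheduler.

```python
import torch
import torch.nn.functional as F

def plcc_loss(pred, target):
    # 1 minus Pearson correlation: minimizing it pushes predictions
    # toward linear agreement with human quality scores
    p, t = pred - pred.mean(), target - target.mean()
    return 1.0 - (p * t).sum() / (p.norm() * t.norm() + 1e-8)

def rank_loss(pred, target):
    # Pairwise hinge: penalize pairs whose predicted ordering
    # disagrees with the ground-truth quality ordering
    dp = pred.unsqueeze(0) - pred.unsqueeze(1)
    dt = target.unsqueeze(0) - target.unsqueeze(1)
    return F.relu(-dp * torch.sign(dt)).mean()

def combined_loss(pred, target, w=0.5):
    # The 0.5 weighting between the terms is an assumption
    return plcc_loss(pred, target) + w * rank_loss(pred, target)

# Cosine annealing smoothly decays the learning rate over training
model = torch.nn.Linear(128, 1)  # placeholder for the VQA network
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=100)
```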

“The results of these innovations are impressive,” said Dr. David Li, senior author of the paper and program director of the Katz School’s M.S. in Data Analytics and Visualization. “Testing the model on public datasets showed that it outperforms existing methods. Not only does it deliver better numerical results, but it also improves the subjective experience of video quality: what humans see and feel while watching.”

The researchers’ two-stage training process played a key role. The first stage taught the system broad patterns, while the second fine-tuned its ability to recognize intricate details. This stepwise approach proved highly effective, especially when paired with high-resolution data.
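
One plausible wiring of such a two-stage schedule, sketched under heavy assumptions: the toy model, random stand-in data and MSE loss are placeholders to keep the example runnable; a real setup would use the correlation-based losses above and actual VQA datasets.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

def run_stage(model, loader, lr, epochs):
    """One training stage with its own learning rate and cosine schedule."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=epochs)
    for _ in range(epochs):
        for clips, scores in loader:
            opt.zero_grad()
            # MSE stands in here for the correlation-based losses
            loss = F.mse_loss(model(clips).squeeze(-1), scores)
            loss.backward()
            opt.step()
        sched.step()

# Toy model and random clips keep the sketch self-contained
model = torch.nn.Sequential(torch.nn.AdaptiveAvgPool3d(1),
                            torch.nn.Flatten(), torch.nn.Linear(3, 1))
def loader(res):
    return DataLoader(TensorDataset(torch.randn(8, 3, 16, res, res),
                                    torch.rand(8)), batch_size=4)

run_stage(model, loader(112), lr=1e-4, epochs=2)  # stage 1: broad patterns
run_stage(model, loader(224), lr=1e-5, epochs=2)  # stage 2: high-res detail
```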

This work sets the stage for further exploration in VQA. Future efforts could involve even more sophisticated training strategies, experimenting with additional loss functions or developing tools that handle specific challenges, like compression artifacts or transmission errors. The framework could also serve as a foundation for other fields, from gaming to virtual reality, where video quality is crucial.

“This research bridges the gap between technology and human experience, ensuring that as video content becomes more diverse and complex, the viewing experience remains seamless and stunning,” said Hang Yu, lead author of the study and a student in the M.S. in Artificial Intelligence.
