Details

Deep Learning for Video Understanding


Series: Wireless Networks

By: Zuxuan Wu, Yu-Gang Jiang

CHF 165.50

Publisher: Springer
Format: PDF
Published: 01.08.2024
ISBN/EAN: 9783031576799
Language: English

This eBook contains a watermark.

Description

<p>This book presents deep learning techniques for video understanding. For deep learning basics, the authors cover machine learning pipelines and notation, and 2D and 3D Convolutional Neural Networks for spatial and temporal feature learning. For action recognition, they introduce classical frameworks for image classification and then elaborate on both image-based 2D and clip-based 3D CNN architectures. For action detection, they cover sliding-window and proposal-based detection methods, single-stage and two-stage approaches, and spatial and temporal action localization, followed by an introduction to the relevant datasets. For video captioning, they present language models and show how to perform sequence-to-sequence learning. For unsupervised feature learning, they discuss the need to shift from supervised to unsupervised learning and how to design better surrogate training tasks for learning video representations. Finally, the book introduces recent self-supervised pipelines such as contrastive learning and masked image/video modeling with transformers. The book closes with promising directions, with the aim of promoting future research in video understanding with deep learning.</p>
<p>Contents:</p>
<ul>
<li>Introduction</li>
<li>Overview of Video Understanding</li>
<li>Deep Learning Basics for Video Understanding</li>
<li>Deep Learning for Action Recognition</li>
<li>Deep Learning for Action Localization</li>
<li>Deep Learning for Video Captioning</li>
<li>Unsupervised Feature Learning for Video Understanding</li>
<li>Efficient Video Understanding</li>
<li>Future Research Directions</li>
<li>Conclusion</li>
</ul>
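The clip-based 3D CNNs covered in the book extend 2D convolution with a temporal dimension, so a single filter responds to patterns across space *and* time. A minimal NumPy sketch of one such 3D convolution is below; the temporal-difference kernel is a hand-set illustration, not a learned filter from the book:

```python
import numpy as np

def conv3d_valid(clip, kernel):
    """Naive single-channel 3D convolution with 'valid' padding.

    clip:   (T, H, W) array, e.g. T grayscale video frames
    kernel: (kt, kh, kw) array of filter weights
    Returns a (T-kt+1, H-kh+1, W-kw+1) spatio-temporal feature volume.
    """
    T, H, W = clip.shape
    kt, kh, kw = kernel.shape
    out = np.zeros((T - kt + 1, H - kh + 1, W - kw + 1))
    for t in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                # Dot product of the kernel with one 3D window of the clip
                out[t, i, j] = np.sum(clip[t:t + kt, i:i + kh, j:j + kw] * kernel)
    return out

# A temporal-difference kernel: averages frame t+1 minus frame t over a
# 3x3 spatial window, so it fires only where brightness changes over time.
kernel = np.zeros((2, 3, 3))
kernel[0] = -1.0 / 9
kernel[1] = 1.0 / 9

# Synthetic 4-frame clip whose brightness jumps between frames 1 and 2
clip = np.zeros((4, 8, 8))
clip[2:] = 1.0

features = conv3d_valid(clip, kernel)
print(features.shape)  # (3, 6, 6)
```

Only the temporal slice spanning the brightness jump (frames 1 to 2) produces a nonzero response, which is exactly why 3D filters capture motion cues that frame-by-frame 2D CNNs miss.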
<p>Zuxuan Wu received his Ph.D. in Computer Science from the University of Maryland in 2020. He is currently an Associate Professor in the School of Computer Science at Fudan University and previously worked as a Research Scientist at Facebook AI. His research interests are in deep learning and large-scale video understanding. His work has been recognized by an AI 2000 Most Influential Scholars Award in 2022, a Microsoft Research PhD Fellowship (one of 10 recipients worldwide) in 2019, and a Snap PhD Fellowship (one of 10 recipients worldwide) in 2017.</p>

<p>Yu-Gang Jiang is a Chang Jiang Scholar Distinguished Professor at the School of Computer Science, Fudan University. His research focuses on multimedia, computer vision, and robust &amp; trustworthy AI. As the director of the Shanghai Collaborative Innovation Center of Intelligent Visual Computing and the Fudan Vision and Learning (FVL) Laboratory, he leads a group of researchers working on all aspects of robust &amp; trustworthy visual analytics. He publishes extensively in top journals and conferences, with over 25000 citations and an H-index of 79. His research outcomes have had major impacts on applications such as mobile visual search/recognition and defect detection for high-speed railway infrastructure. His work has led to many awards, including the inaugural 2014 ACM China Rising Star Award, the 2015 ACM SIGMM Rising Star Award, several best paper awards, and various recognitions from NSF China, MOE China, and the Shanghai Government. He holds a PhD in Computer Science from the City University of Hong Kong and spent three years at Columbia University before joining Fudan in 2011. He is an elected Fellow of IAPR and IEEE.</p>

<ul>
<li>Presents an overview of deep learning techniques for video understanding;</li>
<li>Covers important topics like action recognition, action localization, video captioning, and more;</li>
<li>Introduces cutting-edge and state-of-the-art video understanding techniques.</li>
</ul>
