Eyewitness videos as an aid to crisis management

The algorithm should itself learn the notion of intuitive physics

Author
Aleksandar Stanić
The Swiss AI Lab IDSIA

Interview with one of the researchers working on the NRP 75 project “EVAC – Employing Video Analytics for Crisis Management”.

You work for the NRP 75 project “EVAC – Employing Video Analytics for Crisis Management”. Can you tell us something about your project goals?

Our work within the scope of the EVAC project focuses on researching deep neural networks for video analysis. The goal is to create an algorithm that can aid crisis managers by learning to select relevant imagery in the event of natural disasters, such as floods.

Last week you attended NeurIPS in Vancouver. What did you present there?

Current machine learning algorithms flourish in the supervised learning setting, where vast amounts of labeled data are available for training. However, labeling data is a costly process, and for high-dimensional inputs it is often not even clear what the exact label should be. For this reason, we have been focusing on developing machine learning algorithms that can extract useful knowledge from unlabeled video data. The goal is for the algorithm itself to learn the notion of intuitive physics, similar to the way we humans come to understand the world around us. At this year’s NeurIPS, we presented work on training a neural network that learns to distinguish the objects in a video scene and to model the relations between them, without any human input.
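The core idea of learning from unlabeled video can be illustrated with a minimal sketch: the supervision signal comes from the data itself, by predicting the next frame from the current one. Everything below (the synthetic "video", the linear predictor, all names) is an illustrative assumption for exposition, not the actual model presented at NeurIPS.

```python
import numpy as np

# Minimal sketch of self-supervised learning from unlabeled video:
# train a linear model to predict frame t+1 from frame t.
# The "video" is synthetic toy data (an illustrative assumption).

rng = np.random.default_rng(0)

def make_video(n_frames=32, size=8):
    """Toy video: a single bright pixel (the 'object') moving one step per frame."""
    frames = np.zeros((n_frames, size * size))
    for t in range(n_frames):
        frame = np.zeros((size, size))
        frame[3, t % (size - 1)] = 1.0  # object position cycles across the row
        frames[t] = frame.ravel()
    return frames

video = make_video()
X, Y = video[:-1], video[1:]  # self-supervision: input frame t, target frame t+1

# Linear next-frame predictor trained by gradient descent on squared error.
W = rng.normal(scale=0.01, size=(X.shape[1], X.shape[1]))
lr = 0.5
for step in range(500):
    pred = X @ W
    grad = X.T @ (pred - Y) / len(X)
    W -= lr * grad

mse = float(np.mean((X @ W - Y) ** 2))
print(f"final next-frame prediction MSE: {mse:.5f}")
```

Because the labels are just future frames, no human annotation is needed; the model implicitly learns the (deterministic) motion of the object. Real systems replace the linear map with a deep network and add object-centric structure.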

What does NRP 75 mean to you?

NRP 75 enables me not only to perform fundamental research but also to make a direct connection to a real-world application. The annual meetings are very valuable, as they allow us to connect with researchers from other Swiss institutions working on similar topics.
