Below are all the projects presented at this year’s edition of the eNTERFACE workshop. Each project has a brief description attached. If you would like more information about a project, click its “Learn more” link.
Artificial Intelligence for Monitoring NAO in Web Platforms (AIMons)
Principal investigator : M. El Adoui
The objective of this project is to develop web and mobile platforms for controlling the NAO humanoid robot. Many Artificial Intelligence (AI) applications have already been developed for NAO by the computer science unit (Fpms-UMons), such as controlling the robot’s arm and head using Kinect and microphone sensors, playing board games, etc. The goal of this project is to integrate these applications into a web/mobile platform that allows easy control of NAO robots.
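The heart of such a platform is a layer that maps incoming web or mobile requests to robot actions. The sketch below is purely illustrative: the action names, the `RobotBackend` class, and the `handle_request` function are assumptions made for this example, not the actual AIMons API (a real backend would talk to the robot, e.g. through the NAOqi SDK).

```python
class RobotBackend:
    """Stand-in for a real NAO connection (e.g. via the NAOqi SDK).
    The methods and their return values are illustrative only."""

    def move_arm(self, side, angle):
        # A real implementation would command the robot's arm joints.
        return f"arm:{side}:{angle}"

    def turn_head(self, yaw):
        # A real implementation would command the robot's head yaw joint.
        return f"head:{yaw}"


def handle_request(robot, action, params):
    """Map a web request (action name + parameter dict) to a robot call."""
    handlers = {
        "move_arm": lambda p: robot.move_arm(p["side"], p["angle"]),
        "turn_head": lambda p: robot.turn_head(p["yaw"]),
    }
    if action not in handlers:
        raise ValueError(f"unknown action: {action}")
    return handlers[action](params)
```

A web or mobile front end would then only need to send the action name and its parameters; adding a new robot capability means registering one more handler.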
Participants : E. M. El Khayati, L. R. Lazouni, S. Riahi, A. Soyez
Audio scene analysis through source localisation and recognition
Principal investigators : G. Pironkov, S. Dupont
The goal of this project is to transpose state-of-the-art image processing algorithms to audio. More specifically, we would like to create a 2D audio map along the x and y axes. This gives us two main goals: 1) localising audio sources and 2) recognising each audio source.
During the eNTERFACE project we will be focusing on two main categories of audio sources: home environment (closing doors, vacuum cleaner, blender, etc.) and crowd behaviour (applause, booing, cheering).
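One classical building block for the localisation goal is the time difference of arrival (TDOA) between two microphones: the delay that maximises the cross-correlation of the two signals, converted into an arrival angle. The sketch below illustrates that idea with a brute-force correlation; the function names, sampling rate, and microphone spacing are illustrative assumptions, not the project’s actual method.

```python
import math


def estimate_delay(sig_a, sig_b, max_lag):
    """Return the lag (in samples) of sig_b relative to sig_a that
    maximises their cross-correlation."""
    best_lag, best_corr = 0, float("-inf")
    n = len(sig_a)
    for lag in range(-max_lag, max_lag + 1):
        corr = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                corr += sig_a[i] * sig_b[j]
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return best_lag


def direction_of_arrival(delay_samples, fs, mic_distance, c=343.0):
    """Convert an inter-microphone delay into an arrival angle (radians),
    assuming a far-field source and speed of sound c in m/s."""
    tau = delay_samples / fs
    # Clamp to [-1, 1] to absorb rounding of the discrete delay estimate.
    s = max(-1.0, min(1.0, c * tau / mic_distance))
    return math.asin(s)
```

In practice one would use a fast correlation (e.g. GCC-PHAT via FFTs) and more than two microphones to resolve both the x and y coordinates of the map.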
Participants : Y. Amraoui, P. Camargo Marquez, C. Lasquellec
Principal investigator : M. Mancas
The latest DNN-based developments in people tracking, such as OpenPose, have dramatically improved people-tracking and analysis capabilities. This project will use OpenPose as the basic detection brick for analysing crowds. The analysis includes long-term tracking/re-identification of people (in a closed dataset) and the extraction of groups of people from the crowd. The results will be mapped into a simple augmented-reality framework to demonstrate their effectiveness.
The goal of the project is to analyse crowds based on video input and extract individual people as well as groups of people.
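The group-extraction step can be illustrated by a simple proximity clustering over the per-person positions that a detector such as OpenPose provides: two people belong to the same group if they are close to each other, directly or through a chain of neighbours. This sketch (a union-find over 2D points, with an illustrative distance threshold) is one possible approach, not the project’s chosen algorithm.

```python
import math


def extract_groups(positions, max_dist):
    """Cluster 2D person positions into groups: people i and j share a
    group if they are within max_dist of each other, directly or through
    a chain of intermediate neighbours. Returns lists of person indices."""
    parent = list(range(len(positions)))

    def find(i):
        # Find the cluster root, with path halving for speed.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i, (xi, yi) in enumerate(positions):
        for j, (xj, yj) in enumerate(positions[i + 1:], start=i + 1):
            if math.hypot(xi - xj, yi - yj) <= max_dist:
                union(i, j)

    groups = {}
    for i in range(len(positions)):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())
```

A real pipeline would feed in tracked positions over time rather than a single frame, so that momentary proximity (two strangers crossing paths) does not create a spurious group.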
Participants : A. Bandrabur, K. Hagihara, S. Laraba, N. Leblanc, S. Yengec Tasdemir
Crowd Intelligence & Distributed Image Blockchain (CIDIB)
Principal investigators : P. Desbordes, S. Lugan
In the field of medical imaging, radiomics is a technique for the quantitative analysis of image features. These features capture information present in medical images beyond what a simple visual analysis can reveal. The great advantage of radiomics is that, combined with classical clinical features, it can lead to new, personalised medicine in the treatment of cancer.
This project proposes to train deep learning models with data coming from hospitals all around the world. Combining data from many hospitals helps gather enough data to train deep neural networks. Nevertheless, the reliability of the results, as well as the guarantee of patient privacy, is critical.
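One common way to combine training across hospitals without moving raw patient images is federated averaging: each site trains locally and only model parameters are shared and averaged, weighted by each site’s data size. The sketch below illustrates that averaging step only; it is an assumption for illustration, not the project’s actual protocol (which additionally involves a distributed blockchain for trust).

```python
def federated_average(hospital_weights, hospital_sizes):
    """Average model parameters from several sites, weighting each site
    by the number of patients it contributed. Raw images never leave a
    hospital; only the parameter vectors are exchanged."""
    total = sum(hospital_sizes)
    n_params = len(hospital_weights[0])
    avg = [0.0] * n_params
    for weights, size in zip(hospital_weights, hospital_sizes):
        for k in range(n_params):
            avg[k] += weights[k] * size / total
    return avg
```

In a full training loop this averaging would be repeated every round, with the averaged model sent back to each hospital as the starting point for the next round of local training.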
Participants : A. Garcia Castro, L. X. Ramos Tormo
Multimedia Indexation using Deep Learning in Big Data (MIDL)
Principal investigators : M. A. Belarbi, S. Mahmoudi
The goal of this project is to develop a cloud platform for indexing large-scale images by content. We will do so using Big Data technologies such as Hadoop, Spark, etc. Deep Learning techniques will be used to improve the accuracy of the learning and retrieval phases. Within our platform, the user provides a query image, and the result is a set of similar images (the top-20 most similar). If the user is not satisfied, the platform can also index the query image in order to improve precision.
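At retrieval time, content-based indexing typically reduces to ranking stored feature vectors by similarity to the query’s vector. The sketch below shows a top-k ranking with cosine similarity over plain Python lists; in the platform these descriptors would come from a deep network and the ranking would be distributed over Hadoop/Spark, so the function names and data layout here are illustrative assumptions only.

```python
import math


def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


def top_k_similar(query, index, k=20):
    """Rank indexed images by descending similarity to the query
    descriptor and return the k best (image id, score) pairs."""
    scored = [(img_id, cosine_similarity(query, feat))
              for img_id, feat in index.items()]
    scored.sort(key=lambda t: t[1], reverse=True)
    return scored[:k]
```

Indexing a new query image then amounts to adding its descriptor to the index, which is how user feedback can improve precision over time.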
Participants : M. Brahimi, A. Kella