Comedia: how to become a multimedia director?

Comedia is an R&D axis that aims at integrating interactivity between artists and the multimedia context on stage. Until now, the two have remained largely independent: performing artists on one side, a prerecorded audiovisual environment controlled by a stage manager on the other. With the arrival of digital arts, such interactivity is beginning to emerge, bringing fluidity to the performance. The main aim of Comedia is therefore to analyze the actions of artists (their gestures, position, voice) and automatically turn them into corresponding multimedia events, through image and video understanding, sensor-based signal interpretation, and speech and audio processing. Artists can thus become the directors of their own multimedia performance, realized live.

Alongside the analysis of audio and video to detect stage events, Comedia will also aim to design ergonomic tools for multimedia script encoding. What digital arts performance requires is no longer a merely temporal or sequential description of the storyboard, but an n-dimensional, evolving one. This advanced storyboard will draw on concepts of interactive storytelling, already present in several applications, including 3D games. The main challenge in achieving all these breakthroughs will be their real-time realization through dedicated techniques, for example GPU programming.
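To make the core idea concrete, here is a minimal sketch of turning recognised artist actions into multimedia cues. All names (StageEvent, CueDispatcher, the example cues) are illustrative assumptions, not part of the Comedia project itself; in practice the events would come from the video, sensor, and audio analysis described above.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass(frozen=True)
class StageEvent:
    """A recognised artist action (e.g. output of gesture or voice analysis)."""
    kind: str   # e.g. "gesture", "position", "voice"
    label: str  # e.g. "raise_arms", "stage_left", "keyword_storm"

class CueDispatcher:
    """Maps recognised stage events to multimedia cues, dispatched live."""

    def __init__(self) -> None:
        self._cues: Dict[StageEvent, Callable[[], str]] = {}

    def register(self, event: StageEvent, cue: Callable[[], str]) -> None:
        # Associate a detected event with a multimedia action.
        self._cues[event] = cue

    def dispatch(self, event: StageEvent) -> str:
        # Unknown events leave the stage environment unchanged.
        cue = self._cues.get(event)
        return cue() if cue else "no-op"

dispatcher = CueDispatcher()
dispatcher.register(StageEvent("gesture", "raise_arms"),
                    lambda: "fade lights to blue")
print(dispatcher.dispatch(StageEvent("gesture", "raise_arms")))
```

In a real system the cue callbacks would drive lighting, sound, or projection hardware rather than return strings, but the event-to-cue mapping is the essential mechanism.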
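The n-dimensional storyboard can be sketched as a graph of scenes whose transitions fire on detected stage events rather than on a fixed timeline. This is a hypothetical illustration under assumed names (Storyboard, scene and event labels), not the project's actual encoding format.

```python
from typing import Dict, Tuple

class Storyboard:
    """A non-linear storyboard: scenes are nodes, detected events drive edges."""

    def __init__(self, start: str) -> None:
        self.current = start
        # (current scene, event) -> next scene
        self._edges: Dict[Tuple[str, str], str] = {}

    def add_transition(self, scene: str, event: str, target: str) -> None:
        self._edges[(scene, event)] = target

    def on_event(self, event: str) -> str:
        # Events with no outgoing edge from the current scene are ignored.
        self.current = self._edges.get((self.current, event), self.current)
        return self.current

sb = Storyboard("opening")
sb.add_transition("opening", "actor_enters", "dialogue")
sb.add_transition("dialogue", "keyword_storm", "storm_visuals")
print(sb.on_event("actor_enters"))  # advances to the "dialogue" scene
```

Unlike a sequential cue list, the same performance can branch differently each night depending on what the artists actually do, which is the "n-dimensional evolution" the text refers to.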

To achieve this, Comedia will draw on the expertise of all partners in image processing, pattern recognition, multisensor fusion, audio and speech processing, augmented reality, 3D image synthesis, and grid computation.