This collective work deals with the analysis of digital audiovisual texts or corpora, which may, for example, form part of an audiovisual library or archive.
The development of methods, tools and conceptual frameworks (or models) for the concrete analysis of audiovisual texts or corpora is one of the most important issues for multimedia (audiovisual) digital libraries, archives and collections, and also for any project or program aiming to compile and disseminate a knowledge heritage (cultural, scientific, etc.).
Analyzing audiovisual recordings, filmed footage, sound recordings, films or complex multimodal documents obviously constitutes an essential step in any classification of the (digital) collection of an archive or library.
Above all, however, it is the most important activity by which an actor (an individual, a group of individuals, an institution, etc.) obtains and exploits digital audiovisual data in order to transform it – depending on their own skills, expectations and requirements, but also within the limitations imposed by the tools, methods and models available – into genuine cognitive resources which they regard as “useful”, “pleasant”, “interesting” or simply relevant, i.e. which have a value for them.
Ten years ago now, along with a small nucleus of permanent collaborators from ESCoM (the Cognitive Semiotics and New Media Team), a research center at the Fondation Maison des Sciences de l’Homme (FMSH – Foundation House of the Human Sciences) in Paris, we set up the ...