V-Experiencing (Ideal-ist Partner Search)

The entry below is an archived entry.

Proposal submitted to the Ideal-ist Partner Search and awarded a Quality Label.

PROJECT OVERVIEW
Call Identifier: H2020-FETOPEN-2014/2015
Objective: FETOPEN 1 – 2014/2015: FET-Open research projects
Funding Schemes: Research & Innovation Actions
Evaluation Scheme: one stage

PROJECT DESCRIPTION

Proposal Outline:

The proposed project will carry out leading-edge, multi-disciplinary research across the boundaries of media computing, computer vision, virtual reality, computer gaming, image/video processing, and machine learning to introduce a new concept of experiencing videos: videos that allow people to experience, rather than merely watch, the video content. Compared with the existing video concept, perception of such experiencing videos is distinguished by the following unique features:

(i) the audience is fully immersed inside the video scenes, creating the feeling of being inside the videos rather than outside;

(ii) the audience is provided with so-called any-view video content, recreated adaptively to the pose of their heads and the positions of their bodies (a minimal sketch of this idea follows this list);

(iii) simulated moving tracks are provided, allowing the audience to move freely inside the video scene. An illustrative scenario is watching a show inside an opera house, where somebody wishes to walk through the theatre and enjoy the show from the back;

(iv) the audience interacts with the main cast and events inside the videos, enhancing the experience of being there to participate rather than watch.
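A minimal, hypothetical sketch of the "any-view" idea in (ii): given the viewer's head position, pick the nearest calibrated camera views and blend them. The camera names, positions and the toy blending rule below are illustrative assumptions only, not the project's method; a real system would use proper image-based rendering.

import numpy as np

# Hypothetical calibrated rig: each camera has a 3D position in the world frame.
camera_positions = {
    "cam_left":   np.array([-2.0, 1.6, 0.0]),
    "cam_centre": np.array([ 0.0, 1.6, 0.0]),
    "cam_right":  np.array([ 2.0, 1.6, 0.0]),
}

def blend_weights(head_position, positions, k=2):
    """Weight the k cameras closest to the viewer, inversely to distance."""
    names = list(positions)
    dists = np.array([np.linalg.norm(head_position - positions[n]) for n in names])
    nearest = np.argsort(dists)[:k]
    inv = 1.0 / (dists[nearest] + 1e-6)
    weights = inv / inv.sum()
    return {names[i]: w for i, w in zip(nearest, weights)}

def render_any_view(head_position, frames, positions):
    """Blend per-camera frames (H x W x 3 arrays) into one synthesised view."""
    weights = blend_weights(head_position, positions)
    out = sum(w * frames[name].astype(np.float64) for name, w in weights.items())
    return out.astype(np.uint8)

if __name__ == "__main__":
    # Dummy frames stand in for captured multi-view video.
    frames = {name: np.full((4, 4, 3), i * 80, dtype=np.uint8)
              for i, name in enumerate(camera_positions)}
    view = render_any_view(np.array([0.5, 1.6, 1.0]), frames, camera_positions)
    print(view.shape, view.dtype)

As the viewer's head moves, the weights shift towards the nearest cameras, so the synthesised view tracks the viewer's position; the research challenge is replacing this naive blend with geometrically correct view synthesis.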

Objectives:

  • To introduce a new concept of “experiencing videos” and prove its feasibility via a range of technology innovations beyond the existing state of the art, including immersive video, tele-presence, virtual reality, computer gaming and animation.
  • To carry out theoretical research on the computerized recreation of experiencing video content, in which multi-view video content captured through calibrated cameras is integrated, manipulated and transformed to regenerate any-view video content in accordance with the pose, position and movement of each individual viewer, as if he or she were inside the videos (see the sketch after these objectives).
  • To establish a research and show-case platform for the demonstration, validation and evaluation of such experiencing video technologies, making the researched ideas, developed algorithms and implemented technologies ready for possible commercial exploitation as well as scientific dissemination.
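A minimal sketch of the calibrated-camera geometry the second objective relies on: with known intrinsics K and extrinsics [R | t] for each camera, a 3D scene point projects to pixel coordinates via the pinhole model, which is what allows footage from several cameras to be integrated into one scene model. The matrix values below are illustrative assumptions, not project data.

import numpy as np

K = np.array([[800.0,   0.0, 320.0],   # hypothetical focal length / principal point
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                           # camera aligned with the world frame
t = np.array([0.0, 0.0, 2.0])           # camera placed 2 m from the world origin

def project(point_3d, K, R, t):
    """Project a world-frame 3D point to pixel coordinates (pinhole model)."""
    cam = R @ point_3d + t              # world -> camera coordinates
    uvw = K @ cam                       # camera -> homogeneous image coordinates
    return uvw[:2] / uvw[2]             # perspective divide -> (u, v) pixels

if __name__ == "__main__":
    print(project(np.array([0.1, -0.2, 3.0]), K, R, t))  # e.g. [336. 208.]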

PARTNER PROFILE SOUGHT
Required skills and Expertise:

Computer vision, virtual reality, tele-presence, immersive video, visual interaction, computer games and animation.

Description of work to be carried out by the partner(s) sought:

  • Research on the computerized generation of avatars and the mathematical modelling of their movements in accordance with objects segmented from multi-view videos;
  • Research on the computerized creation of visual scenes (both outdoor and indoor) and their mathematical modelling in accordance with multi-view videos;
  • Research, develop and implement tele-presence technologies as well as relevant algorithms;
  • Research, develop, and implement immersive video technologies as well as algorithms;
  • Research, develop, and implement interaction technologies as well as relevant algorithms in the context of computer games and animations.

Type of partner(s) sought:

  • Internationally leading research centres (universities)
  • R&D departments inside large industrial companies, key players in relevant fields.

PROPOSER INFORMATION
Organisation: University of Surrey
Department: Computing
Type of Organisation: University
Country: United Kingdom
