MPAI celebrates 30 months of activity by publishing Enhanced Audio V2 and Metaverse Functionalities Profiles for Community Comments

MPAI has concluded its 30th General Assembly (MPAI-30), approving publication of WD0.3 of the MPAI Metaverse Model – Functionalities Profiles for Community Comments and of the Context-based Audio Enhancement (MPAI-CAE) Technical Specification Version 2.
    (1888PressRelease) March 26, 2023 - Geneva, Switzerland – Today the international, non-profit, unaffiliated Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) standards developing organisation concluded its 30th General Assembly (MPAI-30), approving publication of WD0.3 of the MPAI Metaverse Model – Functionalities Profiles for Community Comments and of the Context-based Audio Enhancement (MPAI-CAE) Technical Specification Version 2.

    The MPAI Metaverse Model (MPAI-MMM) – Functionalities Profiles implements the second step of the MPAI roadmap toward interoperability of metaverse implementations. It defines the protocol infrastructure enabling the different elements of a metaverse instance to request other elements of the same or of different metaverse instances to execute actions, such as locating and animating an avatar. The document is posted at https://rb.gy/qhwi3j. Anybody can send comments to the MPAI Secretariat by the 17th of April 2023. The document will be presented online on the 7th of April 2023 with a view to eliciting comments for consideration and possible inclusion in the final version to be approved by MPAI-31 on 19 April. Register at https://rb.gy/bjwegw.
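
    Purely as an illustration of the kind of request such a protocol infrastructure could carry, the sketch below shows a hypothetical action request in TypeScript; the type and field names are assumptions made for explanatory purposes and are not taken from the Working Draft:

        // Hypothetical sketch of an inter-instance action request, not the
        // normative MPAI-MMM protocol: names and structure are assumptions.
        interface MMMActionRequest {
          sourceInstance: string;   // metaverse instance issuing the request
          targetInstance: string;   // instance asked to execute the action
          action: "LocateAvatar" | "AnimateAvatar"; // actions named after the examples in the text
          avatarId: string;         // identifier of the avatar concerned
          parameters?: Record<string, unknown>;     // action-specific data
        }

        // Example: instance A asks instance B to animate one of its avatars.
        const request: MMMActionRequest = {
          sourceInstance: "metaverse-A",
          targetInstance: "metaverse-B",
          action: "AnimateAvatar",
          avatarId: "avatar-42",
          parameters: { animation: "wave" },
        };

        console.log(JSON.stringify(request, null, 2));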

    The Context-based Audio Enhancement (MPAI-CAE) Technical Specification
    (https://mpai.community/standards/mpai-cae/) allows a device to describe an audio scene in terms of audio objects and their directions. MPAI uses this Technical Specification to enable human interaction with autonomous vehicles, avatar-based videoconferencing, and metaverse applications.
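
    As a rough illustration of what an audio scene described as audio objects with directions might look like, the sketch below uses hypothetical TypeScript types; the names and fields are assumptions, not the normative MPAI-CAE data formats:

        // Illustrative sketch only; field names are assumptions, not MPAI-CAE syntax.
        interface Direction {
          azimuth: number;    // degrees, horizontal angle relative to the listener
          elevation: number;  // degrees, vertical angle relative to the listener
        }

        interface AudioObject {
          id: string;           // identifier of a separated audio source
          direction: Direction; // estimated direction of arrival
        }

        interface AudioScene {
          objects: AudioObject[];
        }

        // Example: a scene with a talker to the front-left and a car to the right.
        const scene: AudioScene = {
          objects: [
            { id: "speaker-1", direction: { azimuth: -30, elevation: 0 } },
            { id: "car-1", direction: { azimuth: 90, elevation: 0 } },
          ],
        };

        console.log(`Scene contains ${scene.objects.length} audio objects`);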

    These results add to the 6 Technical Specifications, 2 Reference Software Specifications, 1 Conformance Testing Specification, 1 Performance Assessment Specification, and 1 Technical Report already published: 11 excellent reasons to celebrate 30 months of MPAI activity. On the anniversary day, the 31st of March 2023, MPAI will hold two parallel, time-shifted events to reach most potential listeners, at which the 16 main MPAI activities will be presented. See the event program
    (https://mpai.community/newsletter-2023-02-22/#AE) and register
    (https://bit.ly/3Z0K9nm).

    MPAI is continuing its work plan, comprising the development of the following Technical Specifications:
    1) AI Framework (MPAI-AIF, https://mpai.community/standards/mpai-aif/). The V2 Technical Specification will enable an implementer to establish a secure AIF environment to execute AI Workflows (AIW) composed of AI Modules (AIM).
    2) Avatar Representation and Animation (MPAI-ARA, https://mpai.community/standards/mpai-ara/). The V1 Technical Specification will support the creation and animation of interoperable human-like avatar models expressing a Personal Status.
    3) Multimodal Conversation (MPAI-MMC, https://mpai.community/standards/mpai-mmc/). The V2 Technical Specification will generalise the notion of Emotion by adding Cognitive State and Social Attitude, and will specify a new data type called Personal Status (illustrated by the sketch after this list).
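
    To illustrate the Personal Status data type mentioned in item 3, the sketch below shows one possible way to combine Emotion, Cognitive State, and Social Attitude; the type and field names are assumptions made for illustration and may differ from the normative MPAI-MMC V2 syntax:

        // Illustrative sketch only; the normative MPAI-MMC V2 syntax may differ.
        interface PersonalStatusFactor {
          label: string;       // e.g. "happy", "attentive", "polite"
          confidence: number;  // 0..1, how certain the detecting module is
        }

        interface PersonalStatus {
          emotion: PersonalStatusFactor;        // already covered by the notion of Emotion
          cognitiveState: PersonalStatusFactor; // added in V2
          socialAttitude: PersonalStatusFactor; // added in V2
        }

        // Example: a status that a module could detect or an avatar could express.
        const status: PersonalStatus = {
          emotion: { label: "happy", confidence: 0.8 },
          cognitiveState: { label: "attentive", confidence: 0.7 },
          socialAttitude: { label: "polite", confidence: 0.9 },
        };

        console.log(JSON.stringify(status));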

    The MPAI work plan also includes exploratory activities, some of which are close to becoming standard or technical report projects:

    1) AI Health (MPAI-AIH, https://mpai.community/standards/mpai-aih). Targets an architecture where smartphones store users' health data processed using AI, and AI Models are updated using Federated Learning (see the sketch after this list).
    2) Connected Autonomous Vehicles (MPAI-CAV, https://mpai.community/standards/mpai-cav). Targets the Human-CAV Interaction, Environment Sensing, Autonomous Motion, and Motion Actuation subsystems implemented as AI Workflows.
    3) End-to-End Video Coding (MPAI-EEV, https://mpai.community/standards/mpai-eev). Extends the video coding frontiers using AI-based End-to-End Video coding.
    4) AI-Enhanced Video Coding (MPAI-EVC, https://mpai.community/standards/mpai-evc). Improves existing video coding with AI tools for short-to-medium term applications.
    5) Server-based Predictive Multiplayer Gaming (MPAI-SPG, https://mpai.community/standards/mpai-spg). Uses AI to train neural networks that help an online gaming server compensate for data losses and detect false data.
    6) XR Venues (MPAI-XRV, https://mpai.community/standards/mpai-xrv). Identifies common AI Modules used across various XR-enabled and AI-enhanced use cases where venues may be both real and virtual.
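
    As a generic illustration of the Federated Learning approach mentioned for MPAI-AIH in item 1, the sketch below shows a simple weight-averaging step in which devices contribute locally trained model weights while their health data never leaves the smartphone; it is an assumption-based example, not part of any MPAI specification:

        // Generic federated-averaging sketch, not an MPAI-AIH specification:
        // each client keeps its health data locally and shares only model weights.
        type ModelWeights = number[];

        // Average the weight vectors received from the participating smartphones.
        function federatedAverage(clientWeights: ModelWeights[]): ModelWeights {
          const n = clientWeights.length;
          const size = clientWeights[0].length;
          const averaged = new Array<number>(size).fill(0);
          for (const weights of clientWeights) {
            for (let i = 0; i < size; i++) {
              averaged[i] += weights[i] / n;
            }
          }
          return averaged;
        }

        // Example: three devices report locally trained weights; the server aggregates.
        const updated = federatedAverage([
          [0.1, 0.5, -0.2],
          [0.2, 0.4, -0.1],
          [0.0, 0.6, -0.3],
        ]);
        console.log(updated); // approximately [0.1, 0.5, -0.2]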

    Legal entities supporting the MPAI mission and able to contribute to the development of standards for the efficient use of data are welcome to join MPAI (https://mpai.community/how-to-join/join/).

    Please visit the MPAI web site (https://mpai.community/), contact the MPAI secretariat (secretariat@mpai.community) for specific information, subscribe to the MPAI Newsletter, and follow MPAI on social media:
    - LinkedIn (https://www.linkedin.com/groups/13949076/)
    - Twitter (https://twitter.com/mpaicommunity)
    - Facebook (https://www.facebook.com/mpaicommunity)
    - Instagram (https://www.instagram.com/mpaicommunity/)
    - YouTube (https://youtube.com/c/mpaistandards).

    ###
Contact Information
Leonardo Chiariglione
MPAI
c/o Me Olivier BRUNISHOLZ, 5 Cours des Bastions, 1205 Geneva
Voice: +39 011 9350461