W3C

Best practices for creating MMI Modality Components

W3C Working Group Note 1 March 2011

This version:
http://www.w3.org/TR/2011/NOTE-mmi-mcbp-20110301/
Latest version:
http://www.w3.org/TR/mmi-mcbp/
Previous version:
This is the first version.
Editor:
Ingmar Kliche, Deutsche Telekom AG
Authors:
Deborah Dahl, Invited Expert
James A. Larson, Invited Expert
B. Helena Rodriguez, Telecom ParisTech
Muthuselvam Selvaraj, until 2009 while at HP

Abstract

This document provides guidelines and suggestions for designing Modality Components in the MMI Architecture, which are responsible for controlling the various input and output modalities on various devices. It also presents several possible examples of Modality Components: (1) face identification, (2) form-filling using handwriting recognition and (3) video display.

Status of this Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

This is the 1 March 2011 W3C Working Group Note of "Best practices for creating MMI Modality Components". The Multimodal Interaction Working Group originally published this content as part of the 16 October 2008 Working Draft of the "Multimodal Architecture and Interfaces (MMI Architecture)". However, the Working Group concluded that the description of how to create Modality Components, together with examples of possible Modality Components, should be published as a Working Group Note rather than as part of the MMI Architecture specification. The goal of this Working Group Note is to provide guidelines and suggestions for designing Modality Components in the MMI Architecture and to make it easier to author concrete Modality Components for multimodal Web applications. This document also presents several possible examples of Modality Components: (1) face identification, (2) form-filling using handwriting recognition and (3) video display.

This W3C Working Group Note has been developed by the Multimodal Interaction Working Group of the W3C Multimodal Interaction Activity.

Comments for this note are welcomed and should have a subject starting with the prefix '[ARCH]'. Please send them to [email protected], the public email list for issues related to Multimodal. This list is archived and acceptance of this archiving policy is requested automatically upon first post. To subscribe to this list send an email to [email protected] with the word "subscribe" in the subject line.

For more information about the Multimodal Interaction Activity, please see the Multimodal Interaction Activity statement.

This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

Publication as a Working Group Note does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

Table of Contents

1 Introduction
2 Modality component guidelines
    2.1 Guideline 1: Each modality component must implement all of the MMI life-cycle events.
    2.2 Guideline 2: Identify other functions of the modality component that are relevant to the interaction manager.
    2.3 Guideline 3: If the component uses media, specify the media format. For example, audio formats for speech recognition, or InkML for handwriting recognition.
    2.4 Guideline 4: Specify protocols for use between the component and the Interaction Manager (IM) (e.g., SIP or HTTP).
    2.5 Guideline 5: Specify supported human languages, e.g., English, German, Chinese and locale, if relevant.
    2.6 Guideline 6: Specify supporting languages required by the component, if any.
    2.7 Guideline 7: Modality components sending data to the interaction manager must use the [EMMA] format where appropriate.
    2.8 Guideline 8: Specify error codes and their meanings to be returned to the IM.
3 Modality component design suggestions
    3.1 Design suggestion 1: Consider constructing a complex modality component with multiple functions if one function handles the errors generated by another function.
    3.2 Design suggestion 2: Consider constructing a complex modality component with multiple functions rather than several simple modality components if the functions need to be synchronized.
    3.3 Design suggestion 3: Consider constructing a nested modality component with multiple child modality components if the child modality components are frequently used together but do not handle the errors generated by the other child components and the child components do not need to be extensively synchronized.
4 Example simple modality: Face Identification
    4.1 Functions of a Possible Face Identification Component
    4.2 Event Syntax
        4.2.1 Examples of events for starting the component
        4.2.2 Example output event
5 Example simple modality: Form-filling using Handwriting Recognition
    5.1 Functions of a Possible Handwriting Recognition Component
    5.2 Event Syntax
        5.2.1 Examples of events for preparing the component
        5.2.2 Examples of events for starting the component
        5.2.3 Example output event
6 Example simple modality: Video Display
    6.1 Functions of a Possible Video Display Component
    6.2 Event Syntax
        6.2.1 Examples of events for starting the component

Appendix

A References


1 Introduction

The W3C Multimodal Interaction (MMI) Working Group develops an architecture [MMI-ARCH] for the Multimodal Interaction framework [MMIF]. The Multimodal Architecture describes a general and flexible framework for interoperability of the various components of the multimodal framework (e.g. modality components (MC) and the interaction manager (IM)) in an abstract way. Among other things, it defines interfaces and messages between the constituents of the framework, but it is up to the implementation to decide how these messages are transported in the case of a distributed implementation.

This Note is an informative supplement to the Multimodal Architecture and Interfaces specification [MMI-ARCH]. In contrast to the Multimodal Architecture specification, which defines normative conformance for multimodal constituents, the intention of this document is to provide additional informative guidelines for authors of MMI modality components. Its purpose is to assist authors in maximizing the usefulness of their Multimodal Architecture conformant constituents by describing additional information which will enable constituents to be more easily incorporated into a multimodal system. This additional suggested information includes, for example, descriptions of how the constituent behaves with respect to the optional aspects of the Architecture. The specific goals of the guidelines in this document are to:

  1. promote interoperability when constituents from different vendors are used in the same system, by suggesting important information that can be provided along with a constituent to enable others to use the constituent effectively (2 Modality component guidelines)
  2. provide suggestions for authoring Modality Components in order to maximize their effectiveness (3 Modality component design suggestions)
  3. provide illustrations of these suggestions using sample modality components for face identification, handwriting recognition and video display (4 Example simple modality: Face Identification, 5 Example simple modality: Form-filling using Handwriting Recognition and 6 Example simple modality: Video Display)


2 Modality component guidelines

The following guidelines help ensure that modality components are portable from one interaction manager to another.

2.1 Guideline 1: Each modality component must implement all of the MMI life-cycle events.

The MMI life-cycle events are the mechanism through which a modality component communicates with the interaction manager. The Modality Component (MC) author must define how the modality component will respond to each life-cycle event. A modality component must respond to every life-cycle event it receives from the interaction manager in the cases where a response is required, as defined by the MMI Architecture. For example, if a modality component presents a static display, it must respond to a <pauseRequest> event with a <pauseResponse> event even if the static display modality component does nothing else in response to the <pauseRequest>.
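
For instance, a minimal version of this exchange for such a static display component could look like the following sketch (the source and context URIs are illustrative, following the pattern of the examples later in this document). The IM sends:

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <mmi:pauseRequest source="uri:RTFURI" context="URI-1" requestID="request-1"/>
</mmi:mmi>

and the static display component answers, even though it takes no other action:

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <mmi:pauseResponse source="uri:displayURI" context="URI-1" requestID="request-1" status="success"/>
</mmi:mmi>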

For each life-cycle event, define the parameters and the syntax of the "data" element of the corresponding life-cycle event that will be used in performing that function. For example, the <startRequest> event for a speech recognition modality component might include parameters such as a timeout, a confidence threshold, a maximum n-best size, and a grammar.
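
Such a startRequest might look like the following sketch, in which the <speech-recognition-parameters> element and its attributes are hypothetical and would be defined by the component author:

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <mmi:startRequest source="uri:RTFURI" context="URI-1" requestID="request-1">
    <mmi:data>
      <speech-recognition-parameters grammar="cities.grxml" timeout="5000ms" threshold=".5" max-nbest="3"/>
    </mmi:data>
  </mmi:startRequest>
</mmi:mmi>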

2.2 Guideline 2: Identify other functions of the modality component that are relevant to the interaction manager.

Define an <extensionNotification> event to communicate these functions to and from the interaction manager.
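
For instance, a hypothetical speech synthesis component that can adjust its output volume might document an extension event such as the following, sent from the IM to the component. The event name and the payload inside <mmi:data> are illustrative, since the MMI Architecture defines only the <extensionNotification> envelope:

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <mmi:extensionNotification source="uri:RTFURI" context="URI-1" requestID="request-2" name="setVolume">
    <mmi:data>
      <volume level="0.8"/>
    </mmi:data>
  </mmi:extensionNotification>
</mmi:mmi>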

2.3 Guideline 3: If the component uses media, specify the media format. For example, audio formats for speech recognition, or InkML for handwriting recognition.

2.4 Guideline 4: Specify protocols for use between the component and the Interaction Manager (IM) (e.g., SIP or HTTP).

2.5 Guideline 5: Specify supported human languages, e.g., English, German, Chinese and locale, if relevant.

2.6 Guideline 6: Specify supporting languages required by the component, if any.

For example: a speech recognition component might require grammars in the SRGS format [SRGS], possibly with semantic interpretation tags in SISR [SISR], and a speech synthesis component might require prompt markup in SSML [SSML].

2.7 Guideline 7: Modality components sending data to the interaction manager must use the [EMMA] format where appropriate.

If a modality component captures or generates information, then it should represent that information in the EMMA format and use an extension event to send it to the interaction manager.
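
As a sketch, a hypothetical touch-screen selection component might report the item the user tapped as follows; the event name, the payload element and the EMMA annotations are illustrative:

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <mmi:extensionNotification source="uri:touchURI" context="URI-1" requestID="request-3" name="userInput">
    <mmi:data>
      <emma:emma version="1.0" xmlns:emma="http://www.w3.org/2003/04/emma">
        <emma:interpretation id="int1" emma:confidence="1.0" emma:medium="tactile" emma:mode="gui" emma:verbal="false">
          <selected-item>flight-433</selected-item>
        </emma:interpretation>
      </emma:emma>
    </mmi:data>
  </mmi:extensionNotification>
</mmi:mmi>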

2.8 Guideline 8: Specify error codes and their meanings to be returned to the IM.

The MC developer must specify all error codes that are specific to the component. If the MC is based on another technology, the developer can provide a reference to that technology's specification. For instance, if the MC is based on VoiceXML, a reference to the VoiceXML specification [VoiceXML] can be included for VoiceXML errors instead of listing each VoiceXML error.
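
For instance, a VoiceXML-based component might return a VoiceXML-defined error such as "error.badfetch" in the <statusInfo> element of a failure response. The following sketch assumes a hypothetical component URI:

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <mmi:startResponse source="uri:voicexmlURI" context="URI-1" requestID="request-1" status="failure">
    <mmi:statusInfo>
      error.badfetch
    </mmi:statusInfo>
  </mmi:startResponse>
</mmi:mmi>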

Errors such as XML errors and MMI protocol errors must be handled in accordance with the guidelines laid out in the MMI architecture. These errors do not need to be documented.

3 Modality component design suggestions

The following design suggestions should help modality component authors make modalities portable from one interaction manager to another.

3.1 Design suggestion 1: Consider constructing a complex modality component with multiple functions if one function handles the errors generated by another function.

For example, if the ASR function fails to recognize a user's utterance, the TTS function may present a prompt asking the user to try again. As another example, if the ASR function fails to recognize a user's utterance, a GUI function might display the n-best list on a screen so the user can select the desired word. Efficiency concerns may indicate that the two modality components be combined into a single complex modality component.

3.2 Design suggestion 2: Consider constructing a complex modality component with multiple functions rather than several simple modality components if the functions need to be synchronized.

For example, a TTS function must be synchronized with a visual talking head so that the lip movements are synchronized with the words. As another example, a TTS function presents information about each graphical item that the user places "in focus." Again, efficiency concerns may indicate that the TTS and talking-head modality components be combined into a single complex modality component.

3.3 Design suggestion 3: Consider constructing a nested modality component with multiple child modality components if the child modality components are frequently used together but do not handle the errors generated by the other child components and the child components do not need to be extensively synchronized.

Writing an application using a nested modality component may be easier than writing the same application using multiple modality components if the nested modality component hides much of the complexity of managing the child modality components.

4 Example simple modality: Face Identification

4.1 Functions of a Possible Face Identification Component

Consider a theoretical face identification modality component that takes an image or images of a face and returns the set of possible matches and the confidence of the face identification software in each match. An API to that modality component would include events for starting the component, for providing data, and for receiving results back from the component.

This particular example includes the information needed to run this component in the "startRequest" and "doneNotification" events; that is, in this example no "extensionNotification" events are used, although extensionNotification events could be part of another modality component's API. This example assumes that an image has already been acquired from some source; however, another possibility would be to also include image acquisition in the operation of the component.

Depending on the capabilities of the modality component, other information might be included, such as non-functional information about the capturing context of the still picture (e.g. an indoor or outdoor picture) or the type of image (e.g. a portrait or a street photograph), or technical information such as the algorithm to be used or the image format to expect. We emphasize that this is just an example to indicate the kinds of information that might be used by a multimodal application that includes face recognition. The actual interface used in real applications should be defined by experts in the field.

The use case is a face identification component that identifies one of a set of employees on the basis of face images.

The MMI Runtime Framework could use the following events to communicate with such a component.

Table 1: Component behavior of Face Identification with respect to modality component guidelines.

Guideline | Component Information
Guideline 1: Each modality component must implement all of the MMI life-cycle events. | See Table 2 for the details of the implementation of the life-cycle events.
Guideline 2: Identify other functions of the modality component that are relevant to the interaction manager. | All the functions of the component are covered by the life-cycle events; no other functions are needed.
Guideline 3: If the component uses media, specify the media format. | The component uses the JPEG format for images to be identified and for its image database.
Guideline 4: Specify protocols supported by the component for transmitting media (e.g. SIP). | The component uses HTTP for transmitting media.
Guideline 5: Specify supported human languages. | This component does not support any human languages.
Guideline 6: Specify supporting languages required by the component. | This component does not require any markup languages.
Guideline 7: Modality components sending data to the interaction manager must use the EMMA format. | This component uses EMMA.
Table 2: Component behavior of face identification for each life-cycle event.

Life-Cycle Event | Component Implementation
newContextRequest | (Standard) The component requests a new context from the IM.
newContextResponse | (Standard) The component starts a new context and assigns the new context id to it.
prepareRequest | The component prepares resources to be used in identification, specifically, the image database.
prepareResponse | (Standard) If the database of known users is not found, the error message "known users not found" is returned in the <statusInfo> element.
startRequest | The component starts processing if possible, using a specified image, image database, threshold, and limit on the size of the n-best results to be returned.
startResponse | (Standard) If the database of known users is not found, the error message "known users not found" is returned in the <statusInfo> element.
doneNotification | Identification results in EMMA format are reported in the "data" field. The mode is "photograph", the medium is "visual", the function is "identification", and verbal is "false".
cancelRequest | This component stops processing when it receives a "cancelRequest". It always performs a hard stop whether or not the IM requests a hard stop.
cancelResponse | (Standard)
pauseRequest | This component cannot pause.
pauseResponse | The <statusInfo> field is "cannot pause".
resumeRequest | This component cannot resume.
resumeResponse | The <statusInfo> field is "cannot resume".
extensionNotification | This component does not use "extensionNotification". It ignores any "extensionNotification" events sent to it by the IM.
clearContextRequest | (Standard)
clearContextResponse | (Standard)
statusRequest | (Standard)
statusResponse | The component returns a standard life-cycle response. The "automaticUpdate" attribute is "false", because this component does not supply automatic updates.

Note: "(Standard)" means that the component does not do anything over and above the actions specified by the MMI Architecture.

4.2 Event Syntax

4.2.1 Examples of events for starting the component

To start the component, the IM sends a startRequest event to the face identification component, asking it to start an identification. The event assumes that images found at a certain URI are to be identified by comparing them against a known set of employees found at another URI. The confidence threshold of the component is set to .5 and the IM requests a maximum of five possible matches.

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <mmi:startRequest source="uri:RTFURI" context="URI-1" requestID="request-1">
    <mmi:data>
      <face-identification-parameters threshold=".5" unknown="someURI" known="uri:employees" max-nbest="5"/>
    </mmi:data>
  </mmi:startRequest>
</mmi:mmi>

As part of support for the life-cycle events, a modality component is required to respond to a startRequest event with a startResponse event. Here's an example of a startResponse from the face identification component, informing the IM that the face identification component has successfully started.

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <mmi:startResponse source="uri:faceURI" context="URI-1" requestID="request-1" status="success"/>
</mmi:mmi>

Here's an example of a startResponse event from the face identification component to the IM in the case of failure, with an example failure message. In this case the failure message indicates that the known images cannot be found. 

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <mmi:startResponse source="uri:faceURI" context="URI-1" requestID="request-1" status="failure">
    <mmi:statusInfo>
      known users not found
    </mmi:statusInfo>
  </mmi:startResponse>
</mmi:mmi>

4.2.2 Example output event

Here's an example of an output event, sent from the face identification component to the IM, using EMMA to represent the identification results. Two results with different confidences are returned.

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <mmi:doneNotification source="uri:faceURI" context="URI-1" status="success" requestID="request-1">
    <mmi:data>
      <emma:emma version="1.0" xmlns:emma="http://www.w3.org/2003/04/emma">
        <emma:one-of emma:medium="visual" emma:verbal="false" emma:mode="photograph" emma:function="identification">
          <emma:interpretation id="int1" emma:confidence=".75">
            <person>12345</person>
            <name>Mary Smith</name>
          </emma:interpretation>
          <emma:interpretation id="int2" emma:confidence=".6">
            <person>67890</person>
            <name>Jim Jones</name>
          </emma:interpretation>
        </emma:one-of>
      </emma:emma>
    </mmi:data>
  </mmi:doneNotification>
</mmi:mmi>

This is an example of EMMA output in the case where the face image doesn't match any of the employees.

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <mmi:doneNotification source="uri:faceURI" context="URI-1" status="success" requestID="request-1" >
    <mmi:data>
      <emma:emma version="1.0" xmlns:emma="http://www.w3.org/2003/04/emma">
        <emma:interpretation id="int1" emma:confidence="0.0"
         emma:uninterpreted="true" emma:medium="visual" emma:mode="photograph"
         emma:function="identification"/>
      </emma:emma>
    </mmi:data>
  </mmi:doneNotification>
</mmi:mmi> 

5 Example simple modality: Form-filling using Handwriting Recognition

5.1 Functions of a Possible Handwriting Recognition Component

Consider an ink recognition modality component for Handwriting Recognition (HWR) that takes digital ink written using an electronic pen or stylus, performs recognition and returns the recognized text. An API to such a modality component would include events for initializing the component, for requesting recognition by providing digital ink data, and for receiving the recognized text result (possibly an n-best list) back from the component, as shown in the figure below.

Figure 1: Example of Japanese handwriting recognition

This example assumes that handwriting ink is captured, represented in the W3C InkML format and sent to the IM with a request that it be recognized as text. The following sequence of events describes the ink recognition request.

  1. The IM prepares the ink recognition modality by sending the "prepareRequest" event along with the parameters for configuring the HWR system.
  2. The ink recognition modality responds with the "prepareResponse" event carrying the status of the configuration of the HWR system.
  3. The IM sends the "startRequest" event to the ink recognition modality; the event's data field contains the InkML data to be recognized.
  4. Once the recognition is completed, the ink recognition modality notifies the IM of the results using the "doneNotification" event, along with the recognition choices (n-best list).

The use case is a form-filling application which accepts handwriting input provided by the user in the form fields. The inputs are recognized and displayed back as text in the corresponding fields. An ink capture modality may be used to capture the ink and send it to the IM for recognition. The communication between the ink capture modality and the IM is not covered here for the sake of brevity. The following section explains the details of the communication between the MMI Runtime Framework (RTF) of the IM and the ink recognition modality.

Table 3: Component behavior of Ink modality with respect to modality component guidelines.

Guideline | Component Information
Guideline 1: Each modality component must implement all of the MMI life-cycle events. | See Table 4 for the details of the implementation of the life-cycle events.
Guideline 2: Identify other functions of the modality component that are relevant to the interaction manager. | All the functions of the component are covered by the life-cycle events; no other functions are needed.
Guideline 3: If the component uses media, specify the media format. | The component uses the W3C InkML format to represent handwriting data (digital ink).
Guideline 4: Specify protocols supported by the component for transmitting media (e.g. SIP). | The component uses HTTP for transmitting media. Other standard protocols such as TCP may also be explored.
Guideline 5: Specify supported human languages. | Virtually any human language script can be supported, depending on the capabilities of the HWR component.
Guideline 6: Specify supporting languages required by the component. | W3C InkML for representing the handwriting data.
Guideline 7: Modality components sending data to the interaction manager must use the EMMA format. | This component uses EMMA.
Table 4: Component behavior of handwriting recognition for each life-cycle event.

Life-Cycle Event | Component Implementation
newContextRequest | (Standard) The component requests a new context from the IM.
newContextResponse | (Standard) The component starts a new context and assigns the new context id to it.
prepareRequest | The component prepares resources to be used in recognition. Based on the 'script' parameter, it first selects an appropriate recognizer. It also configures the recognizer with other parameters, such as the recognition confidence threshold and the limit on the size of the n-best results to be returned, when available.
prepareResponse | (Standard) If the component fails to find a recognizer matching the requested language script, a relevant error message is returned in the <statusInfo> element.
startRequest | The component performs recognition of the handwriting input.
startResponse | (Standard) The status of recognition as "success" or "failure" is returned in the <statusInfo> element.
doneNotification | Recognition results in EMMA format are reported in the "data" field. The mode is "ink", the medium is "tactile", the function is "transcription", and verbal is "true".
cancelRequest | This component stops processing when it receives a "cancelRequest". It always performs a hard stop irrespective of the IM request.
cancelResponse | (Standard)
pauseRequest | This component cannot pause.
pauseResponse | The <statusInfo> field is "cannot pause".
resumeRequest | This component cannot resume.
resumeResponse | The <statusInfo> field is "cannot resume".
extensionNotification | This component does not use "extensionNotification". It ignores any "extensionNotification" events sent to it by the IM.
clearContextRequest | (Standard)
clearContextResponse | (Standard)
statusRequest | (Standard)
statusResponse | The component returns a standard life-cycle response. The "automaticUpdate" attribute is "false", because this component does not supply automatic updates.

Note: "(Standard)" means that the component does not do anything over and above the actions specified by the MMI Architecture.

5.2 Event Syntax

5.2.1 Examples of events for preparing the component

The IM sends a prepareRequest event to the ink recognition component. The ink recognition component selects an appropriate recognizer that matches the given language script; in this example it is set to "English_Lowercase". The "RecoGrammar.xml" grammar file contains constraints that aid the recognizer. The confidence threshold of the component is set to .7 and the IM requests a maximum of five possible matches. Based on the capability of the recognizer, other possible parameters such as a 'user profile' that contains user-specific information can be provided.

<mmi:mmi version="1.0" xmlns:mmi="http://www.w3.org/2008/04/mmi-arch">
  <mmi:prepareRequest source="uri:RTFURI" context="URI-1" requestID="request-1">
    <mmi:data>
      <ink-recognition-parameters grammar="RecoGrammar.xml" threshold=".7" script="English_Lowercase" max-nbest="5"/>
    </mmi:data>
  </mmi:prepareRequest>
</mmi:mmi>

As part of support for the life-cycle events, a modality component is required to respond to a prepareRequest event with a prepareResponse event. Here's an example of a prepareResponse from the ink recognition component, informing the IM that the ink recognition component has successfully initialized.

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <mmi:prepareResponse source="uri:inkRecognizerURI" context="URI-1" requestID="request-1" status="success"/>
</mmi:mmi>

Here's an example of a prepareResponse event from the ink recognition component to the IM in the case of failure, with an example failure message. In this case the failure message indicates that the language script is not supported.

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <mmi:prepareResponse source="uri:inkRecognizerURI" context="URI-1" requestID="request-1" status="failure">
    <mmi:statusInfo>
      Given language script not supported
    </mmi:statusInfo>
  </mmi:prepareResponse>
</mmi:mmi>

5.2.2 Examples of events for starting the component

To start the component and recognize the handwriting data, the IM sends a startRequest event to the ink recognition component. The data field of the event contains the InkML representation of the ink data.

Along with the ink, additional information such as the reference coordinate system and the capture device's resolution may also be provided in the InkML data. The example below shows that the ink strokes have X and Y channels and that the ink has been captured at a resolution of 1000 DPI. The example ink data contains strokes of the Japanese character "手" (te), which means "hand".

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <mmi:startRequest source="uri:inkRecognizerURI" context="URI-1" requestID="request-1">
    <mmi:data>
      <ink:ink version="1.0" xmlns:ink="http://www.w3.org/2003/InkML">
        <ink:definitions>
          <ink:context id="device1Context">
            <ink:traceFormat id="strokeFormat">
              <ink:channel name="X" type="decimal">
                <ink:channelProperty name="resolution" value="1000" units="1/in"/>
              </ink:channel>
              <ink:channel name="Y" type="decimal">
                <ink:channelProperty name="resolution" value="1000" units="1/in"/>
              </ink:channel>
            </ink:traceFormat>
          </ink:context>
        </ink:definitions>
       <ink:traceGroup contextRef="#device1Context">
        <ink:trace>
          106 81, 105 82, 104 84, 103 85, 101 88, 100 90, 99 91, 97 97,
          89 105, 88 107, 87 109, 86 110, 84 111, 84 112, 82 113, 78 117,
          74 121, 72 122, 70 123, 68 125, 67 125, 66 126, 65 126, 63 127,
          57 129, 53 133, 47 135, 46 136, 45 136, 44 137, 43 137, 43 137
        </ink:trace>

        <ink:trace>
          28 165, 29 165, 31 165, 33 165, 35 164, 37 164, 38 164, 40 163,
          42 163, 45 163, 49 162, 51 162, 53 162, 56 162, 58 162, 64 160,
          69 160, 71 159, 74 159, 76 159, 78 159, 86 157, 91 157, 95 157,
          96 157, 99 157, 101 157, 103 157, 109 155, 111 155, 114 155,
          116 155, 119 155, 121 154, 124 154, 126 154, 127 154, 129 154,
          131 154, 134 153, 135 153, 136 153, 137 153, 138 153, 139 153,
          140 153, 141 153, 142 153, 143 153, 144 153, 145 153, 145 153  
        </ink:trace>

        <ink:trace>
          10 218, 12 218, 14 218, 20 216, 25 216, 28 216, 31 216, 34 216,
          37 216, 45 216, 53 216, 58 215, 60 215, 63 215, 68 215, 72 215,
          74 215, 77 215, 85 212, 88 212, 94 210, 100 208, 105 208, 107 208,
          109 208, 110 208, 111 207, 114 207, 115 207, 119 207, 121 207,
          123 207, 124 207, 128 206, 130 205, 131 205, 134 205, 136 205,
          137 205, 138 205, 139 204, 140 204, 141 204, 142 204, 143 204,
          144 204, 145 204, 146 204, 147 204, 148 204, 149 204, 150 204,
          151 203, 152 203, 153 203, 154 203, 155 203, 156 203, 158 203,
          159 202, 160 202, 161 202, 162 202, 163 202, 164 202, 165 202,
          166 202, 167 202, 168 202, 169 202, 170 202, 171 202, 172 202,
          173 202, 173 201, 173 201
        </ink:trace>

        <ink:trace>
          78 128, 78 127, 79 127, 79 128, 80 129, 80 130, 81 132, 82 133,
          82 134, 83 135, 84 137, 85 139, 86 141, 87 142, 88 144, 89 146,
          94 152, 95 153, 96 155, 98 160, 99 162, 100 165, 101 167, 101 169,
          102 173, 102 176, 102 181, 102 183, 102 185, 102 186, 104 192,
          104 195, 104 197, 104 199, 104 201, 104 203, 104 205, 104 206,
          104 207, 104 208, 104 209, 104 210, 104 211, 104 213, 104 214,
          104 215, 104 216, 104 217, 104 218, 104 220, 103 222, 102 223,
          102 224, 102 223, 102 224, 103 225, 103 228, 103 229, 103 230,
          103 231, 103 232, 103 233, 103 236, 103 239, 103 242, 103 243,
          103 247, 103 248, 102 249, 102 250, 102 251, 101 251, 100 253,
          99 255, 99 256, 98 257, 97 258, 97 259, 96 260, 96 261, 95 262,
          95 263, 94 264, 94 265, 93 266, 93 267, 92 268, 91 269, 91 270,
          90 271, 90 272, 89 273, 89 274, 88 275, 88 276, 87 276, 87 277,
          86 277, 86 278, 85 279, 85 280, 84 281, 83 282, 82 284, 82 285,
          81 285, 80 286, 79 287, 78 288, 77 288, 77 289, 76 290, 75 290,
          75 291, 74 291, 74 290, 74 289, 74 288, 74 287, 73 287, 73 286,
          73 285, 72 284, 72 281, 71 280, 70 279, 70 278, 69 277, 68 276,
          67 275, 65 274, 62 272, 60 271, 59 271, 58 270, 57 270, 56 269,
          55 268, 54 268, 53 267, 52 267, 51 267, 49 267, 48 267, 48 266,
          48 266  
        </ink:trace>
       </ink:traceGroup>
      </ink:ink>
    </mmi:data>
   </mmi:startRequest>
</mmi:mmi>

As part of support for the life-cycle events, a modality component is required to respond to a startRequest event with a startResponse event. Here's an example of a startResponse from the ink recognition component, informing the IM that the ink recognition component has successfully started.

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <mmi:startResponse source="uri:inkRecognizerURI" context="URI-1" requestID="request-1" status="success"/>
</mmi:mmi>

Here's an example of a startResponse event from the ink recognition component to the IM in the case of failure, with an example failure message. In this case the failure message indicates that the recognition failed because the handwriting data was in an invalid format.

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <mmi:startResponse source="uri:inkRecognizerURI" context="URI-1" requestID="request-1" status="failure">
    <mmi:statusInfo>
      Invalid data format
    </mmi:statusInfo>
  </mmi:startResponse>
</mmi:mmi>

5.2.3 Example output event

Here's an example of an output event, sent from the ink recognition component to the IM, using EMMA to represent the recognition results. Two results with different confidences are returned.

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <mmi:doneNotification source="uri:inkRecognizerURI" context="URI-1" status="success" requestID="request-1">
    <mmi:data>
      <emma:emma version="1.0" xmlns:emma="http://www.w3.org/2003/04/emma">
        <emma:one-of emma:medium="tactile" emma:verbal="true"
         emma:mode="ink" emma:function="transcription">
          <emma:interpretation id="int1" emma:confidence=".8">
            <text> 手 </text>
          </emma:interpretation>
          <emma:interpretation id="int2" emma:confidence=".7">
           <text> 于 </text>
          </emma:interpretation>
        </emma:one-of>
      </emma:emma>
    </mmi:data>
  </mmi:doneNotification>
</mmi:mmi>

This is an example of EMMA output in the case where the recognizer is unable to find a suitable match to the input handwriting. The EMMA output contains an empty interpretation result.

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <mmi:doneNotification source="uri:inkRecognizerURI" context="URI-1" status="success" requestID="request-1" >
    <mmi:data>
      <emma:emma version="1.0" xmlns:emma="http://www.w3.org/2003/04/emma">
        <emma:interpretation id="int1" emma:confidence="0.0"
         emma:medium="tactile" emma:verbal="true" emma:mode="ink"
         emma:function="transcription" emma:uninterpreted="true"/>
      </emma:emma>
    </mmi:data>
  </mmi:doneNotification>
</mmi:mmi>

6 Example simple modality: Video Display

6.1 Functions of a Possible Video Display Component

Consider a theoretical video display modality component that receives a video file and displays it on a screen. An API to that modality component would include events for starting the component, for providing the video file, and for receiving player status information back from the component. This example includes the information needed to run this component in the "startRequest" event and shows a display codec problem.

In order to focus on the behavior of the output modality component, this example assumes that the video file is provided by some source; however, another possibility would be to also include video acquisition in a composite (input/output) and more complex real-time display component. Depending on the capabilities of the modality component, other possible information that might be included would be the video formats supported. The MMI Runtime Framework could use the following events to communicate with such a component.

Table 5: Component behavior of Video Display with respect to modality component guidelines.

Guideline | Component Information
Guideline 1: Each modality component must implement all of the MMI life-cycle events. | See Table 6 for the details of the implementation of the life-cycle events.
Guideline 2: Identify other functions of the modality component that are relevant to the interaction manager. | All the functions of the component are covered by the life-cycle events; no other functions are needed.
Guideline 3: If the component uses media, specify the media format. | The component currently supports only the H.264 codec format.
Guideline 4: Specify protocols supported by the component for transmitting media (e.g. SIP). | The component uses HTTP for transmitting media.
Guideline 5: Specify supported human languages. | This component does not support any human languages.
Guideline 6: Specify supporting languages required by the component. | This component does not require any markup languages.
Guideline 7: Modality components sending data to the interaction manager must use the EMMA format. | This component uses EMMA.
Table 6: Component behavior of Video Display for each life-cycle event.

Life-Cycle Event | Component Implementation
newContextRequest | (Standard) The component requests a new context from the IM.
newContextResponse | (Standard) The component starts a new context and assigns the new context id to it.
prepareRequest | The component prepares resources to be used in display configuration, specifically, the supported formats table.
prepareResponse | (Standard) If the video format is not found in the supported codec formats table, the error message "codec not supported" is returned in the <statusInfo> element.
startRequest | The component starts displaying video if possible. The <mmi:data> element might hold a <video-display-parameters> element containing a "videoFile" attribute. The "videoFile" attribute contains the URI referencing the video content.
startResponse | (Standard) If the current video format (WVM) is not found in the supported codec formats table, the error message "codec not supported" is returned in the <statusInfo> element.
doneNotification | The display state in EMMA format is reported in the "data" field. The mode is "video", the medium is "visual", the function is "playing", and verbal is "false".
cancelRequest | This component stops processing when it receives a "cancelRequest". It always performs a hard stop irrespective of the IM request.
cancelResponse | (Standard)
pauseRequest | (Standard)
pauseResponse | (Standard)
resumeRequest | (Standard)
resumeResponse | (Standard)
extensionNotification | This component does not use "extensionNotification". It ignores any "extensionNotification" events sent to it by the IM.
clearContextRequest | (Standard)
clearContextResponse | (Standard)
statusRequest | (Standard)
statusResponse | The component returns a standard life-cycle response. The "automaticUpdate" attribute is "false", because this component does not supply automatic updates.

Note: "(Standard)" means that the component does not do anything over and above the actions specified by the MMI Architecture.

6.2 Event Syntax

6.2.1 Examples of events for starting the component

To start the component, the IM sends a startRequest event to the display component, asking it to start a video display. The event gives information about a video file at a certain URI.

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <mmi:startRequest source="uri:RTFURI" context="URI-1" requestID="request-1">
    <mmi:data>
      <video-display-parameters videoFile="someURI"/>
    </mmi:data>
  </mmi:startRequest>
</mmi:mmi>

As part of support for the life-cycle events, a modality component is required to respond to a startRequest event with a startResponse event. Here's an example of a startResponse from the display component, informing the IM that the component has successfully started and the video is playing.

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <mmi:startResponse source="uri:displayURI" context="URI-1" requestID="request-1" status="success"/>
</mmi:mmi>

Here's an example of a startResponse event from the display component to the IM in the case of failure, with an example failure message. In this case the failure message indicates that the video codec is not supported.

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <mmi:startResponse source="uri:displayURI" context="URI-1" requestID="request-1" status="failure">
    <mmi:statusInfo>
      WVM codec not supported
    </mmi:statusInfo>
  </mmi:startResponse>
</mmi:mmi>

A References

MMI-ARCH
"Multimodal Architecture and Interfaces (Working Draft)", Jim Barnett et al., editors. This specification describes a loosely coupled architecture for multimodal user interfaces, which allows for co-resident and distributed implementations, and focuses on the role of markup and scripting, and the use of well-defined interfaces between its constituents. World Wide Web Consortium, 2011.
MMIF
"W3C Multimodal Interaction Framework", James A. Larson, T.V. Raman and Dave Raggett, editors. World Wide Web Consortium, 2003.
EMMA
"Extensible MultiModal Annotation markup language (EMMA)", Michael Johnston et al., editors. EMMA is an XML format for annotating application-specific interpretations of user input with information such as confidence scores, time stamps, input modality and alternative recognition hypotheses. World Wide Web Consortium, 2009.
VoiceXML
"Voice Extensible Markup Language (VoiceXML) Version 2.1", Matt Oshry et al., editors. World Wide Web Consortium, 2007.
SSML
"Speech Synthesis Markup Language (SSML) Version 1.1", Daniel C. Burnett et al., editors. World Wide Web Consortium, 2010.
SRGS
"Speech Recognition Grammar Specification Version 1.0", Andrew Hunt et al., editors. World Wide Web Consortium, 2004.
SISR
"Semantic Interpretation for Speech Recognition (SISR) Version 1.0", Luc Van Tichelen et al., editors. World Wide Web Consortium, 2004.