
Example 2: ExpressivDemo

This example demonstrates how an application can use the Performance Metrics detection suite to control an animated head model called BlueAvatar.  The model emulates the facial expressions made by the user wearing an Emotiv headset.  As in Example 1, ExpressivDemo connects to Emotiv EmoEngine™ and retrieves EmoStates™ for all attached users.  Each EmoState is examined to determine which facial expression best matches the user’s face.  ExpressivDemo communicates the detected expressions to the separate BlueAvatar application by sending UDP packets that follow a simple, pre-defined protocol.

The Performance Metrics state from the EmoEngine can be separated into three groups of mutually-exclusive facial expressions:

  • Upper face actions: Raised eyebrows, furrowed eyebrows
  • Eye related actions: Blink, Wink left, Wink right, Look left, Look right
  • Lower face actions: Smile, Smirk left, Smirk right, Clench, Laugh

This code fragment from ExpressivDemo shows how upper and lower face actions can be extracted from an EmoState buffer using the Emotiv API functions ES_ExpressivGetUpperFaceAction() and ES_ExpressivGetLowerFaceAction(), respectively.  To describe the upper and lower face actions more precisely, a floating-point value ranging from 0.0 to 1.0 is associated with each action to express its “power”, or degree of movement; it can be extracted via the ES_ExpressivGetUpperFaceActionPower() and ES_ExpressivGetLowerFaceActionPower() functions.
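
As a minimal sketch, assuming an EmoStateHandle named eState that has already been populated from an EE_EmoStateUpdated event (and that EmoStateDLL.h has been included), the extraction might look like this:

    // Read the current upper and lower face actions and their powers.
    EE_ExpressivAlgo_t upperFaceAction = ES_ExpressivGetUpperFaceAction(eState);
    float              upperFacePower  = ES_ExpressivGetUpperFaceActionPower(eState);

    EE_ExpressivAlgo_t lowerFaceAction = ES_ExpressivGetLowerFaceAction(eState);
    float              lowerFacePower  = ES_ExpressivGetLowerFaceActionPower(eState);

    // Example: react only to a reasonably strong smile.
    if (lowerFaceAction == EXP_SMILE && lowerFacePower > 0.5f) {
        // forward the expression to BlueAvatar, update the UI, etc.
    }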

Eye and eyelid-related state can be accessed via the API functions that contain the corresponding expression name, such as ES_ExpressivIsBlink(), ES_ExpressivIsLeftWink(), ES_ExpressivIsLookingRight(), etc.
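
For example, still assuming the same eState handle, the eye-related flags could be queried as follows (the return values are treated as booleans):

    if (ES_ExpressivIsBlink(eState)) {
        // the user blinked
    }
    if (ES_ExpressivIsLeftWink(eState)) {
        // the user winked with the left eye
    }
    if (ES_ExpressivIsLookingRight(eState)) {
        // the user is looking to the right
    }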

The protocol that ExpressivDemo uses to control the BlueAvatar motion is very simple.  Each facial expression result is translated to plain ASCII text, with a letter prefix describing the type of expression, optionally followed by an amplitude value if it is an upper or lower face action.  Multiple expressions can be sent to the head model at the same time in comma-separated form.  However, only one expression per Performance Metrics grouping is permitted (the effects of sending smile and clench together, or of blinking while winking, are undefined by the BlueAvatar).  Table 3 below excerpts the syntax of some of the expressions supported by the protocol.

Some examples:

  • Blink and smile with amplitude 0.5: B,S50
  • Eyebrow with amplitude 0.6 and clench with amplitude 0.3: b60, G30
  • Wink left and smile with amplitude 1.0: l, S100
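
As an illustration, the command string could be assembled from an EmoState roughly as in the sketch below.  Only the letter codes that appear in the examples above (‘B’, ‘l’, ‘b’, ‘S’, ‘G’) are mapped; the rest of the protocol table is not reproduced here, and the helper name is an invention for this sketch.

    #include <sstream>
    #include <string>
    #include "EmoStateDLL.h"   // ES_Expressiv* functions and EE_ExpressivAlgo_t

    // Assemble a comma-separated BlueAvatar command string from an EmoState.
    std::string buildAvatarCommand(EmoStateHandle eState) {
        std::ostringstream cmd;

        // Eye-related actions carry no amplitude.
        if (ES_ExpressivIsBlink(eState))         cmd << "B,";
        else if (ES_ExpressivIsLeftWink(eState)) cmd << "l,";

        // Upper face action, amplitude scaled to 0..100.
        if (ES_ExpressivGetUpperFaceAction(eState) == EXP_EYEBROW) {
            int power = static_cast<int>(ES_ExpressivGetUpperFaceActionPower(eState) * 100.0f);
            cmd << "b" << power << ",";
        }

        // Lower face action, amplitude scaled to 0..100.
        EE_ExpressivAlgo_t lower = ES_ExpressivGetLowerFaceAction(eState);
        int lowerPower = static_cast<int>(ES_ExpressivGetLowerFaceActionPower(eState) * 100.0f);
        if      (lower == EXP_SMILE)  cmd << "S" << lowerPower << ",";
        else if (lower == EXP_CLENCH) cmd << "G" << lowerPower << ",";

        std::string out = cmd.str();
        if (!out.empty()) out.erase(out.size() - 1);   // drop the trailing comma
        return out;
    }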

The prepared ASCII text is then sent to the BlueAvatar via a UDP socket.  ExpressivDemo supports sending expression strings for multiple users.  BlueAvatar should listen on port 30000 for the first user.  Whenever a subsequent Emotiv USB receiver is plugged in, ExpressivDemo increments the target port number of the associated BlueAvatar application by one.  Tip: when an Emotiv USB receiver is removed and then reinserted, ExpressivDemo treats it as a new Emotiv EPOC and still increments the target UDP port by one.
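
A rough sketch of the sending side, using plain BSD sockets rather than whatever socket wrapper ExpressivDemo actually uses (on Windows the Winsock equivalents apply), and assuming BlueAvatar runs on the same machine:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <string>

    // Send one command string to BlueAvatar over UDP.  The target port is
    // assumed to be 30000 plus the zero-based index of the user.
    void sendToBlueAvatar(const std::string& command, unsigned int userIndex) {
        const unsigned short port = static_cast<unsigned short>(30000 + userIndex);

        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) return;

        sockaddr_in addr = {};
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(port);
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);   // BlueAvatar on localhost

        sendto(sock, command.c_str(), command.size(), 0,
               reinterpret_cast<const sockaddr*>(&addr), sizeof(addr));
        close(sock);
    }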

In addition to translating Performance Metrics results into commands for the BlueAvatar, ExpressivDemo also implements a very simple command-line interpreter that can be used to demonstrate the use of personalized, trained signatures with the Performance Metrics suite.  Expressiv supports two types of “signatures” that are used to classify input from the Emotiv headset as indicating a particular facial expression.

The default signature is known as the universal signature, and it is designed to work well for a large population of users for the supported facial expressions.  If the application or user requires more accuracy or customization, then you may decide to use a trained signature.  In this mode, Performance Metrics requires the user to train the system by performing the desired action before it can be detected.  As the user supplies more training data, the accuracy of the Performance Metrics detection typically improves.  If you elect to use a trained signature, the system will only detect actions for which the user has supplied training data.  The user must provide training data for a neutral expression and at least one other supported expression before the trained signature can be activated.  Important note: not all Performance Metrics expressions can be trained.  In particular, eye and eyelid-related expressions (i.e. “blink”, “wink”, “look left”, and “look right”) cannot be trained.

The API functions that configure the Performance Metrics detections are prefixed with “EE_Expressiv”.  The training_exp command corresponds to the EE_ExpressivSetTrainingAction() function.  The trained_sig command corresponds to the EE_ExpressivGetTrainedSignatureAvailable() function.  Type “help” at the ExpressivDemo command prompt to see a complete set of supported commands.
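
For instance, those two commands might map onto API calls roughly as follows; userID and the output-parameter form of EE_ExpressivGetTrainedSignatureAvailable() shown here are assumptions for illustration rather than a copy of ExpressivDemo’s code:

    // "training_exp smile"  ->  select the expression to be trained next.
    EE_ExpressivSetTrainingAction(userID, EXP_SMILE);

    // "trained_sig"  ->  ask whether enough training data has been collected
    // to activate a trained signature for this user.
    int sigAvailable = 0;
    if (EE_ExpressivGetTrainedSignatureAvailable(userID, &sigAvailable) == EDK_OK) {
        // sigAvailable != 0 means a trained signature can be activated.
    }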

The figure below illustrates the function call and event sequence required to record training data for use with Performance Metrics.  It will be useful to first familiarize yourself with the training procedure on the Performance Metrics tab in Emotiv Control Panel before attempting to use the Performance Metrics training API functions.

The sequence diagram below describes the process of training an Expressiv facial expression.  The Expressiv-specific training events are declared as the enumerated type EE_ExpressivEvent_t in EDK.h.  Note that this type differs from the EE_Event_t type used by top-level EmoEngine events.

Before the start of a training session, the expression type must first be set with the API function EE_ExpressivSetTrainingAction().  In EmoStateDLL.h, the enumerated type EE_ExpressivAlgo_t defines all the expressions supported for detection.  Please note, however, that only non-eye-related detections (lower face and upper face) can be trained.  If an expression is not set before the start of training, EXP_NEUTRAL will be used as the default.

EE_ExpressivSetTrainingControl() can then be called with the argument EXP_START to start training the target expression.  In EDK.h, the enumerated type EE_ExpressivTrainingControl_t defines the control command constants for Expressiv training.  If the training can be started, an EE_ExpressivTrainingStarted event will be sent after approximately 2 seconds.  The user should be prompted to engage and hold the desired facial expression prior to sending the EXP_START command.  The training update will begin after the EmoEngine sends the EE_ExpressivTrainingStarted event.  This delay helps to avoid training on undesirable EEG artifacts resulting from the transition from the user’s current expression to the intended facial expression.
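
Taken together, the start of a training session might look like the sketch below; userID and the choice of EXP_SMILE are placeholders.

    // Begin training the "smile" expression for one user.  The application
    // should already have asked the user to make and hold a smile before
    // the EXP_START command is issued.
    EE_ExpressivSetTrainingAction(userID, EXP_SMILE);    // expression to train
    EE_ExpressivSetTrainingControl(userID, EXP_START);   // request training start

    // EmoEngine answers with an EE_ExpressivTrainingStarted event roughly
    // 2 seconds later; data collection begins at that point.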

After approximately 8 seconds, one of two possible events will be sent from the EmoEngine™:

EE_ExpressivTrainingSucceeded: If the quality of the EEG signal during the training session was sufficiently good to update the Expressiv algorithm’s trained signature, the EmoEngine will enter a waiting state to confirm the training update, which will be explained below.

EE_ExpressivTrainingFailed: If the quality of the EEG signal during the training session was not good enough to update the trained signature, then the Expressiv training process will be reset automatically, and the user should be asked to start the training again.

If the training session succeeded (EE_ExpressivTrainingSucceeded was received), then the user should be asked whether to accept or reject the session.  The user may wish to reject the training session if he feels that he was unable to maintain the desired expression throughout the duration of the training period.  The user’s response is then submitted to the EmoEngine through the API call EE_ExpressivSetTrainingControl() with the argument EXP_ACCEPT or EXP_REJECT.  If the training is rejected, then the application should wait until it receives the EE_ExpressivTrainingRejected event before restarting the training process.  If the training is accepted, EmoEngine™ will rebuild the user’s trained Expressiv™ signature, and an EE_ExpressivTrainingCompleted event will be sent out once the calibration is done.  Note that this signature-building process may take several seconds, depending on system resources, the number of expressions being trained, and the number of training sessions recorded for each expression.
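
The whole exchange can be driven from the application’s normal event loop.  The condensed sketch below assumes an EmoEngineEventHandle named eEvent obtained from EE_EmoEngineEventCreate() and a hypothetical promptUserToAccept() helper that asks the user whether to keep the session:

    // Handle Expressiv training events inside the usual event loop.
    if (EE_EngineGetNextEvent(eEvent) == EDK_OK &&
        EE_EmoEngineEventGetType(eEvent) == EE_ExpressivEvent) {

        switch (EE_ExpressivEventGetType(eEvent)) {

        case EE_ExpressivTrainingStarted:
            // EmoEngine is collecting data; the user must hold the expression.
            break;

        case EE_ExpressivTrainingSucceeded:
            // Signal quality was good; ask the user to confirm or discard it.
            if (promptUserToAccept())
                EE_ExpressivSetTrainingControl(userID, EXP_ACCEPT);
            else
                EE_ExpressivSetTrainingControl(userID, EXP_REJECT);
            break;

        case EE_ExpressivTrainingFailed:
            // Poor signal quality; the user should restart the session.
            break;

        case EE_ExpressivTrainingRejected:
            // Now safe to restart the training process.
            break;

        case EE_ExpressivTrainingCompleted:
            // The trained signature has been rebuilt and is ready for use.
            break;

        default:
            break;
        }
    }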

To run the ExpressivDemo example, launch the Emotiv Control Panel and XavierComposer.  In the Emotiv Control Panel select Connect > To XavierComposer, accept the default values and then enter a new profile name.  Next, navigate to the doc\Examples\example2\blueavatar folder and launch the BlueAvatar application.  Enter 30000 as the UDP port and press the Start Listening button.  Finally, start a new instance of ExpressivDemo, and observe that when you use the Upperface, Lowerface or Eye controls in XavierComposer, the BlueAvatar model responds accordingly.

Next, experiment with the training commands available in ExpressivDemo to better understand the Performance Metrics training procedure described above.  The image below shows a sample ExpressivDemo session that demonstrates how to train an expression.
