
Example 4: CognitivDemo

This example demonstrates how the user’s conscious mental intention can be recognized by the Mental Commands detection and used to control the movement of a 3D virtual object.  It also shows the steps required to train the Mental Commands suite to recognize distinct mental actions for an individual user. 

The design of the CognitivDemo application is quite similar to that of ExpressivDemo. In Example 2, ExpressivDemo retrieves EmoStates™ from Emotiv EmoEngine™ and uses the EmoState data describing the user’s facial expressions to control an external avatar. In this example, information about the user’s cognitive mental activity is extracted instead. The output of the Mental Commands detection indicates whether the user is mentally engaged in one of the trained Mental Commands actions (pushing, lifting, rotating, etc.) at any given time. Based on the Mental Commands results, corresponding commands are sent to a separate application called EmoCube to control the movement of a 3D cube.

Commands are communicated to EmoCube via a UDP network connection. As in Example 2, the network protocol is very simple: an action is communicated as two comma-separated, ASCII-formatted values. The first is the action type returned by ES_CognitivGetCurrentAction(), and the second is the action power returned by ES_CognitivGetCurrentActionPower(), as shown in Listing 11.
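Listing 11 itself is not reproduced here, but a minimal sketch of the idea follows. The sendUdpMessage() helper and the integer scaling of the power value are assumptions for illustration, not necessarily how the SDK's listing formats the message.

#include <sstream>
#include <string>

#include "edk.h"
#include "EmoStateDLL.h"

// Build the "action,power" message for EmoCube from the current EmoState.
void sendCognitivCommand(EmoStateHandle eState) {
    EE_CognitivAction_t action = ES_CognitivGetCurrentAction(eState);
    float power = ES_CognitivGetCurrentActionPower(eState);

    std::ostringstream os;
    os << static_cast<int>(action) << "," << static_cast<int>(power * 100.0f);

    // sendUdpMessage(os.str());  // hypothetical helper: transmit over UDP to EmoCube
}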

Training for Mental Commands

The Mental Commands detection suite requires a training process in order to recognize when a user is consciously imagining or visualizing one of the supported Mental Commands actions. Unlike the Facial Expressions suite, there is no universal signature that will work well across multiple individuals. An application creates a trained Mental Commands signature for an individual user by calling the appropriate Mental Commands API functions and correctly handling the appropriate EmoEngine events. The training protocol is very similar to the one described in Example 2 for creating a trained Facial Expressions signature.

To better understand the API calling sequence, an explanation of the Mental Commands detection is required.  As with the Facial Expressions detection, it will be useful to first familiarize yourself with the operation of the Mental Commands tab in Emotiv Control Panel before attempting to use the Mental Commands API functions. 

Mental Commands can be configured to recognize and distinguish between up to four distinct actions at a given time. New users typically require practice in order to reliably evoke and switch between the mental states used for training each Mental Commands action. As such, it is imperative that a user first master a single action before enabling two concurrent actions, two actions before three, and so forth.
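A sketch of this incremental approach, assuming the EE_CognitivSetActiveActions() function declared in EDK.h (which takes a bitwise OR of EE_CognitivAction_t values), might look like this:

#include "edk.h"

// Enable a single trained action to begin with; add more only once it is reliable.
void enableActions(unsigned int userID) {
    unsigned long activeActions = COG_PUSH;
    EE_CognitivSetActiveActions(userID, activeActions);

    // Later, once COG_PUSH is mastered, a second action could be enabled:
    // EE_CognitivSetActiveActions(userID, COG_PUSH | COG_LIFT);
}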

During the training update process, it is important to maintain the quality of the EEG signal and the consistency of the mental imagery associated with the action being trained.  Users should refrain from moving and should relax their face and neck in order to limit other potential sources of interference with their EEG signal.

Unlike the Facial Expressions algorithm, the Mental Commands algorithm does not include a delay after receiving the COG_START training command before it starts recording new training data.

The above sequence diagram describes the process of carrying out Mental Commands training on a particular action. The Mental Commands-specific events are declared as the enumerated type EE_CognitivEvent_t in EDK.h. Note that this type differs from the EE_Event_t type used by top-level EmoEngine events. The code snippet in Listing 12 illustrates the procedure for extracting Mental Commands-specific event information from the EmoEngine event.
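Listing 12 is not reproduced here, but the pattern it illustrates is roughly the following sketch: check the top-level event type first, and only then query the Mental Commands-specific code from the same event handle.

#include "edk.h"

// Extract the Mental Commands-specific event code from a top-level EmoEngine event.
void handleCognitivEvent(EmoEngineEventHandle eEvent) {
    EE_Event_t eventType = EE_EmoEngineEventGetType(eEvent);

    if (eventType == EE_CognitivEvent) {
        EE_CognitivEvent_t cogEvent = EE_CognitivEventGetType(eEvent);
        switch (cogEvent) {
            case EE_CognitivTrainingStarted:   /* recording has begun           */ break;
            case EE_CognitivTrainingSucceeded: /* ask the user to accept/reject */ break;
            case EE_CognitivTrainingFailed:    /* ask the user to train again   */ break;
            default:                           break;
        }
    }
}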

Before the start of a training session, the action type must be first set with the API function EE_CognitivSetTrainingAction().  In EmoStateDLL.h, the enumerated type EE_CognitivAction_t defines all the Mental Commands actions that are currently supported (COG_PUSH, COG_LIFT, etc.).  If an action is not set before the start of training, COG_NEUTRAL will be used as the default.

EE_CognitivSetTrainingControl() can then be called with argument COG_START to start the training on the target action.  In EDK.h, enumerated type EE_CognitivTrainingControl_t defines the control command constants for Mental Commands training.  If the training can be started, an EE_CognitivTrainingStarted event will be sent almost immediately.  The user should be prompted to visualize or imagine the appropriate action prior to sending the COG_START command.  The training update will begin after the EmoEngine sends the EE_CognitivTrainingStarted event.  This delay will help to avoid training with undesirable EEG artifacts resulting from transitioning from a “neutral” mental state to the desired mental action state.
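Taken together with the previous paragraph, a minimal sketch of starting a training session might look like this; userID is assumed to have been obtained from an earlier EE_UserAdded event, and COG_PUSH is used only as an example action.

#include "edk.h"

// Select the action to train, then ask EmoEngine to begin a training session.
void startPushTraining(unsigned int userID) {
    // If no action is set, COG_NEUTRAL is used by default.
    EE_CognitivSetTrainingAction(userID, COG_PUSH);

    // ... prompt the user to start visualizing the "push" action here ...

    // Recording begins once EmoEngine sends the EE_CognitivTrainingStarted event.
    EE_CognitivSetTrainingControl(userID, COG_START);
}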

After approximately 8 seconds, two possible events will be sent from the EmoEngine™:

EE_CognitivTrainingSucceeded: If the quality of the EEG signal during the training session was sufficiently good to update the algorithm’s trained signature, EmoEngine™ will enter a waiting state to confirm the training update, which is explained below.

EE_CognitivTrainingFailed: If the quality of the EEG signal during the training session was not good enough to update the trained signature, then the Mental Commands training process will be reset automatically, and the user should be asked to start the training again.

If the training session succeeded (EE_CognitivTrainingSucceeded was received), then the user should be asked whether to accept or reject the session. The user may wish to reject the training session if he feels that he was unable to evoke or maintain a consistent mental state for the entire duration of the training period. The user’s response is then submitted to the EmoEngine through the API call EE_CognitivSetTrainingControl() with argument COG_ACCEPT or COG_REJECT. If the training is rejected, then the application should wait until it receives the EE_CognitivTrainingRejected event before restarting the training process. If the training is accepted, EmoEngine™ will rebuild the user’s trained Mental Commands signature, and an EE_CognitivTrainingCompleted event will be sent out once the calibration is done. Note that this signature-building process may take up to several seconds depending on system resources, the number of actions being trained, and the number of training sessions recorded for each action.
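A minimal sketch of this acceptance step follows; userAcceptsTraining() is a hypothetical application-specific prompt, not part of the SDK.

#include "edk.h"

bool userAcceptsTraining();   // hypothetical: ask the user through the application's UI

// Submit the user's decision after an EE_CognitivTrainingSucceeded event.
void onTrainingSucceeded(unsigned int userID) {
    if (userAcceptsTraining()) {
        // Accept: EmoEngine rebuilds the signature and then sends EE_CognitivTrainingCompleted.
        EE_CognitivSetTrainingControl(userID, COG_ACCEPT);
    } else {
        // Reject: wait for EE_CognitivTrainingRejected before restarting the training process.
        EE_CognitivSetTrainingControl(userID, COG_REJECT);
    }
}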

To test the example, launch the Emotiv Control Panel and the XavierComposer. In the Emotiv Control Panel, select Connect To XavierComposer, accept the default values, and then enter a new profile name. Navigate to the \example4\EmoCube folder and launch EmoCube, enter 20000 as the UDP port, and select Start Server. Start a new instance of CognitivDemo and observe that, when you use the Cognitiv control in the XavierComposer, the EmoCube responds accordingly.

Next, experiment with the training commands available in CognitivDemo to better understand the Mental Commands training procedure described above. The example below shows a sample CognitivDemo session that demonstrates the training procedure.
