EMOTIV lets the user create and execute a number of Mental Commands.
To provide consistency and a simple range of possible actions, each user profile contains space for training data for up to 15 different Commands, internally labelled COMMAND1 to COMMAND15. Each COMMAND slot stores a LABEL (for example, PUSH, DISAPPEAR, FIRE or WIND) and a link to a custom animation which can be executed. Emotiv Insight Control Center will support animations for PUSH, PULL, LIFT, DROP, LEFT, RIGHT, ROTATE LEFT, ROTATE RIGHT, ROTATE FORWARDS, ROTATE BACKWARDS, ROTATE CLOCKWISE, ROTATE ANTICLOCKWISE and DISAPPEAR; however, developers will be free to define their own Commands for each application.
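The slot layout described above can be sketched as a simple data structure. This is an illustrative model only; the actual on-disk profile format and field names are not published, so `CommandSlot`, `label`, `animation` and `trained` are all hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical model of one of a profile's 15 command slots: a label,
# an optional link to a custom animation, and a trained flag.
@dataclass
class CommandSlot:
    label: Optional[str] = None      # e.g. "PUSH", "FIRE", "WIND"
    animation: Optional[str] = None  # reference to a custom animation
    trained: bool = False            # whether training data is stored

# A profile reserves exactly 15 slots, labelled COMMAND1..COMMAND15.
profile_slots = {f"COMMAND{i}": CommandSlot() for i in range(1, 16)}

# Assigning a label and animation to the first slot.
profile_slots["COMMAND1"] = CommandSlot(label="PUSH",
                                        animation="push.anim",
                                        trained=True)
```

The fixed 15-slot layout keeps every profile the same shape regardless of how many Commands the user has actually trained.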
The first step in creating Mental Commands is to train the system to recognise your background mental state, the NEUTRAL condition, by recording a brief period of your brain patterns while you are not trying to execute any Commands. Training a new Mental Command is then as simple as selecting the desired Command label in Training mode and imagining the consequences of the Command for 8 seconds (for example, imagining the target object floating up into the air for the LIFT Command) while the system records the mental patterns you want to associate with it.
The Command is then live and you can test and practise it. After a few repeated trials and as many training updates as you wish, your Command is ready to be used and the training data summary is stored in your user profile.
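The two-stage workflow above (NEUTRAL baseline first, then an 8-second recording per Command) can be sketched as follows. The EEG capture is simulated with random samples, and `record_eeg`, `Trainer` and the 128 Hz sample rate are illustrative assumptions, not the Emotiv SDK.

```python
import random

SAMPLE_RATE = 128  # assumed samples per second, for illustration only

def record_eeg(seconds):
    """Stand-in for capturing EEG data for the given duration."""
    return [random.random() for _ in range(seconds * SAMPLE_RATE)]

class Trainer:
    """Hypothetical sketch of the training workflow described above."""

    def __init__(self):
        self.signatures = {}

    def train_neutral(self):
        # Background mental state, recorded while the user is at rest.
        self.signatures["NEUTRAL"] = record_eeg(8)

    def train_command(self, label):
        # The NEUTRAL baseline must exist before any Command is trained.
        if "NEUTRAL" not in self.signatures:
            raise RuntimeError("record the NEUTRAL baseline first")
        # The user imagines the Command's outcome for 8 seconds.
        self.signatures[label] = record_eeg(8)

trainer = Trainer()
trainer.train_neutral()
trainer.train_command("LIFT")
```

Repeating `train_command` for the same label would correspond to the optional training updates mentioned above, each refining the stored signature.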
In normal use, the user selects up to four of the (up to 15) trained Mental Commands to be active at any one time. When the user executes one of the active Commands, the desired outcome or animation is triggered. This may simply move the target object on screen, or each Command can be linked to a specific output such as a keystroke, mouse click or double tap. You can also control the actions of game characters, or even output to a remote control system to dim your lights, change the TV channel, fly your remote-controlled helicopter or move the Earth (actuator not included).
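The activation limit and the Command-to-output mapping can be sketched like this. The four-Command cap comes from the text above; the function names and the keystroke strings are hypothetical.

```python
MAX_ACTIVE = 4  # at most four Commands may be active at once

def activate(commands):
    """Select the currently active subset of trained Commands."""
    if len(commands) > MAX_ACTIVE:
        raise ValueError(f"at most {MAX_ACTIVE} Commands may be active")
    return set(commands)

# Link each Command to an output; here, simple keystroke names.
bindings = {"PUSH": "SPACE", "LIFT": "UP", "DROP": "DOWN"}

active = activate(["PUSH", "LIFT", "DROP"])

def on_command(label):
    """Return the output to emit when a Command is detected."""
    if label in active:
        return bindings.get(label)
    return None  # inactive Commands trigger nothing
```

Keeping the active set small is what makes detection practical: the system only has to discriminate between NEUTRAL and a handful of signatures at a time.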
Developers can achieve this using API calls, or through a mechanism called Key Bindings, which allows all EPOC and Insight models to generate user interface outputs (for example, keystroke sequences, mouse actions, touchscreen events and sounds) in response to Mental Commands using a very simple rule-based scripting system. These outputs can be directed to the application in focus, or to specific applications where the Operating System permits it.
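To make the rule-based idea concrete, here is a minimal parser for a made-up binding script. The actual Key Bindings syntax is not shown in this document, so the `LABEL => OUTPUT` format below is entirely invented for illustration.

```python
def parse_rules(script):
    """Parse lines of the form 'LABEL => OUTPUT' into a rules dict."""
    rules = {}
    for line in script.strip().splitlines():
        label, sep, output = line.partition("=>")
        if sep:  # skip lines without a rule arrow
            rules[label.strip()] = output.strip()
    return rules

# A hypothetical script binding Commands to UI outputs.
script = """
PUSH => KEY(SPACE)
LIFT => KEY(UP)
FIRE => MOUSE(LEFT_CLICK)
"""

rules = parse_rules(script)
```

A dispatcher would then look up the detected Command's label in `rules` and send the corresponding keystroke or mouse event to the target application.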