Developer Knowledge Base
The Vuzix M400 and M4000 are monocular Android-based wearable computers. They are designed to let users quickly and easily leverage specialized Android applications for hands-free access to critical information, allowing users to complete tasks faster while simultaneously reducing errors in the process.
The M400 and M4000 contain many of the same capabilities as a standard Android smartphone, in an unobtrusive head-mounted form factor. See below for a full list of hardware and interaction capabilities of the device:
- Full color display:
- M400: 640x360 nHD resolution occluded OLED
- M4000: 854x480 see-through waveguide optics
- 8-core 2.52 GHz Qualcomm XR1 processor
- 6GB system RAM
- 64GB internal flash storage
- Camera capable of up to 12.8MP stills and 4K video with auto-focus and image stabilization
- Orientation sensors (gyroscope, accelerometer, magnetometer)
- User-facing speaker
- Triple noise-cancelling microphones
- 4 standard Android control buttons
- Touch pad with multi-finger support
- Voice control
The M400 and M4000 represent a major leap forward in the head-mounted wearable device market. The power of the Qualcomm XR1 processor lets you push the boundaries of video processing and 3D rendering.
The M400 and M4000 also include an updated camera that improves resolution, frame rates and auto-focus time over previously available devices. These rates are documented in the Camera Knowledge Base.
The M400 and M4000 feature interaction methods which differ significantly from traditional touchscreen Android devices, and it is particularly important to keep these considerations in mind when designing the User Interface of an application intended to run on this device.
Existing applications that rely heavily on touchscreen interactions do not translate well to this device, because touchscreen UIs depend on taps at specific screen coordinates, which is not possible with the available interaction methods.
Voice
Voice commands are the ideal method of interacting with the device under many circumstances, as they allow users to quickly control the device and provide input without physically touching it, which would interrupt their workflow.
The device includes a Speech Recognition engine. Refer to the Speech SDK section for additional details on the engine and the speech vocabulary it supports.
Applications can leverage alternate recognition engines by including them within the application itself.
Navigation Buttons
The three navigation buttons on the device include both short and long-press functionality.
The buttons generate KeyEvents which can be intercepted and handled explicitly in your application, or can be left to the system to handle. Reference Android KeyEvent documentation for details.
Short presses on the buttons will perform the following functions:
- Foremost Button – Moves focus to the right within a UI, or down if no focusable objects are available to the right. Generates KEYCODE_DPAD_RIGHT.
- Middle Button – Moves focus to the left within a UI, or up if no focusable objects are available to the left. Generates KEYCODE_DPAD_LEFT.
- Rearmost Button – Selects the UI element that currently has focus. Generates KEYCODE_DPAD_CENTER.
Long presses on the buttons will perform the following functions:
- Foremost Button – Brings up a context menu for the current area of the UI, allowing users to access additional functions without crowding the UI. Generates KEYCODE_MENU.
- Middle Button – Returns to the Home screen. Generates KEYCODE_HOME.
- Rearmost Button – Moves back one step in the UI. Generates KEYCODE_BACK.
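As a sketch of how these KeyEvents can be intercepted (the class name and actions are illustrative, not part of the platform), an Activity can override onKeyDown():

```java
import android.app.Activity;
import android.view.KeyEvent;

public class ButtonDemoActivity extends Activity {
    // Intercept the navigation-button key codes described above.
    // Return true to consume the event, or defer to super
    // for the default focus-navigation behavior.
    @Override
    public boolean onKeyDown(int keyCode, KeyEvent event) {
        switch (keyCode) {
            case KeyEvent.KEYCODE_DPAD_CENTER:
                // Example: treat the rearmost button as a confirm action
                return true;
            case KeyEvent.KEYCODE_MENU:
                // Long press of the foremost button; show a custom menu here
                return true;
            default:
                return super.onKeyDown(keyCode, event);
        }
    }
}
```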
Touchpad
The M400 and M4000 feature a two-axis touchpad that can detect a wide variety of user gestures.
The touchpad is implemented as a trackball device, and methods such as dispatchTrackballEvent() and onTrackballEvent() can be used to capture and process the raw touchpad events.
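A minimal sketch of capturing those raw events in an Activity (the class name is illustrative):

```java
import android.app.Activity;
import android.view.MotionEvent;

public class TouchpadDemoActivity extends Activity {
    // The touchpad is surfaced as a trackball device, so raw motion
    // arrives through onTrackballEvent() rather than onTouchEvent().
    @Override
    public boolean onTrackballEvent(MotionEvent event) {
        float dx = event.getX(); // relative horizontal motion
        float dy = event.getY(); // relative vertical motion
        // Returning true consumes the event; returning false falls
        // back to the predefined gesture-to-keycode mapping.
        return true;
    }
}
```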
As a fallback, if you do not handle the trackball events in your application, predefined one-, two-, and three-finger gestures generate key presses. These keys can be captured with standard Android methods. Refer to the Android KeyEvent documentation for details.
One finger
- Swipe back to front: KEYCODE_DPAD_RIGHT
- Swipe front to back: KEYCODE_DPAD_LEFT
- Swipe bottom to top: KEYCODE_DPAD_UP
- Swipe top to bottom: KEYCODE_DPAD_DOWN
- Tap: KEYCODE_DPAD_CENTER
- Hold: KEYCODE_MENU
Two fingers
- Swipe back to front: KEYCODE_FORWARD_DEL
- Swipe front to back: KEYCODE_DEL
- Swipe bottom to top: KEYCODE_VOLUME_UP
- Swipe top to bottom: KEYCODE_VOLUME_DOWN
- Swipe top to bottom and hold: KEYCODE_VOLUME_MUTE
- Tap: KEYCODE_BACK
- Hold: KEYCODE_HOME
Three fingers
- Tap: KEYCODE_POWER
- Hold: KEYCODE_F12
Touchpad Mouse
With the release of version 2.1.0 of the M400 and M4000 OS, users can now configure their touchpad to act as a virtual mouse.
Enable this by selecting "Mouse" mode in Settings > System > Language & input > Touchpad.
Usage:
- Swipe with one finger to move the cursor.
- Tap with one finger to click the screen at the cursor.
- Swipe with two fingers to scroll the view.
- Tap with two fingers to go back.
M400 Android OS
The Android OS running on the M400 and M4000 is a modified version of Android 9.0, tailored to the components and capabilities of the device.
For the most part, applications intended for the M400 or M4000 can be developed by following standard Android development methodologies and leveraging existing Android APIs. The following are some prominent Android features for which the default APIs should be used:
- Camera – android.hardware.Camera or android.hardware.Camera2 may be used
- Sensors – use SensorManager
- Bluetooth – use BluetoothManager and BluetoothAdapter. Standard Bluetooth and BLE are supported
- Database – standard Android SQLite supported
- Google Cloud Messaging – use Google Play Services Client Library 9.8.0 or earlier
- Maps – use Google Play Services Client Library 9.8.0 or earlier
- Speech Recognition – use Vuzix Speech SDK
- Barcode Engine - use Vuzix Barcode SDK
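For instance, a minimal sketch of reading the orientation sensors through the standard SensorManager API (intended to run inside an Activity, e.g. in onCreate()):

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Inside an Activity or Service:
SensorManager sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
Sensor gyro = sensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE);
sensorManager.registerListener(new SensorEventListener() {
    @Override
    public void onSensorChanged(SensorEvent event) {
        // event.values holds angular speed around x/y/z in rad/s
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // React to accuracy changes if needed
    }
}, gyro, SensorManager.SENSOR_DELAY_NORMAL);
```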
Some components of the M400 and M4000 require device-specific APIs to access; these APIs are covered in detail in other sections of the SDK documentation.
Software developers must explicitly enable USB debugging on the M400 and M4000.
For most use cases, you can simply navigate to Settings > Connected devices and enable the setting labeled 'Allow ADB'.
To enable ADB along with all developer options, follow the instructions below:
- Navigate to Settings > About Glasses > Build number
- Tap (using the select button or touchpad) the Build number list entry 7 times (a pop-up will indicate how many remaining taps are needed)
- A new menu item is now available. Navigate to Settings > System > Developer options.
- Scroll down to the USB debugging switch and toggle it on to enable USB debugging.
You may disable USB debugging and developer mode from the Developer options menu at any time in the future.
The M400 and M4000 can run almost any Android application compiled for a minimum SDK version of 27*. Because of this, deploying applications to the M400 and M4000 is often as simple as updating your existing application to support button navigation.
Most Android developers prefer Android Studio. You may use any Android development environment including Android Studio, Xamarin, Eclipse, IntelliJ IDEA, and many more.
To get started, simply create a new Android project with minimum SDK version 27*.
*Targeting a minimum SDK version of 28 is an option if you only plan to support M400/M4000s running OS version 2.0.0 or later.
Screen Orientation
The device may be worn on the left or right eye, and will always be in landscape or reverseLandscape orientation.
- The proper orientation to specify for your Activity in the manifest is:
XML
android:screenOrientation="sensorLandscape"
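Alternatively, as a sketch, the same orientation can be requested at runtime from inside an Activity (e.g. in onCreate()):

```java
import android.content.pm.ActivityInfo;

// Inside an Activity: equivalent to the manifest attribute above.
setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_SENSOR_LANDSCAPE);
```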
Determining Display Type
While most of the hardware on the M400 and M4000 is identical, the unique displays on each device allow users to interact with what is on-screen in vastly different ways.
As such, you may wish to design user interfaces for your application that are specifically targeted to one device or the other.
Instead of developing and maintaining two separate apps, Vuzix recommends detecting the display type of the device at run time and dynamically switching the interface to the appropriate experience.
To support this, Vuzix has added a function to the HUD-Resources library to quickly query the display type; an example of how to use it is shown below.
For more information on including this library in your project, please refer here.
Java
if(Utils.getDisplayType() == Utils.DisplayType.OCCLUDED)
{
// M400s have an occluded OLED display
}
else if(Utils.getDisplayType() == Utils.DisplayType.TRANSPARENT)
{
// M4000s have a transparent waveguide-based display
}
Navigation
The user will navigate your UI with the three physical buttons or touch swipes that can be automatically translated to KeyEvent key codes as described in the Interaction Methods article.
- Allow UI elements to be navigated with simple left/right/up/down navigation, and give users clear visual indicators which UI element currently has focus.
- Consider adding customized verbal navigation using the Vuzix Speech SDK.
- Explicitly control the focus order. More details can be found in the separate Navigation and Focus article.
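As an illustrative sketch of controlling focus order in code (the view IDs here are hypothetical, not defined by any SDK), standard Android focus APIs can be used:

```java
import android.widget.Button;

// Inside an Activity, assuming a layout with two buttons;
// the IDs first_button and second_button are illustrative.
Button first = findViewById(R.id.first_button);
Button second = findViewById(R.id.second_button);

// Explicitly chain left/right focus so button navigation is predictable.
first.setNextFocusRightId(R.id.second_button);
second.setNextFocusLeftId(R.id.first_button);

// Give an element initial focus so the user has a clear starting point.
first.requestFocus();
```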
Usability Best Practices
The most important aspect of a well-designed user interface for an application intended to be used on the M400 or M4000 is simplicity. This is largely driven by the limited amount of space on the display.
- Avoid complex menu systems; instead, use a linear progression through interface components to minimize the time a user needs to spend navigating the application UI.
- Limit the information displayed at any given moment to what is contextually relevant, e.g. a single, specific instruction guiding the user through an individual step in a procedure.
- Avoid displaying complex diagrams or schematics; instead, display only the components immediately relevant to the task at hand.
- Minimize how often the user must physically interact with the device by leveraging alternative methods of advancing through the application interface. Voice commands are the best option, but you can also advance the user to the next screen automatically based on an interaction such as scanning a barcode, taking a picture, or some other verification that the step has been completed.
Some developers may prefer to remove appcompat from the project, which requires them to change the app theme.
- If you remove the dependency on the appcompat library, you’ll need to change styles.xml to base your AppTheme on another parent theme.
Change:
XML
Theme.AppCompat.Light.DarkActionBar to either @android:style/Theme.Material.Light.DarkActionBar or @android:style/Theme.Holo.Light.DarkActionBar
The Holo theme can be preferable to Material on the device because button focus states are more apparent with Holo. If you do use Material, we recommend defining a custom button style that clearly changes the appearance of buttons when they have focus.
- If you changed your app theme to Material, in styles.xml you need to prepend the android namespace in front of the color attributes:
XML
<item name="android:colorPrimary">@color/colorPrimary</item>
<item name="android:colorPrimaryDark">@color/colorPrimaryDark</item>
<item name="android:colorAccent">@color/colorAccent</item>
- If you changed your app theme to Holo, in styles.xml you should remove the three <item> sub elements under <style> that refer to colors as these items are not used. In addition, you can delete colors.xml if you choose.
- Under the mipmap resource directories, you can delete ic_launcher_round.png as round icons are not used on the device. If you choose to delete the round icon, you should remove the roundIcon attribute that got generated in AndroidManifest.xml on the <application> tag.
Vuzix has created libraries to give you a head start in designing your user experience. Adding these libraries to your Android project is simple and developer friendly. Vuzix provides a Jitpack repository for those libraries and other resources.
Adding HUD-ActionMenu and HUD-Resources to your Android Application
To be able to utilize the HUD-ActionMenu and HUD-Resources libraries, you will need to add a dependency in your application build.gradle file.
In your app's build.gradle file just add the following line to your dependencies section:
- implementation 'com.vuzix:hud-actionmenu:2.9.0'
Note that adding HUD-ActionMenu to your project will automatically add HUD-Resources to your project. Your build.gradle for your application will look something like this:
Groovy
apply plugin: 'com.android.application'
android {
compileSdkVersion 27
defaultConfig {
applicationId "com.vuzix.m400.barcode_sample"
minSdkVersion 22
targetSdkVersion 27
versionCode 1
versionName "1.0"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}
}
dependencies {
implementation 'com.vuzix:hud-actionmenu:2.9.0'
}
Android Jetpack Compatibility
Beginning with version 2.0, the HUD-ActionMenu and HUD-Resources libraries support Android Jetpack. In addition, the HUD themes are based on Theme.AppCompat rather than Theme.Material.
Utilizing the HUD Themes
With the HUD-Resources library in your project, you can now use the HUD theme as your main application theme. You also have access to a "light" variation of the HUD theme.
These themes provide a good base for your application design and also supply the default color scheme for your layouts.
To use the HUD theme as the main theme of the application, you just need to modify the Application Manifest and ensure the android:theme is "@style/HudTheme".
Your Android Manifest will look something like this:
XML
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.vuzix.m400.barcode_sample">
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:theme="@style/HudTheme">
<activity android:name=".MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Utilizing HUD API Classes
The Vuzix HUD libraries include our customized versions of Activity classes and other frequently used classes.
Our extensions of these classes provide extra functionality, improve performance, and add custom features that take full advantage of the Vuzix platform.
We recommend that all your Activities extend the "ActionMenuActivity". Your Main Activity Class definition will look something like this:
Java
public class MainActivity extends ActionMenuActivity {
.
.
.
}
Changing Action Menu Color
Beginning with version 1.5 of the HUD-ActionMenu library, you can easily change the color of the action menu. To do so, override actionMenuColor in your own theme:
XML
<style name="AppTheme" parent="HudTheme">
<item name="actionMenuColor">@color/hud_red</item>
</style>
If you have two themes, for example light and dark mode themes, don't forget to define actionMenuColor in both themes.
If you want to customize the action menu beyond simple color changes, you will need to use custom views for the action menu items.
HUD Libraries Java Docs
For more information and details check our Javadocs located at: https://www.vuzix.com/support/Downloads_Drivers.
We provide the following Java Docs for the HUD Libraries:
- For the HUD Resource Package: HUD-Resources Java Docs
- For the HUD ActionMenu Package: HUD-ActionMenu Java Docs
To simplify development, it is recommended that you set up Android Studio with a device profile for the M400 and M4000. This setup takes only a few minutes and dramatically speeds up creating screen layouts. You may also use the emulator within Android Studio to do a significant amount of your testing.
Creating a Device Profile
The M400 and M4000 have a unique screen size. Creating a device profile allows Android Studio to accurately present the layout in the editor as it will be seen on the device.
- Save a copy of the device profiles configuration files downloaded specifically for M400 and M4000.
- Launch Android Studio
- Open AVD Manager under the Tools menu and choose "Create Virtual Device."
- Choose Import Hardware Profiles, select the previously downloaded file, and click OK.
- Search for "M400" or "M4000" and verify that the new M400 or M4000 device appears under the Phone tab.
- You can later review the device properties as they were specified in the device profile file.
Congratulations! You now have the Vuzix device in your hardware profiles.
Close the dialog and return to Android Studio.
You can now use the new hardware profile in the layout editor to see the proper dimensions of the UI real estate.
From the device selector pull-down, select Generic Phones and Tablets, then select either the M400 or M4000.
Emulating the M400 or M4000
The built-in system images allow emulation of the majority of the features of the Vuzix M400 and M4000, making the emulator a valuable tool for developers. Note that they do not allow testing of the Vuzix Speech SDK or Vuzix Barcode SDK. All other features of the device can be accurately emulated once the emulator is configured using this procedure.
- Open AVD Manager under Tools
- Choose "Create Virtual Device"
- Search for and choose the previously added M400 or M4000 Device Profile and click "Next"
- Choose Android 9 (Pie) for the system image and download it if necessary.
- Click Next.
- Ensure the device's startup orientation is set to landscape.
- Click Finish.
Congratulations! You can now choose to test your application using an M400 or M4000 sized emulator.
Advanced - Removing Navigation Buttons
If you would like to remove the stock Android navigation buttons, you can modify the config.ini file of the newly created emulator to hide them.
- Open AVD Manager under Tools
- Choose the drop-down arrow under actions next to the emulator you want to edit
- Choose "Show on Disk"
- Right-click the config.ini file and select Edit.
- Find hw.mainKeys and change it from "no" to "yes".
You will need to restart your emulator if it is running for the changes to take effect.
Congratulations! You have successfully removed the stock Android navigation buttons.
Introduction
The Vuzix Speech Recognizer is a fully embedded, fast, phrase-matching recognition system designed to interpret and respond to voice commands. A platform base vocabulary is available to all apps; it is intended to facilitate default navigation and selection without direction from the client app. That is, a client app can benefit from navigation provided by a base vocabulary with no setup or explicit awareness of the speech recognizer. This capability is implemented by mapping phrases to Android key events.
For many applications, it is desirable to implement a custom vocabulary which performs application-specific actions when an application-specific phrase is spoken (e.g. to capture a still image when “take a picture” is spoken.) The Vuzix Speech Recognition system provides two mechanisms by which this can be achieved: Android key events and Android intents.
Custom Vocabulary Architecture
The Vuzix Speech Recognizer is implemented as an Android service that runs locally on the device. No cloud servers are used, and the audio data never leaves the device.
Each Activity can have its own vocabulary. The system will automatically switch to the proper vocabulary as Activities are paused and resumed. If no vocabulary is provided, the system will use the default navigation commands.
Any 3rd party application may create a custom vocabulary for the Vuzix Speech Recognizer service by utilizing the Vuzix Speech SDK as described throughout this section of the knowledge base.
SDK Installation
It is recommended that users obtain the Speech SDK via Maven. Simply make this addition to your project build.gradle to define the Vuzix repository.
Groovy
allprojects {
repositories {
google()
jcenter()
// The speech SDK is currently hosted by jitpack
maven {
url "https://jitpack.io"
}
}
}
Then add a dependency to the Speech SDK library in your application build.gradle.
Groovy
dependencies {
implementation 'com.vuzix:sdk-speechrecognitionservice:1.91'
}
Proguard Rules
If you are using Proguard you will need to prevent obfuscating the Vuzix Speech SDK. Failure to do so will result in calls to the SDK raising the RuntimeException "Stub!". Add the following -keep statement to the proguard rules file, typically named proguard-rules.pro.
Proguard
# Vuzix speech recognition requires the SDK names not be obfuscated
-keep class com.vuzix.sdk.speechrecognitionservice.** {
*;
}
The R8 Optimization may omit arguments required by the SDK methods, resulting in the NullPointerException "throw with null exception" being raised. The current workaround is to disable R8 and use Proguard to do the shrinking and obfuscation. Add the following to your gradle.properties to change from R8 to Proguard.
Properties
android.enableR8=false
Base Vocabulary Inheritance
The custom vocabulary is created when the VuzixSpeechClient class is instantiated. The newly instantiated custom vocabulary inherits the platform base vocabulary, which is currently:
- Hello Vuzix - Activates the listener
- Go left - Basic directional navigation
- Move left - Basic directional navigation
- Go right - Basic directional navigation
- Move right - Basic directional navigation
- Go up - Basic directional navigation
- Move up - Basic directional navigation
- Go down - Basic directional navigation
- Move down - Basic directional navigation
- Pick this - Activates the item that currently has focus
- Select this - Activates the item that currently has focus
- Confirm - Activates the item that currently has focus
- Okay - Activates the item that currently has focus
- Open - Activates the item that currently has focus
- Scroll down - Basic directional navigation
- Scroll left - Basic directional navigation
- Scroll right - Basic directional navigation
- Scroll up - Basic directional navigation
- Cancel - Navigates backward in the history stack
- Close - Navigates backward in the history stack
- Go back - Navigates backward in the history stack
- Go home - Triggers the Home action
- Quit - Triggers the Home action
- Exit - Triggers the Home action
- Stop - Stops the scrolling action
- Show menu - Brings up context menu for current UI screen
- Next - Advances to the next item in an ordered collection of items
- Previous - Goes backward by one item in an ordered collection of items
- Go forward - Navigates forward in the history stack
- Page up - Navigates up one page
- Page down - Navigates down one page
- Volume up - Increases volume by 5
- Volume down - Decreases volume by 5
- Speech settings - Opens Speech Settings menu
- Speech commands - Opens the current Speech Command list
- Command list - Opens the current Speech Command list
- Flashlight on - Enables the front flashlight LED
- Torch on - Enables the front flashlight LED
- Flashlight off - Disables the front flashlight LED
- Torch off - Disables the front flashlight LED
- Please enter evaluation mode - Triggers diagnostic mode which will display recognized keycodes in on-screen messages
- Please exit evaluation mode - Disables diagnostic mode
- Voice off - Stops the listener
- Start recording - Brings up the camera app in video mode
- Take a picture - Brings up the camera app in picture mode
- Open notifications - Brings up the notification menu
- View notifications - Brings up the notification menu
Creating the Speech Client
To work with the Vuzix Speech SDK, you first create a VuzixSpeechClient and pass it your Activity:
Java
import com.vuzix.sdk.speechrecognitionservice.VuzixSpeechClient;
Activity myActivity = this;
VuzixSpeechClient sc = new VuzixSpeechClient(myActivity);
Handling Exceptions
It is possible for a user to attempt to run code compiled against the Vuzix Speech SDK on non-Vuzix hardware. This will cause a RuntimeException "Stub!" to be thrown. It is also possible to write an application against the latest Vuzix Speech SDK and have customers run it on older devices; any calls to unsupported interfaces will cause a NoClassDefFoundError. For this reason, all SDK calls should be wrapped in try/catch blocks.
Java
// Surround the creation of the VuzixSpeechClient with a try/catch for non-Vuzix hardware
VuzixSpeechClient sc;
try {
sc = new VuzixSpeechClient(myActivity);
} catch(RuntimeException e) {
if(e.getMessage().equals("Stub!")) {
// This is not being run on Vuzix hardware (or the Proguard rules are incorrect)
// Alert the user, or insert recovery here.
} else {
// Other RuntimeException to be handled
}
}
// Surround all speech client commands with try/catch for unsupported interfaces
try {
// sc.anySdkCommandHere();
} catch(NoClassDefFoundError e) {
// The hardware does not support the specific command expected by the Vuzix Speech SDK.
// Alert the user, or insert recovery here.
}
For brevity, this article may omit the try/catch blocks, but creating a robust application requires they be present.
Query All Phrases
If you wish to implement logic based on the phrases for which the speech engine is currently listening or wish to display the set of phrases to the user, you can query the list of all phrases. Simply call getPhrases().
Java
try {
List<String> phrases = sc.getPhrases();
} catch(NoClassDefFoundError e) {
// The ability to query the full command list was added in Speech SDK v1.6
// which was released on M400 and M4000 v1.1.4. Earlier versions will not support this.
}
Additional mechanisms to query specific phrases within the base vocabulary are described in the Editing the Base Vocabulary section of this Knowledge Base.
Debugging the Current Vocabulary
To aid in debugging, it is often helpful while manipulating the vocabulary to dump the active vocabulary to the log. You may use getPhrases() as shown above, or, if you prefer, the dump() method to generate a pre-formatted block of text.
Java
Log.i(LOG_TAG, sc.dump());
Removing Existing Phrases
Removing existing phrases may reduce the likelihood that the speech recognizer resolves the incorrect phrase. This is especially true if your phrases sound similar to default phrases. For example, a game control of "roll right" might be confused with the default phrase "scroll right."
Your application may also want to control the navigation itself, in which case you could remove default navigation commands to prevent confusion. For proper editing of the base vocabulary please see the Editing the Base Vocabulary section of this Knowledge Base.
Phrases may also be deleted once they are no longer applicable to the state of your application. Any phrase in the vocabulary may be removed by calling deletePhrase().
Java
sc.deletePhrase("Answer incoming call");
It is important to note that when a phrase that is already in the vocabulary is inserted again, the previous entry is overwritten, so there is no need to delete the original entry first. This is useful if you want to implement your own navigation methods using "go up", "go down", "go left", etc.
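For example, a sketch of re-mapping a default navigation phrase (the keycode chosen here is purely illustrative):

```java
// Re-inserting an existing phrase overwrites its default mapping,
// so "go up" can be repurposed without deleting it first.
// KEYCODE_PAGE_UP is an illustrative choice, not a platform default.
sc.insertKeycodePhrase("go up", KeyEvent.KEYCODE_PAGE_UP);
```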
The entire vocabulary may be removed from your activity using:
Java
sc.deleteAllPhrases();
Note: the above command will remove the wake word phrase(s) from the vocabulary. It is highly recommended to re-add the "Hello Vuzix" phrase to the vocabulary after performing this action, for consistency of interaction method. This is detailed later in this article.
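A sketch of clearing the vocabulary and immediately restoring the standard wake word:

```java
// Remove every phrase, including the wake word(s)...
sc.deleteAllPhrases();
// ...then restore the standard wake word so users can still
// activate the recognizer in the familiar way.
sc.insertWakeWordPhrase("Hello Vuzix");
```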
Adding Custom Trigger Control Phrases
When the speech recognition engine is enabled but idle it is listening only for a "wake word". Once the wake word phrase is recognized, the engine transitions to the triggered state where it listens for the full vocabulary (such as "go home" and "select this"). By default the wake word is "Hello Vuzix". You can insert custom wake word phrases using the following commands.
Java
sc.insertWakeWordPhrase("Hello Vuzix"); // Add "Hello Vuzix" wake-up phrase for consistency
sc.insertWakeWordPhrase("hey m400"); // Add application specific wake-up phrase
sc.insertWakeWordPhrase("hey m4000"); // Add application specific wake-up phrase
Once triggered, the speech engine will time-out after the period configured in the system settings and return to the idle state that listens only for the wake word phrases. The operator can circumvent the timeout and immediately return to idle by saying a voice-off phrase. By default the voice-off phrase is "voice off". You can insert custom voice-off phrases using the following commands.
Java
sc.insertVoiceOffPhrase("voice off"); // Add-back the default phrase for consistency
sc.insertVoiceOffPhrase("privacy please"); // Add application specific stop listening phrase
Adding Phrases to Receive Keycodes
You can register for a spoken command that will generate a keycode. This keycode will behave exactly the same as if a USB keyboard were present and generated that key. This capability is implemented by mapping phrases to Android key events (android.view.KeyEvent).
Java
sc.insertKeycodePhrase("toggle caps lock", KEYCODE_CAPS_LOCK);
Keycodes added by your application will be processed in addition to the Keycodes in the base vocabulary set:
- K_BACK "close"
- K_BACK "go back"
- K_DPAD_DOWN "go down"
- K_DPAD_DOWN "move down"
- K_DPAD_DOWN "scroll down" (repeats)
- K_DPAD_LEFT "go left"
- K_DPAD_LEFT "move left"
- K_DPAD_LEFT "scroll left" (repeats)
- K_DPAD_RIGHT "go right"
- K_DPAD_RIGHT "move right"
- K_DPAD_RIGHT "scroll right" (repeats)
- K_DPAD_UP "go up"
- K_DPAD_UP "move up"
- K_DPAD_UP "scroll up" (repeats)
- K_ENTER "confirm"
- K_ENTER "okay"
- K_ENTER "open"
- K_ENTER "pick this"
- K_ENTER "select this"
- K_ESCAPE "cancel"
- K_HOME "go home"
- K_HOME "quit"
- K_MENU "show menu"
- K_PAGE_DOWN "page down"
- K_PAGE_UP "page up"
- K_VOLUME_DOWN "volume down"
- K_VOLUME_UP "volume up"
Keycodes denoted as "(repeats)" will generate the keycode repeating at a fixed interval until terminated by speaking any other valid phrase. The phrase "stop" terminates the repeating keycodes with no further behavior.
Adding Phrases to Receive Intents
The most common use of speech recognition is to receive intents that can trigger custom actions, rather than simply receiving keycodes. To do this, you must have a broadcast receiver in your application, such as:
Java
public class VoiceCmdReceiver extends BroadcastReceiver {
...
}
The broadcast receiver must register with Android for the Vuzix speech intent. This can be done in the constructor as shown here.
Java
public class VoiceCmdReceiver extends BroadcastReceiver {
public VoiceCmdReceiver(MainActivity iActivity) {
iActivity.registerReceiver(this, new IntentFilter(VuzixSpeechClient.ACTION_VOICE_COMMAND));
...
}
}
The phrases you want to receive can be inserted in the same constructor. This is done using insertPhrase() which registers a phrase for the speech SDK intent. The parameter is a string containing the phrase for which you want the device to listen.
Java
public class VoiceCmdReceiver extends BroadcastReceiver {
public VoiceCmdReceiver(MainActivity iActivity) {
iActivity.registerReceiver(this, new IntentFilter(VuzixSpeechClient.ACTION_VOICE_COMMAND));
VuzixSpeechClient sc = new VuzixSpeechClient(iActivity);
sc.insertPhrase( "testing" );
Log.i( LOG_TAG, sc.dump() );
}
}
Now handle the speech SDK intent VuzixSpeechClient.ACTION_VOICE_COMMAND in your onReceive() method. In this scenario, whatever phrase you used in insertPhrase() will be provided in the received intent as a string extra named VuzixSpeechClient.PHRASE_STRING_EXTRA.
Note: if the phrase contains spaces, they will be replaced by underscores.
Java
public class VoiceCmdReceiver extends BroadcastReceiver {
public VoiceCmdReceiver(MainActivity iActivity) {...}
@Override
public void onReceive(Context context, Intent intent) {
// All phrases registered with insertPhrase() match ACTION_VOICE_COMMAND
if (intent.getAction().equals(VuzixSpeechClient.ACTION_VOICE_COMMAND)) {
String phrase = intent.getStringExtra(VuzixSpeechClient.PHRASE_STRING_EXTRA);
if (phrase != null ) {
if (phrase.equals("testing")) {
// todo: take action upon hearing the spoken phrase "testing"
}
}
}
}
}
With the above example in place, you will be able to say "Hello Vuzix" to activate the recognizer, followed by "testing" and your code will execute.
Specifying Substitution Text for Intents
As mentioned above, the string that was recognized is returned, with spaces replaced by underscores. That can be somewhat cumbersome to the developer, especially since we expect recognized spoken phrases to be localized into many languages.
To make this easier, insertPhrase() can take an optional substitution string parameter. When phrases are inserted this way, the substitution text passed to insertPhrase() will be provided in the received intent as the string extra named VuzixSpeechClient.PHRASE_STRING_EXTRA, rather than the spoken phrase.
Note: the substitution string may not contain spaces.
This example updates the original by replacing the hard-coded strings with a localized resource and a substitution constant. Notice insertPhrase() is given two parameters, and it is the second that is used by the onReceive() method.
This now gives us a complete solution to receive a custom phrase and handle it properly.
Java
public class VoiceCmdReceiver extends BroadcastReceiver {
final String MATCH_TESTING = "Phrase_Testing";
public VoiceCmdReceiver(MainActivity iActivity) {
iActivity.registerReceiver(this, new IntentFilter(VuzixSpeechClient.ACTION_VOICE_COMMAND));
VuzixSpeechClient sc = new VuzixSpeechClient(iActivity);
// strings.xml contains: <string name="spoken_phrase_testing">testing</string>
sc.insertPhrase( iActivity.getResources().getString(R.string.spoken_phrase_testing), MATCH_TESTING);
Log.i( LOG_TAG, sc.dump() );
}
@Override
public void onReceive(Context context, Intent intent) {
// All phrases registered with insertPhrase() match ACTION_VOICE_COMMAND
if (intent.getAction().equals(VuzixSpeechClient.ACTION_VOICE_COMMAND)) {
String phrase = intent.getStringExtra(VuzixSpeechClient.PHRASE_STRING_EXTRA);
if (phrase != null ) {
if (phrase.equals(MATCH_TESTING)) {
// todo: take action upon hearing the spoken phrase "testing"
}
}
}
}
}
Phrases in the recognizer must be unique, but substitution text need not be. We could therefore have multiple insertPhrase() calls with different phrase parameters and identical substitutions. This allows us to insert multiple phrases that perform the same action. For example, the phrases "start call" and "make a call" can have the same substitution and will be treated identically in our BroadcastReceiver.
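As a sketch of this pattern, the constructor from the previous example could register both phrases against one substitution label. The MATCH_CALL label and phrase strings here are illustrative, not part of the SDK:

```java
// Sketch (assumption): two spoken phrases share one substitution label.
// MATCH_CALL and the phrase strings are illustrative only.
final String MATCH_CALL = "Phrase_Call";
VuzixSpeechClient sc = new VuzixSpeechClient(iActivity);
sc.insertPhrase("start call", MATCH_CALL);
sc.insertPhrase("make a call", MATCH_CALL);
// onReceive() compares PHRASE_STRING_EXTRA against MATCH_CALL once;
// either spoken phrase arrives as the same extra value.
```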
Adding Phrases to Receive Custom Intents
Defining and Inserting Custom Intents
To add even more flexibility, the speech SDK can send any intent you define, rather than only sending its own ACTION_VOICE_COMMAND. This is especially useful for creating multiple broadcast receivers and directing the intents properly.
We define intents by calling defineIntent() and providing a unique text label for each intent. When we want to specify a phrase that will broadcast that intent, we identify it with that same text label.
This example differs from the above in that the CUSTOM_SDK_INTENT action is used in place of ACTION_VOICE_COMMAND.
Java
public class VoiceCmdReceiver extends BroadcastReceiver {
public final String CUSTOM_SDK_INTENT = "com.your_company.CustomIntent";
final String CUSTOM_EVENT = "my_event";
public VoiceCmdReceiver(MainActivity iActivity) {
iActivity.registerReceiver(this, new IntentFilter(CUSTOM_SDK_INTENT));
VuzixSpeechClient sc = new VuzixSpeechClient(iActivity);
Intent customSdkIntent = new Intent(CUSTOM_SDK_INTENT);
sc.defineIntent(CUSTOM_EVENT, customSdkIntent );
// strings.xml contains: <string name="spoken_phrase_testing">testing my voice application</string>
sc.insertIntentPhrase( iActivity.getResources().getString(R.string.spoken_phrase_testing), CUSTOM_EVENT);
Log.i( LOG_TAG, sc.dump() );
}
@Override
public void onReceive(Context context, Intent intent) {
// Since we only registered one phrase to this intent, we don't need any further switching. We know we got our CUSTOM_EVENT
// todo: add test behavior
}
}
The system can support multiple broadcast receivers. Each receiver simply registers for the intents it expects to receive. They do not need to be in the same class that creates the VuzixSpeechClient.
Deleting a Custom Intent
Beginning with SDK v1.91 you can delete a custom intent. Call the deleteIntent() method and supply the label of the intent you previously defined. This will automatically delete any phrase that would have generated this intent.
Java
sc.deleteIntent(CUSTOM_EVENT);
Listing all Intent Labels
Beginning with SDK v1.91 you can list all intent labels. The list will be returned as a List<String>.
Java
List<String> intentLabels = sc.getIntentLabels();
Checking the Engine Version
As mentioned above, the SDK may expose newer calls than a given device OS version supports. You can query getEngineVersion() to determine the version of the engine on the device, allowing you to guard newer SDK calls with conditional logic and avoid a NoSuchMethodError. For example, if you know the device is running SDK v1.8 you would not attempt calls introduced in v1.9.
Because getEngineVersion() is a newer SDK call, it should itself be protected.
Java
float version = 1.4f; // The first stable SDK released with M300 v1.2.6
try {
version = sc.getEngineVersion();
Log.d(mMainActivity.LOG_TAG, "Device is running SDK v" + version);
} catch (NoSuchMethodError e) {
Log.d(mMainActivity.LOG_TAG, "Device is running SDK prior to v1.8. Assuming version " + version);
}
Sample Project
A sample application for Android Studio demonstrating the Vuzix Speech SDK is available to download here.
Editing the Base Vocabulary
The Getting Started Code article of this knowledge base provides an overview of querying, deleting and adding phrases to the base vocabulary. Those are the only interfaces needed to create a vocabulary from scratch.
This article builds on those concepts to create a more robust mechanism to edit the base vocabulary. Editing the base vocabulary allows you to leave some portion of it unchanged and add your own customization on top of it.
The advantage of editing the base vocabulary, rather than re-inventing it, is to maintain a consistent user experience between your app and other applications. This reduces confusion and training for end users. For example, it would be confusing for the user to say "move left" in all other applications, then say "to the left" in your application. So, in most instances, it is best to leave similar functionality unchanged from the base vocabulary.
The base vocabulary is set by the device operating system based on the active system language. The built-in commands may change from version to version. Therefore, commands like:
Java
sc.deletePhrase("Voice Off");
are only guaranteed to work for customers running in English on the same OS version originally tested against. A future OS version may replace the "Voice Off" phrase with another similar phrase, rendering that code incorrect.
More robust interfaces were introduced in SDK v1.8 to allow applications to easily and reliably edit the base vocabulary. Using the new interfaces described below, the "voice off" example would instead be written:
Java
for( String voiceOffPhrase : sc.getVoiceOffPhrases() ) {
sc.deletePhrase(voiceOffPhrase);
}
Instead of hard-coding the voice off phrase, we query the engine for that information using getVoiceOffPhrases(). This example will behave identically in all languages, including ones to which your application is not translated, and will even work if the actual phrases are changed by future OS revisions.
Phrase Queries
The phrases in the active vocabulary can be queried based on functionality. Some actions have multiple phrases, so each query returns a List<String> of phrases where each phrase is the exact spoken phrase in the current language. An empty list is returned if no phrases match the request.
These interfaces allow you to create detailed help pages describing the built-in phrases, and allow you to delete the phrases you do not want active.
Queries include:
- getPhrases() - returns all phrases
- getWakeWordPhrases() - returns the current wake words, such as "Hello Vuzix", that bring the speech recognizer to the active triggered state where it listens for the full vocabulary.
- getVoiceOffPhrases() - returns the current phrases, such as "voice off", that take the recognizer out of the active triggered state so it once again listens only for wake words.
- getStopPhrases() - returns the current phrases, such as "stop", that terminate previous scroll requests.
- getKeycodePhrases() - returns phrases associated with a specific keycode; for example, "go left" is associated with KEYCODE_DPAD_LEFT. This is described more below.
- getIntentPhrases() - returns phrases associated with any label you created with defineIntent(). This interface is not used in editing the base vocabulary.
- getBuiltInActionPhrases() - returns phrases associated with various pre-defined actions, such as "take a picture". This is described more below.
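As one possible use, these queries can drive a generated help page. This sketch lists the active wake words and voice-off phrases; the buildHelpText() name and layout are assumptions, not part of the SDK:

```java
// Sketch (assumption): build a plain-text help listing from the phrase
// queries. Method name and formatting are illustrative only.
private String buildHelpText(VuzixSpeechClient sc) {
    StringBuilder help = new StringBuilder("Wake words:\n");
    for (String phrase : sc.getWakeWordPhrases()) {
        help.append("  \"").append(phrase).append("\"\n");
    }
    help.append("Voice off:\n");
    for (String phrase : sc.getVoiceOffPhrases()) {
        help.append("  \"").append(phrase).append("\"\n");
    }
    return help.toString();
}
```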
Querying Key Codes
The getKeycodePhrases() query for phrases associated with keycodes takes two parameters: keycode and keyFrequency.
The keycode can be any valid key defined in the KeyEvent class, such as KEYCODE_DPAD_LEFT.
The keyFrequency can limit the results to phrases associated with single-press or repeating (scroll) keys. The valid values are:
- KEY_FREQ_SINGLE_OR_REPEAT - match all phrases whether single-press or repeating (scrolling). Both "go left" and "scroll left" would match this.
- KEY_FREQ_SINGLE_PRESS - match only single-press actions. Only "go left" would match this, not "scroll left".
- KEY_FREQ_REPEATING - match only repeating actions. Only "scroll left" would match this, not "go left".
So the request for the "go left" phrase would be
Java
List<String> leftPhrases = sc.getKeycodePhrases(KeyEvent.KEYCODE_DPAD_LEFT, VuzixSpeechClient.KEY_FREQ_SINGLE_PRESS);
Rather than iterating over every valid key constant to determine whether it has at least one valid phrase, you can identify the keys that have at least one associated phrase using getMappedKeycodes(). This takes a single keyFrequency argument, as described above, and returns a List<Integer> of keycode values.
This example shows a query for every phrase associated to a key:
Java
int selectedFrequency = VuzixSpeechClient.KEY_FREQ_SINGLE_OR_REPEAT;
for ( Integer keycode : sc.getMappedKeycodes(selectedFrequency) ) {
for ( String eachKeyPhrase : sc.getKeycodePhrases(keycode, selectedFrequency) ) {
// todo: Do something with keycode and eachKeyPhrase
}
}
Query Built-In Actions
The operating system provides some special built-in actions which vary by device. You can query the associated phrases using getBuiltInActionPhrases(). This interface takes a single parameter to identify the action. The action may be one of:
- BUILT_IN_ACTION_SLEEP
- BUILT_IN_ACTION_COMMAND_LIST
- BUILT_IN_ACTION_SPEECH_SETTINGS
- BUILT_IN_ACTION_FLASHLIGHT_ON (M-Series only)
- BUILT_IN_ACTION_FLASHLIGHT_OFF (M-Series only)
- BUILT_IN_ACTION_VIEW_NOTIFICATIONS
- BUILT_IN_ACTION_CLEAR_NOTIFICATIONS (Blade only)
- BUILT_IN_ACTION_START_RECORDING
- BUILT_IN_ACTION_TAKE_PHOTO
- BUILT_IN_ACTION_ASSISTANT (Blade only)
- BUILT_IN_ACTION_ASSIST_ALEXA (Blade only)
- BUILT_IN_ACTION_ASSIST_GOOGLE (Blade only)
For example, to determine which phrase will turn the screen off, we could query:
Java
List<String> sleepPhrases = sc.getBuiltInActionPhrases(VuzixSpeechClient.BUILT_IN_ACTION_SLEEP);
Deleting Items
Any phrase returned from any of the queries may be passed to deletePhrase() to remove the phrase from the vocabulary.
There is also a deleteAllPhrases() interface that removes all phrases, allowing your application to insert a clean vocabulary.
When editing the base vocabulary it is often preferable to use the deleteAllPhrasesExcept() interface. This interface takes a List<String> of phrases to keep; all others are deleted. This is useful when an application wants to keep basic navigation and delete all built-in actions. As the OS is updated, more built-in actions may be added. So, rather than re-releasing your application with more and more delete commands, you can create a list of base phrases to keep and delete the others.
This example deletes all phrases except the wake word, voice off word, those mapped to key presses and the stop phrase.
Java
VuzixSpeechClient sc = new VuzixSpeechClient(this);
ArrayList<String> keepers = new ArrayList<>();
for ( Integer keycode : sc.getMappedKeycodes(VuzixSpeechClient.KEY_FREQ_SINGLE_OR_REPEAT) ) {
keepers.addAll(sc.getKeycodePhrases(keycode, VuzixSpeechClient.KEY_FREQ_SINGLE_OR_REPEAT));
}
keepers.addAll(sc.getWakeWordPhrases());
keepers.addAll(sc.getVoiceOffPhrases());
keepers.addAll(sc.getStopPhrases());
sc.deleteAllPhrasesExcept(keepers);
Overview
The built-in speech recognition engine allows multiple applications to each register their own unique vocabulary. As the user switches between applications, the engine automatically switches to the correct vocabulary. Only the active application receives the recognized phrases.
Applications with only a single vocabulary can get the behavior they desire with very little coding consideration. Applications with multiple activities and multiple vocabularies have special considerations discussed here.
Single Activity and Single Vocabulary
Applications with only a single vocabulary in a single Activity can do the required registration in the onCreate() method, and the required cleanup in the onDestroy().
Java
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
try {
// Register to receive intents that are generated by the Speech Recognition engine
registerReceiver(this, new IntentFilter(VuzixSpeechClient.ACTION_VOICE_COMMAND));
// Create a VuzixSpeechClient from the SDK and customize the vocabulary
VuzixSpeechClient sc = new VuzixSpeechClient(this);
sc.insertPhrase(getResources().getString(R.string.btn_text_clear));
} catch (NoClassDefFoundError e) {
Toast.makeText(this, "Cannot find Vuzix Speech SDK", Toast.LENGTH_LONG).show();
} catch (Exception e) {
Log.e(TAG, "Error setting custom vocabulary: " + e.getMessage());
}
}
@Override
protected void onDestroy() {
// Remove the dynamic registration to the Vuzix Speech SDK
unregisterReceiver(this);
super.onDestroy();
}
This approach is similar to the ones described in the preceding knowledge base articles, and ensures only the correct commands are received. Unfortunately, it only works in the simplest cases. It requires that your application have only a single activity using speech commands and that the vocabulary never change within your application.
Multiple Vocabularies
A single Activity will have a single vocabulary associated with it. The Activity can modify that vocabulary at any time by calling deletePhrase(), insertPhrase(), insertIntentPhrase(), and insertKeycodePhrase().
The default intent with an action VuzixSpeechClient.ACTION_VOICE_COMMAND will still be used, so there is no need to call registerReceiver() when changing the existing vocabulary.
For example:
Java
private void enableScanButton(boolean isEnabled ) {
mScanButton.setEnabled(isEnabled);
String scanBarcode = getResources().getString(R.string.scan_barcode);
if (isEnabled ) {
sc.insertPhrase(scanBarcode);
} else {
sc.deletePhrase(scanBarcode);
}
}
This same behavior could be achieved by inserting that scanBarcode phrase in the onCreate() and simply ignoring it when it is not expected. But many developers prefer actively modifying the vocabulary as shown above.
Multiple Activities
Many applications consist of multiple activities. If each Activity registered for the same speech intents, all of them would receive the commands. This would cause significant confusion if not handled properly. There are a few easy mechanisms that can be used to solve this.
You can dynamically un-register and re-register for the intents based on the activity life cycle, or you can choose to receive a custom unique intent for each phrase. Both are described in sub-sections below.
Multiple Activities - Dynamic Registration
One mechanism to ensure the correct speech commands are routed to the correct activity within a multi-activity application is for each activity to dynamically register and unregister for the speech intent. Each activity will receive the same default intent action, but only while in the foreground.
Java
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
try {
// Create a VuzixSpeechClient from the SDK and customize the vocabulary
VuzixSpeechClient sc = new VuzixSpeechClient(this);
sc.insertPhrase(getResources().getString(R.string.btn_text_clear));
} catch (NoClassDefFoundError e) {
Toast.makeText(this, "Cannot find Vuzix Speech SDK", Toast.LENGTH_LONG).show();
} catch (Exception e) {
Log.e(TAG, "Error setting custom vocabulary: " + e.getMessage());
}
}
@Override
protected void onResume() {
super.onResume();
// Dynamically register to receive intent from the Vuzix Speech SDK
registerReceiver(this, new IntentFilter(VuzixSpeechClient.ACTION_VOICE_COMMAND));
}
@Override
protected void onPause() {
// Remove the dynamic registration to the Vuzix Speech SDK
unregisterReceiver(this);
super.onPause();
}
In this example, each activity creates its vocabulary once. Each time an activity is paused, it unregisters and stops receiving speech commands. Each time an activity is resumed, it re-registers and resumes receiving speech commands.
This allows multiple activities to co-exist in a single application and each get speech commands only when they are active.
The only downside to this mechanism is that the activity will not receive recognizer state changes while it is inactive. For example, it will not know if speech recognition has been disabled or has timed out. For many applications this does not impact the behavior, and this mechanism is the most appropriate.
Multiple Activities - Unique Intents
Another mechanism exists to control the routing of speech commands. It uses slightly more code than dynamically registering and un-registering for the intents, but allows all activities to receive the advanced recognizer state data without receiving the wrong speech commands. This should be used when it is important to maintain the state of the speech recognizer in your activities.
The speech engine allows a developer to specify a custom intent action, instead of relying on the default VuzixSpeechClient.ACTION_VOICE_COMMAND action.
The custom intents do not have any extras, so you must have one custom intent per phrase. Each activity will create a unique intent for each phrase, such as:
Java
public final String CUSTOM_BARCODE_INTENT = "com.vuzix.sample.MainActivity.BarcodeIntent";
public final String CUSTOM_SETTINGS_INTENT = "com.vuzix.sample.MainActivity.SettingsIntent";
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
try {
// Create a VuzixSpeechClient from the SDK
VuzixSpeechClient sc = new VuzixSpeechClient(this);
// Associate the phrase "Scan Barcode" with generating the CUSTOM_BARCODE_INTENT intent
final String barcodeIntentId = "ScanBarcodeId";
registerReceiver(this, new IntentFilter(CUSTOM_BARCODE_INTENT));
sc.defineIntent(barcodeIntentId, new Intent(CUSTOM_BARCODE_INTENT) );
sc.insertIntentPhrase("Scan Barcode", barcodeIntentId);
// Associate the phrase "Show Settings" with generating the CUSTOM_SETTINGS_INTENT intent
final String showSettingsId = "ShowSettingId";
registerReceiver(this, new IntentFilter(CUSTOM_SETTINGS_INTENT));
sc.defineIntent(showSettingsId, new Intent(CUSTOM_SETTINGS_INTENT) );
sc.insertIntentPhrase("Show Settings", showSettingsId);
} catch (NoClassDefFoundError e) {
Toast.makeText(this, "Cannot find Vuzix Speech SDK", Toast.LENGTH_LONG).show();
} catch (Exception e) {
Log.e(TAG, "Error setting custom vocabulary: " + e.getMessage());
}
}
@Override
public void onReceive(Context context, Intent intent) {
// Now we have a unique action for each phrase
if (intent.getAction().equals(CUSTOM_BARCODE_INTENT)) {
// todo: scan barcode
} else if (intent.getAction().equals(CUSTOM_SETTINGS_INTENT)) {
// todo: open settings menu
}
}
@Override
protected void onDestroy() {
unregisterReceiver(this);
super.onDestroy();
}
In the above example, we have a single intent action for each vocabulary word. These each include "MainActivity" in this example. To continue this pattern, you would create other activities with unique names such as "SettingsActivity". This makes the processing very deterministic. Each activity only registers for its own unique intents.
As the user switches between activities, the vocabulary will be changed in the engine, and even if both activities recognize the same phrase, the unique intent will be generated based on the current active activity.
You can create common code to insert shared phrases; you just need to append an Activity name to each so they are not confused. For example:
Java
public final String PACKAGE_PREFIX = "com.vuzix.sample.";
public final String CUSTOM_BARCODE_INTENT = ".BarcodeIntent";
public String GetBarcodeIntentActionName(String ActivityName) {
return (PACKAGE_PREFIX + ActivityName + CUSTOM_BARCODE_INTENT);
}
protected void InsertCustomVocab(String ActivityName) {
try {
// Create a VuzixSpeechClient from the SDK
VuzixSpeechClient sc = new VuzixSpeechClient(this);
// Associate the phrase "Scan Barcode" with generating the unique intent for the calling activity
final String barcodeIntentId = "ScanBarcodeId" + ActivityName;
sc.defineIntent(barcodeIntentId, new Intent(GetBarcodeIntentActionName(ActivityName)));
sc.insertIntentPhrase("Scan Barcode", barcodeIntentId);
} catch (NoClassDefFoundError e) {
Toast.makeText(this, "Cannot find Vuzix Speech SDK", Toast.LENGTH_LONG).show();
} catch (Exception e) {
Log.e(TAG, "Error setting custom vocabulary: " + e.getMessage());
}
}
With combinations of these techniques you can define custom vocabularies in the various onCreate() methods of your activities. These vocabularies generate unique intent actions for each phrase within each Activity. The system will ensure the correct phrases are delivered to the correct activities.
Storing and Retrieving Vocabularies
Some developers find themselves switching between a few fixed vocabularies. One mechanism that may aid this is storing and retrieving the vocabularies.
A snapshot of the current vocabulary can be taken with:
Java
sc.storeVocabulary("My save point name");
The name you provide must be unique within your Activity or the previous save point will be overwritten.
A saved vocabulary may be retrieved at any time. This operation replaces the current vocabulary entirely with the one that was active when the save point was created. This is done with a single call to:
Java
sc.retrieveVocabulary("My save point name");
A single save point may be restored multiple times. When you no longer need the saved state you can remove it with:
Java
sc.removeVocabulary("My save point name");
Note: removing the stored vocabulary does not affect the phrases for which the engine is actively listening. It simply removes the save point so it can no longer be used in the future.
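Putting these calls together, a temporary switch to a reduced vocabulary might be sketched as follows. The save point name and phrases are illustrative, not part of the SDK:

```java
// Sketch (assumption): snapshot the current vocabulary, switch to a
// reduced one, then restore. Names and phrases are illustrative only.
sc.storeVocabulary("full_vocabulary");    // snapshot the current state
sc.deleteAllPhrases();                    // start with a clean vocabulary
sc.insertPhrase("scan barcode");
sc.insertPhrase("cancel scan");
// ... later, when the reduced mode ends ...
sc.retrieveVocabulary("full_vocabulary"); // restore the snapshot
sc.removeVocabulary("full_vocabulary");   // discard the save point
```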
Beginning with SDK v1.91 you can list all saved vocabulary names. The list will be returned as a List<String>.
Java
List<String> storedVocabularyNames = sc.getStoredVocabularyNames();
Summary
By using the techniques described here, you can create an application that dynamically modifies the vocabulary within activities. You can also create multiple activities that use the same or different vocabularies, and each activity will get the correct speech commands.
Advanced Controls
The Vuzix Speech Recognition engine has advanced controls described here. These have been expanded since the initial SDK was released.
Enabling and Disabling Speech Recognition
The Vuzix Speech SDK will listen for the wake word phrase "Hello Vuzix" whenever Vuzix Speech Recognition is enabled in the Settings menu (unless the phrase is explicitly removed by an application).
When Speech Recognition is enabled, the microphone icon on the notification bar becomes an unfilled outline. When Speech Recognition is disabled, the microphone icon becomes grayed-out.
It is possible to write an application that relies on custom voice commands to perform essential tasks. In this scenario, it would be an unwanted burden to require the user to navigate to the system Settings menu to enable Speech Recognition prior to launching your application. Instead, Vuzix Speech Recognition may be programmatically enabled from within an application.
Java
VuzixSpeechClient.EnableRecognizer(getApplicationContext(), true);
This method is static. Passing the optional context parameter allows the proper user permissions to be applied, and is recommended for robustness.
The recognizer may be similarly disabled via code during times when false detection would impair the application behavior.
Java
VuzixSpeechClient.EnableRecognizer(getApplicationContext(), false);
However, programmatically disabling Speech Recognition is strongly discouraged. If your application is force-stopped or crashes before re-enabling it, Speech Recognition will remain disabled for all applications, even across reboots. A more robust approach is to use deleteAllPhrases() to prevent anything from being detected, but only while your application is running.
Once Vuzix Speech Recognition is disabled, the microphone icon on the notification bar becomes grayed-out, and the phrase "Hello Vuzix" will no longer trigger speech recognition.
It is safe to set the Speech Recognition to the existing state, so there is no need to query the state before enabling or disabling Vuzix Speech Recognition. Simply specify the desired state. However, if you want to display the current enabled/disabled state you can query it using isRecognizerEnabled(). This value is not changed by the system while your application is active so the appropriate place for this query is your activity onResume().
Java
boolean mSpeechEnabled;
@Override
protected void onResume() {
super.onResume();
mSpeechEnabled = VuzixSpeechClient.isRecognizerEnabled(this);
// todo: update status to user showing state of mSpeechEnabled
}
Triggering the Speech Recognizer
When Speech Recognition is enabled, the recognizer remains in a low-power mode listening only for the wake word phrase, "Hello Vuzix". This state is indicated by the microphone icon on the notification bar becoming an unfilled outline. Once the wake word phrase is heard, the recognizer wakes and becomes triggered. This state is indicated by the microphone icon on the notification bar becoming fully filled. While triggered, all audio data is scanned for all known phrases.
It is possible for an application to programmatically trigger the recognizer to wake and become active, rather than relying on the "Hello Vuzix" wake word phrase. This can be tied to a button press or a fragment opening.
Java
VuzixSpeechClient.TriggerVoiceAudio(getApplicationContext(), true);
The recognizer has a timeout that can be modified in the system Settings menu, or programatically as described below. The active recognizer will return to idle mode after that duration has elapsed since the most recent phrase was recognized. This state is again indicated by the microphone icon on the notification bar returning to the unfilled outline, and the recognizer will only respond to the wake word phrase "Hello Vuzix."
Some workflows are best served by returning the active recognizer to idle at a specific time, for example while recording a voice memo. This prevents phrases such as "go back" and "go home" from being recognized and acted upon.
The recognizer may be programmatically un-triggered to the idle state as follows:
Java
VuzixSpeechClient.TriggerVoiceAudio(getApplicationContext(), false);
Trigger State Notification
Since the Speech Recognition engine may be triggered by the user speaking and may timeout internally, it is likely that applications that wish to control this behavior need to know the state of the recognizer.
The same Speech Recognition Intent that broadcasts phrases also broadcasts state change updates. Simply check for the presence of the extra boolean RECOGNIZER_ACTIVE_BOOL_EXTRA.
Java
boolean mSpeechTriggered;
@Override
public void onReceive(Context context, Intent intent) {
if (intent.getAction().equals(VuzixSpeechClient.ACTION_VOICE_COMMAND)) {
Bundle extras = intent.getExtras();
if (extras != null) {
// We will determine what type of message this is based upon the extras provided
if (extras.containsKey(VuzixSpeechClient.RECOGNIZER_ACTIVE_BOOL_EXTRA)) {
// if we get a recognizer active bool extra, it means the recognizer was
// activated or stopped
mSpeechTriggered = extras.getBoolean(VuzixSpeechClient.RECOGNIZER_ACTIVE_BOOL_EXTRA, false);
// todo: Implement behavior based upon the recognizer being changed to active or idle
}
}
}
}
Since the state may also change while your application is not running, if you display the state using these notifications you should also query the current state in your onResume().
Java
boolean mSpeechTriggered;
@Override
protected void onResume() {
super.onResume();
mSpeechTriggered = VuzixSpeechClient.isRecognizerTriggered(this);
// todo: Implement behavior based upon the recognizer being changed to active or idle
}
Startup Timing Concerns
It is possible for applications that automatically launch with the operating system to be initialized before the speech engine has come online. This is true for launcher applications, among others. Any speech queries or commands issued at startup will fail, and must be retried after the speech engine comes online. In such applications, you should surround initialization logic with a call such as:
Java
if( VuzixSpeechClient.isRecognizerInitialized(this) ) {
//todo perform your speech customizations here
}
Even if the initialization code cannot be run at startup, you should still register the broadcast receiver for the trigger state, as described in the preceding section. When the engine becomes initialized it will send out an initial trigger state. The receipt of this trigger state can cause your application to retry the speech initialization. This allows you to create an application that starts before the speech engine, and can interact with the speech engine as soon as it becomes available, without any unnecessary polling.
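One way to sketch that retry, assuming the activity is also its own broadcast receiver as in the earlier examples; mSpeechInitialized and trySpeechInit() are illustrative names, not part of the SDK:

```java
// Sketch (assumption): attempt speech setup at startup, and again each
// time the engine broadcasts a trigger state change.
private boolean mSpeechInitialized = false;

private void trySpeechInit() {
    if (!mSpeechInitialized && VuzixSpeechClient.isRecognizerInitialized(this)) {
        // todo: perform your speech customizations here
        mSpeechInitialized = true;
    }
}

@Override
public void onReceive(Context context, Intent intent) {
    if (intent.hasExtra(VuzixSpeechClient.RECOGNIZER_ACTIVE_BOOL_EXTRA)) {
        // The engine sends an initial trigger state once it comes online,
        // making this a safe point to retry the initialization.
        trySpeechInit();
    }
}
```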
Canceling Repeating Characters
Certain commands, like "scroll up" and "scroll down", initiate repeating key presses. This allows the user interface to continue to scroll in the selected direction. The repeating key presses stop when the engine detects any other phrase, such as "select this". The default phrase "stop" is recognized by the speech engine and has no behavior other than to terminate the scrolling.
You may wish to stop repeating key presses programmatically without requiring the user to say another phrase. This is useful when reaching the first or last item in a list. To do this, simply call StopRepeatingKeys().
Java
try {
sc.StopRepeatingKeys();
} catch(NoClassDefFoundError e) {
// The ability to stop repeating keys was added in Speech SDK v1.6 which
// was released on M400 v1.1.4. Earlier versions will not support this.
}
Getting and Setting the Recognizer Timeout Config
Beginning with SDK v1.91, you can retrieve and set the recognizer timeout configuration. This value defines how long the Speech Recognizer remains active after the last valid phrase it hears before returning to idle and requiring the wake-word phrase "Hello Vuzix" to be spoken again.
Retrieve the current configuration value in seconds with:
Java
int recognizerTimeoutConfigSeconds = sc.getRecognizerTimeoutConfig();
And modify it with:
Java
sc.setRecognizerTimeoutConfig(30); // in seconds
Note: This change affects all applications until changed again programmatically or from within the Settings menu of the device.
In order for the change to take effect, the timeout value must be within the supported range. The minimum value of zero indicates that the recognizer will never time out. Values of 1 through the maximum indicate how many seconds the recognizer will remain active. The maximum supported value can be queried with:
Java
int recognizedMaxTimeoutTime = sc.getRecognizerTimeoutMax();
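As a sketch of the range rules just described (zero disables the timeout, and 1 through the queried maximum are accepted), a hypothetical validation helper might look like:

```java
// Hypothetical helper (not part of the Speech SDK) validating a timeout value
// against the documented rules before calling setRecognizerTimeoutConfig():
// 0 means never time out; otherwise the value must be 1..max seconds.
class RecognizerTimeoutRules {
    static boolean isValidTimeout(int seconds, int maxSeconds) {
        return seconds == 0 || (seconds >= 1 && seconds <= maxSeconds);
    }
}
```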
Sample Project
A sample application for Android Studio demonstrating the Vuzix Speech SDK is available to download here.
Working with the Barcode SDK
The Vuzix M400 and M4000 include a barcode scanning engine that supports the most common symbologies:
- QR
- EAN (8 & 13)
- UPC (A & E)
- Data Matrix
- Code 39
- Code 93
- Code 128
- Codabar
- ITF
- MaxiCode
- RSS-14
- RSS-Expanded
Developers may leverage this barcode engine in their own applications via the Vuzix Barcode SDK. Using the built-in engine provides a common experience with other applications on the device and ensures consistently high performance.
There are three possible mechanisms to integrate the built-in barcode engine into your application. Each mechanism has the same features for interpreting and filtering barcodes but different integration details.
- Scanning via Intent - By far the easiest mechanism is to send an intent to open the built-in scanning user interface. You will receive a result with the barcode contents. This requires only a few lines of code, but there is no option to customize the user experience with this approach.
- Embedding a Barcode Scanner in your app - If the default user experience associated with using the Intent does not meet your needs, you can easily insert a scanner fragment into your application. This gives you complete control over the messaging and experience. This is the most commonly used option.
- Scanning Raw Image Data - For complete control, you may directly capture image data and process it for barcodes. This is sometimes done by developers who are capturing photographs for audit purposes and want to read the barcode from a saved image. It is also possible to call this periodically on a preview stream for complete control over all camera parameters. Most developers do not require this level of control.
Refer to the appropriate sub-section of this Knowledge Base article for details on the solution that best fits your needs.
SDK Installation
It is recommended that developers obtain the Barcode SDK via Maven. Simply make this addition to your project build.gradle to define the Vuzix repository.
Groovy
allprojects {
repositories {
google()
jcenter()
// The Vuzix barcode SDK is currently hosted by jitpack
maven {
url "https://jitpack.io"
}
}
}
Then add a dependency to the Barcode SDK library in your application build.gradle
Groovy
dependencies {
implementation fileTree(include: ['*.jar'], dir: 'libs')
implementation 'com.vuzix:sdk-barcode:1.71'
}
Proguard Rules
If you are using ProGuard, you will need to prevent obfuscation of the Vuzix Barcode SDK. Failure to do so will result in calls to the SDK raising the RuntimeException "Stub!". Add the following -keep statement to the ProGuard rules file, typically named proguard-rules.pro.
Java
-keep class com.vuzix.sdk.barcode.** {
*;
}
The R8 optimization may omit arguments required by the SDK methods, resulting in the NullPointerException "throw with null exception" being raised. The current workaround is to disable R8 and use ProGuard to perform the shrinking and obfuscation. Add the following to your gradle.properties to change from R8 to ProGuard.
Properties
android.enableR8=false
Scanning via Intent
The simplest way to integrate barcode scanning into your app is to invoke the Barcode Scanner via an intent.
Java
import com.vuzix.sdk.barcode.ScannerIntent;
import com.vuzix.sdk.barcode.ScanResult2;
private static final int REQUEST_CODE_SCAN = 0;
Intent scannerIntent = new Intent(ScannerIntent.ACTION);
startActivityForResult(scannerIntent, REQUEST_CODE_SCAN);
The scan result will be returned to you in onActivityResult():
Java
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
switch (requestCode) {
case REQUEST_CODE_SCAN:
if (resultCode == Activity.RESULT_OK) {
ScanResult2 scanResult = data.getParcelableExtra(ScannerIntent.RESULT_EXTRA_SCAN_RESULT2);
// do something with scan result
}
return;
}
super.onActivityResult(requestCode, resultCode, data);
}
For more information on barcode scanning by intent, including intent extras that are available, refer to the JavaDoc for the ScannerIntent class.
A sample Android Studio project demonstrating barcode scanning via intent is available to download here.
Embedding a Barcode Scanner in your app
If you need a bit more control over barcode scanning, you can also embed a barcode scanner directly within your app. This gives you the flexibility to customize the barcode scanning UI or to filter discovered barcodes before taking action on them. The Barcode SDK provides a ScannerFragment which can be embedded in your app like any other Android fragment. You can embed it directly in layout XML using the <fragment> tag or use FragmentManager to dynamically add it to your activity. For example:
Java
import com.vuzix.sdk.barcode.ScannerFragment;
import com.vuzix.sdk.barcode.ScanResult2;
ScannerFragment scannerFragment = new ScannerFragment();
Bundle args = new Bundle();
// specify any scanner args here, such as:
// String barcodeTypes[] = {BarcodeType2.CODE_128.name()};
// args.putStringArray(ScannerFragment.ARG_BARCODE2_TYPES, barcodeTypes);
// or:
// args.putBoolean(ScannerFragment.ARG_ZOOM_IN_MODE, true);
scannerFragment.setArguments(args);
getFragmentManager().beginTransaction().replace(R.id.fragment_container, scannerFragment).commit();
You can register a listener with ScannerFragment to get callbacks when a successful scan takes place or an error occurs. For example:
Java
scannerFragment.setListener2(new ScannerFragment.Listener2() {
@Override
public void onScan2Result(Bitmap bitmap, ScanResult2[] scanResults) {
// handle barcode scanning results
}
@Override
public void onError() {
// scanner fragment encountered a fatal error and will no longer provide results
}
});
For more information on using ScannerFragment, including supported arguments, refer to the JavaDoc for the ScannerFragment class.
A sample Android Studio project demonstrating embedding barcode scanning is available to download here.
Scanning Raw Image Data
If you need to customize barcode scanning even further, the Barcode SDK also provides the ability to scan raw image data for barcodes. This is the most advanced use of barcode scanning on the device. The developer will be responsible for both acquiring and pre-processing image data before handing off to the barcode scanner. Before barcodes can be scanned, the scanner needs to be initialized with a context:
Java
import com.vuzix.sdk.barcode.Scanner2;
import com.vuzix.sdk.barcode.ScanResult2;
import com.vuzix.sdk.barcode.Scanner2Factory;
Scanner2 scanner = Scanner2Factory.getScanner(context);
You can then scan your raw image data for barcodes. The byte array data you scan should be grayscale image data. You’ll also need to provide the width and height of the image data. For example:
Java
// acquire image data
byte[] data;
int width;
int height;
Rect rect = null;
// scan image data
ScanResult2[] results = scanner.scan(data, width, height, rect);
if (results.length > 0) {
// we got results, do something with them
}
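The scanner expects grayscale bytes. If your source pixels are ARGB ints (for example, from Bitmap.getPixels()), a standard luma conversion produces suitable input. This is generic image math, not a Vuzix SDK call:

```java
// Converts ARGB_8888 pixels to 8-bit grayscale using integer BT.601 luma
// weights (0.299 R + 0.587 G + 0.114 B), suitable as scanner input.
class GrayscaleConverter {
    static byte[] toGrayscale(int[] argbPixels) {
        byte[] gray = new byte[argbPixels.length];
        for (int i = 0; i < argbPixels.length; i++) {
            int p = argbPixels[i];
            int r = (p >> 16) & 0xFF;
            int g = (p >> 8) & 0xFF;
            int b = p & 0xFF;
            gray[i] = (byte) ((r * 299 + g * 587 + b * 114) / 1000);
        }
        return gray;
    }
}
```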
For more information on scanning raw image data for barcodes, including the various options available, refer to the JavaDoc for the Scanner2 class.
A sample Android Studio project demonstrating barcode scanning via raw image data is available to download here.
Overview
As a developer, you can communicate between apps running on the M400 or M4000 and apps running on a phone. This can be accomplished in a variety of ways. You can choose to manage your own communication protocols using wireless technologies such as Wi-Fi, Wi-Fi Direct, Bluetooth and Bluetooth Low Energy. You can also leverage the Vuzix Connectivity framework to communicate between apps using the same secure communication protocol used between the M400 or M4000 and the Vuzix M400 Companion App.
Managing Your Own Communication
Writing apps for the Vuzix M400 and M4000 is very similar to writing apps for any other Android device. If your M400 or M4000 app needs to communicate with another device, or an app running on a phone, you can use familiar technologies such as Wi-Fi or Bluetooth to make that happen. The advantage of using these technologies is that they are standard, well-known technologies that can handle all sorts of use cases and requirements. The disadvantage is that they often require a lot of coding to get set up and working in your app. This guide will not go into detail about how to write a custom protocol for app communication. Instead, we will focus on getting up and running quickly using the Vuzix Connectivity framework.
Vuzix Connectivity Framework
The Vuzix Connectivity framework enables simple communication between an app running on an M400 or M4000 and an app running on a phone. The framework leverages the Android broadcast system and extends it to enable sending broadcasts between devices.
https://developer.android.com/guide/components/broadcasts
This approach gets you up and running very quickly without having to worry about how to locate devices, without worrying about how to secure your data, and without designing a custom communication protocol.
Using the connectivity framework in your app will depend on the type of app you are developing. M400 or M4000 apps and Android apps designed for a phone will use the Vuzix Connectivity SDK for Android. iPhone apps will use the VuzixConnectivity framework for iOS (coming soon). Both require a phone running the Vuzix M400 and M4000 Companion App.
Overview
The Vuzix Connectivity SDK for Android gives developers access to the Vuzix Connectivity framework allowing simple messaging between apps running on the M400 or M4000 and apps running on a phone. The same Connectivity SDK library is used regardless of which side of the communication you are on; M400/M4000 or phone.
Setup and Installation
To use the SDK, you only need small additions to your build.gradle file(s). First, declare a new maven repository in a repositories element that points to the Vuzix maven repository.
You can use the repositories element under allprojects in your project’s root build.gradle file:
Groovy
allprojects {
repositories {
google()
jcenter()
maven {
url "https://dl.bintray.com/vuzix/lib"
}
}
}
Alternatively you can use a repositories element under the android element in your app’s build.gradle file:
Groovy
apply plugin: 'com.android.application'
android {
...
repositories {
maven {
url "https://dl.bintray.com/vuzix/lib"
}
}
}
Finally, add the Connectivity SDK as a dependency in your app’s build.gradle file:
Groovy
dependencies {
implementation 'com.vuzix:connectivity-sdk:1.1'
}
That’s it! You are now ready to start sending messages using the Connectivity framework.
The Connectivity Class
The Connectivity class is your main entry point into the Connectivity SDK. The first thing you need to do is use the static get() method to obtain the singleton instance of the Connectivity class. You’ll need to pass in a non-null Context object:
Java
Connectivity c = Connectivity.get(myContext);
Once you have a reference to the Connectivity object, you can call various methods to learn more about the current connectivity status, send broadcasts, and perform other functions related to connectivity. Consult the Connectivity SDK javadocs for complete information on the available methods. Here are some key methods to be aware of:
isAvailable()
isAvailable() can be used to test whether the Vuzix Connectivity framework is available on the device. On the M400/M4000, this method should always return true. On phones, this method returns true if the Vuzix M400/M4000 Companion App is installed and false otherwise. If this method returns false, no other Connectivity methods should be called.
getDevices(), getDevice(), isLinked()
getDevices() returns all the remote devices you are currently linked to. Currently, the M400/M4000 and the Companion App support only one linked device in either direction; however, the Connectivity framework supports multiple linked devices as a future enhancement. getDevice() is a convenience method that returns the single linked device if one exists, or null if you're not linked to anything. isLinked() is a convenience method that returns true if you are currently linked to a remote device and false otherwise.
isConnected()
isConnected(device) is used to determine if you are connected to a specific remote device. The no-argument version of isConnected() returns true if any remote device is currently connected.
addDeviceListener(listener), removeDeviceListener(listener)
You can add DeviceListeners to be notified when remote devices are added, removed, connected and disconnected. If you only care about one or two particular events, the DeviceListenerAdapter class has no-op implementations of all the DeviceListener methods. Always remember to properly remove your DeviceListeners when done with them.
Sending a Broadcast to a Remote Device
The Connectivity framework supports both regular broadcasts and ordered broadcasts. Broadcasts should be limited to 10 kilobytes or less if possible. Larger broadcasts are supported, but you will always be subject to the Binder transaction buffer limit of 1MB which is shared across your app’s process. In addition, larger broadcasts will take longer to transfer between devices.
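A minimal pre-send check against the 10-kilobyte guideline can be sketched as follows; the helper name and structure are illustrative, and only the threshold comes from the recommendation above:

```java
import java.nio.charset.StandardCharsets;

// Illustrative check against the recommended 10 KB broadcast payload size.
class BroadcastSizeCheck {
    static final int RECOMMENDED_MAX_BYTES = 10 * 1024;

    static boolean withinRecommendedSize(String payload) {
        return payload.getBytes(StandardCharsets.UTF_8).length <= RECOMMENDED_MAX_BYTES;
    }
}
```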
The first step to sending a broadcast is creating an Intent with an action:
Java
Intent myRemoteBroadcast = new Intent("com.example.myapp.MY_ACTION");
It is strongly recommended, but not required, that you specify the remote app package that will receive the intent. This ensures that the broadcast is only delivered to a specific app. Use the setPackage() method on the Intent class:
Java
myRemoteBroadcast.setPackage("com.example.myapp");
If you need to set the category, data, type or modify any flags on the intent, you can do so. You can also fill up the intent with extras. For example:
Java
myRemoteBroadcast.putExtra("my_string_extra", "hello");
myRemoteBroadcast.putExtra("my_int_extra", 2);
The following intent extra types are supported:
- boolean, boolean[]
- Bundle, Bundle[]
- byte, byte[]
- char, char[]
- double, double[]
- float, float[]
- int, int[]
- Intent, Intent[]
- long, long[]
- short, short[]
- String, String[]
If you specify an intent extra of any other type, it will be ignored and will not be broadcast to the remote device.
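Because unsupported extra types are silently dropped, it can help to validate values before adding them. Here is a sketch covering the scalar types in the list above (arrays and the Bundle/Intent cases are omitted for brevity; the helper itself is hypothetical):

```java
// Illustrative validity check mirroring the supported scalar extra types
// listed above. Array, Bundle and Intent variants are omitted for brevity.
class ExtraTypeCheck {
    static boolean isSupportedScalar(Object value) {
        return value instanceof Boolean
                || value instanceof Byte
                || value instanceof Character
                || value instanceof Double
                || value instanceof Float
                || value instanceof Integer
                || value instanceof Long
                || value instanceof Short
                || value instanceof String;
    }
}
```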
Once your intent is populated with data, you’re ready to broadcast it to the remote device. To send as a regular broadcast, simply use the sendBroadcast() method on Connectivity:
Java
Connectivity.get(myContext).sendBroadcast(myRemoteBroadcast);
To send as an ordered broadcast, use the sendOrderedBroadcast() method. For ordered broadcasts, you need to specify the remote device:
Java
Connectivity c = Connectivity.get(myContext);
Device device = c.getDevice();
c.sendOrderedBroadcast(device, myRemoteBroadcast, new BroadcastReceiver() {
@Override
public void onReceive(Context context, Intent intent) {
if (getResultCode() == RESULT_OK) {
// do something with results
}
}
});
Notice you can specify a BroadcastReceiver to get results back from the remote device.
Receiving a Remote Broadcast
Receiving a remote broadcast is as easy as receiving a local broadcast. You simply register a BroadcastReceiver in your app.
Java
@Override
protected void onStart() {
super.onStart();
registerReceiver(receiver, new IntentFilter("com.example.myapp.MY_ACTION"));
}
@Override
protected void onStop() {
super.onStop();
unregisterReceiver(receiver);
}
private BroadcastReceiver receiver = new BroadcastReceiver() {
@Override
public void onReceive(Context context, Intent intent) {
// do something with intent
}
};
As with all BroadcastReceivers, don’t forget to unregister your receiver.
You can also declare a BroadcastReceiver in your manifest:
XML
<receiver android:name=".MyReceiver">
<intent-filter>
<action android:name="com.example.myapp.MY_ACTION"/>
</intent-filter>
</receiver>
Returning Data from an Ordered Broadcast
An advantage of ordered broadcasts is that they can return data to the sender. The same is true of ordered broadcasts sent through the Connectivity framework. To return data to the remote sender, simply use any combination of the setResult(), setResultCode(), setResultData() or setResultExtras() methods. You should also check the isOrderedBroadcast() method to make sure you are receiving an ordered broadcast. Setting result data on a non-ordered broadcast will result in an exception being thrown.
Java
private BroadcastReceiver receiver = new BroadcastReceiver() {
@Override
public void onReceive(Context context, Intent intent) {
if (isOrderedBroadcast()) {
setResultCode(RESULT_OK);
setResultData("some result data");
Bundle extras = new Bundle();
extras.putString("key", "value");
setResultExtras(extras);
}
}
};
Note that the result extras Bundle is subject to the same restrictions as broadcast extras (see above).
That’s all you need to do to return data back to the remote sender of the broadcast. The Connectivity framework handles routing the result back to the proper place.
Verifying a Remote Broadcast
Since remote broadcasts are received just like local broadcasts, sometimes you may want to verify a broadcast came from a remote device through the Connectivity framework. You may also want to verify which app on the remote side the broadcast came from. Fortunately, both of these are easy to verify using the Connectivity SDK and the verify() method.
To verify a broadcast came through the Connectivity framework, pass the intent to the verify() method. verify() will return true if the broadcast is legitimate and false otherwise:
Java
private BroadcastReceiver receiver = new BroadcastReceiver() {
@Override
public void onReceive(Context context, Intent intent) {
// verify broadcast came from Connectivity framework
if (Connectivity.get(context).verify(intent)) {
// do something with intent
}
}
};
You can also verify the broadcast came from a specific remote app through the Connectivity framework. Pass both the intent and a package name to the verify() method. Note that patterns are not allowed; you must pass the exact package name you are looking for.
Java
private BroadcastReceiver receiver = new BroadcastReceiver() {
@Override
public void onReceive(Context context, Intent intent) {
// verify broadcast came from the correct remote app
if (Connectivity.get(context).verify(intent, "com.example.myapp")) {
// do something with intent
}
}
};
Adding or Removing Remote Devices
Apps using the Connectivity SDK cannot add or remove remote devices. For remote device management, users should be directed to the Vuzix M400/M4000 Companion App to set those devices up ahead of time. Once a remote device is set up, it becomes available for use through the SDK.
Sample Project
A sample application for Android Studio demonstrating the Vuzix Connectivity SDK for Android is available here.
Overview
The Vuzix Connectivity framework allows iOS apps to share a BLE connection to communicate with apps running on the Vuzix M4000/M400. There is no need to pair the BLE connection, as the framework uses the same BLE connection established and set up in the Vuzix Companion App. There is also no need to define a protocol model, as we have already done that. You can send simple messages between the M4000/M400 and your iPhone app seamlessly. Apps running on the Vuzix M4000/M400 will need to utilize the Vuzix Android Connectivity library. Since messages are sent as BLE messages, it is best to send smaller messages, as throughput is limited when using BLE.
Requirements
- M4000/M400 OS 1.1.4 or higher.
- Latest version of the Vuzix Companion app, installed from the iPhone App Store
- App running on M4000/M400 must use the Android Connectivity SDK
- M4000/M400 and Companion app must be linked.
Setup and Installation
To use the framework within your Xcode project, you can install it through CocoaPods or manually.
CocoaPods
CocoaPods is a dependency manager for Xcode projects. For usage and installation instructions, visit their [website](http://www.cocoapods.org).
To integrate the Vuzix Connectivity framework into your Xcode project using CocoaPods, specify it in your Podfile:
pod 'VuzixConnectivity'
Manually
If you prefer not to use CocoaPods, you can integrate the Vuzix Connectivity framework into your project manually.
Simply download the framework from Vuzix's GitHub page and follow the steps below:
- Drag the Connectivity.framework into your project.
- Go to the General pane of the application target in your project. Add Connectivity.framework to the Embedded Binaries section.
- Import Connectivity in your Swift file and use in your code.
The Connectivity Class
The Connectivity class is your main entry point into the Connectivity framework. The first thing you need to do is use the static shared property to obtain the singleton instance of the Connectivity class.
Swift
let connectivity = Connectivity.shared
Once you have a reference to the Connectivity object, you can call various methods to learn more about the current connectivity status, send broadcasts, and perform other functions related to connectivity. Consult the framework for complete documentation on the available methods.
Here are some key methods to be aware of:
isConnected
connectivity.isConnected returns a Boolean indicating whether you are connected to a remote device.
requestConnection()
connectivity.requestConnection() is used to request a connection to your Vuzix smart glasses. Make this request to share the bandwidth with the Companion App, then listen for notifications on the connectivity state.
Swift
if connectivity.isConnected == false {
connectivity.requestConnection()
}
ConnectivityStateChanged
After requesting a connection to the device, observe its state by adding an observer for the Notification.Name connectivityStateChanged. Six states are possible: .connected, .searchingForDevice, .bluetoothOff, .notConnected, .connecting, and .notSupported, as defined in the ConnectivityState enum. You can display the appropriate UI for each state while the connection is being made.
Swift
NotificationCenter.default.addObserver(forName: .connectivityStateChanged, object: nil, queue: nil) { [weak self] (notification) in
    if let state = notification.userInfo?["state"] as? ConnectivityState {
        if state == ConnectivityState.connected {
            self?.textView.text = "--Connected to M400!"
        }
        else if state == ConnectivityState.searchingForDevice {
            self?.textView.text = "--Searching for Device"
        }
        else if state == ConnectivityState.connecting {
            self?.textView.text = "--Connecting to Device"
        }
        else if state == ConnectivityState.bluetoothOff {
            self?.textView.text = "--Bluetooth turned off on Phone"
        }
        else if state == ConnectivityState.notConnected {
            self?.textView.text = "--Not Connected. Not searching"
        }
        else if state == ConnectivityState.notSupported {
            self?.textView.text = "--M400 OS and/or Companion App needs update"
        }
    }
}
Sending data to the Vuzix device
The Connectivity framework follows the Android data model of broadcasts and ordered broadcasts. A broadcast is nothing more than a Swift Notification that is sent across the framework to the other end. An ordered broadcast is a broadcast which responds back with data.
Broadcasts should be limited to 10 kilobytes or less if possible. Larger broadcasts are supported, but you will always be subject to the Binder transaction buffer limit of 1MB, which is shared across your app's process. In addition, larger broadcasts will take longer to transfer between devices.
The first step to sending a broadcast is creating an Intent with an action:
Swift
var intent = Intent(action: "com.vuzix.a3rdparty.action.MESSAGE")
It is strongly recommended, but not required, that you specify the remote app package that will receive the intent. This ensures that the broadcast is only delivered to a specific app. Use the package property on the Intent class to set the target.
Swift
intent.package = "com.vuzix.a3rdparty"
If you need to set the category or modify any flags on the intent, you can do so. You can also fill up the intent with extras. For example:
Swift
intent.addExtra(key: "my_string_extra", value: "hello")
intent.addExtra(key: "my_int_extra", value: 2)
The following intent extra types are supported:
- Bool, [Bool]
- BundleContainer, [BundleContainer]
- Data
- Character, [Character]
- Double, [Double]
- Float, [Float]
- Int, [Int]
- Intent, [Intent]
- Int64, [Int64]
- Int16, [Int16]
- String, [String]
If you specify an intent extra of any other type, it will be ignored and will not be broadcast to the remote device.
Once your intent is populated with data, you’re ready to broadcast it to the remote device. To send as a regular broadcast, simply use the broadcast(intent:) method on Connectivity:
Swift
connectivity.broadcast(intent: intent)
To receive data back from the M4000/M400, send an ordered broadcast. Using the Connectivity shared instance, call the orderedBroadcast(intent:callBack:) method with your intent and any extras that you want to send. The extras parameter is optional.
Swift
connectivity.orderedBroadcast(intent: intent, extras: extras) { [weak self] (result) in
if result.code == 1 {
//result is of type BroadcastResult
//result will contain the data from the M400.
}
}
Notice you specify a closure to receive the results back from the remote device.
Receiving a Remote Broadcast
Receiving a remote broadcast is as easy as registering a listener in your app.
Swift
let connectivity = Connectivity.shared
connectivity.registerReceiver(intentAction: "com.a3rdParty.action.message") { [weak self] intent in
if let message = intent.extras?.getValueForKey(key: "message") as? String {
print(message)
DispatchQueue.main.async {
self?.textView.text = message
}
}
}
Returning Data from an Ordered Broadcast
An advantage of ordered broadcasts is that they can return data to the sender. The same is true of ordered broadcasts sent through the Connectivity framework. To return data to the remote sender, simply set the data on the BroadcastResult in the onReceive listener.
Swift
var itemReceiver = BroadcastReceiver()
itemReceiver.onReceive { (result: BroadcastResult) in
result.data = <your data you want to return to the other side>
result.code = -1
}
connectivity.registerReceiver(intent: "com.vuzix.a3rdparty.action.ITEM2", receiver: itemReceiver)
In many ways, the M400 and M4000 behave similarly to other Android 9.0 devices. Specific details on the camera and audio subsystems are available here.
Audio Playback
Audio playback is a straightforward audio output, with appropriate overload protection and filtration blocks in the audio DSP. It is not configurable by user software beyond the standard Android AudioManager controls.
Audio Capture
Application Audio
Audio capture for Android application audio input uses three microphones: a user microphone in the eyepiece and two environment microphones on the body of the device. These are combined by the DSP audio processor to allow "beam forming", or controllable directionality. Hence the microphones may emphasize the user's voice with cancellation of environmental sound, emphasize the environment with cancellation of user-generated sound (e.g. breathing noise), or operate omni-directionally. Additionally, acoustic echo cancellation removes the device's own speaker output from the captured audio. The captured audio is filtered, noise-reduced, and equalized before it becomes the Android microphone input for application audio using the media recorder. Most MediaRecorder audio sources will use this audio path, adjusting beam forming and audio processing parameters as appropriate.
The Android application audio input is, by the design of the Android OS, a singleton resource. Only one application may "own" the audio capture at any one time, hence it is not possible to have a foreground activity consuming captured audio while a background service simultaneously does so.
Speech Audio
A second channel for audio capture is implemented as well, intended for use by speech recognition and other software-interpreted audio usage. This channel operates independently of the main Android application audio channel.
Speech Audio does not implement adaptive filtration, noise suppression, or other algorithms which may introduce artifacts to which machine recognition may be vulnerable. This channel is also available to Android application audio when AudioSource.VOICE_RECOGNITION is selected.
Audio Sources
The developer controls the DSP processing by selecting the appropriate audio source as defined by MediaRecorder.AudioSource. The following describes the audio processing on the M400 and M4000:
- DEFAULT: Identical to MIC
- CAMCORDER: The noise cancellation emphasizes the environment mics and reduces sound from the user.
- MIC: The noise cancellation emphasizes the user mic and reduces sound from the environment.
- VOICE_COMMUNICATION: Similar to MIC but adds acoustic echo cancellation so audio from the loudspeaker is not heard by the microphone. This is intended for use during Voice over IP (VoIP) and video calls.
- VOICE_RECOGNITION: Microphone audio source tuned for voice recognition.
- UNPROCESSED: The stream used by the built-in speech recognizer. It uses all the mics and contains no adaptive algorithms which may introduce artifacts to which machine recognition may be vulnerable.
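The choice among these sources follows directly from the use case. As an illustration only (the returned names match the MediaRecorder.AudioSource constants described above; the helper itself and its use-case keys are not part of any SDK):

```java
// Illustrative lookup from capture use case to the AudioSource described above.
class AudioSourceChooser {
    static String sourceFor(String useCase) {
        switch (useCase) {
            case "record-environment": return "CAMCORDER";           // environment mics emphasized
            case "voip-call":          return "VOICE_COMMUNICATION"; // user mic plus echo cancellation
            case "speech-recognition": return "VOICE_RECOGNITION";   // tuned for machine recognition
            case "unfiltered":         return "UNPROCESSED";         // no adaptive algorithms
            default:                   return "MIC";                 // emphasize the user's voice
        }
    }
}
```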
Supported API
The M400 and M4000 support both the deprecated Android Camera API and the updated Android Camera2 API.
Still Image Resolutions
The following still image resolutions are supported:
- 4032x3024
- 3264x2448
- 2944x1656
- 1920x1080
- 1408x792 (center crop mode)
- 1280x720
- 1024x768
- 720x480
- 640x480
- 640x360
* The center cropped resolution uses pixels from the center of the image, and discards all pixels around the edges of the sensor. This creates an effect similar to a zoom. This resolution is ideal for barcode imaging where maximum detail is desired and the object of interest is small and centered.
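The geometry of the center-crop mode is simple arithmetic: the 1408x792 window is taken from the middle of the full sensor frame, discarding equal margins on each side. A sketch (the dimensions come from the lists above; the helper itself is illustrative):

```java
// Computes the top-left offset of a centered crop window inside a full frame,
// e.g. the 1408x792 center-crop mode within a 4032x3024 still capture.
class CenterCrop {
    static int[] offset(int frameWidth, int frameHeight, int cropWidth, int cropHeight) {
        return new int[] { (frameWidth - cropWidth) / 2, (frameHeight - cropHeight) / 2 };
    }
}
```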
Video Resolutions
The following video resolutions are supported by the M400 and M4000:
- 3840x2160 (4K UHD), 30fps
- 1920x1080 (1080p), 30 fps
- 1280x720 (720p), 30 fps
- 1408x792, 30 fps *(center cropped)
- 720x480, 30 fps
- 640x480 (VGA), 30 fps
- 640x360, 30 fps
Video Codecs
The M400 and M4000 have hardware encoders for the following codecs:
- H.263
- Profile: 0
- Level: up to 70
- Resolution: 864x480 at 30 fps
- H.264
- Profile: Baseline/Main/High
- Level: up to 5.2
- Resolution: 1920x1080 or 3840x2160 at 30 fps
- MPEG-4
- Profile: Simple/Advanced Simple
- Level: up to 8
- Resolution: 1920x1080 at 30 fps
- VP8
- Profile: 0
- Level: up to 5.1
- Resolution: 1920x1080 or 3840x2160 at 30 fps
- HEVC
- Profile: Main
- Level: up to 5.1
- Resolution: 1920x1080 or 3840x2160 at 30 fps
Auto Focus
The M400 and M4000 support automatic focus from 10 centimeters (3.94 inches) to infinity. An example application controlling the auto-focus can be found here.
Flash LED
The M400 and M4000 have a flash LED that can be operated manually for photo or video use. An example application showing flash control can be found here.
Optical Image Stabilization
The M400 and M4000 camera has an optical image stabilization (OIS) motor to offset small movements caused by the operator wearing the camera. This feature is enabled or disabled in the Settings application and affects all software applications that use the camera.
Understanding the interactions with the external battery is essential for developers. The Vuzix M400 and M4000 require an external battery for normal operation. These devices support true hot-swapping because there is a small internal battery that allows the external battery to be completely removed for a short time without any negative impact to the user.
Any battery capable of supplying 1.5 amps will be able to reliably power the M400 and M4000. When the external battery becomes fully depleted, a message will appear on the screen indicating that the operator has a limited time to swap the battery before the device powers down. If the user replaces the external battery within this time, they can operate indefinitely.
The Vuzix 478T0A001 Power Bank not only provides adequate power, it also allows the M400 or M4000 to read its voltage, providing an easy way for users to know the time they have left before hot swapping is required.
In this guide you will learn how to register for the system broadcasts related to the battery and read the power level of the 478T0A001 Power Bank from your apps.
Creating the External Battery Status Receiver
Within your application, create a Java class that extends the BroadcastReceiver class.
Java
package com.vuzix.sample.externalbattery;

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

public class ExternalBatteryStatusReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
    }
}
Next, register your new ExternalBatteryStatusReceiver in your app's manifest file. You will need to create an intent-filter with the following actions:
android.intent.action.ACTION_POWER_CONNECTED
android.intent.action.ACTION_POWER_DISCONNECTED
If the user is using the Vuzix 478T0A001 Power Bank, we can read additional information from the battery through a third action:
com.vuzix.action.EXTERNAL_BATTERY_CHANGED
XML
<receiver android:name=".ExternalBatteryStatusReceiver" android:exported="true">
    <intent-filter>
        <action android:name="android.intent.action.ACTION_POWER_CONNECTED" />
        <action android:name="android.intent.action.ACTION_POWER_DISCONNECTED" />
        <!-- For reading extra data provided by the Vuzix 478T0A001 Power Bank -->
        <action android:name="com.vuzix.action.EXTERNAL_BATTERY_CHANGED" />
    </intent-filter>
</receiver>
Creating the External Battery Status Listener
We are going to create an interface named ExternalBatteryStatusListener that describes a listener receiving updates when the status of the external battery changes.
Create an interface called ExternalBatteryStatusListener and declare two methods, onExternalBatteryStatusChanged and onVuzixExternalBatteryStatusChanged, with the parameters listed below.
Java
public interface ExternalBatteryStatusListener {
    /**
     * Called when an external battery change broadcast is received
     * @param charging Boolean for if the device is currently charging
     * @param chargeMethod ChargeMethod for the current way the device is receiving power, None if
     *                     not charging
     */
    void onExternalBatteryStatusChanged(boolean charging,
                                        ExternalBatteryStatusReceiver.ChargeMethod chargeMethod);

    /**
     * Called when a Vuzix external battery change broadcast is received
     * @param connected The connection status of the external battery
     * @param capacity The capacity of the external battery
     * @param voltage The voltage of the external battery
     * @param current The current of the external battery
     */
    void onVuzixExternalBatteryStatusChanged(boolean connected,
                                             int capacity,
                                             int voltage,
                                             int current);
}
Registering the External Battery Status Listener
We are now going to go back to the ExternalBatteryStatusReceiver class and implement the required methods for registering and unregistering an ExternalBatteryStatusListener. Add the following variable and methods.
Java
private ExternalBatteryStatusListener externalBatteryStatusListener;

public void registerExternalBatteryListener(ExternalBatteryStatusListener listener) {
    externalBatteryStatusListener = listener;
}

public void unregisterExternalBatteryListener() {
    externalBatteryStatusListener = null;
}
Now, in our onReceive method we are going to send external battery status updates to the ExternalBatteryStatusListener. Below are the default values and intent extras referenced in the onReceive method. Place these values in your ExternalBatteryStatusReceiver class. We also have an enum named ChargeMethod that we will use to tell our listeners what type of charge method is currently charging the device. The options are None, USB and AC. Wireless charging is a fourth possibility in the BatteryManager class; however, Vuzix Smart Glasses do not support wireless charging.
Java
private final String EXTRA_VUZIX_EXTBAT_CONNECTED = "connected";
private final String EXTRA_VUZIX_EXTBAT_CAPACITY = "capacity";
private final String EXTRA_VUZIX_EXTBAT_VOLTAGE = "voltage";
private final String EXTRA_VUZIX_EXTBAT_CURRENT = "current";
enum ChargeMethod {None, USB, AC} // Vuzix Smart Glasses do not support wireless charging
private static final int DEFAULT_CAPACITY = 0;
private static final int DEFAULT_VOLTAGE = 0;
private static final int DEFAULT_CURRENT = 0;
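Putting this together, the body of onReceive might look like the following sketch. It forwards standard charger broadcasts and the Vuzix Power Bank broadcast to the registered listener; the extras read from the Vuzix broadcast are assumed to use the keys defined above. Add `import android.os.BatteryManager;` for the plugged-state constants.

```java
@Override
public void onReceive(Context context, Intent intent) {
    if (externalBatteryStatusListener == null) {
        return; // nobody is listening
    }
    String action = intent.getAction();
    if ("com.vuzix.action.EXTERNAL_BATTERY_CHANGED".equals(action)) {
        // Extra data provided by the Vuzix 478T0A001 Power Bank
        boolean connected = intent.getBooleanExtra(EXTRA_VUZIX_EXTBAT_CONNECTED, false);
        int capacity = intent.getIntExtra(EXTRA_VUZIX_EXTBAT_CAPACITY, DEFAULT_CAPACITY);
        int voltage = intent.getIntExtra(EXTRA_VUZIX_EXTBAT_VOLTAGE, DEFAULT_VOLTAGE);
        int current = intent.getIntExtra(EXTRA_VUZIX_EXTBAT_CURRENT, DEFAULT_CURRENT);
        externalBatteryStatusListener.onVuzixExternalBatteryStatusChanged(
                connected, capacity, voltage, current);
    } else if (Intent.ACTION_BATTERY_CHANGED.equals(action)) {
        int plugged = intent.getIntExtra(BatteryManager.EXTRA_PLUGGED, 0);
        ChargeMethod method = (plugged == BatteryManager.BATTERY_PLUGGED_AC) ? ChargeMethod.AC
                : (plugged == BatteryManager.BATTERY_PLUGGED_USB) ? ChargeMethod.USB
                : ChargeMethod.None;
        externalBatteryStatusListener.onExternalBatteryStatusChanged(plugged != 0, method);
    }
}
```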
Receiving the onExternalBatteryStatusChanged Updates
In our MainActivity we're going to implement the ExternalBatteryStatusListener interface so we can receive onExternalBatteryStatusChanged updates. After implementing the required methods, we need to register an intent filter so we can listen for updates to the external battery. Ensure your MainActivity looks like the following code sample. You will want to register the intent filters with the following actions:
Intent.ACTION_BATTERY_CHANGED
com.vuzix.action.EXTERNAL_BATTERY_CHANGED
Java
public class MainActivity extends AppCompatActivity implements ExternalBatteryStatusListener {
    ExternalBatteryStatusReceiver receiver;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        receiver = new ExternalBatteryStatusReceiver();
        IntentFilter intentFilter = new IntentFilter();
        intentFilter.addAction(Intent.ACTION_BATTERY_CHANGED);
        intentFilter.addAction("com.vuzix.action.EXTERNAL_BATTERY_CHANGED");
        registerReceiver(receiver, intentFilter);
        receiver.registerExternalBatteryListener(this);
    }
Don't forget to unregister your ExternalBatteryStatusListener and your receiver in onDestroy().
Java
    @Override
    protected void onDestroy() {
        receiver.unregisterExternalBatteryListener();
        unregisterReceiver(receiver);
        super.onDestroy();
    }
}
Congratulations! You can now register and receive external battery status updates.
You will now be able to receive updates in your MainActivity class through the onExternalBatteryStatusChanged and onVuzixExternalBatteryStatusChanged methods.
Sample Project
A sample project for Android Studio can be downloaded here.
Getting started on developing software for the Vuzix Blade is as simple as developing software for any other Android device.
The Blade utilizes the Android APIs and leverages Android Studio for software development.
There are some inherent differences, however, as this is a wearable device that you interact with differently than other Android devices.
In this document we will get your development environment set up so you can start developing all kinds of new user experiences for the Vuzix Blade.
The Vuzix Blade has a unique display configuration. We need to create a new Device Profile to understand and visualize its screen limitations while designing new User Interfaces and User Experiences.
For more information on those, please refer to the User Experience Design Guidelines.
First, we need to acquire the Device Profile.
Next, open the AVD Manager under the Tools menu and choose "Create Virtual Device."
Choose "Import Hardware Profiles," select the previously downloaded file, and click OK.
Search for Vuzix and confirm the new device "Vuzix Blade" has been added under the Phone tab.
You can then view the device properties as they were specified in the Device Profile file.
AVD device properties
Congratulations! You now have the Vuzix device on your Hardware Profile.
Close the dialog and go back to Android Studio.
You can now use the new Hardware profile in your Layout/Interface Designer to have the proper dimension of the UI real estate.
Device Profile location
Emulating the Vuzix Blade
Vuzix does not provide a custom system image for testing or development.
The custom hardware platform requires features that a stock system image and an Android Virtual Device cannot provide, so you will not be able to use any of the Vuzix SDKs in the emulator. However, you can use the stock Android 5.1 system image, with a few modifications, to interact with your applications at the device's screen size.
- Open AVD Manager under Tools
- Choose "Create Virtual Device"
- Search for and choose the previously added Vuzix Blade Device Profile and click "Next"
Choose Android 5.1 (Lollipop) for the system image and download if necessary.
Click Next.
Removing the Side Bar
If you experience a bar on the side of the emulator as pictured below, follow these steps to remove it.
The newly created config.ini file within the emulator will need to be modified to remove this bar.
- Open AVD Manager under Tools
- Choose the drop-down arrow under actions next to the emulator you want to edit
- Choose "Show on Disk"
Right click on the config.ini file and select Edit.
Find hw.mainKeys and change it from "no" to "yes".
You will need to restart your emulator if it is running for the changes to take effect.
All set! The sidebar within the emulator will be removed.
It is recommended that you utilize Vuzix libraries to design your user experience. Adding these libraries to your Android project is simple and developer friendly. Vuzix provides a Jitpack repository for those libraries and other resources.
Adding HUD-ActionMenu and HUD-Resources to your Android Application
To be able to utilize the HUD-ActionMenu and HUD-Resources libraries, you will need to add a dependency in your application build.gradle file.
In your app's build.gradle file just add the following line to your dependencies section:
- implementation 'com.vuzix:hud-actionmenu:2.9.0'
Note that adding HUD-ActionMenu to your project will automatically add HUD-Resources to your project. Your build.gradle for your application will look something like this:
Groovy
apply plugin: 'com.android.application'

android {
    compileSdkVersion 27

    defaultConfig {
        applicationId "com.vuzix.blade.devkit.camera_sample"
        minSdkVersion 22
        targetSdkVersion 27
        versionCode 1
        versionName "1.0"
    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }
}

dependencies {
    implementation 'com.vuzix:hud-actionmenu:2.9.0'
}
Android Jetpack Compatibility
Beginning with version 2.0, the HUD-ActionMenu and HUD-Resources libraries support Android Jetpack. In addition, the HUD themes are based on Theme.AppCompat rather than Theme.Material.
If you are not using Jetpack in your application, you should not upgrade past version 1.6 of the HUD libraries. If you choose to migrate to Jetpack in the future, you can upgrade to the latest HUD libraries at that time.
Determining Blade Model
Using the Utils class of the HUD-Resources library, you can quickly determine which model of the Blade (447 or 494) your application is currently running on.
This is particularly useful if your application provides user feedback (e.g., audio or haptic feedback).
To do this, simply call the Utils class using the method described below:
Java
if (Utils.getModelNumber() == Utils.DEVICE_MODEL_BLADE) {
    // This app is currently running on a Blade with Haptic Feedback
} else if (Utils.getModelNumber() == Utils.DEVICE_MODEL_BLADE_REV_2020) {
    // This app is currently running on a Blade Upgraded with Stereo Speakers
}
Utilizing the HUD Themes
With the HUD-Resources library in your project, you can now use the HUD theme as your main application theme. You also have access to a "light" variation of the HUD theme.
These themes provide a good base for your application design and supply the default color scheme for your layouts.
To use the HUD theme as the main theme of the application, you just need to modify the Application Manifest and ensure the android:theme is "@style/HudTheme".
Your Android Manifest will look something like this:
XML
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.vuzix.blade.devkit.camera_sample">

    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:roundIcon="@mipmap/ic_launcher_round"
        android:theme="@style/HudTheme">
        <activity android:name=".MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>

</manifest>
Utilizing HUD API Classes
The Vuzix HUD libraries include our customized versions of Activity classes and other frequently used classes.
Our extensions of those classes provide extra functionality, improve performance, and add custom features to take full advantage of the Blade platform.
We recommend that all your Activities extend the "ActionMenuActivity". Your Main Activity Class definition will look something like this:
Java
public class MainActivity extends ActionMenuActivity {
    // ...
}
Changing Action Menu Color
Beginning with version 1.5 of the HUD-ActionMenu library, you can easily change the color of the action menu. To do so, override actionMenuColor in your own theme:
XML
<style name="AppTheme" parent="HudTheme">
    <item name="actionMenuColor">@color/hud_red</item>
</style>
If you have two themes, for example light and dark mode themes, don't forget to define actionMenuColor in both themes.
If you want to customize the action menu beyond simple color changes, you will need to use custom views for the action menu items.
HUD Libraries Java Docs
For more information and details check our Javadocs located at: https://www.vuzix.com/support/Downloads_Drivers.
We provide the following Java Docs for the HUD Libraries:
- For the HUD Resource Package: HUD-Resources Java Docs
- For the HUD ActionMenu Package: HUD-ActionMenu Java Docs
The most important aspect to a well-designed user interface for an application intended to be used on the Vuzix Blade is simplicity. This is largely driven by the limited amount of space on the Blade's display, but also by the constraints of the interaction methods. Here are some Guidelines that we have found work best when designing and developing applications for the Blade platform.
Built-in Resources and Classes
Utilize the HUD classes and resources provided by our Maven libraries, HUD-ActionMenu and HUD-Resources. This is a good starting point for your application. Also make sure you set up your Blade Device Profile to understand the UI restrictions of the platform (size, colors, fonts, readability).
For more information on those see our Development Getting Started knowledge area.
More Resources
Read and Follow our UX Design Guidelines located in our Download section.
With the unique design and display that the Vuzix Blade provides, there are some User Interface designs that work better than others.
In our experience, we have found some styles and UI Designs fit better and are more useful for users.
In the previous sections we talked about some of the ways you can create a great UI for the Blade.
In this section we will explain the two main UI styles and some HUD libraries you can use. The Main navigation UI styles that we recommend are:
- Content Center - Action Menu below
- Action Menu Center - Content Around
"Action Menu" is a type of navigational menu that provides clear, concise information or actions for the user. Action Menus are custom actions that are highly visible, simple to understand, and simple to select with the available Blade interaction methods.
For more information on how to implement them, see the "Action Menu" section.
Content Center
The most common style is the Content Center style. This provides most of the UI real estate to your application.
This style can also be used with the Pop-Up Action Menu: your content occupies the full screen, and the Action Menu covers part of your UI when it pops up for actions.
To use this style you just have to use the normal Layout elements and configure them as desired. Here is an example of this style from our Template App.
If you want to learn how to use the Pop-Up Action menu feature, read more in our Sample Project UI Layout section.
Blade Template - Center Content
Action Menu Center
The Action Menu Center - Content Around style means that the view real estate available is around the center of the screen. The Action Menu is moved to the center of the main UI area.
This UI style is mainly used when there are many options or you have to navigate many possible actions for the user. A good example of this is the BladeOS Settings Application.
Since this style can be tricky to design, we provide a useful dimension resource with HUD-Resources, which can be used in the Android Studio designer to reserve a "space" where the Action Menu will be centered. To use this resource, reference the following dimension:
- @dimen/action_menu_height in your Layout designs.
Here is an example Space element for the Action Menu Center style and an Example view on our Template App.
XML
<!-- Utilize something similar to this Space item to understand the dimensions
     of the ActionMenu and your available real estate. -->
<Space
    android:id="@+id/menu_placeholder"
    android:layout_width="match_parent"
    android:layout_height="@dimen/action_menu_height"
    android:layout_centerInParent="true"/>
Action Menu Center style
The Blade, by default, has ADB (Android Debug Bridge) debugging disabled for security and privacy reasons.
On current versions of Blade OS, you can enable ADB debugging using the steps below. If you are having trouble with this, please reach out to the Support team.
Go to the Settings App and navigate to System > About
Settings App
System Menu Item
System > About menu item
Tapping About will show the Device Information screen. On this screen, perform a One-Finger Swipe Forward 7 times.
A prompt reporting the number of swipes will be displayed.
Note: Make sure your Blade is in Gesture mode when performing the One-Finger swipes.
Dev Options enabling
After the 7 swipes, a Dev Options button will be added under the System menu.
Dev Options includes additional features such as USB Debugging, Keep Screen On and Mock Locations.
New Dev Option Menu
Developer Options
Overview
The Vuzix Blade® runs on the Android platform version 5.1.1, API level 22, more commonly known as Lollipop. The camera on the device is equipped with HAL version 1.0.
As such, the camera HAL is best suited to interface with Android Camera API version 1.
It is possible to utilize the legacy framework within Android Camera API 2; however, for the best performance when using the Blade, we highly recommend utilizing Camera 1.
Further tips and tricks on tuning your app to get the best possible experience out of the Blade can be found below.
Suggestions:
1. When utilizing the Camera 1 API, run the camera on a background thread instead of the main UI thread. This will avoid high CPU utilization and unexpected application crashes.
2. When performing video recording, set the video resolution equal to the preview resolution.
3. When using Camera 1, set all parameters prior to starting the preview; modifying these parameters while the preview is active will cause unexpected errors.
4. When stopping and restarting the preview (as you might when finishing a recording or image capture), provide the system with a delay (~500 ms) so that the buffer can completely send out the preview stream.
5. As the camera is designed with a fixed focus, you should not need to access any auto-focus parameters. For the other functions of 3A, we recommend using the default AE and AWB. These parameters have been tuned to get the best quality out of the camera.
6. When in high-light environments, utilize the HDR mode for still captures.
7. The Blade's camera also has night, landscape and sports modes for use in different environments. Exposing these extra modes allows users to customize their experience based on their surroundings and what they are hoping to capture.
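The first few suggestions above can be sketched as follows. This is a hedged outline, not a complete capture pipeline: the preview display setup (SurfaceHolder) and recording logic are omitted, and the 1280x720 preview size is illustrative.

```java
import android.hardware.Camera;
import android.os.Handler;
import android.os.HandlerThread;

public class BladeCameraHelper {
    private final HandlerThread cameraThread = new HandlerThread("CameraBackground");
    private Handler cameraHandler;
    private Camera camera;

    // Tip 1: run all camera work on a background thread.
    public void open() {
        cameraThread.start();
        cameraHandler = new Handler(cameraThread.getLooper());
        cameraHandler.post(new Runnable() {
            @Override
            public void run() {
                camera = Camera.open();
                Camera.Parameters params = camera.getParameters();
                // Tip 2: when recording, match the video size to this preview size.
                params.setPreviewSize(1280, 720);
                // Tip 3: set all parameters before starting the preview.
                camera.setParameters(params);
                // setPreviewDisplay(...) with your SurfaceHolder goes here.
                camera.startPreview();
            }
        });
    }

    public void stop() {
        cameraHandler.post(new Runnable() {
            @Override
            public void run() {
                camera.stopPreview();
                // Tip 4: allow ~500 ms before restarting the preview
                // so the buffer can drain completely.
                camera.release();
            }
        });
    }
}
```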
For more information on the Camera 1 API please visit the Android Developer Center here.
Overview
Apps that you write for the Blade can have user settings. Your app will typically have a mechanism for users to change those settings. You can also allow users to change app settings from within the companion app. Information about available settings and changes to those settings are communicated via a set of broadcasts between your app and the companion app.
When a user views details for your app on the companion app’s Apps tab, the companion app will query your app via an ordered broadcast for details about your app settings. If your app does not respond, no settings will be displayed other than app version information. If your app responds with information about settings, further broadcasts will be used to communicate changes to settings. The broadcasts used are described below.
Broadcasts
Your app’s manifest should define a broadcast receiver with an intent filter that listens for two actions:
- com.vuzix.action.GET_SETTINGS
- com.vuzix.action.PUT_SETTING
These broadcast actions are described in the following sections.
com.vuzix.action.GET_SETTINGS
The GET_SETTINGS ordered broadcast is sent from the companion app to your app for the purpose of querying information about settings in your app. If you wish to provide settings back to the companion app, they can be communicated through a broadcast result extra named “items”. The “items” extra is a Bundle array. Each Bundle in the array describes a single setting in your app. For example, if your app exposed 3 settings for display in the companion app, the “items” Bundle array would have 3 items. Each Bundle has properties that describe a setting. The common properties for a settings item are as follows:
- id: String containing the unique ID of the setting
- type: String containing the type of setting, will be one of: PAGE, SECTION, READONLY, TOGGLE, LIST, SLIDER, STRINGLIST, BUTTON
- displayName: String containing the display name of the setting
Then depending on the type of setting, additional properties are available. Each type of setting and available properties for each are described below.
The PAGE and SECTION types are used to group settings together. PAGE types display a group of settings on a new page while SECTION types group settings together under a header on the current page. Both types have the following additional property:
- items: Bundle array containing child settings
The READONLY type is used to display a setting name and value to the user. The following additional property is available:
- value: String containing the value of the setting
The TOGGLE type is used to display a setting that can be either on or off. The following additional property is available:
- on: boolean specifying if the setting is on
The LIST type displays a setting with a list of possible choices. The following additional properties are available:
- choices: String array containing the choices available to the user
- selected: int containing the zero-based choice that is currently selected
The SLIDER type displays a slider that can change a setting value between a min and max value. The following additional properties are available:
- min: int containing the minimum allowed value
- max: int containing the maximum allowed value
- step: int containing the difference in value between steps on the slider
- value: int containing the current setting value
The STRINGLIST type displays an editable list of strings the user can edit. Strings can be added, removed and reordered and their values can be changed. The following additional properties are available:
- max: int containing the maximum number of strings allowed to be created by the user
- values: String array containing the list of strings
- readonly: boolean indicating the list is display-only and cannot be edited; if missing, assume false
The BUTTON type displays a button to the user that can be tapped. Optionally, a confirmation message can be displayed with Yes/No choices when the button is tapped. Buttons are useful for triggering actions on the Blade, logging out of an app for example. The following additional property is available:
- confirmation: String containing the confirmation to display, if any
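As an illustration of the structures described above, the sketch below manually builds the "items" Bundle array for a single hypothetical TOGGLE setting (the "notifications" id and display name are invented for the example) in response to the GET_SETTINGS ordered broadcast:

```java
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.os.Bundle;

public class SettingsReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        if ("com.vuzix.action.GET_SETTINGS".equals(intent.getAction())) {
            // Describe one TOGGLE setting using the common + type properties.
            Bundle toggle = new Bundle();
            toggle.putString("id", "notifications");      // hypothetical setting id
            toggle.putString("type", "TOGGLE");
            toggle.putString("displayName", "Notifications");
            toggle.putBoolean("on", true);

            // The "items" extra is a Bundle array, one Bundle per setting.
            Bundle result = new Bundle();
            result.putParcelableArray("items", new Bundle[] { toggle });
            setResultExtras(result);
        }
    }
}
```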
com.vuzix.action.PUT_SETTING
The PUT_SETTING broadcast is used to communicate a change in a setting value. The companion app will send this broadcast to your app when the user makes a change to a setting. When you receive this broadcast, you should update your local setting. If the user is also looking at your app settings on the Blade, it’s good user experience to make sure your user interface is updated to show the new setting value.
You can also send this broadcast from your app to the companion app when the user changes a setting in your app. This is not typically required but can be used to ensure a responsive UI if the user is looking at settings in both your app and the companion app at the same time.
The following intent extras will be present for the PUT_SETTING broadcast:
- id: String containing the original id you specified during GET_SETTINGS
- value: the new value of the setting, will be typed differently depending on the original type specified during GET_SETTINGS
- values: String array representing the new values after user modifications are complete, only used with STRINGLIST type
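A sketch of handling the PUT_SETTING broadcast for a hypothetical TOGGLE setting (the "notifications" id is invented for the example; TOGGLE values are assumed to arrive as a boolean "value" extra, per the typing rule above):

```java
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

public class PutSettingReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        if ("com.vuzix.action.PUT_SETTING".equals(intent.getAction())) {
            String id = intent.getStringExtra("id");
            if ("notifications".equals(id)) {   // hypothetical TOGGLE id
                boolean on = intent.getBooleanExtra("value", false);
                // Persist the new value (e.g. to SharedPreferences) and
                // refresh any on-screen UI that shows this setting.
            }
        }
    }
}
```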
com.vuzix.apps.action.UPDATE_SETTINGS
If your app ever changes the structure of its settings, you can use the UPDATE_SETTINGS broadcast to notify the companion app that all settings are invalid, and a new list should be requested immediately. To do this, you’ll need to use the Connectivity SDK to send a remote broadcast to the companion app. For example:
Java
Intent updateSettings = new Intent("com.vuzix.apps.action.UPDATE_SETTINGS");
updateSettings.setPackage("com.vuzix.companion");
Connectivity.get(context).sendBroadcast(updateSettings);
Notice that you should specify the package so that only the companion app receives the broadcast. When the companion app receives the broadcast, it will know the message came from your app and a settings refresh will be initiated.
Companion Settings Library
While responding to the GET_SETTINGS intent, you can manually build a Bundle array which describes your app’s settings. You can also use the companion settings library to assist you in building the necessary Bundle array.
Setup and Installation
If you haven’t already, point your project to Jitpack, where we host our libraries, by adding the following to your build.gradle file:
Groovy
allprojects {
    repositories {
        google()
        jcenter()
        maven {
            url "https://jitpack.io"
        }
    }
}
If you set up the Connectivity SDK previously, the above step is probably already done.
Finally, add Companion Settings as a dependency in your app’s build.gradle file:
Groovy
dependencies {
    implementation 'com.vuzix:companion-settings:1.0'
}
Usage
In your BroadcastReceiver’s onReceive() method, you can use the companion settings library to generate the result value for the GET_SETTINGS broadcast:
Java
@Override
public void onReceive(Context context, Intent intent) {
    if ("com.vuzix.action.GET_SETTINGS".equals(intent.getAction())) {
        Settings settings = new Settings();
        settings.addSetting(new Toggle("toggle", "My Toggle", false));
        settings.addSetting(new Slider("slider", "My Slider", 0, 100, 1, 0));
        setResultExtras(settings.toBundle());
    }
}
Sample Project
A sample application for Android Studio demonstrating the Companion Settings Library is available here.
Vuzix Blade Smart Glasses connect the digital world to the real world by providing real-time access to key alerts and information while your phone stays in your pocket. Finally, a pair of smart glasses that consumers or enterprise workers are not afraid to wear in public.
The Vuzix Blade contains many of the capabilities of a standard Android smartphone in an unobtrusive head-mounted form factor. See below for a full list of hardware and interaction capabilities of the device:
- Waveguide based see-through optics
- Vibrant full-color display
- Right-eye monocular
- Quad-core ARM CPU
- Android OS
- 1GB system RAM
- 8GB internal flash storage
- External SD-Card support
- Auto-Focus* camera capable of up to 8 megapixel stills with 720p video
- Stereo speakers*
- Dual noise-cancelling microphones
- Two-axis touchpad
- Voice control
- Orientation sensors (gyroscope, accelerometer, magnetometer)
- Inner/outer proximity sensors
- Outward-facing Ambient Light Sensor
*Only available on the Blade Upgraded
The Android OS running on the Blade is a modified version of Android 5.1.1(Lollipop), tailored to the components and capabilities of the device.
For the most part, development of applications intended to be used on the Blade can be accomplished by following standard Android development methodologies and by leveraging existing Android APIs. The APIs listed below are some of the prominent features of Android for which default APIs should be leveraged:
- Camera – android.hardware.Camera or android.hardware.Camera2 may be used.
- Sensors – Use SensorManager.
- Bluetooth – Use BluetoothManager and BluetoothAdapter. Standard Bluetooth and BLE are supported.
- Database – Standard Android SQLite is supported.
There are some components of the Blade which will require device-specific APIs to access; these APIs will be covered in detail in other sections of the SDK documentation.
Supported Application Development
The Blade supports any application developed for Android.
You can develop in Java with the Android Java APIs, in Kotlin using the Android Kotlin APIs or in C/C++ using Android NDK Layer and APIs.
The Blade's interaction methods differ significantly from traditional touchscreen Android devices. It is particularly important to keep these considerations in mind when designing the User Interface of an application intended to run on this device. Existing applications which heavily leverage touchscreen interactions do not translate well to the Blade. This is due to the fact that traditional touchscreen UIs leverage taps for input based on specific screen coordinates, which is not possible with the Blade's interaction methods.
Touchpad
The Blade features a two-axis touchpad with single or double-finger tracking gestures for predefined actions. The horizontal swipe gestures can be leveraged for left/right navigation and the vertical swipes can be leveraged for the up/down navigation.
The touchpad is implemented as a trackball device, and methods such as dispatchTrackballEvent() and onTrackballEvent() can be used to capture and process the raw touchpad events.
As a fallback, if you do not handle the trackball events in your application, the touchpad will fire KEYCODE_DPAD events which can be captured with standard Android methods.
Refer to the Android KeyEvent documentation for details on KEYCODE_DPAD events: https://developer.android.com/reference/android/view/KeyEvent.html.
Note that the Home gesture cannot be intercepted or modified. This is an expected Android behavior.
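The trackball and key-event handling described above can be sketched in an Activity as follows. The motion handling shown is illustrative only, not a required pattern.

```java
@Override
public boolean onTrackballEvent(MotionEvent event) {
    // The touchpad reports relative motion through the trackball interface.
    float dx = event.getX();
    float dy = event.getY();
    // ... interpret dx/dy as needed ...
    // Return true to consume the event; returning false lets the system
    // fall back to firing KEYCODE_DPAD key events instead.
    return true;
}

@Override
public boolean onKeyDown(int keyCode, KeyEvent event) {
    // Fallback path: DPAD key events fired when trackball events are unhandled.
    switch (keyCode) {
        case KeyEvent.KEYCODE_DPAD_LEFT:
        case KeyEvent.KEYCODE_DPAD_RIGHT:
        case KeyEvent.KEYCODE_DPAD_UP:
        case KeyEvent.KEYCODE_DPAD_DOWN:
            // handle navigation here
            return true;
        default:
            return super.onKeyDown(keyCode, event);
    }
}
```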
Available Gestures and Taps:
- One-Finger Tap = Select when Blade is awake. When Blade is sleeping it turns on the display.
- One-Finger Swipe Forward = Forward
- One-Finger Swipe Backward = Back
- One-Finger Swipe Up = Up
- One-Finger Swipe Down = Down
- One-Finger Hold for one second = Activate the app menu (if available); on the keyboard it shows the character variants menu
- Two-Finger Tap = Back or Return
- Two-Finger Hold for one second = Home
- Two-Finger Swipe Backward = Backspace; on the keyboard it deletes the previous character
- Two-Finger Swipe Forward = Delete (the action depends on the app, if available); on the keyboard it inputs a space
- Two-Finger Slide Up = Volume up
- Two-Finger Slide Down = Volume down
- Two-Finger Slide Down and Hold = Mute
KeyCodes for available taps and gestures:
- ONE FINGER TAP: Select
Key Event Fired: KEYCODE_ENTER
- ONE FINGER HOLD FOR 1 SECOND: Menu
Key Event Fired: KEYCODE_MENU
- TWO FINGER TAP: Back or Return
Key Event Fired: KEYCODE_BACK
- TWO FINGER SWIPE FORWARD: Delete
Key Event Fired: KEYCODE_FORWARD_DEL
- TWO FINGER SWIPE BACKWARD: Backspace
Key Event Fired: KEYCODE_DEL
- TWO FINGER UP: Volume up
Key Event Fired: KEYCODE_VOLUME_UP
- TWO FINGER DOWN: Volume down
Key Event Fired: KEYCODE_VOLUME_DOWN
- TWO FINGER DOWN and hold: Mute
Key Event Fired: KEYCODE_VOLUME_MUTE
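The touchpad-to-keycode mapping above can be dispatched in a plain switch. The sketch below hardcodes the standard android.view.KeyEvent constant values so the dispatch logic is self-contained; in a real app, override Activity.onKeyDown and compare keyCode against the KeyEvent constants directly.

```java
public class TouchpadKeyDispatch {
    // Standard android.view.KeyEvent constant values, hardcoded for illustration
    static final int KEYCODE_BACK = 4;          // two-finger tap
    static final int KEYCODE_ENTER = 66;        // one-finger tap
    static final int KEYCODE_DEL = 67;          // two-finger swipe backward
    static final int KEYCODE_MENU = 82;         // one-finger hold
    static final int KEYCODE_FORWARD_DEL = 112; // two-finger swipe forward

    /** Maps a touchpad-generated key code to an application action name. */
    static String actionFor(int keyCode) {
        switch (keyCode) {
            case KEYCODE_ENTER:       return "select";
            case KEYCODE_MENU:        return "menu";
            case KEYCODE_BACK:        return "back";
            case KEYCODE_DEL:         return "backspace";
            case KEYCODE_FORWARD_DEL: return "delete";
            default:                  return "unhandled";
        }
    }

    public static void main(String[] args) {
        System.out.println(actionFor(KEYCODE_ENTER)); // select
    }
}
```

Volume and mute key events are normally left to the system rather than consumed by the application.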
Touchpad Mouse
With the release of version 2.13 of the Blade OS, users can now configure their touchpad to act as a virtual mouse.
Enable this by selecting "Mouse" mode in Settings > Device > Touchpad.
Usage:
- Swipe with one finger to move the cursor.
- Tap with one finger to click the screen at the cursor.
- Swipe with two fingers to scroll the view.
- Tap with two fingers to go back.
For a well-designed user interface on the Blade:
- Avoid complex menu systems; instead use a linear progression through interface components to minimize the amount of time a user needs to dedicate to navigating the application UI.
- Allow UI elements to be navigated with simple left/right/up/down navigation, and give users clear visual indicators for which UI element currently has focus.
- Limit the amount of information being displayed to the user at any given moment to be very contextually relevant. E.g. Provide a single, specific instruction to guide the user to perform an individual step in a procedure.
- Avoid displaying complex diagrams or schematics to the user, instead opt for displaying only the components immediately relevant to the task at hand.
- Try to minimize the frequency of scenarios requiring the user to physically interact with the Blade by leveraging alternative methods to advance a user through the application interface. The best way to do so is to leverage voice commands, but this can also be done by automatically advancing the user to the next screen based on any number of interactions, such as taking a picture or some other form of verification that the step has been completed. Note that Voice Commands functionality is coming soon and is not currently available.
This tutorial will show how to create a Vuzix Blade application with Android Studio. Remember that most Blade applications start as basic Android Apps. This tutorial will start with that assumption.
This tutorial will show how to build the following parts of the Blade Application:
- Basic Application Structure:
- Adding Vuzix HUD Libraries
- Use of Vuzix DynamicThemeApplication class
- Use of Vuzix ActionMenuActivity class
- Use of BladeOS Application Icon Tinting feature
- How to Create Two UI / UX Styles for Your First Application:
- Center Content Style
- Around Content Style
- Application Widget for the Launch Screen:
- How to Create a Dynamically Changing Widget with Dynamic Theme
- How to Use the BladeOS Auto-Widget Loading Feature
If you want to follow along with a pre-built Vuzix Blade Template App, feel free to download the Template app from our Code sample area.
Now let's get started with the Tutorial.
Next: Getting Started with a new Android Studio Project
Vuzix recommends using the latest Android Studio version. Android Studio can be downloaded from https://developer.android.com/studio/.
- Open Android Studio.
- Select Start a new Android Studio project from the “Welcome to Android Studio” window.
Welcome to Android Studio
- Provide the Application name and Company domain in the “Create Android Project” window.
- Change the Project location if you’d like.
- There is no need to check the Include C++ support or Include Kotlin support options.
- Note: BladeOS does support Kotlin-built applications since they compile to standard Android APKs. For this example we will not use Kotlin.
Create New Project
- Select Phone and Tablet form factor in the “Target Android Devices” window.
- The Vuzix Blade currently runs Android 5.1, so select API 22 for the target API.
- For this example, there is no need to Include Android Instant App support.
Target Android Devices
- For this example, select the Empty Activity template in the “Add an Activity to Mobile” window.
Add an Activity to Mobile
- Provide your Activity Name.
- Ensure the Generate Layout File is checked.
- Provide your Layout Name.
- For this Sample, make sure you do not enable the "Backwards Compatibility (AppCompat)" option.
- Click Finish.
Configure Activity
- Android Studio will generate the example project and display the MainActivity class as shown below.
New Project
This is the basis of any Vuzix Blade application: a basic Android activity application.
Now let's continue making this a Vuzix Blade application by adding the Vuzix HUD libraries and resources.
Configure build.gradle
Android projects have multiple build.gradle files. These files provide the compiler with compile-time information about the project. This information includes how to compile, what dependency libraries are used, where to get them, and which Android versions the application will support in addition to other application information.
For this example, it is important to configure the build.gradle file for the main app module. Please be sure to modify the correct file as shown below: open the build.gradle (Module: app) file, not the build.gradle (Project: BladeSampleApp) file. The screen below shows an unmodified version of the build.gradle file generated by Android Studio. There is some additional information here that we will remove shortly.
Default build.gradle
For simplicity of this example application, remove the testInstrumentationRunner line from the defaultConfig section.
Then replace all of the dependencies in the dependencies section with just implementation 'com.vuzix:hud-actionmenu:1.3' and implementation 'com.vuzix:hud-resources:1.2'. These dependencies allow the Blade application to use the dynamic theme and other Blade-specific features. Refer to the Dynamic Theme section for more information.
Note: Anytime this file is modified, Android Studio will display a warning stating that the files have changed, and a project sync should be performed. We will do a project sync after making other changes as well.
Before we can sync the project, let’s change the layout of the main activity as described in the next section.
Modified build.gradle
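For reference, a minimal version of the resulting dependencies block, using the artifact versions given above, looks like this:

```groovy
dependencies {
    implementation 'com.vuzix:hud-actionmenu:1.3'
    implementation 'com.vuzix:hud-resources:1.2'
}
```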
Change Default Activity Layout
By default, Android Studio will generate activities using a Constraint layout. For simplicity, this example uses the Relative layout.
Open the activity_main.xml layout (\res\layout\activity_main.xml).
The Layout will open on the "Design" Tab.
We recommend that you change your default device via the Device Editor to be the Vuzix Blade. If you have not registered the Vuzix Blade as a device in Android Studio, we strongly recommend you do so now to aid in the development of UI elements.
Notice the ConstraintLayout class is used as the main Layout Element.
Change the Layout to now be a RelativeLayout.
For more information on the Android Layouts or the ConstraintLayout class see the Android documentation:
- https://developer.android.com/guide/topics/ui/declaring-layout
- https://developer.android.com/reference/android/support/constraint/ConstraintLayout
To further simplify the layout, you can modify the following:
- Remove the xmlns:app="http://schemas.android.com/apk/res-auto" attribute from the top RelativeLayout definition.
- Remove the constraint attributes (app:layout_constraintTop_toTopOf="parent", for instance) from the TextView element.
Note: The constraint layout requires a dependency. We removed that dependency when we modified the generated build.gradle file above.
Constraint Layout
Relative Layout
Sync the project
Now that we have modified the build.gradle file and simplified our activity layout, we are ready to sync the project.
Open the build.gradle file again and click on the Sync now link.
Android Studio will refresh the project using the modified build.gradle file and build the project.
If the build.gradle file and layout file are correct, the Build should complete successfully as shown below.
Once the project has built successfully, the Blade-specific libraries can be inspected in the Project window.
Change the project view from the Android view to the Project view in the top left of the file explorer.
Expand the External Libraries section to explore the hud-actionmenu and hud-resources classes.
hud-resources expanded
We now have all the Vuzix HUD resources and libraries.
These libraries provide many resources that will help you, as a developer, build more intuitive apps with performance tailored to the Vuzix Blade.
To learn more about these libraries, you can download the independent JavaDocs or read our section on them.
- JavaDocs
- For the HUD Resource Package: HUD-Resources Java Docs
- For the HUD ActionMenu Package: HUD-ActionMenu Java Docs
- HUD Libraries
Next Steps: Adding Dynamic, Light and Dark Theme
The Vuzix Blade has three display modes (Dynamic, Light and Dark modes). The display modes provide an optimal display experience in different ambient lighting conditions.
Those styles are automatically changed by the BladeOS based on the front-facing sensors.
The developer can take advantage of this feature and use HUD resources to allow the application Theme to change.
It can also be used to allow other parts of your application (for instance, a Home Screen widget) to adjust dynamically as well.
The com.vuzix.hud.resources package defines two theme styles: HudTheme and HudTheme.Light.
These correspond to the dark and light display modes of the Vuzix Blade; Dynamic mode switches between them automatically.
Note: The com.vuzix.hud.actionmenu classes have a dependency on com.vuzix.hud.resources.
Thus, since com.vuzix.hud.actionmenu classes are in the dependencies section of the build.gradle file, Android Studio also downloads the resources classes.
Extend HUDTheme Styles
A styles.xml (res/values/styles.xml) file will be created by Android Studio.
You will need to modify the file to have two separate themes that reference HUD resource main themes.
Create an AppTheme that extends the HudTheme from com.vuzix.hud.resources, and also create an AppTheme.Light that extends the HudTheme.Light from the same package.
For more details, you can read the JavaDocs or view the details of the parent themes by exploring inside the hud-resources library file.
XML
<style name="AppTheme" parent="HudTheme">
<!-- Customize your theme here. -->
</style>
<style name="AppTheme.Light" parent="HudTheme.Light">
<!-- Customize your theme here. -->
</style>
styles.xml
DynamicThemeApplication Class
Extend the com.vuzix.hud.resources.DynamicThemeApplication class to create an application that will dynamically change themes depending on the current display mode.
If you utilize this feature, the application will automatically receive a notification from the system and handle the theme change if correctly configured.
In the example application, a new BladeSampleApplication class is created which extends com.vuzix.hud.resources.DynamicThemeApplication.
Note: Be sure to set the Superclass as com.vuzix.hud.resources.DynamicThemeApplication.
Create New Class Extending DynamicThemeApplication
Once the BladeSampleApplication class is generated by Android Studio, two methods must be overridden.
The methods that need to be overridden are getNormalThemeResId and getLightThemeResId. These two methods return the new styles defined in the styles.xml file modified above.
Java
@Override
protected int getNormalThemeResId() {
return R.style.AppTheme;
}
@Override
protected int getLightThemeResId() {
return R.style.AppTheme_Light;
}
BladeSampleApplication Class
Next we have to register the new application in the AndroidManifest.xml file.
This is shown on line 6 of the screen below.
XML
android:name=".BladeSampleApplication"
Application defined in AndroidManifest.xml
Now that your application is created and registered, its theme will change automatically based on BladeOS's reading of ambient light conditions.
Dynamic Themes Broadcast
BladeOS will send a broadcast with the following action:
- com.vuzix.intent.action.UI_DISPLAY_MODE
This broadcast can also be received manually to perform any desired actions.
To receive and act on this broadcast, just follow the normal Android broadcast receiver development process.
We will create a custom broadcast receiver to intercept that message. Once we have that message, we can use it to do custom actions or signal other parts of the application to update.
We will use this feature later for Home Screen widgets, but let's create it now for simplicity.
First we create a new Java class (File > New > Other > Broadcast Receiver) and provide the following information:
- Name: Name of Class
Once the file is generated, the new class will open and contain a TODO to implement the onReceive method.
For now we can delete all the code inside the method and leave it blank.
You can override this method to do any custom work desired when BladeOS signals a dynamic theme change request.
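For example, after clearing the generated TODO, the receiver might look like the following. The class name here is hypothetical; use whatever name you chose in the wizard.

```java
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

public class DynamicThemeReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        // BladeOS sends com.vuzix.intent.action.UI_DISPLAY_MODE when the
        // display mode changes. Add custom work here, or signal other parts
        // of the application (we will use this later to update a widget).
    }
}
```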
Now we go to the Android manifest and tell the system which broadcast this receiver should receive.
We do this by adding an Intent Filter for "com.vuzix.intent.action.UI_DISPLAY_MODE":
XML
<intent-filter>
<action android:name="com.vuzix.intent.action.UI_DISPLAY_MODE" />
</intent-filter>
You now have a broadcast receiver to intercept and perform custom work on a dynamic theme change request.
Now let's continue the sample project by creating and using ActionMenuActivity and ActionMenu.
User Interface Styles and Customization
With the Vuzix Blade you can develop and create your own unique user interface (UI).
Given the device's unique design and display, we have found that some designs work better than others.
If you want to learn more about some of our suggestions for Layout Styles and other features, you can read the UX Style Guides Section or follow along with the Template Application from our Code Sample area.
Here we will modify our sample app. The changes will be to the main layout XML and to the ActionMenuActivity (MainActivity).
With those changes, we will have an application with a centered view layout and a pop-up action menu.
Layout Changes
For the Layout Changes we will open our activity_main.xml layout file (res/layout).
Now you can modify the Layout like any other Android Layout by putting the content anywhere you want.
The default layout will provide a Top Center view since our Action Menu will occupy the bottom section.
Let's change the Relative location of our Current TextView to be Top Left.
Then let's add two new TextViews: one at the top right, and one below those two in the center.
Notice that for the first TextView we only had to add the alignParentStart attribute.
The new TextViews are the same as mainTextView, with the exception of the ID and the alignment parameters.
XML
<TextView
android:id="@+id/mainTextView"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Hello" android:layout_alignParentStart="true" />
<TextView
android:id="@+id/mainTextView2"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="World!!" android:layout_alignParentEnd="true" />
<TextView
android:id="@+id/mainTextView3"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Hello In the Center!"
android:layout_below="@id/mainTextView" android:layout_centerHorizontal="true" />
ActionMenuActivity(MainActivity.java) Modifications
The final change that we are going to make is to tell our action menu to show only when needed.
This is accomplished by a very simple change to our MainActivity.java file.
Open the MainActivity.java file (java/devkit.blade.vuzix.com.bladesampleapp). Find the method alwaysShowActionMenu and have it return false.
Java
@Override
protected boolean alwaysShowActionMenu() {
return false;
}
If you have a Blade, you can run the Sample Application.
When the App starts you will notice that there will be NO action menu, just your main UI.
To trigger the action menu, a simple SELECT gesture (one-finger tap) will pop up the action menu.
To close the action menu you can just do a BACK gesture (two-finger tap).
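If you also want entries in the pop-up menu, ActionMenuActivity exposes a hook for populating it. The sketch below assumes the onCreateActionMenu hook and a hypothetical R.menu.main_menu resource; verify the exact method name and signature against the HUD-ActionMenu JavaDocs for your SDK version.

```java
// Sketch: inflating items into the action menu (verify against the
// HUD-ActionMenu JavaDocs; the menu resource name is hypothetical).
@Override
protected boolean onCreateActionMenu(Menu menu) {
    super.onCreateActionMenu(menu);
    getMenuInflater().inflate(R.menu.main_menu, menu);
    return true;
}
```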
Next is building a Simple BladeOS Launcher/Rail Widget to provide additional information to the users while the app is closed.
A Vuzix Blade home screen / launcher widget is a standard Android app widget with a simple hook into the BladeOS Launcher.
Since the BladeOS launcher is a customized Home screen it has some capabilities that other Launchers don't.
Home Screen widgets, when using the BladeOS features and hooks, can be added and started automatically. They can also use the BladeOS Dynamic Theming features and take advantage of other BladeOS features.
Let's add a Widget to our Sample Project.
Create Home Screen Widget
Click File > New > Widget > App Widget in Android Studio to create a new app widget.
Name the widget. In the example application, the app widget is named WidgetProvider.
You will need to provide:
- Class Name = this will be the name of your widget and its resources.
All other fields can be left at their default values.
Android Studio will generate the necessary files for the app widget.
The new WidgetProvider app widget will be registered in the AndroidManifest.xml file.
To tell the Vuzix Blade system to display the WidgetProvider app widget on the Home screen when the example application is selected, register the example application’s MainActivity with the launch widget itself.
To add this hook, make sure you put the code below inside the widget's receiver section of the AndroidManifest.
Make sure you provide the full package path to the class; simplified values like .ClassName can cause errors.
See line 29 in the screen below.
XML
<meta-data android:name="com.vuzix.launcher.widget" android:value="devkit.blade.vuzix.com.bladesampleapp.MainActivity" />
You should be able to run your App, navigate to Home, swipe to update the launcher and see how the Launcher changes to the example layout that the wizard created.
Code Simplifications
Our BladeOS allows for certain simplifications of the widget provider xml file (res/xml/).
The *_widget_info.xml file gets generated when the app widget is created. This file defines the widget properties to the BladeOS.
The file defines values like the initial layout to load, how frequently the widget should be updated, previewImage, minimum height and width, and others. Since BladeOS is a custom Home Launcher, we will not use some of these values. The system will ignore them if they are provided, but for simplification you can safely remove all except the ones below.
One value that we modify is how frequently the widget should be updated. This is controlled with the updatePeriodMillis attribute. Please refer to the app widget documentation for more detailed information. A value of 0, as in the example, allows the widget provider to update the widget as frequently as it chooses; the Android platform limits automatic updates of app widgets to every 30 minutes or longer.
XML
<appwidget-provider xmlns:android="http://schemas.android.com/apk/res/android"
android:initialLayout="@layout/sample_widget"
android:updatePeriodMillis="0"/>
simplification of widget info file
Add Dynamic Theming to your Widget
Now we are going to take advantage of the BladeOS Dynamic Theming Signals to update the widget.
First we will create two layouts, one for Light mode and one for Dark mode. To create them, add widget_light.xml and widget_dark.xml to the layout folder under res.
Provide the file name and root element when prompted. We use RelativeLayout as the root element, which lets us specify in detail where each object is drawn relative to the screen and to other objects.
Open each and modify the following items:
- On the main parent RelativeLayout, set the style to the HUD-defined styles.
- Change the background of the main parent to transparent for dark mode and light_gray for bright/light mode.
- Add a TextView with an appropriate message for light or dark mode.
- Center the TextView in the layout so it is visible.
- Change the text style to "HudTheme.TextView" for dark mode and "HudTheme.Light.TextView" for bright/light mode.
For simplicity, we will only add the code and the image for light mode.
XML
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent" android:layout_height="match_parent" style="@style/HudTheme.Light"
android:background="@color/hud_light_gray">
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_centerInParent="true"
style="@style/HudTheme.Light.TextView"
android:text="Example of Widget in Bright/Light mode"/>
</RelativeLayout>
Widget Sample Light layout
Modify the AppWidgetProvider (sample_widget.java) to allow it to properly select and update the theme.
The changes are:
- Modify the updateAppWidget method to call a new method called isLightMode to get the BladeOS DynamicTheme mode.
- Based on the return mode, select a new RemoteView and pass that value back to the AppWidgetManager to update.
- Override the onReceive Method to ensure we catch the "ACTION_APPWIDGET_UPDATE" we will send later.
- Ensure that the onUpdate only sends the first appWidgetID since the BladeOS will only send one id.
For simplicity, remove the onEnabled and onDisabled methods that were automatically generated.
Java
public class sample_widget extends AppWidgetProvider {
static void updateAppWidget(Context context, AppWidgetManager appWidgetManager,
int appWidgetId) {
boolean isLightMode = isLightMode(context);
if(appWidgetManager == null)
{
appWidgetManager = (AppWidgetManager)context.getSystemService(Context.APPWIDGET_SERVICE);
appWidgetId = appWidgetManager.getAppWidgetIds(new ComponentName(context, sample_widget.class))[0];
}
RemoteViews views = new RemoteViews(context.getPackageName(),
isLightMode ? R.layout.widget_light: R.layout.widget_dark);
appWidgetManager.updateAppWidget(appWidgetId,views);
}
@Override
public void onReceive(Context context, Intent intent) {
super.onReceive(context, intent);
updateAppWidget(context,null,0);
}
@Override
public void onUpdate(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds) {
// There may be multiple widgets active, so update all of them
updateAppWidget(context, appWidgetManager, appWidgetIds[0]);
}
private static boolean isLightMode(Context context)
{
return ((BladeSampleApplication)context.getApplicationContext()).isLightMode();
}
}
Widget Update on BladeOS Dynamic Theme change Broadcast
Using our previously created Dynamic Theme Broadcast Receiver we can tell the widget to update its Theme based on this new information.
To do this, we simply open the dynamic_theme_receiver.java file and modify it to send out a widget update request.
To do that we will add the following code:
- Create a new intent.
- Set the intent action.
- Send that new intent as a broadcast.
Java
Intent updateIntent = new Intent();
updateIntent.setAction(AppWidgetManager.ACTION_APPWIDGET_UPDATE);
context.sendBroadcast(updateIntent);
Now run your application and change the lighting environment (increase or decrease ambient light) to see how the widget updates in the main launcher area.
For the final part of this tutorial, we will utilize the BladeOS Application Icon Tint feature to automatically apply a more visible color scheme to your application icon on the launcher. Note that this feature is optional; follow along only if you want it. If you don't want or need icon tinting, or you are doing your own custom tinting, feel free to skip this.
Next learn how to utilize BladeOS Application Icon Tint Feature.
Android applications have launcher icons. Standard launcher icons are compatible with the Vuzix Blade. However, to provide a more consistent launcher, icons can be automatically tinted to match the display mode of the smart glasses.
The tinting feature will perform two functions when you register for it.
- It will TINT the Main Home Screen Icon if you do not have a Widget registered.
- It will TINT the smaller launcher rail icon. It will also apply the tint dynamically based on focus state: in focus, the icon is tinted green; out of focus, it is tinted white.
Below is an example of a standard launcher app icon for Vuzix Basics™ Video. Notice the widget icon and the home icon are the same color and render the same way icons do on other Android devices, such as the Vuzix M300.
Standard Launcher App Icon
The image below shows a tinted Camera app launcher icon. Notice the icon is simple and easily identifiable on the home screen.
Also notice that with the tint feature enabled, the larger icon (only visible if you do not have a Home/Launcher widget) is white while the smaller icon on the launcher rail is green.
A Blade application can be configured to tint launcher icons automatically.
Tinted Launcher App Icon
To enable auto-tinting of launcher icons on the main rail and in the Settings > Device > Apps list, modify the main activity in the AndroidManifest.xml file. We only need to add two simple hooks to the application manifest to signal this change.
To add the hook:
- Open the AndroidManifest.xml (/manifest/).
- Find the top level application element and add the child meta-data element below.
- Find the MainActivity activity element (the activity with the "android.intent.action.MAIN" intent) and add a child meta-data element below it.
XML
<meta-data android:name="com.vuzix.icon.tint" android:value="true" />
App Tint Manifest change
Now we have an application icon that is easier to view and more consistent with the theme.
Please note:
- Application icon tint will only work properly if the app icon has layers. If the drawable/mipmap icon is fully drawn as a single layer (no layers), the tint might not work as desired.
- This is not required. This is a benefit that developers can use if desired.
Congratulations! You now have a fully functional Blade sample app.
The Vuzix Speech Command engine is a fully embedded, fast, phrase-matching recognition system designed to interpret and respond to voice commands. A platform base vocabulary is available to all apps; it is intended to facilitate default navigation and selection without direction from the client app. That is, a client app can benefit from navigation provided by a base vocabulary with no setup or explicit awareness of the speech command engine. This capability is implemented by mapping phrases to Android key events.
For many applications, it is desirable to implement a custom vocabulary which performs application-specific actions when an application-specific phrase is spoken (e.g. to capture a still image when “take a picture” is spoken.) The Vuzix Speech Command engine provides two mechanisms by which this can be achieved: Android key events and Android intents.
Custom Vocabulary Architecture
The Vuzix Speech Command engine is implemented as an Android service that runs locally on the device. No cloud servers are used, and the audio data never leaves the device.
Each Activity can have its own vocabulary. The system will automatically switch to the proper vocabulary as Activities are paused and resumed. If no vocabulary is provided, the system will use the default navigation commands.
A 3rd party Application may create a custom vocabulary for the Vuzix Speech Command engine by utilizing the Vuzix Speech SDK as described throughout this section of the knowledge base.
It is recommended that you obtain the Speech SDK via JitPack. Simply make this addition to your project build.gradle to define the Vuzix repository:
Groovy
allprojects {
repositories {
google()
jcenter()
// The speech SDK is currently hosted by jitpack
maven {
url "https://jitpack.io"
}
}
}
Then add a dependency on the Speech SDK library in your application build.gradle:
Groovy
dependencies {
implementation 'com.vuzix:sdk-speechrecognitionservice:1.91'
}
Proguard Rules
If you are using Proguard you will need to prevent obfuscating the Vuzix Speech SDK. Failure to do so will result in calls to the SDK raising the RuntimeException "Stub!". Add the following -keep statement to the proguard rules file, typically named proguard-rules.pro.
Proguard
# Vuzix speech recognition requires the SDK names not be obfuscated
-keep class com.vuzix.sdk.speechrecognitionservice.** {
*;
}
The R8 Optimization may omit arguments required by the SDK methods, resulting in the NullPointerException "throw with null exception" being raised. The current workaround is to disable R8 and use Proguard to do the shrinking and obfuscation. Add the following to your gradle.properties to change from R8 to Proguard.
Properties
android.enableR8=false
The custom vocabulary is created when the VuzixSpeechClient class is instantiated. The newly instantiated custom vocabulary inherits the platform base vocabulary, which is currently:
- hello vuzix - Activates the listener
- hello blade - Activates the listener
- go left - Basic directional navigation
- move left - Basic directional navigation
- go right - Basic directional navigation
- move right - Basic directional navigation
- go up - Basic directional navigation
- move up - Basic directional navigation
- go down - Basic directional navigation
- move down - Basic directional navigation
- pick this - Activates the item that currently has focus
- select this - Activates the item that currently has focus
- confirm - Activates the item that currently has focus
- okay - Activates the item that currently has focus
- open - Activates the item that currently has focus
- scroll down - Basic directional navigation
- scroll left - Basic directional navigation
- scroll right - Basic directional navigation
- scroll up - Basic directional navigation
- cancel - Navigates backward in the history stack
- close - Navigates backward in the history stack
- go back - Navigates backward in the history stack
- done - Navigates backward in the history stack
- go home - Triggers the Home action
- quit - Triggers the Home action
- exit - Triggers the Home action
- stop - Stops the scrolling action
- show menu - Brings up context menu for current UI screen
- next - Advances to the next item in an ordered collection of items
- previous - Goes backward by one item in an ordered collection of items
- go forward - Navigates forward in the history stack
- page up - Navigates up one page
- page down - Navigates down one page
- volume up - Increases volume by 5
- volume down - Decreases volume by 5
- speech settings - Opens Speech Settings menu
- speech commands - Opens the current Speech Command list
- command list - Opens the current Speech Command list
- hey google - Opens Google Assistant*
- hey alexa - Opens Alexa Assistant**
- get assistant - Opens user selectable assistant***
- start recording - Opens Camera app to the record function (to begin recording, use the 'Select This' or 'Pick This' command)
- take a picture - Opens Camera app to the picture function (to take a picture, use the 'Select This' or 'Pick This' command)
- view notifications - Opens the Notification Manager
- open notifications - Opens the Notification Manager
- clear notifications - Clears all Notifications
- voice off - Stops the listener
* Requires Google Assistant BETA installed on the Blade
** Requires Amazon Alexa installed on the Blade
*** Requires an assistant to be installed on the Blade, user will need to choose preferred assistant on first attempt to use this command
Creating the Speech Client
To work with the Vuzix Speech SDK, first create a VuzixSpeechClient and pass it your Activity:
Java
import com.vuzix.sdk.speechrecognitionservice.VuzixSpeechClient;
Activity myActivity = this;
VuzixSpeechClient sc = new VuzixSpeechClient(myActivity);
Handling Exceptions
It is possible for a user to attempt to run code compiled against the Vuzix Speech SDK on non-Vuzix hardware. This causes a RuntimeException with the message "Stub!" to be thrown. It is also possible to write an application against the latest Vuzix Speech SDK and have customers run it on older devices. Any calls to unsupported interfaces will cause a NoClassDefFoundError. For this reason, all SDK calls should be wrapped in try/catch blocks.
Java
// Surround the creation of the VuzixSpeechClient with a try/catch for non-Vuzix hardware
VuzixSpeechClient sc;
try {
sc = new VuzixSpeechClient(myActivity);
} catch(RuntimeException e) {
if("Stub!".equals(e.getMessage())) {
// This is not being run on Vuzix hardware (or the Proguard rules are incorrect)
// Alert the user, or insert recovery here.
} else {
// Other RuntimeException to be handled
}
}
// Surround all speech client commands with try/catch
try {
// sc.anySdkCommandHere();
} catch(NoClassDefFoundError e) {
// The hardware does not support the specific command expected by the Vuzix Speech SDK.
// Alert the user, or insert recovery here.
}
For brevity, this article may omit the try/catch blocks, but creating a robust application requires they be present.
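One way to keep call sites readable is to centralize this boilerplate in a small wrapper. The helper below is this article's own sketch, not part of the SDK; the class and method names are illustrative.

```java
// Illustrative helper (not part of the Vuzix SDK): centralizes the try/catch
// boilerplate so each SDK call site stays readable.
class SafeSpeechCall {
    // Runs the given SDK action; returns true on success, false when running
    // on non-Vuzix hardware ("Stub!") or on an older device that lacks the
    // called interface (NoClassDefFoundError).
    static boolean run(Runnable sdkAction) {
        try {
            sdkAction.run();
            return true;
        } catch (RuntimeException e) {
            if ("Stub!".equals(e.getMessage())) {
                return false; // not Vuzix hardware (or Proguard rules are incorrect)
            }
            throw e; // unrelated RuntimeException: let it propagate
        } catch (NoClassDefFoundError e) {
            return false; // device OS predates this SDK interface
        }
    }
}
```

On device, a call would then look like `SafeSpeechCall.run(() -> sc.deletePhrase("hey google"));`.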
Removing Existing Phrases
Removing existing phrases may reduce the likelihood that the speech command engine resolves the incorrect phrase. This is especially true if your phrases sound similar to default phrases. For example, a game control of "roll right" might be confused with the default phrase "scroll right."
Your application may want to control the navigation itself, in which case you could remove default navigation commands to prevent confusion.
The default vocabulary may be modified by removing individual phrases with commands such as:
Java
sc.deletePhrase("hey google");
sc.deletePhrase("hey alexa");
Or the entire default vocabulary may be removed from your activity using an asterisk as a wildcard.
Java
sc.deletePhrase("*");
Note: On Blade, this command will remove the trigger phrase(s) from the vocabulary. It is highly recommended to re-add the "hello vuzix" phrase to the vocabulary after performing this action, for consistency of interaction method. This is detailed later in this article.
The results of the modified recognizer map can be viewed during debugging with the dump command.
Java
Log.i(LOG_TAG, sc.dump());
Additionally, note that when a phrase that is already in the list is inserted, the previous entry is overwritten. There is no need to delete the original entry first. This is useful, for example, if you want to implement your own navigation methods using "go up", "go down", "go left", etc.
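The overwrite behavior can be pictured as a map keyed by the spoken phrase. This is a simplified model assumed for illustration, not the engine's actual implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model (an assumption for illustration, not the real engine):
// the vocabulary behaves like a map keyed by the spoken phrase, so
// re-inserting a phrase replaces its previous action rather than
// creating a duplicate.
class VocabularyModel {
    private final Map<String, String> phraseToAction = new HashMap<>();

    void insertPhrase(String phrase, String action) {
        phraseToAction.put(phrase, action); // put() overwrites any prior entry
    }

    String actionFor(String phrase) {
        return phraseToAction.get(phrase);
    }
}
```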
Adding Custom Trigger Phrases
When the speech command engine is enabled but idle it is listening only for a trigger phrase, also known as a "wake word". Once the trigger phrase is recognized, the engine transitions to the active state where it listens for the full vocabulary (such as "go home" and "select this"). By default the trigger phrases are "Hello Vuzix" and "Hello Blade". You can insert custom trigger phrases using the following commands.
Java
sc.insertWakeWordPhrase("hello vuzix"); // Add "hello vuzix" wake-up phrase for consistency
sc.insertWakeWordPhrase("hello blade"); // Add "hello blade" wake-up phrase for consistency
sc.insertWakeWordPhrase("hello myself"); // Add application specific wake-up phrase
The speech command engine will time-out after the period configured in the system settings and return to the idle state that listens only for the trigger phrases. The operator can circumvent the timeout and immediately return to idle by saying a voice-off phrase. By default the voice-off phrases are "voice off" and "speech off". You can insert custom voice-off phrases using the following commands.
Java
sc.insertVoiceOffPhrase("voice off"); // Add-back the default phrase for consistency
sc.insertVoiceOffPhrase("speech off"); // Add-back the default phrase for consistency
sc.insertVoiceOffPhrase("privacy please"); // Add application specific stop listening phrase
Adding Phrases to Receive Keycodes
You can register for a spoken command that will generate a keycode. This keycode will behave exactly the same as if a USB keyboard were present and generated that key. This capability is implemented by mapping phrases to Android key events (android.view.KeyEvent).
Java
sc.insertKeycodePhrase("toggle caps lock", KeyEvent.KEYCODE_CAPS_LOCK);
Log.i(LOG_TAG, sc.dump());
Keycodes added by your application will be processed in addition to the keycodes in the base vocabulary set:
- K_DPAD_LEFT "move left"
- K_DPAD_RIGHT "move right"
- K_DPAD_UP "move up"
- K_DPAD_DOWN "move down"
- K_DPAD_LEFT "scroll left" (repeats)
- K_DPAD_RIGHT "scroll right" (repeats)
- K_DPAD_UP "scroll up" (repeats)
- K_DPAD_DOWN "scroll down" (repeats)
- K_PAGE_UP "page up"
- K_PAGE_DOWN "page down"
- K_DPAD_LEFT "go left"
- K_DPAD_RIGHT "go right"
- K_DPAD_UP "go up"
- K_DPAD_DOWN "go down"
- K_NAVIGATE_IN "go in"
- K_NAVIGATE_OUT "go out"
- K_FORWARD "go forward"
- K_BACK "go back"
- K_HOME "go home"
- K_ENTER "select this"
- K_ENTER "pick this"
- K_ENTER "open"
- K_BACK "close"
- K_BACK "done"
- K_HOME "quit"
- K_HOME "exit"
- K_ENTER "okay"
- K_ENTER "confirm"
- K_NAVIGATE_NEXT "next"
- K_NAVIGATE_PREVIOUS "previous"
- K_ESCAPE "cancel"
- K_MENU "show menu"
- K_VOLUME_UP "volume up"
- K_VOLUME_DOWN "volume down"
Note that speaking any valid phrase will terminate a previous repeating keycode. The phrase "stop" terminates the repeating keycodes and has no further behavior.
Adding Phrases to Receive Intents
The most common use for the speech command engine is to receive intents which can trigger any custom actions, rather than simply receiving keycodes. To do this you must have a broadcast receiver in your application such as:
Java
public class VoiceCmdReceiver extends BroadcastReceiver {
....
}
The broadcast receiver must register with Android for the Vuzix speech intent. This can be done in the constructor as shown here.
Java
public class VoiceCmdReceiver extends BroadcastReceiver {
public VoiceCmdReceiver(MainActivity iActivity) {
iActivity.registerReceiver(this, new IntentFilter(VuzixSpeechClient.ACTION_VOICE_COMMAND));
...
}
}
The phrases you want to receive can be inserted in the same constructor. This is done using insertPhrase() which registers a phrase for the speech SDK intent. The parameter is a string containing the phrase for which you want the device to listen.
Java
public class VoiceCmdReceiver extends BroadcastReceiver {
public VoiceCmdReceiver(MainActivity iActivity) {
iActivity.registerReceiver(this, new IntentFilter(VuzixSpeechClient.ACTION_VOICE_COMMAND));
VuzixSpeechClient sc = new VuzixSpeechClient(iActivity);
sc.insertPhrase( "testing" );
Log.i( LOG_TAG, sc.dump() );
}
}
Now handle the speech SDK intent VuzixSpeechClient.ACTION_VOICE_COMMAND in your onReceive() method. Whatever phrase you used in insertPhrase() will be provided in the received intent as a string extra named VuzixSpeechClient.PHRASE_STRING_EXTRA.
Java
public class VoiceCmdReceiver extends BroadcastReceiver {
public VoiceCmdReceiver(MainActivity iActivity) {...}
@Override
public void onReceive(Context context, Intent intent) {
// All phrases registered with insertPhrase() match ACTION_VOICE_COMMAND
if (intent.getAction().equals(VuzixSpeechClient.ACTION_VOICE_COMMAND)) {
String phrase = intent.getStringExtra(VuzixSpeechClient.PHRASE_STRING_EXTRA);
if (phrase != null ) {
if (phrase.equals("testing")) {
// todo: add test behavior
}
}
}
}
}
With that, you will be able to say "Hello Vuzix" to activate the engine, followed by "testing", and the code you inserted in place of the //todo will execute.
The Speech Command engine always broadcasts the phrase that was registered in insertPhrase(). Note: If the phrase contains spaces, they will be replaced by underscores.
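The underscore substitution can be sketched as a one-line transformation. The helper below is illustrative, not an SDK interface; it simply models what arrives in PHRASE_STRING_EXTRA for a phrase registered with spaces:

```java
// Sketch of the substitution described above: a phrase registered with
// spaces is reported back in PHRASE_STRING_EXTRA with underscores.
class PhraseNormalizer {
    static String asReported(String registeredPhrase) {
        return registeredPhrase.replace(' ', '_');
    }
}
```

This is why comparing the received extra against the original spaced phrase fails; compare against the underscore form instead.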
Replacement Text
As mentioned above, the string that was recognized is returned, with spaces replaced by underscores. That can be somewhat cumbersome to the developer, especially since we expect recognized spoken phrases to be localized into many languages.
To make this easier, insertPhrase() can take an optional substitution string parameter. When this is supplied, the substitution string is returned in place of the spoken text.
This example updates the original to replace the hard-coded strings. Notice insertPhrase() is now given two parameters, and it is the second that the onReceive() method compares against.
Note: The substitution string may not contain spaces
This now gives us a complete solution to receive a custom phrase and handle it properly.
Java
public class VoiceCmdReceiver extends BroadcastReceiver {
final String MATCH_TESTING = "Phrase_Testing";
public VoiceCmdReceiver(MainActivity iActivity) {
iActivity.registerReceiver(this, new IntentFilter(VuzixSpeechClient.ACTION_VOICE_COMMAND));
VuzixSpeechClient sc = new VuzixSpeechClient(iActivity);
// strings.xml contains: <string name="spoken_phrase_testing">testing my voice application</string>
sc.insertPhrase( iActivity.getResources().getString(R.string.spoken_phrase_testing), MATCH_TESTING);
Log.i( LOG_TAG, sc.dump() );
}
@Override
public void onReceive(Context context, Intent intent) {
// All phrases registered with insertPhrase() match ACTION_VOICE_COMMAND
if (intent.getAction().equals(VuzixSpeechClient.ACTION_VOICE_COMMAND)) {
String phrase = intent.getStringExtra(VuzixSpeechClient.PHRASE_STRING_EXTRA);
if (phrase != null ) {
if (phrase.equals(MATCH_TESTING)) {
// Todo: add test behavior
}
}
}
}
}
The substitution parameter also allows us to create multiple phrases that perform the same action. Phrases in the command engine must be unique but substitution text does not.
We could have multiple insertPhrase() calls with different phrase parameters and identical substitutions. Use this technique to simplify your code in situations where you do not need to differentiate between phrases. For example, the phrases "start call" and "make a call" can have the same substitution.
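That funneling can be sketched as follows. The phrase strings and token name are illustrative; on device, each spoken form would be registered with its own insertPhrase() call sharing the same substitution:

```java
import java.util.Arrays;
import java.util.List;

// Sketch (not SDK code): several spoken phrases funnel to one substitution
// token, so the receiver only compares a single value. The phrases and the
// token name are illustrative; on device each pair would be registered via
// sc.insertPhrase(spokenForm, MATCH_START_CALL).
class CallPhraseMap {
    static final String MATCH_START_CALL = "Start_Call";
    static final List<String> SPOKEN_FORMS = Arrays.asList("start call", "make a call");

    // Returns the substitution token onReceive() would see, or null if the
    // spoken phrase is not one of ours.
    static String substitutionFor(String spoken) {
        return SPOKEN_FORMS.contains(spoken) ? MATCH_START_CALL : null;
    }
}
```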
Adding Phrases to Receive Custom Intents
To add even more flexibility, the speech SDK can send any intent you define, rather than only sending its own ACTION_VOICE_COMMAND. This is especially useful for creating multiple broadcast receivers and directing the intents properly.
Note that this example differs from the one above in that CUSTOM_SDK_INTENT is used in place of ACTION_VOICE_COMMAND.
Java
public class VoiceCmdReceiver extends BroadcastReceiver {
public final String CUSTOM_SDK_INTENT = "com.your_company.CustomIntent";
final String CUSTOM_EVENT = "my_event";
public VoiceCmdReceiver(MainActivity iActivity) {
iActivity.registerReceiver(this, new IntentFilter(CUSTOM_SDK_INTENT));
VuzixSpeechClient sc = new VuzixSpeechClient(iActivity);
Intent customSdkIntent = new Intent(CUSTOM_SDK_INTENT);
sc.defineIntent(CUSTOM_EVENT, customSdkIntent );
// strings.xml contains: <string name="spoken_phrase_testing">testing my voice application</string>
sc.insertIntentPhrase( iActivity.getResources().getString(R.string.spoken_phrase_testing), CUSTOM_EVENT);
Log.i( LOG_TAG, sc.dump() );
}
@Override
public void onReceive(Context context, Intent intent) {
// Since we only registered one phrase to this intent, we don't need any further switching. We know we got our CUSTOM_EVENT
// todo: add test behavior
}
}
The system can support multiple broadcast receivers. Each receiver simply registers for the intents it expects to receive. They do not need to be in the same class that creates the VuzixSpeechClient.
Deleting a Custom Intent
Beginning with SDK v1.91 you can now delete a custom intent. Similar to inserting an intent, call the deleteIntent() method and supply the label of the intent you wish to delete.
Java
// Voice command custom intent names
final String TOAST_EVENT = "other_toast";
...
sc.deleteIntent(TOAST_EVENT);
Listing all Intent Labels
Beginning with SDK v1.91 you can now list all intent labels. The list will be returned as a List<String>.
Java
List<String> intentLabels = sc.getIntentLabels();
Checking the Engine Version
As mentioned above, the SDK may expose newer calls than a given device OS version supports. You can query getEngineVersion() to determine the version of the engine on the device, which lets you protect newer SDK calls with conditional logic and avoid a possible NoClassDefFoundError. For example, if you know the device is running SDK v1.8, you would not attempt calls introduced in v1.9.
Because getEngineVersion() is a newer SDK call, it should itself be protected.
Java
float version = 1.4f; // The first stable SDK released with M300 v1.2.6
try {
version = sc.getEngineVersion();
Log.d(mMainActivity.LOG_TAG, "Device is running SDK v" + version);
} catch (NoSuchMethodError e) {
Log.d(mMainActivity.LOG_TAG, "Device is running SDK prior to v1.8. Assuming version " + version);
}
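The conditional logic itself can be as small as a version comparison. This sketch gates the intent-listing interfaces that, per this article, were introduced in SDK v1.91; the helper name is illustrative:

```java
// Sketch: gate newer SDK calls on the engine version queried above.
// The 1.91f threshold corresponds to the intent-listing interfaces
// this article notes were introduced in SDK v1.91.
class VersionGate {
    static boolean supportsIntentListing(float engineVersion) {
        return engineVersion >= 1.91f;
    }
}
```

On device you would then call sc.getIntentLabels() only when supportsIntentListing(version) returns true.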
Sample Project
A sample application for Android Studio demonstrating the Vuzix Speech SDK is available to download here.
The Getting Started Code article of this knowledge base provides an overview of querying, deleting and adding phrases to the base vocabulary. Those are the only interfaces needed to create a vocabulary from scratch.
This article builds on those concepts to create a more robust mechanism to edit the base vocabulary. Editing the base vocabulary allows you to leave some portion of it unchanged and add your own customization on top of it.
The advantage of editing the base vocabulary, rather than re-inventing it, is to maintain a consistent user experience between your app and other applications. This reduces confusion and training for end users. For example, it would be confusing for the user to say "move left" in all other applications, then say "to the left" in your application. So, in most instances, it is best to leave similar functionality unchanged from the base vocabulary.
The base vocabulary is set by the device operating system based on the active system language. The built-in commands may change from version to version. Therefore, commands like:
Java
sc.deletePhrase("Voice Off");
are only guaranteed to work for customers running in English on the same OS version originally tested against. A future OS version may replace the "Voice Off" phrase with another similar phrase, rendering that code incorrect.
More robust interfaces were introduced in SDK v1.8 to allow applications to easily and reliably edit the base vocabulary. Using the new interfaces described below, the "voice off" example would instead be written:
Java
for( String voiceOffPhrase : sc.getVoiceOffPhrases() ) {
sc.deletePhrase(voiceOffPhrase);
}
Instead of hard-coding the voice off phrase, we query the engine for that information using getVoiceOffPhrases(). This example behaves identically in all languages, including ones to which your application is not translated, and will even work if the actual phrases are changed by future OS revisions.
Phrase Queries
The phrases in the active vocabulary can be queried based on functionality. Some actions have multiple phrases, so each query returns a List<String> of phrases where each phrase is the exact spoken phrase in the current language. An empty list is returned if no phrases match the request.
These interfaces allow you to create detailed help pages describing the built-in phrases, and allow you to delete the phrases you do not want active.
Queries include:
- getPhrases() - returns all phrases
- getWakeWordPhrases() - returns the current wake words, such as "Hello Vuzix", that bring the speech command engine to the active triggered state where it listens for the full vocabulary.
- getVoiceOffPhrases() - returns the current phrases, such as "voice off", that take the speech command engine out of the active triggered state so it once again listens only for wake words.
- getStopPhrases() - returns the current phrases, such as "stop", that terminate previous scroll requests.
- getKeycodePhrases() - returns phrases associated with a specific keycode; for example, "go left" is associated with KEYCODE_DPAD_LEFT. This is described more below.
- getIntentPhrases() - returns phrases associated with any label you created with defineIntent(). This interface is not used in editing the base vocabulary.
- getBuiltInActionPhrases() - returns phrases associated with various pre-defined actions, such as "take a picture". This is described more below.
Querying Key Codes
The getKeycodePhrases() query for phrases associated with keycodes takes two parameters: keycode and keyFrequency.
The keycode can be any valid key defined in the KeyEvent class, such as KEYCODE_DPAD_LEFT.
The keyFrequency can limit the results to phrases associated with single-press or repeating (scroll) keys. The valid values are:
- KEY_FREQ_SINGLE_OR_REPEAT - match all phrases whether single-press or repeating (scrolling). Both "go left" and "scroll left" would match this.
- KEY_FREQ_SINGLE_PRESS - match only single-press actions. Only "go left" would match this, not "scroll left".
- KEY_FREQ_REPEATING - match only repeating actions. Only "scroll left" would match this, not "go left".
So the request for the "go left" phrase would be
Java
List<String> leftPhrases = sc.getKeycodePhrases(KeyEvent.KEYCODE_DPAD_LEFT, VuzixSpeechClient.KEY_FREQ_SINGLE_PRESS);
Rather than iterating over all valid key constants to determine whether each has at least one valid phrase, you can query getMappedKeycodes() to identify the keys that have at least one phrase associated. It takes a single keyFrequency argument, as described above, and returns a List<Integer> of keycode values.
This example shows a query for every phrase associated to a key:
Java
int selectedFrequency = VuzixSpeechClient.KEY_FREQ_SINGLE_OR_REPEAT;
for ( Integer keycode : sc.getMappedKeycodes(selectedFrequency) ) {
for ( String eachKeyPhrase : sc.getKeycodePhrases(keycode, selectedFrequency) ) {
// todo: Do something with keycode and eachKeyPhrase
}
}
Query Built-In Actions
The operating system provides some special built-in actions which vary by device. You can query the associated phrases using getBuiltInActionPhrases(). This interface takes a single parameter to identify the action. The action may be one of:
- BUILT_IN_ACTION_SLEEP
- BUILT_IN_ACTION_COMMAND_LIST
- BUILT_IN_ACTION_SPEECH_SETTINGS
- BUILT_IN_ACTION_FLASHLIGHT_ON (M-Series only)
- BUILT_IN_ACTION_FLASHLIGHT_OFF (M-Series only)
- BUILT_IN_ACTION_VIEW_NOTIFICATIONS
- BUILT_IN_ACTION_CLEAR_NOTIFICATIONS (Blade only)
- BUILT_IN_ACTION_START_RECORDING
- BUILT_IN_ACTION_TAKE_PHOTO
- BUILT_IN_ACTION_ASSISTANT (Blade only)
- BUILT_IN_ACTION_ASSIST_ALEXA (Blade only)
- BUILT_IN_ACTION_ASSIST_GOOGLE (Blade only)
For example, to determine which phrase will turn the screen off, we could query:
Java
List<String> sleepPhrases = sc.getBuiltInActionPhrases(VuzixSpeechClient.BUILT_IN_ACTION_SLEEP);
Deleting Items
Any phrase returned from any of the queries may be passed to deletePhrase() to remove the phrase from the vocabulary.
There is also a deleteAllPhrases() interface that removes all phrases to allow your application to insert clean entries.
When editing the base vocabulary it is often preferable to use the deleteAllPhrasesExcept() interface. This interface takes a List<String> of phrases to keep; all others are deleted. This is useful when an application wants to keep basic navigation and delete all built-in actions. As the OS is updated, more built-in actions may be added. So, rather than re-releasing your application with more and more delete commands, you can create a list of base phrases to keep and delete the others.
This example deletes all phrases except the wake word, voice off word, those mapped to key presses and the stop phrase.
Java
VuzixSpeechClient sc = new VuzixSpeechClient(this);
ArrayList<String> keepers = new ArrayList<>();
for ( Integer keycode : sc.getMappedKeycodes(VuzixSpeechClient.KEY_FREQ_SINGLE_OR_REPEAT) ) {
keepers.addAll(sc.getKeycodePhrases(keycode, VuzixSpeechClient.KEY_FREQ_SINGLE_OR_REPEAT));
}
keepers.addAll(sc.getWakeWordPhrases());
keepers.addAll(sc.getVoiceOffPhrases());
keepers.addAll(sc.getStopPhrases());
sc.deleteAllPhrasesExcept(keepers);
The built-in speech command engine allows multiple applications to each register their own unique vocabulary. As the user switches between applications, the engine automatically switches to the correct vocabulary. Only the active application receives the recognized phrases.
Applications with only a single vocabulary can get the behavior they desire with very little coding consideration. Applications with multiple activities and multiple vocabularies have special considerations discussed here.
Single Activity and Single Vocabulary
Applications with only a single vocabulary in a single Activity can do the required registration in the onCreate() method, and the required cleanup in the onDestroy().
Java
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
try {
// Register to receive intents that are generated by the Speech Command engine
registerReceiver(this, new IntentFilter(VuzixSpeechClient.ACTION_VOICE_COMMAND));
// Create a VuzixSpeechClient from the SDK and customize the vocabulary
VuzixSpeechClient sc = new VuzixSpeechClient(this);
sc.insertPhrase(getResources().getString(R.string.btn_text_clear));
} catch (NoClassDefFoundError e) {
Toast.makeText(this, "Cannot find Vuzix Speech SDK", Toast.LENGTH_LONG).show();
} catch (Exception e) {
Log.e(TAG, "Error setting custom vocabulary: " + e.getMessage());
}
}
@Override
protected void onDestroy() {
// Remove the dynamic registration to the Vuzix Speech SDK
unregisterReceiver(this);
super.onDestroy();
}
This application is similar to the ones described in the preceding knowledge base articles, and ensures only the correct commands are received. Unfortunately, this only works in the simplest cases: it requires that your application have only a single activity using speech commands and that the vocabulary never change within your application.
Multiple Vocabularies
A single Activity will have a single vocabulary associated with it. The Activity can modify that vocabulary at any time by calling deletePhrase(), insertPhrase(), insertIntentPhrase(), and insertKeycodePhrase().
The default intent with an action VuzixSpeechClient.ACTION_VOICE_COMMAND will still be used, so there is no need to call registerReceiver() when changing the existing vocabulary.
For example:
Java
private void enableScanButton(boolean isEnabled ) {
mScanButton.setEnabled(isEnabled);
String scanBarcode = getResources().getString(R.string.scan_barcode);
if (isEnabled ) {
sc.insertPhrase(scanBarcode);
} else {
sc.deletePhrase(scanBarcode);
}
}
This same behavior could be achieved by inserting that scanBarcode phrase in the onCreate() and simply ignoring it when it is not expected. But many developers prefer actively modifying the vocabulary as shown above.
Multiple Activities
Many applications consist of multiple activities. If each Activity were to be registered for the same speech intents, both would receive the commands. This would cause significant confusion if not handled properly. There are a few easy mechanisms that can be used to solve this.
You can dynamically un-register and re-register for the intents based on the activity life cycle, or you can choose to receive a custom unique intent for each phrase. Both are described in sub-sections below.
Multiple Activities - Dynamic Registration
One mechanism to ensure the correct speech commands are routed to the correct activity within a multi-activity application is for each activity to dynamically register and unregister for the speech intent. Each activity will receive the same default intent action, but only while in the foreground.
Java
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
try {
// Create a VuzixSpeechClient from the SDK and customize the vocabulary
VuzixSpeechClient sc = new VuzixSpeechClient(this);
sc.insertPhrase(getResources().getString(R.string.btn_text_clear));
} catch (NoClassDefFoundError e) {
Toast.makeText(this, "Cannot find Vuzix Speech SDK", Toast.LENGTH_LONG).show();
} catch (Exception e) {
Log.e(TAG, "Error setting custom vocabulary: " + e.getMessage());
}
}
@Override
protected void onResume() {
super.onResume();
// Dynamically register to receive intent from the Vuzix Speech SDK
registerReceiver(this, new IntentFilter(VuzixSpeechClient.ACTION_VOICE_COMMAND));
}
@Override
protected void onPause() {
// Remove the dynamic registration to the Vuzix Speech SDK
unregisterReceiver(this);
super.onPause();
}
In this example, each activity creates its vocabulary once. Each time an activity is paused, it unregisters and stops receiving speech commands. Each time an activity is resumed, it re-registers and resumes receiving speech commands.
This allows multiple activities to co-exist in a single application and each get speech commands only when they are active.
The only downside to this mechanism is the activity will not get the command engine state changes while it is not active. For example, it will not know if the speech command engine has become disabled or has timed out. For many applications this does not impact the behavior, and this mechanism is the most appropriate.
Multiple Activities - Unique Intents
Another mechanism exists to control the routing of speech commands. It uses slightly more code than dynamically registering and un-registering for the intents, but it allows all activities to receive the advanced command engine state data without receiving incorrect speech commands. This should be used when it is important to maintain the state of the speech command engine in your activities.
The speech engine allows a developer to specify a custom intent action, instead of relying on the default VuzixSpeechClient.ACTION_VOICE_COMMAND action.
The custom intents do not have any extras, so you must have one custom intent per phrase. Each activity will create a unique intent for each phrase, such as:
Java
public final String CUSTOM_BARCODE_INTENT = "com.vuzix.sample.MainActivity.BarcodeIntent";
public final String CUSTOM_SETTINGS_INTENT = "com.vuzix.sample.MainActivity.SettingsIntent";
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
try {
// Create a VuzixSpeechClient from the SDK
VuzixSpeechClient sc = new VuzixSpeechClient(this);
// Associate the phrase "Scan Barcode" with generating the CUSTOM_BARCODE_INTENT intent
final String barcodeIntentId = "ScanBarcodeId";
registerReceiver(this, new IntentFilter(CUSTOM_BARCODE_INTENT));
sc.defineIntent(barcodeIntentId, new Intent(CUSTOM_BARCODE_INTENT) );
sc.insertIntentPhrase("Scan Barcode", barcodeIntentId);
// Associate the phrase "Show Settings" with generating the CUSTOM_SETTINGS_INTENT intent
final String showSettingsId = "ShowSettingId";
registerReceiver(this, new IntentFilter(CUSTOM_SETTINGS_INTENT));
sc.defineIntent(showSettingsId, new Intent(CUSTOM_SETTINGS_INTENT) );
sc.insertIntentPhrase("Show Settings", showSettingsId);
} catch (NoClassDefFoundError e) {
Toast.makeText(this, "Cannot find Vuzix Speech SDK", Toast.LENGTH_LONG).show();
} catch (Exception e) {
Log.e(TAG, "Error setting custom vocabulary: " + e.getMessage());
}
}
@Override
public void onReceive(Context context, Intent intent) {
// Now we have a unique action for each phrase
if (intent.getAction().equals(CUSTOM_BARCODE_INTENT)) {
// todo: scan barcode
} else if (intent.getAction().equals(CUSTOM_SETTINGS_INTENT)) {
// todo: open settings menu
}
}
@Override
protected void onDestroy() {
unregisterReceiver(this);
super.onDestroy();
}
In the above example, we have a single intent action for each vocabulary word. These each include "MainActivity" in this example. To continue this pattern, you would create other activities with unique names such as "SettingsActivity". This makes the processing very deterministic. Each activity only registers for its own unique intents.
As the user switches between activities, the vocabulary is changed in the engine, and even if both activities recognize the same phrase, the unique intent will be generated based on the currently active activity.
You can create common code to insert common phrases; just be sure to append an Activity name to each so they are not confused. For example:
Java
public final String PACKAGE_PREFIX = "com.vuzix.sample.";
public final String CUSTOM_BARCODE_INTENT = ".BarcodeIntent";
public String GetBarcodeIntentActionName(String ActivityName) {
return (PACKAGE_PREFIX + ActivityName + CUSTOM_BARCODE_INTENT);
}
protected void InsertCustomVocab(String ActivityName) {
try {
// Create a VuzixSpeechClient from the SDK
VuzixSpeechClient sc = new VuzixSpeechClient(this);
// Associate the phrase "Scan Barcode" with generating the unique intent for the calling activity
final String barcodeIntentId = "ScanBarcodeId" + ActivityName;
sc.defineIntent(barcodeIntentId, new Intent(GetBarcodeIntentActionName(ActivityName)));
sc.insertIntentPhrase("Scan Barcode", barcodeIntentId);
} catch (NoClassDefFoundError e) {
Toast.makeText(this, "Cannot find Vuzix Speech SDK", Toast.LENGTH_LONG).show();
} catch (Exception e) {
Log.e(TAG, "Error setting custom vocabulary: " + e.getMessage());
}
}
With combinations of these techniques you can define custom vocabularies in the various onCreate() methods of your activities. These vocabularies generate unique intent actions for each phrase within each Activity. The system will ensure the correct phrases are delivered to the correct activities.
Listing all Vocabulary Names
Beginning with SDK v1.91 you can now list all vocabulary names. The list will be returned as a List<String>.
Java
List<String> storedVocabularyNames = sc.getStoredVocabularyNames();
Summary
By using the techniques described here, you can create an application that dynamically modifies the vocabulary within activities. You can also create multiple activities that use the same or different vocabularies, and each activity will get the correct speech commands.
Advanced Controls
The Vuzix Speech Command engine has advanced controls described here. These have been expanded since the initial SDK was released.
Enabling and Disabling Speech Recognition
The Vuzix Speech SDK will listen for the trigger phrases "Hello Vuzix" or "Hello Blade" whenever Vuzix Speech Command engine is enabled in the Settings menu (unless explicitly removed by an application).
When the Speech Command engine is enabled, the microphone icon becomes outlined. When the Speech Command engine is disabled, the microphone icon in the notification bar is not present.
The Speech Command engine has global commands, such as "go home" and "start recording" that are processed in any application. The Speech Command engine also supports custom vocabulary that is processed by each individual application.
It is possible for an application to rely on custom voice commands to perform essential tasks. In this scenario, it would be an unwanted burden to require the user to navigate to the system Settings menu. Instead the Speech Command engine may be programmatically enabled from within an application.
Java
import com.vuzix.sdk.speechrecognitionservice.VuzixSpeechClient;
try {
VuzixSpeechClient.EnableRecognizer(getApplicationContext(), true);
} catch(NoClassDefFoundError e) {
// This device does not implement the Vuzix Speech SDK
// TODO: Implement error recovery
}
This method is static. Passing the optional context parameter allows the proper user permissions to be applied, and is recommended for robustness.
The command engine may be similarly disabled via code during times when false detection would impair the application behavior.
Java
VuzixSpeechClient.EnableRecognizer(getApplicationContext(), false);
Once the Speech Command engine is disabled, the notification bar icon is grayed-out, and the phrase "Hello Vuzix" will no longer trigger speech recognition.
It is safe to set the Speech Command engine to its existing state, so there is no need to query the state before enabling or disabling it. Simply specify the desired state. However, if you want to display the current enabled/disabled state, you can query it using isRecognizerEnabled(). This value is not changed by the system while your application is active, so the appropriate place for this query is your activity's onResume().
Java
boolean mSpeechEnabled;
@Override
protected void onResume() {
super.onResume();
mSpeechEnabled = VuzixSpeechClient.isRecognizerEnabled(this);
// todo: update status to user showing state of mSpeechEnabled
}
Triggering the Speech Command Engine
When the Speech Command engine is enabled, the engine remains in a low-power mode listening only for the trigger phrase, "Hello Vuzix". Once this is heard, the engine wakes and becomes active. This state is indicated by the microphone icon in the notification bar becoming fully filled. While active, all audio data is scanned for known phrases.
It is possible for an application to programmatically trigger the recognizer to wake and become active, rather than relying on the "Hello Vuzix" trigger phrase. This can be tied to a button press or a fragment opening.
Java
import com.vuzix.sdk.speechrecognitionservice.VuzixSpeechClient;
try {
VuzixSpeechClient.TriggerVoiceAudio(getApplicationContext(), true);
} catch(NoClassDefFoundError e) {
// This device does not implement the Vuzix Speech SDK
// TODO: Implement error recovery
}
The Speech Command engine has a timeout that can be modified in the system Settings menu. The active engine will return to idle mode after that duration has elapsed since the most recent phrase was recognized. This state is again indicated by the microphone icon in the notification bar returning to the unfilled outline icon, and the engine will only respond to the trigger phrase "Hello Vuzix."
Some workflows are best suited to returning the active engine to idle at a specific time, such as when recording of a voice memo begins. This prevents phrases such as "go back" and "go home" from being recognized and acted upon.
The Speech Command engine may be programmatically un-triggered to idle state with the same method.
Java
VuzixSpeechClient.TriggerVoiceAudio(getApplicationContext(), false);
Trigger State Notification
Since the Speech Command engine may be triggered externally and may time out internally, applications that wish to control this behavior will likely need to know the state of the engine.
The same Speech Command Intent that broadcasts phrases also broadcasts state change updates. Simply check for the presence of the extra boolean RECOGNIZER_ACTIVE_BOOL_EXTRA.
Java
boolean mSpeechTriggered;
@Override
public void onReceive(Context context, Intent intent) {
if (intent.getAction().equals(VuzixSpeechClient.ACTION_VOICE_COMMAND)) {
Bundle extras = intent.getExtras();
if (extras != null) {
// We will determine what type of message this is based upon the extras provided
if (extras.containsKey(VuzixSpeechClient.RECOGNIZER_ACTIVE_BOOL_EXTRA)) {
// if we get a recognizer active bool extra, it means the recognizer was
// activated or stopped
mSpeechTriggered = extras.getBoolean(VuzixSpeechClient.RECOGNIZER_ACTIVE_BOOL_EXTRA, false);
// TODO: Implement behavior based upon the engine being changed to active or idle
}
}
}
}
Since the state may also change while your application is not running, if you display the state using these notifications you should also query the current state in your onResume().
Java
boolean mSpeechTriggered;
@Override
protected void onResume() {
super.onResume();
mSpeechTriggered = VuzixSpeechClient.isRecognizerTriggered(this);
// TODO: Implement behavior based upon the recognizer being changed to active or idle
}
Startup Timing Concerns
It is possible for applications that automatically launch with the operating system to be initialized before the speech engine has come online. This is true for launcher applications, among others. Any speech queries or commands issued at startup will fail, and must be retried after the speech engine comes online. In such applications, you should surround initialization logic with a call such as:
Java
if( VuzixSpeechClient.isRecognizerInitialized(this) ) {
// TODO: Perform your speech customizations here
}
Even if the initialization code cannot be run at startup, you should still register the broadcast receiver for the trigger state, as described in the preceding section. When the engine becomes initialized, it sends an initial trigger state. Receipt of this trigger state can cause your application to retry the speech initialization. This allows you to create an application that starts before the speech engine and can interact with the speech engine as soon as it becomes available, without any unnecessary polling.
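The retry pattern above can be sketched without any Android dependencies. In the sketch below, the EngineProbe interface is a stand-in for VuzixSpeechClient.isRecognizerInitialized(); the class and method names are illustrative only, not part of the SDK:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the startup retry pattern; names are illustrative, not part of the SDK.
public class SpeechInitRetry {
    // Stand-in for VuzixSpeechClient.isRecognizerInitialized(context)
    public interface EngineProbe {
        boolean isRecognizerInitialized();
    }

    private final AtomicBoolean initialized = new AtomicBoolean(false);

    // Called once at startup, and again from the trigger-state BroadcastReceiver.
    // Returns true once the speech customizations have been applied.
    public boolean tryInitialize(EngineProbe engine) {
        if (initialized.get()) {
            return true; // already customized; nothing to do
        }
        if (!engine.isRecognizerInitialized()) {
            return false; // engine not up yet; wait for its first state broadcast
        }
        // ...perform vocabulary customizations here...
        initialized.set(true);
        return true;
    }
}
```

A launcher application would call tryInitialize() from onCreate() and again from the trigger-state receiver's onReceive(), so initialization happens exactly once, whichever event fires first.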
Canceling Repeating Characters
Certain commands, such as "scroll up" and "scroll down", initiate repeating key presses. This allows the user interface to continue to scroll in the selected direction. The repeating key presses stop when the engine detects any other phrase, such as "select this". The default phrase "stop" is recognized by the speech engine and has no behavior other than terminating the scrolling.
You may wish to stop repeating key presses programmatically without requiring the user to say another phrase. This is useful when reaching the first or last item in a list. To do this, simply call StopRepeatingKeys().
Java
try {
sc.StopRepeatingKeys();
} catch(NoClassDefFoundError e) {
// The ability to stop repeating keys was added in Speech SDK v1.6 which
// was released on M400 v1.1.4. Earlier versions will not support this.
}
Get the Maximum Recognizer Timeout Time
Beginning with SDK v1.91 you now have access to the maximum recognizer timeout time.
Java
int recognizedMaxTimeoutTime = sc.getRecognizerTimeoutMax();
Getting and Setting the Recognizer Timeout Config
Beginning with SDK v1.91 you can now retrieve and set the recognizer timeout config.
Java
int recognizerTimeoutConfig = sc.getRecognizerTimeoutConfig();
...
sc.setRecognizerTimeoutConfig(30); // in seconds
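Since setRecognizerTimeoutConfig() takes a value in seconds and the maximum is device-reported, a defensive caller might clamp the requested value before applying it. The clamping policy below is an assumption for illustration, not documented SDK behavior:

```java
// Sketch: clamp a requested recognizer timeout (in seconds) to a device-reported
// maximum. The [0, max] policy is an assumption, not documented SDK behavior.
public class TimeoutClamp {
    public static int clamp(int requestedSeconds, int maxSeconds) {
        return Math.max(0, Math.min(requestedSeconds, maxSeconds));
    }
}
```

A caller could then pass clamp(30, sc.getRecognizerTimeoutMax()) to sc.setRecognizerTimeoutConfig() to stay within the valid range.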
Sample Project
A sample application for Android Studio demonstrating the Vuzix Speech SDK is available to download here.
Developing on the Vuzix Blade: Recommended Configuration
Resolution and Presentation: Modify the Resolution and Presentation section of Settings for Android as shown below:
NOTE: For a full screen experience, select the Portrait option above
Other Settings: Modify the Other Settings section of Settings for Android as shown below:
And that's it! You are now ready to develop your first application on the Blade.
Please visit our Blade Code Samples page for examples on how to interact with the device.
As a developer, you can communicate between apps running on the Blade and apps running on a phone. This can be accomplished in a variety of ways. You can choose to manage your own communication protocols using wireless technologies such as Wi-Fi, Wi-Fi Direct, Bluetooth and Bluetooth Low Energy. You can also leverage the Vuzix Connectivity framework to communicate between apps using the same secure communication protocol used between Blade and the Vuzix Blade Companion App.
Managing Your Own Communication
Writing apps for the Vuzix Blade is very similar to writing apps for any other Android device. If your Blade app needs to communicate with another device, or an app running on a phone, you can use familiar technologies such as Wi-Fi or Bluetooth to make that happen. The advantage of using these technologies is that they are standard, well-known technologies that can handle all sorts of use cases and requirements. The disadvantage is that they often require a lot of coding to get set up and working in your app. This guide will not go into details about how to write a custom protocol for app communication. Instead, we will focus on getting up and running quickly using the Vuzix Connectivity framework.
Vuzix Connectivity Framework
The Vuzix Connectivity framework enables simple communication between an app running on a Blade and an app running on a phone. The framework leverages the Android broadcast system and extends it to enable sending broadcasts between devices.
https://developer.android.com/guide/components/broadcasts
This approach gets you up and running very quickly without having to worry about how to locate devices, without worrying about how to secure your data, and without designing a custom communication protocol.
Using the connectivity framework in your app will depend on the type of app you are developing. Blade apps and Android apps designed for a phone will use the Vuzix Connectivity SDK for Android. iPhone apps will use the VuzixConnectivity framework for iOS (coming soon). Both require a phone running the Vuzix Blade Companion App.
The Vuzix Connectivity SDK for Android gives developers access to the Vuzix Connectivity framework allowing simple messaging between apps running on the Blade and apps running on a phone. The same Connectivity SDK library is used regardless of which side of the communication you are on; Blade or phone.
Setup and Installation
To use the SDK, you only need to make small additions to your build.gradle file(s). First, declare a new maven repository in a repositories element that points to JitPack.
You can use the repositories element under allprojects in your project’s root build.gradle file:
Groovy
allprojects {
repositories {
google()
jcenter()
maven {
url "https://jitpack.io"
}
}
}
Alternatively you can use a repositories element under the android element in your app’s build.gradle file:
Groovy
apply plugin: 'com.android.application'
android {
...
repositories {
maven {
url "https://jitpack.io"
}
}
}
Finally, add the Connectivity SDK as a dependency in your app’s build.gradle file:
Groovy
dependencies {
implementation 'com.vuzix:connectivity-sdk:1.1'
}
That’s it! You are now ready to start sending messages using the Connectivity framework.
The Connectivity Class
The Connectivity class is your main entry point into the Connectivity SDK. The first thing you need to do is use the static get() method to obtain the singleton instance of the Connectivity class. You’ll need to pass in a non-null Context object:
Java
Connectivity c = Connectivity.get(myContext);
Once you have a reference to the Connectivity object, you can call various methods to learn more about the current connectivity status, send broadcasts, and perform other functions related to connectivity. Consult the Connectivity SDK javadocs for complete information on the available methods. Here are some key methods to be aware of:
isAvailable()
isAvailable() can be used to test if the Vuzix Connectivity framework is available on the device. For Blade, this method should always return true. On phones, this method will return true if the Vuzix Blade Companion App is installed and false otherwise. If this method returns false, no other Connectivity methods should be called.
getDevices(), getDevice(), isLinked()
getDevices() returns you all the remote devices you are currently linked to. Currently Blade and the Companion App only support one linked device in either direction. However, the Connectivity framework supports multiple linked devices as a future enhancement. getDevice() is a convenience method that will return you a single linked device if one exists or null if you’re not linked to anything. isLinked() is a convenience method that returns true if you are currently linked to a remote device and false otherwise.
isConnected()
isConnected(device) is used to determine if you are connected to a specific remote device. The no arg version of isConnected() will return true if any remote device is currently connected.
addDeviceListener(listener), removeDeviceListener(listener)
You can add DeviceListeners to be notified when remote devices are added, removed, connected and disconnected. If you only care about one or two particular events, the DeviceListenerAdapter class has no-op implementations of all the DeviceListener methods. Always remember to properly remove your DeviceListeners when done with them.
Sending a Broadcast to a Remote Device
The Connectivity framework supports both regular broadcasts and ordered broadcasts. Broadcasts should be limited to 10 kilobytes or less if possible. Larger broadcasts are supported, but you will always be subject to the Binder transaction buffer limit of 1MB which is shared across your app’s process. In addition, larger broadcasts will take longer to transfer between devices.
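The size guidance above can be enforced with a simple pre-flight check on the serialized payload. The thresholds below just restate the figures quoted above; measuring the true Parcel size would require Android APIs, so this sketch operates on a raw byte array:

```java
// Sketch: pre-flight size check for broadcast payloads, using the figures quoted above.
public class PayloadSizeCheck {
    public static final int RECOMMENDED_LIMIT = 10 * 1024;      // stay at or below 10 KB
    public static final int BINDER_BUFFER_LIMIT = 1024 * 1024;  // 1 MB Binder ceiling

    // True when the payload honors the 10 KB recommendation.
    public static boolean isRecommendedSize(byte[] payload) {
        return payload.length <= RECOMMENDED_LIMIT;
    }

    // True when the payload can fit in the Binder transaction buffer at all.
    // The buffer is shared across the process, so real headroom is smaller.
    public static boolean fitsInBinderBuffer(byte[] payload) {
        return payload.length < BINDER_BUFFER_LIMIT;
    }
}
```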
The first step to sending a broadcast is creating an Intent with an action:
Java
Intent myRemoteBroadcast = new Intent("com.example.myapp.MY_ACTION");
It is strongly recommended, but not required, that you specify the remote app package that will receive the intent. This ensures that the broadcast is only delivered to a specific app. Use the setPackage() method on the Intent class:
Java
myRemoteBroadcast.setPackage("com.example.myapp");
If you need to set the category, data, type or modify any flags on the intent, you can do so. You can also fill up the intent with extras. For example:
Java
myRemoteBroadcast.putExtra("my_string_extra", "hello");
myRemoteBroadcast.putExtra("my_int_extra", 2);
The following intent extra types are supported:
- boolean, boolean[]
- Bundle, Bundle[]
- byte, byte[]
- char, char[]
- double, double[]
- float, float[]
- int, int[]
- Intent, Intent[]
- long, long[]
- short, short[]
- String, String[]
If you specify an intent extra of any other type, it will be ignored and will not be broadcast to the remote device.
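Because unsupported extra types are silently dropped, it can be useful to validate values before attaching them. The helper below mirrors the table above for the scalar and array types; Bundle and Intent are omitted so the sketch stays free of Android dependencies. It is a hypothetical utility, not part of the Connectivity SDK:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical helper: checks whether a value's type appears in the
// supported-extras table above (Bundle and Intent omitted for portability).
public class ExtraTypeCheck {
    private static final Set<Class<?>> SUPPORTED = new HashSet<>(Arrays.asList(
            Boolean.class, Byte.class, Character.class, Double.class, Float.class,
            Integer.class, Long.class, Short.class, String.class));

    public static boolean isSupportedExtra(Object value) {
        if (value == null) {
            return false;
        }
        Class<?> c = value.getClass();
        if (c.isArray()) {
            // Arrays of primitives (int[], byte[], ...) and String[] survive the broadcast
            Class<?> component = c.getComponentType();
            return component.isPrimitive() || component == String.class;
        }
        return SUPPORTED.contains(c);
    }
}
```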
Once your intent is populated with data, you’re ready to broadcast it to the remote device. To send as a regular broadcast, simply use the sendBroadcast() method on Connectivity:
Java
Connectivity.get(myContext).sendBroadcast(myRemoteBroadcast);
To send as an ordered broadcast, use the sendOrderedBroadcast() method. For ordered broadcasts, you need to specify the remote device:
Java
Connectivity c = Connectivity.get(myContext);
Device device = c.getDevice();
c.sendOrderedBroadcast(device, myRemoteBroadcast, new BroadcastReceiver() {
@Override
public void onReceive(Context context, Intent intent) {
if (getResultCode() == RESULT_OK) {
// do something with results
}
}
});
Notice you can specify a BroadcastReceiver to get results back from the remote device.
Receiving a Remote Broadcast
Receiving a remote broadcast is as easy as receiving a local broadcast. You simply register a BroadcastReceiver in your app.
Java
@Override
protected void onStart() {
super.onStart();
registerReceiver(receiver, new IntentFilter("com.example.myapp.MY_ACTION"));
}
@Override
protected void onStop() {
super.onStop();
unregisterReceiver(receiver);
}
private BroadcastReceiver receiver = new BroadcastReceiver() {
@Override
public void onReceive(Context context, Intent intent) {
// do something with intent
}
};
As with all BroadcastReceivers, don’t forget to unregister your receiver.
You can also declare a BroadcastReceiver in your manifest:
XML
<receiver android:name=".MyReceiver">
<intent-filter>
<action android:name="com.example.myapp.MY_ACTION"/>
</intent-filter>
</receiver>
Returning Data from an Ordered Broadcast
An advantage of ordered broadcasts is that they can return data to the sender. The same is true of ordered broadcasts sent through the Connectivity framework. To return data to the remote sender, simply use any combination of the setResult(), setResultCode(), setResultData() or setResultExtras() methods. You should also check the isOrderedBroadcast() method to make sure you are receiving an ordered broadcast. Setting result data on a non-ordered broadcast will result in an exception being thrown.
Java
private BroadcastReceiver receiver = new BroadcastReceiver() {
@Override
public void onReceive(Context context, Intent intent) {
if (isOrderedBroadcast()) {
setResultCode(RESULT_OK);
setResultData("some result data");
Bundle extras = new Bundle();
extras.putString("key", "value");
setResultExtras(extras);
}
}
};
Note that the result extras Bundle is subject to the same restrictions as broadcast extras (see above).
That’s all you need to do to return data back to the remote sender of the broadcast. The Connectivity framework handles routing the result back to the proper place.
Verifying a Remote Broadcast
Since remote broadcasts are received just like local broadcasts, sometimes you may want to verify a broadcast came from a remote device through the Connectivity framework. You may also want to verify which app on the remote side the broadcast came from. Fortunately, both of these are easy to verify using the Connectivity SDK and the verify() method.
To verify a broadcast came through the Connectivity framework, pass the intent to the verify() method. verify() will return true if the broadcast is legitimate and false otherwise:
Java
private BroadcastReceiver receiver = new BroadcastReceiver() {
@Override
public void onReceive(Context context, Intent intent) {
// verify broadcast came from Connectivity framework
if (Connectivity.get(context).verify(intent)) {
// do something with intent
}
}
};
You can also verify the broadcast came from a specific remote app through the Connectivity framework. Pass both the intent and a package name to the verify() method. Note that patterns are not allowed; you must pass the exact package name you are looking for.
Java
private BroadcastReceiver receiver = new BroadcastReceiver() {
@Override
public void onReceive(Context context, Intent intent) {
// verify broadcast came from the correct remote app
if (Connectivity.get(context).verify(intent, "com.example.myapp")) {
// do something with intent
}
}
};
Adding or Removing Remote Devices
Apps using the Connectivity SDK cannot add or remove remote devices. For remote device management, users should be directed to the Vuzix Blade Companion App to set those devices up ahead of time. Once a remote device is set up, it becomes available for use through the SDK.
Sample Project
A sample application for Android Studio demonstrating the Vuzix Connectivity SDK for Android is available here.
The Vuzix Connectivity framework allows iOS apps to share the BLE connection to communicate with apps running on the Vuzix Blade. There is no need to pair the BLE connection, as the framework uses the same BLE connection established and set up in the Vuzix Companion App. There is also no need to define a protocol model, as we have already done that. You can send simple messages between the Blade and your iPhone app seamlessly. Apps running on the Vuzix Blade will need to utilize the Vuzix Android Connectivity library. Since messages are sent as BLE messages, it is best to send smaller messages, as throughput is limited when using BLE.
Requirements
- Blade OS 2.10 or higher.
- Latest version of the Vuzix Companion app from the iPhone App Store
- App running on Blade must use the Android Connectivity SDK
- Blade and Companion app must be linked.
Setup and Installation
To use the framework within your Xcode project, you can install it through CocoaPods or manually.
CocoaPods
CocoaPods is a dependency manager for Xcode projects. For usage and installation instructions, visit their [website](http://www.cocoapods.org).
To integrate the Vuzix Connectivity framework into your Xcode project using CocoaPods, specify it in your Podfile:
pod 'VuzixConnectivity'
Manually
If you prefer not to use CocoaPods, you can integrate the Vuzix Connectivity framework into your project manually.
Simply download the framework from Vuzix's GitHub page and follow the steps below:
- Drag the Connectivity.framework into your project.
- Go to the General pane of the application target in your project. Add Connectivity.framework to the Embedded Binaries section.
- Import Connectivity in your Swift file and use in your code.
The Connectivity Class
The Connectivity class is your main entry point into the Connectivity framework. The first thing you need to do is use the static shared property to obtain the singleton instance of the Connectivity class.
Swift
let connectivity = Connectivity.shared
Once you have a reference to the Connectivity object, you can call various methods to learn more about the current connectivity status, send broadcasts, and perform other functions related to connectivity. Consult the framework for complete documentation on the available methods.
Here are some key methods to be aware of:
isConnected
connectivity.isConnected returns a boolean. It is used to determine if you are connected to a remote device.
requestConnection()
connectivity.requestConnection is used to request a connection to your Vuzix smart glasses. Make this request to share the bandwidth with the companion app, and listen for a notification of the connectivity state.
Swift
if connectivity.isConnected == false {
connectivity.requestConnection()
}
ConnectivityStateChanged
After requesting a connection to the device, observe the state of the device by adding an observer for the Notification.Name.connectivityStateChanged property. There are six possible states: .connected, .searchingForDevice, .bluetoothOff, .notConnected, .connecting, and .notSupported, defined in the ConnectivityState enum. You can display the correct UI for each state while the connection is being made.
Swift
NotificationCenter.default.addObserver(forName: .connectivityStateChanged, object: nil, queue: nil) { [weak self] (notification) in
    if let state = notification.userInfo?["state"] as? ConnectivityState {
        switch state {
        case .connected:
            self?.textView.text = "--Connected to Blade!"
        case .searchingForDevice:
            self?.textView.text = "--Searching for Device"
        case .connecting:
            self?.textView.text = "--Connecting to Device"
        case .bluetoothOff:
            self?.textView.text = "--Bluetooth turned off on Phone"
        case .notConnected:
            self?.textView.text = "--Not Connected. Not searching"
        case .notSupported:
            self?.textView.text = "--Blade OS and/or Companion App needs update"
        }
    }
}
Sending data to the Vuzix device
The Connectivity framework supports an Android data model of using broadcasts and ordered broadcasts. A broadcast is nothing more than a Swift Notification that is sent across the framework to the other end. An orderedBroadcast is a broadcast which responds back with data.
Broadcasts should be limited to 10 kilobytes or less if possible. Larger broadcasts are supported, but you will always be subject to the Binder transaction buffer limit of 1MB which is shared across your app’s process. In addition, larger broadcasts will take longer to transfer between devices.
The first step to sending a broadcast is creating an Intent with an action:
Swift
var intent = Intent(action: "com.vuzix.a3rdparty.action.MESSAGE")
It is strongly recommended, but not required, that you specify the remote app package that will receive the intent. This ensures that the broadcast is only delivered to a specific app. Use the package property on the Intent class to set the target.
Swift
intent.package = "com.vuzix.a3rdparty"
If you need to set the category or modify any flags on the intent, you can do so. You can also fill up the intent with extras. For example:
Swift
intent.addExtra(key: "my_string_extra", value: "hello")
intent.addExtra(key: "my_int_extra", value: 2)
The following intent extra types are supported:
- Bool, [Bool]
- BundleContainer, [BundleContainer]
- Data
- Character, [Character]
- Double, [Double]
- Float, [Float]
- Int, [Int]
- Intent, [Intent]
- Int64, [Int64]
- Int16, [Int16]
- String, [String]
If you specify an intent extra of any other type, it will be ignored and will not be broadcast to the remote device.
Once your intent is populated with data, you’re ready to broadcast it to the remote device. To send as a regular broadcast, simply use the broadcast(intent:) method on Connectivity:
Swift
connectivity.broadcast(intent: intent)
To receive data back from the Blade, send an ordered broadcast. Using the Connectivity shared instance, call the orderedBroadcast method with your intent and any extras you want to send. The extras parameter is optional.
Swift
connectivity.orderedBroadcast(intent: intent, extras: extras) { [weak self] (result) in
if result.code == 1 {
//result is of type BroadcastResult
//result will contain the data from the Blade.
}
}
Notice you specify a closure to receive the results back from the remote device.
Receiving a Remote Broadcast
Receiving a remote broadcast is as easy as registering a listener in your app.
Swift
let connectivity = Connectivity.shared
connectivity.registerReceiver(intentAction: "com.a3rdParty.action.message") { [weak self] intent in
if let message = intent.extras?.getValueForKey(key: "message") as? String {
print(message)
DispatchQueue.main.async {
self?.textView.text = message
}
}
}
Returning Data from an Ordered Broadcast
An advantage of ordered broadcasts is that they can return data to the sender. The same is true of ordered broadcasts sent through the Connectivity framework. To return data to the remote sender, simply set the data on the BroadcastResult in the onReceive listener.
Swift
var itemReceiver = BroadcastReceiver()
itemReceiver.onReceive { (result: BroadcastResult) in
result.data = <your data you want to return to the other side>
result.code = -1
}
connectivity.registerReceiver(intent: "com.vuzix.a3rdparty.action.ITEM2", receiver: itemReceiver)
The Vuzix M300XL and M300 are monocular Android-based wearable computers. They are designed to allow the wearer to quickly and easily leverage specialized Android-based applications that provide access to critical information relevant to their workflows, allowing them to be more efficient with the task at hand while simultaneously reducing errors in the process.
The M300XL and M300 contain many of the same capabilities as a standard Android smartphone, in an unobtrusive head-mounted form factor. See below for a full list of hardware and interaction capabilities of the device:
- nHD resolution full-color display
- Dual-core Intel Atom processor
- 2GB system RAM
- 64GB internal flash storage
- Camera capable of up to 10MP stills and 1080p video with auto-focus and image stabilization
- Orientation sensors (gyroscope, accelerometer, magnetometer)
- Inner/outer proximity sensors
- Outward facing Ambient Light Sensor
- User-facing speaker
- Dual noise-cancelling microphones
- 4 standard Android control buttons
- 2-axis touchpad
- Voice control
The primary difference between the M300XL and M300 is the battery connection. The M300XL uses a standard USB Micro-B connector to the battery. The M300 uses a proprietary battery connection which may be replaced with an adapter to connect USB Micro-B.
The M300XL also includes an updated camera that improves auto-focus time and supports slightly higher video frame rates. These rates are documented in the Camera Knowledge Base.
From a developer point-of-view, the M300XL and M300 may be treated identically.
The Android OS running on the M300XL and M300 is a modified version of Android 6.0.1, tailored to the components and capabilities of the device.
For the most part, development of applications intended to be used on the M300XL and M300 can be accomplished by following standard Android development methodologies and by leveraging existing Android APIs. The APIs listed below are some of the prominent features of Android for which default APIs should be leveraged:
- Camera – android.hardware.Camera or android.hardware.Camera2 may be used
- Sensors – use SensorManager
- Bluetooth – use BluetoothManager and BluetoothAdapter. Standard Bluetooth and BLE are supported
- Database – standard Android SQLite supported
- Google Cloud Messaging – use Google Play Services Client Library 9.8.0 or earlier
- Maps – use Google Play Services Client Library 9.8.0 or earlier
- Speech Recognition – use Vuzix Speech SDK
- Barcode Engine - use Vuzix Barcode SDK
There are some components of the M300XL and M300 which require device-specific APIs to access. These APIs are covered in detail in other sections of the SDK documentation.
The M300XL and M300 feature interaction methods which differ significantly from traditional touchscreen Android devices, and it is particularly important to keep these considerations in mind when designing the User Interface of an application intended to run on this device.
Existing applications which heavily leverage touchscreen interactions do not translate well to this device. This is due to touchscreen UI’s leveraging taps for input based on specific screen coordinates, which is not possible with the available interaction methods.
Voice
Voice commands are the ideal method of interacting with the device under many circumstances, as they allow users to quickly control the device and provide input without physically interacting with the device and interrupting their workflow.
The device includes a Speech Recognition engine. Refer to the Speech SDK section for additional details on the engine and the speech vocabulary it supports.
Applications can leverage alternate recognition engines by including them within the application itself.
Navigation Buttons
The three navigation buttons on the device include both short and long-press functionality.
The buttons generate KeyEvents which can be intercepted and handled explicitly in your application, or can be left to the system to handle. Reference Android KeyEvent documentation for details.
Short presses on the buttons will perform the following functions:
- Foremost Button – Move focus to the right within a UI or move down if no focusable objects are available to the right. Returns the KEYCODE_DPAD_RIGHT KeyEvent.
- Middle Button – Move focus to the left within a UI or move up if no focusable objects are available to the left. Returns the KEYCODE_DPAD_LEFT KeyEvent.
- Rearmost Button – Will select the current UI element which has focus. Returns the KEYCODE_DPAD_CENTER KeyEvent.
Long presses on the buttons will perform the following functions:
- Foremost Button – Brings up a context menu for the current area of the UI, allowing users to access additional functions without crowding the UI. (KEYCODE_MENU)
- Middle Button – Returns to the Home screen. Returns KEYCODE_HOME.
- Rearmost Button – Moves back one step in the UI. Returns KEYCODE_BACK.
Touchpad
The device features a two-axis touchpad with double-tap gestures for select actions. The swipe gestures can be leveraged for left/right navigation with horizontal swipes, as well as up/down navigation with vertical swipes. Double-tap gestures are leveraged for select actions in order to avoid unintentional input when interacting with the device.
The touchpad is implemented as a trackball device, and methods such as dispatchTrackballEvent() and onTrackballEvent() can be used to capture and process the raw touchpad events.
As a fallback, if you do not handle the trackball events in your application, the touchpad will generate KEYCODE_DPAD events which can be captured with standard Android methods. Reference Android KeyEvent documentation for details.
The following key events are returned by the touchpad:
- Swipe back to front: KEYCODE_DPAD_RIGHT
- Swipe front to back: KEYCODE_DPAD_LEFT
- Swipe bottom to top: KEYCODE_DPAD_UP
- Swipe top to bottom: KEYCODE_DPAD_DOWN
- Double-tap: KEYCODE_DPAD_CENTER
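If you want the raw touchpad motion rather than the fallback key codes, you can override onTrackballEvent() as sketched below; the horizontal/vertical split shown is an illustrative assumption, not a documented threshold:

```java
// Sketch: consuming raw touchpad (trackball) motion in an Activity.
@Override
public boolean onTrackballEvent(MotionEvent event) {
    float dx = event.getX(); // relative horizontal motion
    float dy = event.getY(); // relative vertical motion
    if (Math.abs(dx) > Math.abs(dy)) {
        // treat as a horizontal swipe
    } else {
        // treat as a vertical swipe
    }
    return true; // handling the event suppresses the fallback KEYCODE_DPAD events
}
```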
Proximity Sensors
The device implements two non-contact proximity sensors, accessed through the Android Sensor API as Sensor.TYPE_PROXIMITY sensors:
- One inward-facing TMD-26723 head sensor, which wakes the device on state change
- One outward-facing APDS-9960 hand proximity sensor, which does not wake the device
Each of these sensors is a Sensor.REPORTING_MODE_ON_CHANGE device; it generates an event only upon state change and not periodically.
The following code excerpt demonstrates use of the hand proximity sensor:
Java
// One of: Sensor.TYPE_ACCELEROMETER, Sensor.TYPE_AMBIENT_TEMPERATURE,
// Sensor.TYPE_GAME_ROTATION_VECTOR, Sensor.TYPE_GEOMAGNETIC_ROTATION_VECTOR,
// Sensor.TYPE_GRAVITY, Sensor.TYPE_GYROSCOPE, Sensor.TYPE_GYROSCOPE_UNCALIBRATED,
// Sensor.TYPE_HEART_RATE, Sensor.TYPE_LIGHT, Sensor.TYPE_LINEAR_ACCELERATION,
// Sensor.TYPE_MAGNETIC_FIELD, Sensor.TYPE_MAGNETIC_FIELD_UNCALIBRATED,
// Sensor.TYPE_PRESSURE, Sensor.TYPE_PROXIMITY, Sensor.TYPE_RELATIVE_HUMIDITY,
// Sensor.TYPE_ROTATION_VECTOR, Sensor.TYPE_SIGNIFICANT_MOTION,
// Sensor.TYPE_STEP_COUNTER, Sensor.TYPE_STEP_DETECTOR
private final int sensor_type = Sensor.TYPE_PROXIMITY;
// Note: For Vuzix M300XL/M300, wakeUp=true accesses the inward-facing tmd26723 head
// sensor and wakeUp=false accesses the outward-facing apds9960 hand proximity sensor.
private final boolean wakeUp = false;
...
SensorManager sm = ((SensorManager)getSystemService(SENSOR_SERVICE));
SensorEventListener listener = new mySensorListener();
Sensor sens = sm.getDefaultSensor(sensor_type, wakeUp);
sm.registerListener(listener, sens, 0);
...
private class mySensorListener implements SensorEventListener {
    public void onAccuracyChanged(Sensor sens, int accuracy) {
        ...
    }
    public void onSensorChanged(SensorEvent ev) {
        ...
    }
}
Both head and hand proximity sensors return, through SensorEventListener.onSensorChanged(), a float value in SensorEvent.values[0]: 0.0 for proximity detected, > 1.0 for proximity not detected.
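As a sketch (using the value convention above, where 0.0 means proximity detected and larger values mean no proximity), onSensorChanged() might distinguish the two states like this:

```java
// Sketch: interpreting the proximity value per the convention above
// (0.0 = proximity detected, larger values = nothing in range).
@Override
public void onSensorChanged(SensorEvent ev) {
    boolean proximityDetected = (ev.values[0] == 0.0f);
    if (proximityDetected) {
        // e.g. a hand is near the outward-facing sensor
    } else {
        // nothing in range
    }
}
```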
The M300XL and M300 can run almost any Android application compiled for a minimum SDK version of 23. Because of this, deploying applications to the M300 is often as simple as updating your existing application to support button navigation.
You may use any Android development environment, including Android Studio (the most common choice), Xamarin, Eclipse, IntelliJ IDEA, and many more.
To get started, simply create a new Android project with a minimum SDK version of 23.
Screen Orientation
The device may be worn on the left or right eye, and will always be in landscape or reverseLandscape orientation.
- The proper orientation to specify for your Activity in the manifest is:
XML
android:screenOrientation="sensorLandscape"
Navigation
The user will navigate your UI with the three physical buttons or with touchpad swipes, which are automatically translated to KeyEvent key codes as described in the Interaction Methods article.
- Allow UI elements to be navigated with simple left/right/up/down navigation, and give users clear visual indicators which UI element currently has focus.
- Consider adding customized verbal navigation using the Vuzix Speech SDK.
- Explicitly control the focus order. More details can be found in the separate Navigation and Focus article.
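For example, focus order can be set explicitly in code; the view IDs below are hypothetical, and the same attributes exist in layout XML as android:nextFocusRight and related attributes:

```java
// Sketch: explicitly chaining focus between two buttons (hypothetical IDs).
Button first = findViewById(R.id.first_button);
Button second = findViewById(R.id.second_button);
first.setNextFocusRightId(R.id.second_button); // DPAD_RIGHT moves here
second.setNextFocusLeftId(R.id.first_button);  // DPAD_LEFT moves back
first.requestFocus(); // give the user a known starting point
```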
Usability Best Practices
The most important aspect of a well-designed user interface for an application intended to be used on the M300XL and M300 is simplicity. This is largely driven by the limited amount of space on the display.
- Avoid complex menu systems; instead, use a linear progression through interface components to minimize the time a user must spend navigating the application UI.
- Limit the information displayed at any given moment to what is contextually relevant, e.g. a single, specific instruction guiding the user through an individual step in a procedure.
- Avoid displaying complex diagrams or schematics; instead, display only the components immediately relevant to the task at hand.
- Minimize how often the user must physically interact with the device by leveraging alternative methods of advancing through the application interface. Voice commands are the best option, but you can also advance the user to the next screen automatically based on an interaction such as scanning a barcode, taking a picture, or some other verification that the step has been completed.
Some developers may prefer to remove appcompat from the project, which requires they change the theme.
- If you remove the dependency on the appcompat library, you’ll need to change styles.xml to base your AppTheme on another parent theme: change Theme.AppCompat.Light.DarkActionBar to either @android:style/Theme.Material.Light.DarkActionBar or @android:style/Theme.Holo.Light.DarkActionBar.
The Holo theme can be preferable to Material on the device because button focus states are more apparent with Holo. If you do use Material, we recommend defining a custom button style that clearly changes the look of buttons when they have focus.
- If you changed your app theme to Material, in styles.xml you need to prepend the android namespace in front of the color attributes:
XML
<item name="android:colorPrimary">@color/colorPrimary</item>
<item name="android:colorPrimaryDark">@color/colorPrimaryDark</item>
<item name="android:colorAccent">@color/colorAccent</item>
- If you changed your app theme to Holo, in styles.xml you should remove the three <item> sub-elements under <style> that refer to colors, as these items are not used. In addition, you can delete colors.xml if you choose.
- Under the mipmap resource directories, you can delete ic_launcher_round.png, as round icons are not used on the device. If you delete the round icon, you should also remove the roundIcon attribute that was generated on the <application> tag in AndroidManifest.xml.
The Vuzix M300XL and M300 include a barcode scanning engine with a license for the most common symbologies.
- QR
- QR Micro
- EAN (8 & 13)
- UPC (A & E)
- Data Matrix
- Code 128
Developers may leverage this barcode engine in their own applications via the Vuzix Barcode SDK. This SDK provides multiple options for integrating barcode scanning into your applications.
The easiest integration uses an intent to launch the Vuzix scanner user interface. A more advanced integration allows your application to embed the scanner directly into its own user interface and to process offline image captures.
Additional symbologies may be scanned using this SDK by providing a paid license key.
It is recommended that developers obtain the Barcode SDK via Maven. Simply add the following to your project build.gradle to define the Vuzix repository:
Groovy
allprojects {
    repositories {
        google()
        jcenter()
        // The barcode SDK is currently hosted by jitpack
        maven {
            url "https://jitpack.io"
        }
    }
}
Then add a dependency on the Barcode SDK library in your application build.gradle:
Groovy
dependencies {
    implementation fileTree(include: ['*.jar'], dir: 'libs')
    implementation 'com.vuzix:sdk-barcode:1.6'
}
Proguard Rules
If you are using Proguard, you will need to prevent it from obfuscating the Vuzix Barcode SDK. Failure to do so will result in calls to the SDK raising a RuntimeException with the message "Stub!". Add the following -keep statement to the Proguard rules file, typically named proguard-rules.pro.
Proguard
-keep class com.vuzix.sdk.barcode.** {
    *;
}
The R8 optimizer may omit arguments required by the SDK methods, resulting in a NullPointerException ("throw with null exception") being raised. The current workaround is to disable R8 and use Proguard for shrinking and obfuscation. Add the following to your gradle.properties to switch from R8 to Proguard.
Properties
android.enableR8=false
The simplest way to integrate barcode scanning into your app is to invoke the Barcode Scanner via an intent.
Java
import com.vuzix.sdk.barcode.ScannerIntent;
import com.vuzix.sdk.barcode.ScanResult2;
private static final int REQUEST_CODE_SCAN = 0;
Intent scannerIntent = new Intent(ScannerIntent.ACTION);
startActivityForResult(scannerIntent, REQUEST_CODE_SCAN);
The scan result will be returned to you in onActivityResult():
Java
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    switch (requestCode) {
        case REQUEST_CODE_SCAN:
            if (resultCode == Activity.RESULT_OK) {
                ScanResult2 scanResult = data.getParcelableExtra(ScannerIntent.RESULT_EXTRA_SCAN_RESULT2);
                // do something with scan result
            }
            return;
    }
    super.onActivityResult(requestCode, resultCode, data);
}
For more information on barcode scanning by intent, including intent extras that are available, refer to the JavaDoc for the ScannerIntent class.
A sample Android Studio project demonstrating barcode scanning via intent is available here: Sample App Download
If you need a bit more control over barcode scanning, you can also embed a barcode scanner directly within your app. This gives you the flexibility to customize the barcode scanning UI or to filter discovered barcodes before taking action on them. The Barcode SDK provides a ScannerFragment which can be embedded in your app like any other Android fragment. You can embed it directly in layout XML using the <fragment> tag or use FragmentManager to dynamically add it to your activity. For example:
Java
import com.vuzix.sdk.barcode.ScannerFragment;
import com.vuzix.sdk.barcode.ScanResult2;
ScannerFragment scannerFragment = new ScannerFragment();
Bundle args = new Bundle();
// specify any scanner args here
scannerFragment.setArguments(args);
getFragmentManager().beginTransaction().replace(R.id.fragment_container, scannerFragment).commit();
You can register a listener with ScannerFragment to get callbacks when a successful scan takes place or an error occurs. For example:
Java
scannerFragment.setListener2(new ScannerFragment.Listener2() {
    @Override
    public void onScan2Result(Bitmap bitmap, ScanResult2[] scanResults) {
        // handle barcode scanning results
    }
    @Override
    public void onError() {
        // scanner fragment encountered a fatal error and will no longer provide results
    }
});
For more information on using ScannerFragment, including supported arguments, refer to the JavaDoc for the ScannerFragment class.
A sample Android Studio project demonstrating an embedded barcode scanner is available here: Sample App Download
If you need to customize barcode scanning even further, the Barcode SDK also provides the ability to scan raw image data for barcodes. This is the most advanced use of barcode scanning on the device. The developer will be responsible for both acquiring and pre-processing image data before handing off to the barcode scanner. Before barcodes can be scanned, the scanner needs to be initialized with a context:
Java
import com.vuzix.sdk.barcode.Scanner2;
import com.vuzix.sdk.barcode.ScanResult2;
import com.vuzix.sdk.barcode.Scanner2Factory;
Scanner2 scanner = Scanner2Factory.getScanner(context);
You can then scan your raw image data for barcodes. The byte array data you scan should be grayscale image data. You’ll also need to provide the width and height of the image data. For example:
Java
// acquire image data
byte[] data;
int width;
int height;
Rect rect = null;
// scan image data
ScanResult2[] results = scanner.scan(data, width, height, rect);
if (results.length > 0) {
    // we got results, do something with them
}
For more information on scanning raw image data for barcodes, including the various options available, refer to the JavaDoc for the Scanner2 class.
A sample Android Studio project demonstrating barcode scanning via raw image data is available here: Sample App Download
Supported Symbologies
The Vuzix M300XL and M300 include a barcode scanning engine with a license for the most common symbologies.
- QR
- QR Micro
- EAN (8 & 13)
- UPC (A & E)
- Data Matrix
- Code 128
The following additional symbologies may be scanned using this SDK by providing a paid license key.
- GS1 DATABAR
- CODE 39
- PDF417
- AZTEC CODE
- CODE 25
- CODE 93
- CODABAR
- DOTCODE
- CODE 11
- MSI PLESSEY
- MAXICODE
- POSTAL BARCODES
Obtaining a License
The Manatee Works barcode engine is used internally in the device. Developers who wish to replace the Vuzix default license with their own license should register directly on https://manateeworks.com.
Trial licenses are available for limited-duration testing on a small number of devices.
Replacing the License in Code
The license must be provided at scan time. The license key is global to a product you have registered and is the same on all devices, so it can be hard-coded into your application. If you will be reselling this license to end customers, be sure any trial version you distribute does NOT provide the license key, as that will count toward your maximum number of devices even if it is only used to scan the default symbologies.
To pass the license key when scanning by intent, simply add one Extra to the intent.
Java
private static final int REQUEST_CODE_SCAN = 0;
Intent scannerIntent = new Intent(ScannerIntent.ACTION);
scannerIntent.putExtra(ScannerIntent.EXTRA_LICENSE_KEY, getResources().getString(R.string.secret_license_key) );
startActivityForResult(scannerIntent, REQUEST_CODE_SCAN);
To add a license when scanning by fragment, simply add the license to the Bundle passed to setArguments()
Java
ScannerFragment scannerFragment = new ScannerFragment();
Bundle args = new Bundle();
// A rectangle must be defined for the scanner to function. This is a recommended default.
args.putParcelable(ScannerFragment.ARG_SCANNING_RECT, new ScanningRect(.6f, .75f));
args.putString(ScannerFragment.ARG_LICENSE_KEY, getResources().getString(R.string.secret_license_key) );
scannerFragment.setArguments(args);
The more symbologies that are active, the longer the engine takes to resolve a scan. It is strongly encouraged to limit the active symbologies to those that are relevant, using
Java
String[] barcodes = new String[] {BarcodeType.CODE_25_STANDARD.name(), BarcodeType.CODE_25_INVERTED.name()};
intent.putExtra(ScannerIntent.EXTRA_BARCODE_TYPES, barcodes);
or
Java
String[] barcodes = new String[] {BarcodeType.CODE_25_STANDARD.name(), BarcodeType.CODE_25_INVERTED.name()};
args.putStringArray(ScannerFragment.ARG_BARCODE_TYPES, barcodes);
The M300 and M300XL behave similarly to other Android 6.0 devices. Specific details on the camera, audio subsystem, and battery are available here.
Architecture
The Vuzix M300XL and M300 audio architecture provides, in addition to the expected application audio capture and playback channels, a speech recognition audio capture channel and a low-power DSP-based trigger word detection speech recognizer. The following diagram represents the architecture of the audio system.
M300XL and M300 Audio Subsystem Conceptual Diagram
Audio Playback
Audio playback is a straightforward audio output, with appropriate overload protection and filtration blocks in the audio DSP. It is not configurable by user software beyond the standard Android AudioManager controls.
Audio Capture
Application Audio
Audio capture for Android application audio input uses two microphones: a user microphone and an environment microphone. These are combined by the DSP audio processor to allow "beam forming," or controllable directionality. The microphones may therefore emphasize the user's voice while cancelling environmental sound, emphasize the environment while cancelling user-generated sound (e.g. breathing noise), or operate omnidirectionally. Additionally, acoustic echo cancellation removes sound coupled from the audio output and speaker. The captured audio is filtered, noise-reduced, and equalized before it becomes the Android microphone input for application audio using the media recorder. Most MediaRecorder audio sources use this audio path, adjusting beam-forming and audio-processing parameters as appropriate.
The Android application audio input is, by design of the Android OS, a singleton resource. Only one application may "own" the audio capture at any one time, so it is not possible for a foreground activity and a background service to consume captured audio simultaneously.
Speech Audio
A second channel for audio capture is implemented as well, intended for use by speech recognition and other software-interpreted audio usage. This channel operates independently of the Android application audio channel.
Speech Audio is optimized for user microphone input; it cancels environmental sound and coupled sound from the output channel. However, it does not implement adaptive filtration, noise suppression, or other algorithms which may introduce artifacts to which machine recognition may be vulnerable. This channel is also available to Android application audio when AudioSource.VOICE_RECOGNITION is selected.
This is a background audio stream with approximately a ¼ second delay. Because of this latency, developers should expect to hear "pre-roll" in the captured audio, and should capture an additional ¼ second after the user requests the capture to stop so that speech is not cut off early.
Audio Sources
The developer controls the DSP processing by selecting the appropriate AudioSource as defined by MediaRecorder.AudioSource. The following describes the audio processing in M300XL and M300 version 1.5.
- DEFAULT: Identical to MIC
- MIC: The noise cancellation emphasizes the user mic and reduces sound from the environment.
- CAMCORDER: The noise cancellation emphasizes the environment mic and reduces sound from the user.
- VOICE_COMMUNICATION: Similar to MIC, but adds acoustic echo cancellation so speaker sounds are not heard by the microphone. This is intended for use during Voice over IP (VoIP) and video calls.
- VOICE_RECOGNITION: The high-latency Speech Audio stream from the DSP.
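As a sketch, an application can select the Speech Audio path by creating an AudioRecord with MediaRecorder.AudioSource.VOICE_RECOGNITION; the sample rate and buffer sizing below are common choices for speech capture, not documented device requirements:

```java
// Sketch: capturing the speech-optimized DSP stream via AudioSource.VOICE_RECOGNITION.
int sampleRate = 16000; // assumption: a typical speech-recognition rate
int minBuf = AudioRecord.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.VOICE_RECOGNITION,
        sampleRate, AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT, minBuf * 2);
recorder.startRecording();
// ... read() PCM data here; remember to capture ~1/4 second extra
// after the user requests the capture to stop, per the latency note above
recorder.stop();
recorder.release();
```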
Native Speech Audio Capture
At the native code layer, this second audio capture channel is implemented as a local socket interface. Hence multiple (currently five) clients may "listen" to this channel simultaneously. The internal Vuzix Sensory command and control recognizer may listen, an application using AudioSource.VOICE_RECOGNITION may listen, an external cloud recognizer service may listen, and a user-defined native trigger daemon may listen, all simultaneously, to this capture stream.
Contact Vuzix Technical Support if your application requires a native speech interface.
Hotword Detection
The device is capable of waking on the hotword trigger "Hello Vuzix." Since the hotword detection occurs completely within the very low power audio DSP, the application processor and supporting devices may be in a low-power or power-off state while the DSP is listening for the trigger. Upon recognition of the phrase, the DSP wakes the application processor.
The consequence of this implementation is that, while voice control and recognition are implemented in application processor software and hence may be localized to a user's preferred language, the "Hello Vuzix" trigger phrase is an immutable part of the DSP software and may not be changed.
Note, however, that a user-defined trigger in application software is entirely possible; for such a design use of the DSP hotword trigger should be avoided to prevent confusion, and the low-power wake-on-hotword capability should be disabled.
The M300XL and M300 are externally powered to allow adequate run time in enterprise applications.
The external power to the M300 is provided by the custom external battery pack (446MA0116) or from any 1.5 amp USB power source via the Micro USB Power Adaptor (446MA0120). The custom external battery pack (446MA0116) has the advantage that the M300 displays the power level of the external battery.
The external power to the M300XL can be provided by any 1.5 amp USB power source.
To allow hot swapping of the external power, the M300XL and M300 have a small internal battery. This internal battery will not power the display, but maintains the microprocessor and other hardware while the external power supply is being changed.
When the external power is disconnected, the device begins a 60 second countdown. If a new external power source is not connected, the device will shut down gracefully to prevent loss of data. If a new power source is connected within 60 seconds, the device continues normal operation.
It may be useful for 3rd-party applications to know when the external battery becomes disconnected, allowing them to take the necessary steps to prevent data loss, synchronize state, and provide operator feedback. As of M300XL and M300 version 1.4, 3rd-party applications can register for notification when the battery changes state.
Examples of cleanups might include:
- Store all in-progress work to the network server so it may be resumed later
- Log the current user out of a network system to prevent the server from later detecting duplicate sessions
- Immediately terminate a video call so the remote party knows the operator is unavailable
Registering and Receiving the Notification
Register for the com.vuzix.action.EXTERNAL_BATTERY_CHANGE intent. This can be done dynamically, or in the manifest.
For example, to handle the notification in a class called BatteryReceiver, the following would be put in the AndroidManifest.xml
XML
<receiver android:name=".BatteryReceiver">
    <intent-filter>
        <action android:name="com.vuzix.action.EXTERNAL_BATTERY_CHANGE" />
    </intent-filter>
</receiver>
The received intent can be handled by any BroadcastReceiver. This example shows the creation of a class dedicated to receiving only these notifications.
Java
public class BatteryReceiver extends BroadcastReceiver {
    public static String TAG = "BatteryNotification";

    @Override
    public void onReceive(Context context, Intent intent) {
        if (intent.getAction().equals("com.vuzix.action.EXTERNAL_BATTERY_CHANGE")) {
            boolean result = intent.getBooleanExtra("connected", false);
            if (result) {
                Log.d(TAG, "Battery is connected!");
                // Todo:
            } else {
                Log.d(TAG, "Battery is unplugged!");
                // Todo:
            }
        }
    }
}
The boolean extra “connected” will always be present in the intent: true means the battery is connected; false means it is unplugged.
Timing Notes
The device will wait 60 seconds after broadcasting this intent before shutting down. A 3rd-party application can choose to respond immediately, or start its own timer to do the cleanup work at the tail end of that 60-second interval.
For example, if designing an application where the cleanup logic is expected to take 5 seconds, the application could safely wait 50 seconds after receiving the battery-loss notification before starting the cleanup. This would leave a 5 second buffer after the cleanup completes before the device powers down. Using this scheme, if power is restored within the first 49 seconds, the user session can resume without executing the cleanup code.
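One way to implement this deferred cleanup (a sketch; the 50-second figure and the helper method name are illustrative) is to post a delayed Runnable that is cancelled if power returns:

```java
// Sketch: defer cleanup until near the end of the 60-second shutdown window.
private final Handler cleanupHandler = new Handler(Looper.getMainLooper());
private final Runnable cleanupTask = () -> {
    // store in-progress work, log out of network systems, end calls, etc.
};

// Hypothetical helper, called from the BroadcastReceiver with the
// "connected" extra from the EXTERNAL_BATTERY_CHANGE intent.
void onExternalBatteryChange(boolean connected) {
    if (!connected) {
        cleanupHandler.postDelayed(cleanupTask, 50_000); // start cleanup at T+50s
    } else {
        cleanupHandler.removeCallbacks(cleanupTask); // power restored: cancel cleanup
    }
}
```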
Supported API
The M300XL and M300 support both the deprecated Android Camera API and the updated Android Camera2 API.
Still Image Resolutions
The following still image resolutions are supported:
- 4160x2340
- 2944x1656
- 1920x1080
- 1408x792 * (center cropped)
- 1280x720
- 1024x768
- 720x480
- 640x480
- 640x360
- 320x240
* The center cropped resolution uses pixels from the center of the image, and discards all pixels around the edges of the sensor. This creates an effect similar to a zoom. This resolution is ideal for barcode imaging where maximum detail is desired and the object of interest is small and centered.
Video Resolutions
The following video resolutions are supported by the M300XL:
- 1920x1080 @ 24 fps
- 1408x792 @ 24 fps * (center cropped)
- 1280x720 @ 30 fps
- 720x480 @ 30 fps
- 640x480 @ 30 fps
- 640x360 @ 30 fps
- 320x240 @ 30 fps
The following video resolutions are supported by the M300:
- 1920x1080 @ 24 fps
- 1408x792 @ 24 fps * (center cropped)
- 1280x720 @ 24 fps
- 720x480 @ 24 fps
- 640x480 @ 24 fps
- 640x360 @ 24 fps
- 320x240 @ 24 fps
Other resolutions are supported via software resampling, but using them requires more CPU and causes the device to produce more heat than a natively supported resolution would. For these reasons, using a supported resolution and frame rate is recommended.
Video Codec
The M300XL and M300 have a hardware-based encoder for the following formats:
- H.263
- Baseline Profile, Level 20, CIF or QVGA at 15 fps
- Baseline Profile, Level 30, CIF or QVGA at 30 fps
- Baseline Profile, Level 40, CIF or QVGA at 30 fps
- H.264/AVC
- Baseline/Main/High Profile, Level 1.2, CIF at 15 fps or QVGA at 20 fps
- Baseline/Main/High Profile, Level 2.0, CIF or QVGA at 30 fps
- Baseline/Main/High Profile, Level 3.0, VGA or 720x480 at 30 fps, 720x576 at 25 fps
- Baseline/Main/High Profile, Level 3.1, 720p at 30 fps
- Baseline/Main/High Profile, Level 4.0, 1080p at 30 fps
- Baseline/Main/High Profile, Level 4.1, 1080p at 30 fps
- MPEG-4
- Simple Profile, Level 2, CIF or QVGA at 15 fps
- Simple Profile, Level 3, CIF or QVGA at 30 fps
- Simple Profile, Level 4, CIF or QVGA at 30 fps
- Simple Profile, Level 5, VGA or 720x480 at 30 fps, 720x576 at 25 fps
- Baseline/Medium Profile, Level 1, CIF at 30 fps
- Baseline/Medium Profile, Level 2, VGA or 720x480 at 30 fps, 720x576 at 25 fps
- VP8
- 1080p at 30 fps
Other video formats are supported via software codecs, but using them requires more CPU and causes the device to produce more heat than a hardware encoder would. For these reasons, a supported hardware codec is recommended. An example application demonstrating H.264 video capture can be found here.
Auto Focus
The M300XL and M300 support automatic focus from 10 centimeters (3.94 inches) to infinity. An example application controlling the auto-focus can be found here.
Flash LED
The M300XL and M300 have a flash that can operate automatically, or be manually triggered. An example application showing flash control can be found here.
Optical Image Stabilization
The M300XL and M300 camera has an optical image stabilization (OIS) motor to offset small movements caused by the operator wearing the camera. This feature is enabled or disabled in the Settings application and affects all software applications that use the camera.
This device-wide setting can also be controlled in software by 3rd party applications so the user does not need to navigate into the Settings application.
To disable OIS from an application, send an intent as follows:
Java
Intent intent = new Intent("com.vuzix.ois.action.setting");
intent.putExtra("enable", false);
sendBroadcast(intent);
To enable OIS from an application, send an intent as follows:
Java
Intent intent = new Intent("com.vuzix.ois.action.setting");
intent.putExtra("enable", true);
sendBroadcast(intent);
Note: When the M300 battery 446MA0116 drops below 15% the M300 will cease OIS functionality to conserve power. This behavior does not occur on the M300XL or on an M300 connected to other batteries via the M-Series Micro USB Power Adapter.
Overview
With the announcement of the M400, many developers are wondering how existing M300XL and M300 applications can be ported to run on the M400, or how they can develop new applications specifically for the M400 before the M400 is generally available. Migrating applications to M400 is usually very straightforward, as described below. The ease of porting applications between these devices makes the M300XL and M300 excellent platforms for developing new applications intended to run on the M400.
UI Design
The primary task to port existing Android applications to the M400 is to deal with the unique Interaction Methods and UI Design Best Practices that were initially introduced on the M300 and continued on the M300XL. Applications that follow those guidelines properly will already have the majority of the M400 porting work complete.
Display
The M400 supports the same display resolutions and orientations as the M300XL and M300. Any screens and layouts will display identically on the M400.
User Input
The M400 has the same physical buttons as the M300XL and M300. Any button-based navigation will behave identically on the M400.
The M400 has a touchpad that provides all the features of the M300XL and M300. Whether your application handled touchpad input as trackball input or KeyEvent input, any touchpad-based navigation will work identically on the M400.
An application being developed specifically for the M400 can also handle multi-finger gestures that are not available on the M300XL and M300. Those multi-finger inputs were introduced for the Blade and are documented here. These inputs can be tested on the M300XL or M300 by simulating the input keys instead of using the device touchpad. You can use a Bluetooth keyboard, or a software interface such as Vuzix View, to generate the required key inputs.
Speech SDK
The M400 uses the same Vuzix Speech SDK as the M300XL, M300, and Blade. All custom vocabulary and speech recognizer controls will behave identically on the M400.
The M300XL and M300 initially used an Add-On SDK to implement the device-specific features, which required the application to target the specific Vuzix API 23. The M300XL and M300 SDK have since been migrated to use a Maven repository for distribution of individual libraries. Changing from the Add-On SDK to the Maven libraries requires changing the project structure in Android Studio for any existing application. You should target a standard API at revision 23 or earlier, and add the references to obtain the Vuzix SDK libraries from Maven. More information is available in the documentation for the Speech SDK.
Barcode SDK
The M400 version 1.1.0 and higher supports the same Vuzix Barcode SDK as the M300XL and M300. All barcode scanning functionality including scanning by intent, by embedding the barcode engine, and evaluating captured images behave identically on the M400.
Additionally, the underlying barcode engine has been replaced. This allows all supported barcode formats to be made available to third-party applications with no additional licensing fees. There is currently no need to transmit a custom license key to the M400 barcode engine, as required by the M300XL and M300. However, if an M300XL license key is provided, it is simply ignored by the M400, so no code changes are necessary to port M300XL barcode behavior to the M400.
The Vuzix Barcode SDK is distributed via Maven, similarly to the Vuzix Speech SDK.
Third-Party Libraries
Many third-party libraries will work on M400, M300XL, and M300. The M300XL and M300 have a 32-bit x86 processor. The M400 has a 64-bit ARM processor. So be sure to choose third-party libraries that support both architectures, and configure your project with the necessary build flavors to accommodate both.
Camera
The M400 and M4000 camera provides auto-focus and optical image stabilization similar to the M300XL and M300, and also enables higher resolutions. The 1408x792 center-crop resolution is the same on both products, so camera applications should work similarly on the M400 and M4000.
It is always suggested to query the appropriate Camera APIs for the capabilities and properties of the camera, rather than expecting certain behaviors to be present. If this is done, the differences between devices can be minimized.
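For example, supported still-capture sizes can be queried through the Camera2 API rather than hard-coded; this sketch assumes the first camera in the ID list is the one of interest:

```java
// Sketch: querying supported JPEG output sizes via the Camera2 API,
// rather than assuming a fixed resolution list.
CameraManager manager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
try {
    String cameraId = manager.getCameraIdList()[0]; // assumption: first camera
    CameraCharacteristics chars = manager.getCameraCharacteristics(cameraId);
    StreamConfigurationMap map =
            chars.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
    for (Size s : map.getOutputSizes(ImageFormat.JPEG)) {
        Log.d("CameraCaps", "Supported JPEG size: " + s.getWidth() + "x" + s.getHeight());
    }
} catch (CameraAccessException e) {
    Log.e("CameraCaps", "Unable to query camera capabilities", e);
}
```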
Audio
The M400 and M4000 use three microphones to support enhanced audio processing features, compared to the M300XL and M300 which had only two microphones. Since the processing is done on-chip, the microphone behavior visible to the programmer appears similar on both. Any application using microphones should work without modification on the M400.
The M400 and M4000 also have a single speaker very similar to the M300XL and M300. Any application that uses the speaker should behave the same on the M400 and M4000.
The M400 and M4000 also support Bluetooth audio devices. All devices supported by the M300XL and M300 should work identically on the M400 and M4000.
General Processing
The M400 and M4000 have a significantly more powerful CPU and graphics engine than the M300XL and M300. Any processing-intensive or graphics-intensive application written on the M300XL or M300 will perform better on the M400 and M4000. Because of this, it is not possible to simulate the final behavior of the M400 or M4000 using an older device.
Summary
The similarities between the M400 and M4000, M300XL, and M300 allow the M300XL and M300 to be used to develop new applications intended to run on the M400 or M4000. Many applications will run unchanged on all three devices. When minor changes are required to accommodate the different devices, they can be isolated appropriately so the majority of the code base can still be shared.
As a developer, you can communicate between apps running on the M300 and apps running on a phone. This can be accomplished in a variety of ways. You can choose to manage your own communication protocols using wireless technologies such as Wi-Fi, Wi-Fi Direct, Bluetooth and Bluetooth Low Energy. You can also leverage the Vuzix Connectivity framework to communicate between apps using the same secure communication protocol used between the M300 and the Vuzix M300 Companion App.
Managing Your Own Communication
Writing apps for the Vuzix M300 is very similar to writing apps for any other Android device. If your M300 app needs to communicate with another device, or with an app running on a phone, you can use familiar technologies such as Wi-Fi or Bluetooth to make that happen. The advantage of these technologies is that they are standard, well-known, and can handle all sorts of use cases and requirements. The disadvantage is that they often require a lot of coding to get set up and working in your app. This guide will not go into detail about how to write a custom protocol for app communication. Instead, we will focus on getting up and running quickly using the Vuzix Connectivity framework.
Vuzix Connectivity Framework
The Vuzix Connectivity framework enables simple communication between an app running on a M300 and an app running on a phone. The framework leverages the Android broadcast system and extends it to enable sending broadcasts between devices.
https://developer.android.com/guide/components/broadcasts
This approach gets you up and running very quickly without having to worry about how to locate devices, without worrying about how to secure your data, and without designing a custom communication protocol.
How you use the Connectivity framework in your app depends on the type of app you are developing. M300 apps and Android apps designed for a phone will use the Vuzix Connectivity SDK for Android. iPhone apps will use the VuzixConnectivity framework for iOS (coming soon). Both require a phone running the Vuzix M300 Companion App.
The Vuzix Connectivity SDK for Android gives developers access to the Vuzix Connectivity framework allowing simple messaging between apps running on the M300 and apps running on a phone. The same Connectivity SDK library is used regardless of which side of the communication you are on; M300 or phone.
Setup and Installation
To use the SDK, you only need small additions to your build.gradle file(s). First, declare a new maven repository in a repositories element that points to the Vuzix maven repository.
You can use the repositories element under allprojects in your project’s root build.gradle file:
Java
allprojects {
    repositories {
        google()
        jcenter()
        maven {
            url "https://dl.bintray.com/vuzix/lib"
        }
    }
}
Alternatively you can use a repositories element under the android element in your app’s build.gradle file:
Java
apply plugin: 'com.android.application'

android {
    ...
    repositories {
        maven {
            url "https://dl.bintray.com/vuzix/lib"
        }
    }
}
Finally, add the Connectivity SDK as a dependency in your app’s build.gradle file:
Java
dependencies {
    implementation 'com.vuzix:connectivity-sdk:1.1'
}
That’s it! You are now ready to start sending messages using the Connectivity framework.
The Connectivity Class
The Connectivity class is your main entry point into the Connectivity SDK. The first thing you need to do is use the static get() method to obtain the singleton instance of the Connectivity class. You’ll need to pass in a non-null Context object:
Java
Connectivity c = Connectivity.get(myContext);
Once you have a reference to the Connectivity object, you can call various methods to learn more about the current connectivity status, send broadcasts, and perform other functions related to connectivity. Consult the SDK javadocs for complete information on the available methods. Here are some key methods to be aware of:
isAvailable()
isAvailable() can be used to test whether the Vuzix Connectivity framework is available on the device. On the M300, this method should always return true. On phones, it will return true if the Vuzix M300 Companion App is installed and false otherwise. If this method returns false, no other Connectivity methods should be called.
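A minimal guard using only the method described above might look like this sketch:

Java

```java
// Sketch: check availability once before using any other Connectivity calls.
Connectivity c = Connectivity.get(myContext);
if (c.isAvailable()) {
    // Safe to query devices and send broadcasts here
} else {
    // On a phone, prompt the user to install the Vuzix M300 Companion App
}
```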
getDevices(), getDevice(), isLinked()
getDevices() returns all the remote devices you are currently linked to. Currently, the M300 and the Companion App only support one linked device in either direction; however, the Connectivity framework supports multiple linked devices as a future enhancement. getDevice() is a convenience method that returns a single linked device if one exists, or null if you're not linked to anything. isLinked() is a convenience method that returns true if you are currently linked to a remote device and false otherwise.
isConnected()
isConnected(device) is used to determine if you are connected to a specific remote device. The no-argument version of isConnected() returns true if any remote device is currently connected.
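Putting these queries together, a sender might confirm the link and connection state before broadcasting. This is a sketch using only the methods described above:

Java

```java
Connectivity c = Connectivity.get(myContext);
if (c.isLinked()) {
    Device device = c.getDevice(); // the single linked device, or null
    if (c.isConnected(device)) {
        // Connected: safe to exchange broadcasts with this device
    }
}
```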
addDeviceListener(listener), removeDeviceListener(listener)
You can add DeviceListeners to be notified when remote devices are added, removed, connected and disconnected. If you only care about one or two particular events, the DeviceListenerAdapter class has no-op implementations of all the DeviceListener methods. Always remember to properly remove your DeviceListeners when done with them.
Sending a Broadcast to a Remote Device
The Connectivity framework supports both regular broadcasts and ordered broadcasts. Broadcasts should be limited to 10 kilobytes or less where possible. Larger broadcasts are supported, but you are always subject to the Binder transaction buffer limit of 1MB, which is shared across your app's process. In addition, larger broadcasts take longer to transfer between devices.
The first step to sending a broadcast is creating an Intent with an action:
Java
Intent myRemoteBroadcast = new Intent("com.example.myapp.MY_ACTION");
It is strongly recommended, but not required, that you specify the remote app package that will receive the intent. This ensures that the broadcast is only delivered to a specific app. Use the setPackage() method on the Intent class:
Java
myRemoteBroadcast.setPackage("com.example.myapp");
If you need to set the category, data, or type, or modify any flags on the intent, you can do so. You can also populate the intent with extras. For example:
Java
myRemoteBroadcast.putExtra("my_string_extra", "hello");
myRemoteBroadcast.putExtra("my_int_extra", 2);
The following intent extra types are supported:
- boolean, boolean[]
- Bundle, Bundle[]
- byte, byte[]
- char, char[]
- double, double[]
- float, float[]
- int, int[]
- Intent, Intent[]
- long, long[]
- short, short[]
- String, String[]
If you specify an intent extra of any other type, it will be ignored and will not be broadcast to the remote device.
Once your intent is populated with data, you’re ready to broadcast it to the remote device. To send as a regular broadcast, simply use the sendBroadcast() method on Connectivity:
Java
Connectivity.get(myContext).sendBroadcast(myRemoteBroadcast);
To send as an ordered broadcast, use the sendOrderedBroadcast() method. For ordered broadcasts, you need to specify the remote device:
Java
Connectivity c = Connectivity.get(myContext);
Device device = c.getDevice();
c.sendOrderedBroadcast(device, myRemoteBroadcast, new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        if (getResultCode() == RESULT_OK) {
            // do something with results
        }
    }
});
Notice you can specify a BroadcastReceiver to get results back from the remote device.
Receiving a Remote Broadcast
Receiving a remote broadcast is as easy as receiving a local broadcast. You simply register a BroadcastReceiver in your app.
Java
@Override
protected void onStart() {
    super.onStart();
    registerReceiver(receiver, new IntentFilter("com.example.myapp.MY_ACTION"));
}

@Override
protected void onStop() {
    super.onStop();
    unregisterReceiver(receiver);
}

private BroadcastReceiver receiver = new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        // do something with intent
    }
};
As with all BroadcastReceivers, don’t forget to unregister your receiver.
You can also declare a BroadcastReceiver in your manifest:
XML
<receiver android:name=".MyReceiver">
    <intent-filter>
        <action android:name="com.example.myapp.MY_ACTION"/>
    </intent-filter>
</receiver>
Returning Data from an Ordered Broadcast
An advantage of ordered broadcasts is that they can return data to the sender. The same is true of ordered broadcasts sent through the Connectivity framework. To return data to the remote sender, simply use any combination of the setResult(), setResultCode(), setResultData() or setResultExtras() methods. You should also check the isOrderedBroadcast() method to make sure you are receiving an ordered broadcast. Setting result data on a non-ordered broadcast will result in an exception being thrown.
Java
private BroadcastReceiver receiver = new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        if (isOrderedBroadcast()) {
            setResultCode(RESULT_OK);
            setResultData("some result data");
            Bundle extras = new Bundle();
            extras.putString("key", "value");
            setResultExtras(extras);
        }
    }
};
Note that the result extras Bundle is subject to the same restrictions as broadcast extras (see above).
That’s all you need to do to return data back to the remote sender of the broadcast. The Connectivity framework handles routing the result back to the proper place.
Verifying a Remote Broadcast
Since remote broadcasts are received just like local broadcasts, sometimes you may want to verify a broadcast came from a remote device through the Connectivity framework. You may also want to verify which app on the remote side the broadcast came from. Fortunately, both of these are easy to verify using the Connectivity SDK and the verify() method.
To verify a broadcast came through the Connectivity framework, pass the intent to the verify() method. verify() will return true if the broadcast is legitimate and false otherwise:
Java
private BroadcastReceiver receiver = new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        // verify broadcast came from Connectivity framework
        if (Connectivity.get(context).verify(intent)) {
            // do something with intent
        }
    }
};
You can also verify the broadcast came from a specific remote app through the Connectivity framework by passing both the intent and a package name to the verify() method. Note that patterns are not allowed; you must pass the exact package name you are looking for.
Java
private BroadcastReceiver receiver = new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        // verify broadcast came from the correct remote app
        if (Connectivity.get(context).verify(intent, "com.example.myapp")) {
            // do something with intent
        }
    }
};
Adding or Removing Remote Devices
Apps using the Connectivity SDK cannot add or remove remote devices. For remote device management, users should be directed to the Vuzix M300 Companion App to set those devices up ahead of time. Once a remote device is set up, it becomes available for use through the SDK.
Sample Project
A sample application for Android Studio demonstrating the Vuzix Connectivity SDK for Android is available here.
Introduction
The Vuzix Speech Recognizer is a fully embedded, fast, phrase-matching recognition system designed to interpret and respond to voice commands. A platform base vocabulary is available to all apps; it is intended to facilitate default navigation and selection without direction from the client app. That is, a client app can benefit from navigation provided by a base vocabulary with no setup or explicit awareness of the speech recognizer. This capability is implemented by mapping phrases to Android key events.
For many applications, it is desirable to implement a custom vocabulary which performs application-specific actions when an application-specific phrase is spoken (e.g. to capture a still image when “take a picture” is spoken.) The Vuzix Speech Recognition system provides two mechanisms by which this can be achieved: Android key events and Android intents.
Custom Vocabulary Architecture
The Vuzix Speech Recognizer is implemented as an Android service that runs locally on the device. No cloud servers are used, and the audio data never leaves the device.
Each Activity or Fragment can have its own vocabulary. The system will automatically switch to the proper vocabulary as Activities are paused and resumed. If no vocabulary is provided, the system will use the default navigation commands.
A third-party application may create a custom vocabulary for the Vuzix Speech Recognizer service by utilizing the Vuzix Speech SDK, as described throughout this section of the knowledge base.
It is recommended that you obtain the Speech SDK via Maven. Simply make this addition to your project build.gradle to define the Vuzix repository.
Java
allprojects {
    repositories {
        google()
        jcenter()
        // The speech SDK is currently hosted by jitpack
        maven {
            url "https://jitpack.io"
        }
    }
}
Then add a dependency to the Speech SDK library in your application build.gradle
Java
dependencies {
    implementation 'com.vuzix:sdk-speechrecognitionservice:1.7'
}
Proguard Rules
If you are using Proguard, you will need to prevent it from obfuscating the Vuzix Speech SDK. Failure to do so will result in calls to the SDK raising the RuntimeException "Stub!". Add the following -keep statement to the Proguard rules file, typically named proguard-rules.pro.
Java
# Vuzix speech recognition requires the SDK names not be obfuscated
-keep class com.vuzix.sdk.speechrecognitionservice.** {
    *;
}
The R8 optimizer may omit arguments required by the SDK methods, resulting in the NullPointerException "throw with null exception" being raised. The current workaround is to disable R8 and use Proguard to do the shrinking and obfuscation. Add the following to your gradle.properties to change from R8 to Proguard.
Java
android.enableR8=false
The custom vocabulary is created when the VuzixSpeechClient class is instantiated. The newly instantiated custom vocabulary inherits the platform base vocabulary, which is currently:
- hello vuzix - Activates the listener and will wake up the M-Series Smart Glasses from sleep mode
- cancel - Triggers the Escape action
- confirm - Activates the item that currently has focus
- flashlight off - Disable the front flashlight LED
- flashlight on - Enable the front flashlight LED
- go back - Navigates backward in the history stack
- go down - Basic directional navigation
- go forward - Navigates forward in the history stack
- go home - Return to the main page that is seen at power-on
- go in - Activates the item that currently has focus or expands to the next level of a navigation hierarchy
- go left - Basic directional navigation
- go out - Backs out one level of a navigation hierarchy or collapses the item that currently has focus
- go right - Basic directional navigation
- go to sleep - Put the M-Series Smart Glasses into sleep mode
- go up - Basic directional navigation
- light off - Disable the front flashlight LED
- light on - Enable the front flashlight LED
- next - Advances to the next item in an ordered collection of items
- okay - Activates the item that currently has focus
- pick this - Activates the item that currently has focus
- previous - Goes backward by one item in an ordered collection of items
- quit - Triggers the Home action
- scroll down - Basic directional navigation
- scroll left - Basic directional navigation
- scroll right - Basic directional navigation
- scroll up - Basic directional navigation
- select this - Activates the item that currently has focus
- show menu - Brings up context menu for current UI screen
- stop - Stops repeating voice commands from the custom SDK; additional functionality is under investigation
- torch off - Disables the front flashlight LED
- torch on - Enables the front flashlight LED
- voice off - Disables listener, intended for use with timeout set to an unlimited value
- please enter evaluation mode - Triggers diagnostic mode which will display recognized keycodes in on-screen messages
- please exit evaluation mode - Disables diagnostic mode
Creating the Speech Client
To work with the Vuzix Speech SDK, you first create a VuzixSpeechClient and pass it your Activity:
Java
import com.vuzix.sdk.speechrecognitionservice.VuzixSpeechClient;
Activity myActivity = this;
VuzixSpeechClient sc = new VuzixSpeechClient(myActivity);
Handling Exceptions
It is possible for a user to attempt to run code compiled against the Vuzix Speech SDK on non-Vuzix hardware. This will generate a RuntimeException "Stub!" to be thrown. It is also possible to write an application against the latest Vuzix Speech SDK, and have customers attempt to run the application on older devices. Any calls to unsupported interfaces will cause a NoClassDefFoundError. For this reason all SDK calls should be inside try / catch blocks.
Java
// Surround the creation of the VuzixSpeechClient with a try/catch for non-Vuzix hardware
VuzixSpeechClient sc;
try {
    sc = new VuzixSpeechClient(myActivity);
} catch (RuntimeException e) {
    // Use a null-safe comparison in case the exception carries no message
    if ("Stub!".equals(e.getMessage())) {
        // This is not being run on Vuzix hardware (or the Proguard rules are incorrect)
        // Alert the user, or insert recovery here.
    } else {
        // Other RuntimeException to be handled
    }
}

// Surround all speech client commands with try/catch
try {
    // sc.anySdkCommandHere();
} catch (NoClassDefFoundError e) {
    // The hardware does not support the specific command expected by the Vuzix Speech SDK.
    // Alert the user, or insert recovery here.
}
For brevity, this article may omit the try/catch blocks, but creating a robust application requires they be present.
Removing Existing Phrases
Removing existing phrases may reduce the likelihood that the speech recognizer resolves the incorrect phrase. This is especially true if your phrases sound similar to default phrases. For example, a game control of "roll right" might be confused with the default phrase "scroll right."
Your application may want to control the navigation itself, in which case you could remove default navigation commands to prevent confusion.
The default vocabulary may be modified by removing individual phrases with commands such as:
Java
sc.deletePhrase("torch on");
sc.deletePhrase("torch off");
Or the entire default vocabulary may be removed from your activity using an asterisk as a wildcard.
Java
sc.deletePhrase("*");
Note: on the M300XL and M300, the default wake word "Hello Vuzix" is not deleted.
The results of the modified recognizer map can be viewed during debugging with the dump() command.
Java
Log.i(LOG_TAG, sc.dump());
Additionally, it is important to note that when the speech recognizer is given a phrase that is already in the list, the previous entry is overwritten. There is no need to delete the original entry first. For example, this is useful if you want to implement your own navigation methods using "go up", "go down", "go left", etc.
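For example, an application that handles its own navigation could simply re-register the default navigation phrases as application intent phrases. The substitution strings below are illustrative names, not part of the SDK:

Java

```java
// Re-registering an existing phrase overwrites its default mapping,
// so no deletePhrase() call is needed first. Substitution names are illustrative.
sc.insertPhrase("go up", "app_nav_up");
sc.insertPhrase("go down", "app_nav_down");
sc.insertPhrase("go left", "app_nav_left");
sc.insertPhrase("go right", "app_nav_right");
Log.i(LOG_TAG, sc.dump()); // confirm the new mappings while debugging
```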
Adding Custom Trigger Phrases
When the speech recognition engine is enabled but idle it is listening only for a trigger phrase, also known as a "wake word". Once the trigger phrase is recognized, the engine transitions to the active state where it listens for the full vocabulary (such as "go home" and "select this"). The M300XL and M300 use localized wake words based on which language is currently selected. For each language the wake word will be a translation of "Hello Vuzix".
The speech engine will time out after the period configured in the system settings and return to the idle state that listens only for the trigger phrases. The operator can circumvent the timeout and immediately return to idle by saying a voice-off phrase. By default the voice-off phrase is "voice off". You can insert custom voice-off phrases using the following commands.
Java
sc.insertVoiceOffPhrase("voice off"); // Add-back the default phrase for consistency
sc.insertVoiceOffPhrase("privacy please"); // Add application specific stop listening phrase
Adding Phrases to Receive Keycodes
You can register for a spoken command that will generate a keycode. This keycode will behave exactly the same as if a USB keyboard were present and generated that key. This capability is implemented by mapping phrases to Android key events (android.view.KeyEvent).
Java
sc.insertKeycodePhrase("toggle caps lock", KeyEvent.KEYCODE_CAPS_LOCK);
Log.i(LOG_TAG, sc.dump());
Keycodes added by your application will be processed in addition to the Keycodes in the base vocabulary set:
- K_DPAD_LEFT "move left"
- K_DPAD_RIGHT "move right"
- K_DPAD_UP "move up"
- K_DPAD_DOWN "move down"
- K_NAVIGATE_IN "move in"
- K_NAVIGATE_OUT "move out"
- K_FORWARD "move forward"
- K_BACK "move back"
- K_DPAD_LEFT "scroll left" (repeats)
- K_DPAD_RIGHT "scroll right" (repeats)
- K_DPAD_UP "scroll up" (repeats)
- K_DPAD_DOWN "scroll down" (repeats)
- K_DPAD_LEFT "go left"
- K_DPAD_RIGHT "go right"
- K_DPAD_UP "go up"
- K_DPAD_DOWN "go down"
- K_NAVIGATE_IN "go in"
- K_NAVIGATE_OUT "go out"
- K_FORWARD "go forward"
- K_BACK "go back"
- K_HOME "go home"
- K_ENTER "select this"
- K_ENTER "pick this"
- K_HOME "quit"
- K_ENTER "okay"
- K_ENTER "confirm"
- K_NAVIGATE_NEXT "next"
- K_NAVIGATE_PREVIOUS "previous"
- K_ESCAPE "cancel"
- K_MENU "show menu"
Note that speaking any valid phrase will terminate a previous repeating keycode. The phrase "stop" terminates the repeating keycodes and has no further behavior.
Adding Phrases to Receive Intents
The most common use for speech recognition is to receive intents, which can trigger any custom action, rather than simply receiving keycodes. To do this, you must have a broadcast receiver in your application, such as:
Java
public class VoiceCmdReceiver extends BroadcastReceiver {
    ....
}
The broadcast receiver must register with Android for the Vuzix speech intent. This can be done in the constructor as shown here.
Java
public class VoiceCmdReceiver extends BroadcastReceiver {
    public VoiceCmdReceiver(MainActivity iActivity) {
        iActivity.registerReceiver(this, new IntentFilter(VuzixSpeechClient.ACTION_VOICE_COMMAND));
        ...
    }
}
The phrases you want to receive can be inserted in the same constructor. This is done using insertPhrase() which registers a phrase for the speech SDK intent. The parameter is a string containing the phrase for which you want the device to listen.
Java
public class VoiceCmdReceiver extends BroadcastReceiver {
    public VoiceCmdReceiver(MainActivity iActivity) {
        iActivity.registerReceiver(this, new IntentFilter(VuzixSpeechClient.ACTION_VOICE_COMMAND));
        VuzixSpeechClient sc = new VuzixSpeechClient(iActivity);
        sc.insertPhrase("testing");
        Log.i(LOG_TAG, sc.dump());
    }
}
Now handle the speech SDK intent VuzixSpeechClient.ACTION_VOICE_COMMAND in your onReceive() method. Whatever phrase you used in insertPhrase() will be provided in the received intent as a string extra named VuzixSpeechClient.PHRASE_STRING_EXTRA.
Java
public class VoiceCmdReceiver extends BroadcastReceiver {
    public VoiceCmdReceiver(MainActivity iActivity) {...}

    @Override
    public void onReceive(Context context, Intent intent) {
        // All phrases registered with insertPhrase() match ACTION_VOICE_COMMAND
        if (intent.getAction().equals(VuzixSpeechClient.ACTION_VOICE_COMMAND)) {
            String phrase = intent.getStringExtra(VuzixSpeechClient.PHRASE_STRING_EXTRA);
            if (phrase != null) {
                if (phrase.equals("testing")) {
                    // todo: add test behavior
                }
            }
        }
    }
}
With that, you will be able to say "Hello Vuzix" to activate the recognizer, followed by "testing", and the code you inserted in place of the // todo will execute.
The recognizer always broadcasts the phrase that was registered in insertPhrase(). Note: If the phrase contains spaces, they will be replaced by underscores.
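Because of this substitution, a receiver comparing against a multi-word phrase must compare against the underscored form. The helper below is an illustrative class (not part of the SDK) that models the documented behavior:

Java

```java
public class PhraseNormalization {
    // Models the documented behavior: a phrase registered with insertPhrase()
    // arrives in PHRASE_STRING_EXTRA with its spaces replaced by underscores.
    public static String asReportedPhrase(String registeredPhrase) {
        return registeredPhrase.replace(' ', '_');
    }
}
```

So a phrase registered as "take a picture" should be compared against "take_a_picture" in onReceive().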
Replacement Text
As mentioned above, the string that was recognized is returned, with spaces replaced by underscores. That can be somewhat cumbersome to the developer, especially since we expect recognized spoken phrases to be localized into many languages.
To make this easier, insertPhrase() can take an optional substitution string parameter. When this is supplied, the substitution string is returned in place of the spoken text.
This example updates the original by properly replacing the hard-coded strings. Notice insertPhrase() is given two parameters, and it is the second that is used by the onReceive() method.
Note: The substitution string may not contain spaces
This now gives us a complete solution to receive a custom phrase and handle it properly.
Java
public class VoiceCmdReceiver extends BroadcastReceiver {
    final String MATCH_TESTING = "Phrase_Testing";

    public VoiceCmdReceiver(MainActivity iActivity) {
        iActivity.registerReceiver(this, new IntentFilter(VuzixSpeechClient.ACTION_VOICE_COMMAND));
        VuzixSpeechClient sc = new VuzixSpeechClient(iActivity);
        // strings.xml contains: <string name="spoken_phrase_testing">testing my voice application</string>
        sc.insertPhrase(iActivity.getResources().getString(R.string.spoken_phrase_testing), MATCH_TESTING);
        Log.i(LOG_TAG, sc.dump());
    }

    @Override
    public void onReceive(Context context, Intent intent) {
        // All phrases registered with insertPhrase() match ACTION_VOICE_COMMAND
        if (intent.getAction().equals(VuzixSpeechClient.ACTION_VOICE_COMMAND)) {
            String phrase = intent.getStringExtra(VuzixSpeechClient.PHRASE_STRING_EXTRA);
            if (phrase != null) {
                if (phrase.equals(MATCH_TESTING)) {
                    // Todo: add test behavior
                }
            }
        }
    }
}
The substitution parameter also allows us to create multiple phrases that perform the same action. Phrases in the recognizer must be unique but substitution text does not.
We could have multiple insertPhrase() calls with different phrase parameters and identical substitutions. Use this technique to simplify your code in situations where you do not need to differentiate between phrases. For example, the phrases "start call" and "make a call" can have the same substitution.
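As a sketch, both phrases can map to one substitution string so onReceive() needs only a single comparison. The substitution name below is illustrative:

Java

```java
final String MATCH_START_CALL = "Start_Call"; // illustrative substitution
sc.insertPhrase("start call", MATCH_START_CALL);
sc.insertPhrase("make a call", MATCH_START_CALL);
// onReceive() now matches MATCH_START_CALL regardless of which phrase was spoken
```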
Adding Phrases to Receive Custom Intents
To add even more flexibility, the speech SDK can send any intent you define, rather than only sending its own ACTION_VOICE_COMMAND. This is especially useful for creating multiple broadcast receivers and directing the intents properly.
Note that this example differs from the above in that CUSTOM_SDK_INTENT is used in place of ACTION_VOICE_COMMAND.
Java
public class VoiceCmdReceiver extends BroadcastReceiver {
    public final String CUSTOM_SDK_INTENT = "com.your_company.CustomIntent";
    final String CUSTOM_EVENT = "my_event";

    public VoiceCmdReceiver(MainActivity iActivity) {
        iActivity.registerReceiver(this, new IntentFilter(CUSTOM_SDK_INTENT));
        VuzixSpeechClient sc = new VuzixSpeechClient(iActivity);
        Intent customSdkIntent = new Intent(CUSTOM_SDK_INTENT);
        sc.defineIntent(CUSTOM_EVENT, customSdkIntent);
        // strings.xml contains: <string name="spoken_phrase_testing">testing my voice application</string>
        sc.insertIntentPhrase(iActivity.getResources().getString(R.string.spoken_phrase_testing), CUSTOM_EVENT);
        Log.i(LOG_TAG, sc.dump());
    }

    @Override
    public void onReceive(Context context, Intent intent) {
        // Since we only registered one phrase to this intent, we don't need any further switching. We know we got our CUSTOM_EVENT
        // todo: add test behavior
    }
}
The system can support multiple broadcast receivers. Each receiver simply registers for the intents it expects to receive. They do not need to be in the same class that creates the VuzixSpeechClient.
Deleting a Custom Intent
Beginning with SDK v1.91 you can now delete a custom intent. Similar to inserting an intent, call the deleteIntent() method and supply the label of the intent you wish to delete.
Java
// Voice command custom intent names
final String TOAST_EVENT = "other_toast";
...
sc.deleteIntent(TOAST_EVENT);
Listing all Intent Labels
Beginning with SDK v1.91 you can now list all intent labels. The list will be returned as a List<String>.
Java
List<String> intentLabels = sc.getIntentLabels();
Checking the Engine Version
As mentioned above, it is possible for the SDK to expose newer calls than a given device OS version supports. You can query getEngineVersion() to determine the version of the engine on the device, allowing you to protect newer SDK calls with conditional logic and eliminate possible NoClassDefFoundError errors. For example, if you know the device is running SDK v1.8, you would not attempt calls introduced in v1.9.
Because getEngineVersion() is a newer SDK call, it should itself be protected.
Java
float version = 1.4f; // The first stable SDK released with M300 v1.2.6
try {
    version = sc.getEngineVersion();
    Log.d(mMainActivity.LOG_TAG, "Device is running SDK v" + version);
} catch (NoSuchMethodError e) {
    Log.d(mMainActivity.LOG_TAG, "Device is running SDK prior to v1.8. Assuming version " + version);
}
Sample Project
A sample application for Android Studio demonstrating the Vuzix Speech SDK is available to download here.
Advanced Controls
The Vuzix Speech Recognition engine has advanced controls described here. These have been expanded since the initial SDK was released. All features here require Vuzix Speech SDK version 1.4 or higher, and an M300 or M300XL running version 1.3 or higher.
Enabling and Disabling Speech Recognition
The Vuzix Speech SDK will listen for the trigger phrase "Hello Vuzix" whenever Vuzix Speech Recognition is enabled in the Settings menu. When Speech Recognition is disabled, the microphone icon in the notification bar is grayed-out. When Speech Recognition is enabled, the microphone icon becomes outlined.
The speech recognizer has global commands, such as "go home" and "flashlight on" that are processed in any application. The recognizer also supports custom vocabulary that is processed by each individual application.
It is possible for an application to rely on custom voice commands to perform essential tasks. In this scenario, it would be an unwanted burden to require the user to navigate to the system Settings menu. Instead, Vuzix Speech Recognition may be programmatically enabled from within an application.
Java
import com.vuzix.sdk.speechrecognitionservice.VuzixSpeechClient;
try {
    VuzixSpeechClient.EnableRecognizer(getApplicationContext(), true);
} catch (NoClassDefFoundError e) {
    // This device does not implement the Vuzix Speech SDK
    // todo: Implement error recovery
}
This method is static. Passing the optional context parameter allows the proper user permissions to be applied, and is recommended for robustness.
The recognizer may be similarly disabled via code during times when false detection would impair the application behavior.
Java
VuzixSpeechClient.EnableRecognizer(getApplicationContext(), false);
Once Vuzix Speech Recognition is disabled, the notification bar icon is grayed-out, and the phrase "Hello Vuzix" will no longer trigger speech recognition.
It is safe to set Speech Recognition to its existing state, so there is no need to query the state before enabling or disabling Vuzix Speech Recognition. Simply specify the desired state. However, if you want to display the current enabled/disabled state, you can query it using isRecognizerEnabled(). This value is not changed by the system while your application is active, so the appropriate place for this query is your activity's onResume().
Java
boolean mSpeechEnabled;
@Override
protected void onResume() {
super.onResume();
mSpeechEnabled = VuzixSpeechClient.isRecognizerEnabled(this);
// todo: update status to user showing state of mSpeechEnabled
}
Triggering the Speech Recognizer
When Speech Recognition is enabled, the recognizer remains in a low-power mode listening only for the trigger phrase, "Hello Vuzix". Once this is heard, the recognizer wakes and becomes active. This state is indicated by the microphone icon in the notification bar becoming fully filled. While active, all audio data is scanned for known phrases.
It is possible for an application to programmatically trigger the recognizer to wake and become active, rather than relying on the "Hello Vuzix" trigger phrase. This can be tied to a button press or a fragment opening.
Java
import com.vuzix.sdk.speechrecognitionservice.VuzixSpeechClient;
try {
VuzixSpeechClient.TriggerVoiceAudio(getApplicationContext(), true);
} catch(NoClassDefFoundError e) {
// This device does not implement the Vuzix Speech SDK
// todo: Implement error recovery
}
The recognizer has a timeout that can be modified in the system Settings menu. The active recognizer will return to idle mode after that duration has elapsed since the most recent phrase was recognized. This state is again indicated by the microphone icon in the notification bar returning to the unfilled outline icon, and the recognizer will only respond to the trigger phrase "Hello Vuzix."
Some workflows are best suited to returning the active recognizer to idle at a specific time, for example during the recording of a voice memo. This prevents phrases such as "go back" and "go home" from being recognized and acted upon.
The recognizer may be programmatically un-triggered to the idle state with the same method.
Java
VuzixSpeechClient.TriggerVoiceAudio(getApplicationContext(), false);
Trigger State Notification
Since the Speech Recognition engine may be triggered externally and may time out internally, applications that wish to control this behavior will likely need to know the state of the recognizer.
The same Speech Recognition Intent that broadcasts phrases also broadcasts state change updates. Simply check for the presence of the extra boolean RECOGNIZER_ACTIVE_BOOL_EXTRA.
Java
boolean mSpeechTriggered;
@Override
public void onReceive(Context context, Intent intent) {
if (intent.getAction().equals(VuzixSpeechClient.ACTION_VOICE_COMMAND)) {
Bundle extras = intent.getExtras();
if (extras != null) {
// We will determine what type of message this is based upon the extras provided
if (extras.containsKey(VuzixSpeechClient.RECOGNIZER_ACTIVE_BOOL_EXTRA)) {
// if we get a recognizer active bool extra, it means the recognizer was
// activated or stopped
mSpeechTriggered = extras.getBoolean(VuzixSpeechClient.RECOGNIZER_ACTIVE_BOOL_EXTRA, false);
// todo: Implement behavior based upon the recognizer being changed to active or idle
}
}
}
}
Since the state may also change while your application is not running, if you display the state using these notifications, you should also query the current state in your onResume().
Java
boolean mSpeechTriggered;
@Override
protected void onResume() {
super.onResume();
mSpeechTriggered = VuzixSpeechClient.isRecognizerTriggered(this);
// todo: Implement behavior based upon the recognizer being changed to active or idle
}
Startup Timing Concerns
It is possible for applications that automatically launch with the operating system to be initialized before the speech engine has come online. This is true for launcher applications, among others. Any speech queries or commands issued at startup will fail, and must be retried after the speech engine comes online. In such applications, you should surround initialization logic with a call such as:
Java
if (VuzixSpeechClient.isRecognizerInitialized(this)) {
// todo: perform your speech customizations here
}
Even if the initialization code cannot be run at startup, you should still register the broadcast receiver for the trigger state, as described in the preceding section. When the engine becomes initialized, it will send out an initial trigger state. The receipt of this trigger state can cause your application to retry the speech initialization. This allows you to create an application that starts before the speech engine and can interact with the speech engine as soon as it becomes available, without any unnecessary polling.
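The retry pattern above can be sketched as a small guard object that defers the speech customizations until the engine reports ready and runs them exactly once. The class below is an illustrative sketch, not part of the Vuzix Speech SDK; the name SpeechInitGuard is hypothetical.

```java
// Hypothetical helper (not an SDK class): defer speech customizations until
// the engine reports ready, and ensure they run exactly once.
class SpeechInitGuard {
    private boolean initialized = false;
    private final Runnable customizations; // e.g. your insertPhrase() calls

    SpeechInitGuard(Runnable customizations) {
        this.customizations = customizations;
    }

    // Call once at startup (passing the result of isRecognizerInitialized())
    // and again from the broadcast receiver on each trigger-state update.
    boolean tryInitialize(boolean engineReady) {
        if (!initialized && engineReady) {
            customizations.run();
            initialized = true;
        }
        return initialized;
    }
}
```

Because `tryInitialize()` is idempotent once it succeeds, it is safe to call from both the startup path and every subsequent trigger-state broadcast.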
Canceling Repeating Characters
Certain commands, such as "scroll up" and "scroll down", initiate repeating key presses. This allows the user interface to continue to scroll in the selected direction. The repeating key presses stop when the engine detects any other phrase, such as "select this". The default phrase "stop" is recognized by the speech engine and has no behavior other than to terminate the scrolling.
You may wish to stop repeating key presses programmatically without requiring the user to say another phrase. This is useful when reaching the first or last item in a list. To do this, simply call StopRepeatingKeys().
Java
try {
sc.StopRepeatingKeys();
} catch(NoClassDefFoundError e) {
// The ability to stop repeating keys was added in Speech SDK v1.6 which
// was released on M400 v1.1.4. Earlier versions will not support this.
}
Get the Maximum Recognizer Timeout Time
Beginning with SDK v1.91 you now have access to the maximum recognizer timeout time.
Java
int recognizedMaxTimeoutTime = sc.getRecognizerTimeoutMax();
Getting and Setting the Recognizer Timeout Config
Beginning with SDK v1.91 you can now retrieve and set the recognizer timeout config.
Java
int recognizerTimeoutConfig = sc.getRecognizerTimeoutConfig();
...
sc.setRecognizerTimeoutConfig(30); // in seconds
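Since the engine exposes a maximum timeout, a defensive pattern is to clamp any requested value before applying it. The helper below is an illustrative sketch, not an SDK method; only getRecognizerTimeoutMax() and setRecognizerTimeoutConfig() come from the SDK.

```java
// Hypothetical helper (not an SDK method): clamp a requested recognizer
// timeout, in seconds, into the range [1, maxSeconds] before applying it.
final class TimeoutUtil {
    static int clampTimeout(int requestedSeconds, int maxSeconds) {
        return Math.min(Math.max(requestedSeconds, 1), maxSeconds);
    }
}
```

For example, `sc.setRecognizerTimeoutConfig(TimeoutUtil.clampTimeout(45, sc.getRecognizerTimeoutMax()));` would never exceed the engine's maximum.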
Sample Project
A sample application for Android Studio demonstrating the Vuzix Speech SDK is available to download here.
The built-in speech recognition engine allows multiple applications to each register their own unique vocabulary. As the user switches between applications, the engine automatically switches to the correct vocabulary. Only the active application receives the recognized phrases.
Applications with only a single vocabulary can get the behavior they desire with very little coding consideration. Applications with multiple activities and multiple vocabularies have special considerations discussed here.
Single Activity and Single Vocabulary
Applications with only a single vocabulary in a single Activity can do the required registration in the onCreate() method, and the required cleanup in the onDestroy().
Java
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
try {
// Register to receive intents that are generated by the Speech Recognition engine
registerReceiver(this, new IntentFilter(VuzixSpeechClient.ACTION_VOICE_COMMAND));
// Create a VuzixSpeechClient from the SDK and customize the vocabulary
VuzixSpeechClient sc = new VuzixSpeechClient(this);
sc.insertPhrase(getResources().getString(R.string.btn_text_clear));
} catch (NoClassDefFoundError e) {
Toast.makeText(this, "Cannot find Vuzix Speech SDK", Toast.LENGTH_LONG).show();
} catch (Exception e) {
Log.e(TAG, "Error setting custom vocabulary: " + e.getMessage());
}
}
@Override
protected void onDestroy() {
// Remove the dynamic registration to the Vuzix Speech SDK
unregisterReceiver(this);
super.onDestroy();
}
This application is similar to the ones described in the preceding knowledge base articles, and ensures only the correct commands are received. Unfortunately, this only works in the simplest cases. It requires that your application have only a single activity using speech commands, and that the vocabulary never change within your application.
Multiple Vocabularies
A single Activity will have a single vocabulary associated with it. The Activity can modify that vocabulary at any time by calling deletePhrase(), insertPhrase(), insertIntentPhrase(), and insertKeycodePhrase().
The default intent with an action VuzixSpeechClient.ACTION_VOICE_COMMAND will still be used, so there is no need to call registerReceiver() when changing the existing vocabulary.
For example:
Java
private void enableScanButton(boolean isEnabled) {
mScanButton.setEnabled(isEnabled);
String scanBarcode = getResources().getString(R.string.scan_barcode);
if (isEnabled) {
sc.insertPhrase(scanBarcode);
} else {
sc.deletePhrase(scanBarcode);
}
}
This same behavior could be achieved by inserting the scanBarcode phrase in onCreate() and simply ignoring it when it is not expected, but many developers prefer actively modifying the vocabulary as shown above.
Multiple Activities
Many applications consist of multiple activities. If each Activity were registered for the same speech intents, all of them would receive the commands. This would cause significant confusion if not handled properly. There are a few easy mechanisms that can be used to solve this.
You can dynamically un-register and re-register for the intents based on the activity life cycle, or you can choose to receive a custom unique intent for each phrase. Both are described in sub-sections below.
Multiple Activities - Dynamic Registration
One mechanism to ensure the correct speech commands are routed to the correct activity within a multi-activity application is for each activity to dynamically register and unregister for the speech intent. Each activity will receive the same default intent action, but only while in the foreground.
Java
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
try {
// Create a VuzixSpeechClient from the SDK and customize the vocabulary
VuzixSpeechClient sc = new VuzixSpeechClient(this);
sc.insertPhrase(getResources().getString(R.string.btn_text_clear));
} catch (NoClassDefFoundError e) {
Toast.makeText(this, "Cannot find Vuzix Speech SDK", Toast.LENGTH_LONG).show();
} catch (Exception e) {
Log.e(TAG, "Error setting custom vocabulary: " + e.getMessage());
}
}
@Override
protected void onResume() {
super.onResume();
// Dynamically register to receive intent from the Vuzix Speech SDK
registerReceiver(this, new IntentFilter(VuzixSpeechClient.ACTION_VOICE_COMMAND));
}
@Override
protected void onPause() {
// Remove the dynamic registration to the Vuzix Speech SDK
unregisterReceiver(this);
super.onPause();
}
In this example, each activity creates its vocabulary once. Each time an activity is paused, it unregisters and stops receiving speech commands. Each time an activity is resumed, it re-registers and resumes receiving speech commands.
This allows multiple activities to co-exist in a single application and each get speech commands only when they are active.
The only downside to this mechanism is that the activity will not get the recognizer state changes while it is not active. For example, it will not know if the speech recognition has become disabled or has timed out. For many applications this does not impact the behavior, and this mechanism is the most appropriate.
Multiple Activities - Unique Intents
Another mechanism exists to control the routing of speech commands. It uses slightly more code than dynamically registering and un-registering to receive the intents, but it allows all activities to receive the advanced recognizer state data without receiving incorrect speech commands. This should be used when it is important to maintain the state of the speech recognizer in your activities.
The speech engine allows a developer to specify a custom intent action, instead of relying on the default VuzixSpeechClient.ACTION_VOICE_COMMAND action.
The custom intents do not have any extras, so you must have one custom intent per phrase. Each activity will create a unique intent for each phrase, such as:
Java
public final String CUSTOM_BARCODE_INTENT = "com.vuzix.sample.MainActivity.BarcodeIntent";
public final String CUSTOM_SETTINGS_INTENT = "com.vuzix.sample.MainActivity.SettingsIntent";
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
try {
// Create a VuzixSpeechClient from the SDK
VuzixSpeechClient sc = new VuzixSpeechClient(this);
// Associate the phrase "Scan Barcode" with generating the CUSTOM_BARCODE_INTENT intent
final String barcodeIntentId = "ScanBarcodeId";
registerReceiver(this, new IntentFilter(CUSTOM_BARCODE_INTENT));
sc.defineIntent(barcodeIntentId, new Intent(CUSTOM_BARCODE_INTENT) );
sc.insertIntentPhrase("Scan Barcode", barcodeIntentId);
// Associate the phrase "Show Settings" with generating the CUSTOM_SETTINGS_INTENT intent
final String showSettingsId = "ShowSettingId";
registerReceiver(this, new IntentFilter(CUSTOM_SETTINGS_INTENT));
sc.defineIntent(showSettingsId, new Intent(CUSTOM_SETTINGS_INTENT) );
sc.insertIntentPhrase("Show Settings", showSettingsId);
} catch (NoClassDefFoundError e) {
Toast.makeText(this, "Cannot find Vuzix Speech SDK", Toast.LENGTH_LONG).show();
} catch (Exception e) {
Log.e(TAG, "Error setting custom vocabulary: " + e.getMessage());
}
}
@Override
public void onReceive(Context context, Intent intent) {
// Now we have a unique action for each phrase
if (intent.getAction().equals(CUSTOM_BARCODE_INTENT)) {
// todo: scan barcode
} else if (intent.getAction().equals(CUSTOM_SETTINGS_INTENT)) {
// todo: open settings menu
}
}
@Override
protected void onDestroy() {
unregisterReceiver(this);
super.onDestroy();
}
In the above example, we have a single intent action for each vocabulary word. These each include "MainActivity" in this example. To continue this pattern, you would create other activities with unique names such as "SettingsActivity". This makes the processing very deterministic. Each activity only registers for its own unique intents.
As the user switches between activities, the vocabulary will be changed in the engine, and even if both activities recognize the same phrase, the unique intent will be generated based on the current active activity.
You can create common code to insert common phrases; you just need to be sure to append an Activity name to each so they are not confused. For example:
Java
public final String PACKAGE_PREFIX = "com.vuzix.sample.";
public final String CUSTOM_BARCODE_INTENT = ".BarcodeIntent";
public String GetBarcodeIntentActionName(String ActivityName) {
return (PACKAGE_PREFIX + ActivityName + CUSTOM_BARCODE_INTENT);
}
protected void InsertCustomVocab(String ActivityName) {
try {
// Create a VuzixSpeechClient from the SDK
VuzixSpeechClient sc = new VuzixSpeechClient(this);
// Associate the phrase "Scan Barcode" with generating the unique intent for the calling activity
final String barcodeIntentId = "ScanBarcodeId" + ActivityName;
sc.defineIntent(barcodeIntentId, new Intent(GetBarcodeIntentActionName(ActivityName)));
sc.insertIntentPhrase("Scan Barcode", barcodeIntentId);
} catch (NoClassDefFoundError e) {
Toast.makeText(this, "Cannot find Vuzix Speech SDK", Toast.LENGTH_LONG).show();
} catch (Exception e) {
Log.e(TAG, "Error setting custom vocabulary: " + e.getMessage());
}
}
With combinations of these techniques you can define custom vocabularies in the various onCreate() methods of your activities. These vocabularies generate unique intent actions for each phrase within each Activity. The system will ensure the correct phrases are delivered to the correct activities.
Listing all Vocabulary Names
Beginning with SDK v1.91 you can now list all vocabulary names. The list will be returned as a List<String>.
Java
List<String> storedVocabularyNames = sc.getStoredVocabularyNames();
Summary
By using the techniques described here, you can create an application that dynamically modifies the vocabulary within activities. You can also create multiple activities that use the same or different vocabularies, and each activity will get the correct speech commands.
The Vuzix App Store connects the digital world to the real world by providing real-time access to key alerts and information from your Vuzix Developer Account.
The Vuzix App Store offers many of the features and capabilities that developers have come to expect from other app stores.
- Listing and version management for all Vuzix products.
- Resource management for your listing.
- Private Apps, which let you distribute apps (for example, those still in development) only to the people you choose.
How to upload Apps to the Vuzix App Store
1. Log in using your developer account.
2. Click on Vuzix App Store from the main menu. You should be redirected to the Vuzix App Store and the main menu should be replaced with the app store menu.
3. Click on Developer Account from the top bar menu.
You will be redirected to the Apps page where you can submit a new app or a new app version, and also mark apps as online or offline.
4. To submit a new app, click the SUBMIT NEW APP button located on the top left side of the page.
5. Fill in the App information like:
a. App Name
b. Monetization: Free/Paid. Once you save the app you cannot change monetization.
c. Friendly URL: This is auto-generated based on the app name.
d. Description: Please provide a description of the app.
e. Categories: Select the category under which you want your app to show up in the Vuzix App Store. You’re allowed to select up to 8 different categories for an app.
f. Thumbnail Images: Recommended size is 96 x 96px.
g. Screenshots: You can also add app screenshots, which will appear on the app details page. The recommended image size for screenshots is 640 x 360 px for M300 and 480 x 480 px for Blade.
h. App Resources: You can attach any related documents (if applicable) to your app here.
i. You can also add details to the Installation Guide, Other App Information, and App Support Information fields which will appear on the app details page.
6. Click Submit App
After you have added all the information about your app on this page you can click Submit App.
By clicking on the Submit App button, you will be brought back to the Apps page where you can see the newly added app under the Offline/Pending list.
7. The next step is to add the app version. At this point your app is added to the Vuzix App Store and its status is set to online by default. Even though it is online, it will not show up on the Vuzix website until you add an app version and then set it as the current version. To add an app version:
a. Click on the App Versions link for the app you want to upload an APK for.
You should now be taken to the App Versions page where you can upload APK files.
You can add the APK file to M300, M100 and Blade separately. An app will only be available for the device you upload a version to (M100/M300/Blade).
b. Click on the ADD NEW VERSION button.
On this page:
External Billing:
i. Please select 'yes' if:
1. The app is free to download but requires users to pay a fee to use it beyond an initial trial period.
2. The app is free to download but requires users to sign up for a paid membership after an initial free trial period.
The External Billing option is only available for free apps. You will not see this option for paid apps.
Auto Update:
i. If Auto Update is selected the app will be automatically updated to the next version once it becomes available.
Notes: You can provide release notes about your app here.
Upload APK: Please see the APK rules below:
i. Make sure the APK is in release mode and the production signature key is included.
ii. Icon files must be included in the APK.
iii. We are checking the Package Name and Version Code combination here.
If you have uploaded APK #1 to App #1, you cannot add the same APK to another app.
iv. You can add an app with the same Package Name and Version Code to different devices within the same app. However, the APK with this same Package Name + Version Code cannot be added to the same device twice.
While uploading apps to the Vuzix App Store, if you think you have made a mistake and need to fix it before going live, consider deleting the old version. Please note that deletion is not recommended after the APK has already been approved.
v. To upload a new version of the app the Version Code must be incremented.
vi. We will also be reading the permissions from the Android Manifest.
vii. Once you upload an APK, the following information detected from the APK will be shown on screen:
1. Detected Package
2. Detected Version
3. Detected Version Code
c. Agree to the Content Policy, Publisher Distribution Agreement, Terms of Service, and Privacy Policy.
d. Click Submit
Please note that you cannot add prices to the app version if you selected “Free” monetization while adding your app.
If you selected One Time Download Fee for monetization, you should see a price field for adjusting prices on the Add Version screen.
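The Package Name + Version Code rules above amount to treating (package name, version code, device) as a unique key: the same pair may target different devices, but may not be uploaded to the same device twice. The sketch below illustrates that rule only; the UploadRule class and its method are hypothetical, not store code.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the store's uniqueness rule: the same
// Package Name + Version Code may target different devices, but cannot be
// uploaded to the same device twice.
final class UploadRule {
    private final Set<String> seen = new HashSet<>();

    boolean canUpload(String packageName, int versionCode, String device) {
        // Set.add() returns false if this exact combination was already seen
        return seen.add(packageName + "|" + versionCode + "|" + device);
    }
}
```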
8. After you submit your app, you must wait for a Vuzix Admin to review your application and approve it. Your app will remain in a Pending Approval state until this approval process is finished.
Once a Vuzix Admin approves or rejects your app you will get an email. If the app is rejected, you will receive details on the reasoning behind the rejection and what steps you must take for it to be approved.
9. If the admin approves it, then you can go ahead and take your app online by clicking the Take Online button.
10. After taking this version online, the Set as Current button will become enabled. Make sure to set this version as current even if it is your only version for this app. Making a version the current version allows your app to be shown in search results on the Vuzix App Store.
All done. You should now be able to see your app on the Vuzix App Store.
Take your app offline
You can take your app offline at any time by clicking the Take Offline button on the App Versions page. Doing so will make your app no longer appear on the Vuzix App Store.
What is a Private App?
Setting your app to private gives you the ability to maintain a list of users that can download your app. Only the users you have granted access to will be able to install your app on their devices.
Furthermore, no one else will be able to find it in the Vuzix App Store.
Submitting a Private App
Submitting a Private App is a straightforward process.
First, navigate to your Developer Account tab of the Vuzix App Store.
From your Apps page, select the "SUBMIT NEW APP" button.
On this screen, enter your app's name, select the planned monetization options, and check the box indicating you are submitting this application as a Private App.
The remaining fields are optional at this point.
However, keep in mind that the information provided on this page will help the users you grant access to your app get started using it on their devices.
Uploading the APK
To submit your APK, select the App Versions option from your private app listing.
On this screen, you can add new versions of your app as well as manage previously submitted versions.
To upload a new version, select the "ADD NEW VERSION" button.
Fill in the information on the Add Version screen as applicable to your application.
When finished, click the "SUBMIT" button to submit your application to the app approval process.
*Note: The app approval process will be completed within 48 hours of app submission. During this time you may be contacted for further information about your application if the app approval team deems it necessary.
Managing User Access
To manage who has access to your private app, select the Users option at the bottom of your app listing.
From the Private App Users page, you can grant access to your private app by selecting the "ADD USER" button and providing the user's email.
On this page you can also revoke access from any users you no longer wish to have access to your app.
After adding a new user to the access list, an auto-generated email will be sent to the address provided instructing them on how to access your app and install it on their devices.
The user can click the "Go to app page" button to be linked directly to your app details page and install the app onto their Vuzix Smart Glasses.
Using the Vuzix App Store subscription service
When listing an app on the Vuzix App Store, you have the option to take advantage of the Vuzix App Store’s billing system in order to collect one-time or recurring payments for use of the app.
To configure the app listing as a paid app, simply click “Paid” under the App Monetization Options in your app listing and select one of the Vuzix Billing options.
If you plan to set up a subscription plan for your app, a secret key will be generated and become available in the App Details view after submitting the app listing.
This key needs to be provided with every call to the Licensing API. Keep this key secret. If it is ever compromised, generate a new one using the "Generate New" button in the App Details view. (After generating, your app will need to be updated to use the new key.)
For information on the available classes for managing the subscription payment status, please refer to the Licensing Java Docs here: Vuzix Licensing Java Docs
To set up the subscription plan(s) to be associated with your app, navigate to the Subscription Plans page found under your app listing.
By clicking Add Subscription Plan you can configure the options (including price and renewal frequency) for each plan.
*Note: You may have multiple plans associated with one app. Each plan will be reviewed by the Vuzix team as part of the approval process.
Once the app has been submitted it will be processed by the Vuzix team to approve both the app and the subscription plan(s). After both are approved, all purchases will be processed through the Vuzix App Store and payments will be made out to you minus the app store fee of 30%.
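As a worked example of the fee arithmetic (illustrative only; actual payouts depend on the store's current terms), the developer receives 70% of gross sales after the 30% store fee:

```java
// Illustrative fee arithmetic, not a Vuzix API: the store retains a 30% fee,
// so the developer receives 70% of gross sales.
final class Payout {
    static double developerPayout(double grossSales) {
        return grossSales * 0.70;
    }
}
```

For example, a $10.00 subscription payment yields $7.00 to the developer.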