Connect IQ SDK

How To Test

Connect IQ has a few different methods for testing and debugging your apps:

  1. Basic debugging with println() statements
  2. Interactive debugging via Eclipse
  3. The Run No Evil unit test framework

Basic Debugging

One way to test Connect IQ apps is to include System.println() statements at strategic points in your app. Within Eclipse, these println() statements will output to the console. On a device, println() statements write to an <APPNAME>.TXT file in the /GARMIN/APPS/LOGS directory in the device file system.

These log files are not automatically created, so they must be created manually on the device and named to match the name of the app’s corresponding PRG file. For example, to log output from /GARMIN/APPS/MYAPP.PRG, you must create /GARMIN/APPS/LOGS/MYAPP.TXT.
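A minimal sketch of println() debugging at a strategic point in an app (the view class and message here are illustrative; the Toybox imports are the standard Connect IQ boilerplate):

```
using Toybox.System;
using Toybox.WatchUi;

class MyAppView extends WatchUi.View {
    function initialize() {
        View.initialize();
    }

    function onShow() {
        // In the simulator this prints to the Eclipse console; on a device it is
        // appended to /GARMIN/APPS/LOGS/<APPNAME>.TXT, if that file exists
        System.println("onShow called");
    }
}
```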

Debugging via Eclipse

The Connect IQ application run configurations can be used to run an app in debug mode in addition to run mode. Running an app in debug mode follows the same steps as running an app normally. Debugging is only supported while running an application on the Connect IQ simulator.

While actively debugging an app, only the File menu of the Simulator will be accessible. Once the app has resumed normal execution, the remaining menus will be reactivated.

Setting a Breakpoint

To set a breakpoint in the Connect IQ editor, right-click on the vertical ruler and select the “Toggle Breakpoint” option in the context menu.

Viewing Application Status

When your application suspends while debugging you will be prompted to open the native Debug perspective. From here you can view the stack trace within the native Debug view. Clicking on a stack frame within the stack trace will populate the native Variables view with the applicable variables at that stack frame. Global variables will only be visible in the top stack frame; they appear as the $ variable within that stack frame.

Run No Evil

For more advanced testing and debugging capabilities, Connect IQ has Run No Evil, an automated unit testing framework found in the Test module. Run No Evil provides the ability to add asserts and unit test methods to your app. The unit tests include a handy logger with different logging levels for more meaningful error reporting.

Run No Evil operates only within the Connect IQ simulator. Asserts will always execute when an app is launched in the simulator, while unit test code will not execute unless the compiler is explicitly told to run unit tests. All test code is automatically removed at compile time when your app is exported for use on devices.

Asserts

Asserts are a useful way to check for conditions at critical points in your code. For example, if your app always expects the value of x and y to not be equal:

function onShow() {
    var x = 1;
    var y = 1;
    // Prints an error to the console when x and y are equal
    Test.assertNotEqualMessage(x, y, "x and y are equal!");
}

The code above produces the following output in the console when the app is run in the Simulator:

Device Version 0.1.0
Device id 1 name "A garmin device"
Shell Version 0.1.0
ASSERTION FAILED: x and y are equal!

Asserts require no special compiler commands to execute within the simulator, and are removed by the compiler when building release code. Run No Evil has four different assert flavors:

//! Assert throws an exception if the test is false
//! @param [Boolean] test Expression to test for true.
function assert(test);

//! Assert throws an exception if the test is false
//! @param [Boolean] test Expression to test for true.
//! @param [String] message The identifying message for the assert.
function assertMessage(test, message);

//! Throws an exception if value1 and value2 are not equal. The objects
//! passed to this function must implement the equals() method.
//! @param [Object] value1 Value to test for equality
//! @param [Object] value2 Value to test for equality
function assertEqual(value1, value2);

//! Throws an exception if value1 and value2 are not equal. The objects
//! passed to this function must implement the equals() method.
//! @param [Object] value1 Value to test for equality
//! @param [Object] value2 Value to test for equality
//! @param [String] message The identifying message for the assert
function assertEqualMessage(value1, value2, message);

//! Throws an exception if value1 and value2 are equal. The objects
//! passed to this function must implement the equals() method.
//! @param [Object] value1 Value to test for equality
//! @param [Object] value2 Value to test for equality
function assertNotEqual(value1, value2);

//! Throws an exception if value1 and value2 are equal. The objects
//! passed to this function must implement the equals() method.
//! @param [Object] value1 Value to test for equality
//! @param [Object] value2 Value to test for equality
//! @param [String] message The identifying message for the assert
function assertNotEqualMessage(value1, value2, message);
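For example, the message variants can document an invariant at the point it is checked. A sketch (the clamping helper below is illustrative, not part of the SDK):

```
using Toybox.Test;

// Illustrative helper: clamp a heart rate sample to one byte
function clampHeartRate(bpm) {
    return (bpm > 255) ? 255 : bpm;
}

function checkClamp() {
    // assertMessage takes a Boolean test; assertEqualMessage compares two values.
    // Both throw (and print the message) on failure, and both are stripped
    // from release builds.
    Test.assertMessage(clampHeartRate(300) <= 255, "clamp exceeded 255");
    Test.assertEqualMessage(clampHeartRate(300), 255, "expected 300 to clamp to 255");
}
```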

Unit Tests

Unit tests are a great way to check discrete pieces of your app for pass/fail criteria. Each test is run independently, so if a test fails or causes a crash, the test will be marked as a failed test and the next test will automatically be executed. This allows an entire suite of tests to be run with a single command in an automated fashion.

Unit tests are written mostly like any other class, module, or function in Monkey C, but have the following requirements:

  1. Test methods must be marked with the :test annotation
  2. Test methods must take a Logger object
  3. Test methods that are not global (part of a test class or custom test module) must be static methods

Here is a simple example of a unit test method:

// Unit test to check if 2 + 2 == 4
(:test)
function myUnitTest(logger) {
    var x = 2 + 2;
    logger.debug("x = " + x);
    return (x == 4); // returning true indicates pass, false indicates failure
}
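Per requirement 3 above, a test method that lives inside a class must be static. A sketch under that rule (the class and method names are illustrative):

```
using Toybox.Test;

class MathUtils {
    static function square(x) {
        return x * x;
    }

    // Test methods inside a class must be static
    (:test)
    static function testSquare(logger) {
        logger.debug("square(3) = " + MathUtils.square(3));
        return (MathUtils.square(3) == 9);
    }
}
```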

The sample code above uses a “debug” logging level, but the Logger contains a total of three logging levels that can be used to distinguish between different types of errors in your unit test output:

//! Write a debug string to the output stream. The string is prefixed with [DEBUG] and a time stamp
//! @param [String] str Output string
function debug(str);

//! Write a warning string to the output stream. The string is prefixed with [WARNING] and a time stamp
//! @param [String] str Output string
function warning(str);

//! Write an error string to the output stream. The string is prefixed with [ERROR] and a time stamp
//! @param [String] str Error string
function error(str);
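A sketch of a test that uses all three levels to distinguish routine traces from notable conditions (the battery threshold values here are illustrative):

```
using Toybox.Test;

(:test)
function testBatteryThreshold(logger) {
    var level = 15; // illustrative sample value
    logger.debug("battery level = " + level);
    if (level < 20) {
        logger.warning("battery below 20%");
    }
    if (level < 5) {
        logger.error("battery critically low");
    }
    return (level >= 5); // pass as long as we are above the critical threshold
}
```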

Running Unit Tests From the Command Line

Unlike asserts, which always execute in the simulator, unit tests only run when the compiler is told to run them. Unit tests must be executed from the terminal and are not currently available in the Eclipse plug-in. Fortunately, it’s a straightforward, three-step process:

  1. Build the Project: Use the --unit-test flag on the build command to compile with unit tests. It’s usually easiest to copy and paste the build command from the Eclipse console and add the unit test flag.
  2. Launch the Simulator: You must use the connectiq script in your SDK’s bin directory to launch the simulator from the terminal (no arguments required).
  3. Run the App: Use the monkeydo script in your SDK’s bin directory with the -t flag to run the app with unit tests enabled:
monkeydo.bat path\to\projects\bin\MyApp.prg -t

You may also supply a function name after -t to run the test associated with a single function. The sample unit test above produces the following output in the console:

Device Version 0.1.0
Device id 1 name "A garmin device"
Shell Version 0.1.0
------------------------------------------------------------------------------
Executing test myUnitTest...
DEBUG (14:16): x = 4
Pass
==============================================================================
RESULTS
Test:                           Status:
myUnitTest                      Pass
Ran 1 test

PASSED (failures=0, errors=0)
Connection Finished
Closing shell and port

Running Unit Tests From Eclipse

The Eclipse plug-in provides a framework to run Connect IQ applications as Run No Evil tests.

Using Run No Evil Launch Configurations

Run No Evil tests use the same launch process as Connect IQ apps. You can create a new Run No Evil run configuration by opening the Run Configurations dialog and clicking the New launch configuration button in the upper left corner. A Run No Evil run configuration requires you to select the project to execute as a Run No Evil test, as well as the devices on which to execute the test. You can optionally specify specific test names to run; leaving this field blank will run all the tests within the selected project.

You can also create temporary, single-use run configurations. The first way to do this is by right-clicking on a project in the Project Explorer and selecting Run As > Run No Evil test. You can also right-click on a test result in the Run No Evil view and run that individual test by selecting Run as Run No Evil test.

After launching a Run No Evil test, a Run No Evil run console opens with the output from the active test as it runs. The console output colors can be configured in the Connect IQ Run No Evil preferences.

Viewing Test Results

When a test completes, the results will be displayed in the Run No Evil view. This view will automatically be added and opened within your perspective when you execute a Run No Evil test, but you can manually add the Run No Evil view via the Window > Show View menu.

The Run No Evil view displays the number of tests that ran, passed, failed, and errored out at the top of the view. The middle section of the view shows the individual test cases, which can be expanded to see the result for each device the test was executed on. The bottom portion of the view shows the output from a device run of a test case when you click on it in the middle section.

Handling Crashes

Despite the best debugging efforts, crashes will sometimes happen. There are two general types of on-device crashes related to Connect IQ, each of which generates a log file: app crashes and device crashes.

App Crashes

App crashes typically result in an app quitting unexpectedly or displaying an ‘IQ!’ icon, but do not cause the entire device to crash or reboot. This kind of crash is most commonly due to a bug in an app, though it can also be due to a bug in Connect IQ itself. Whenever an app crash occurs, a CIQ_LOG.YAML file is written to or updated in /GARMIN/APPS/LOGS on the device; it contains information related to the crash that app developers may use to address the problem. Here is what a CIQ_LOG generally looks like:

Error: ErrorName
Details: Error description.
Time: 2018-02-07T19:07:56Z
Store-Id: 00000000-0000-0000-0000-000000000000
Store-Version: 0
Device-Id: 006-B0000-00
Device-Version: '0.00'
ConnectIQ-Version: 3.0.0
Filename: PRGNAME
Appname: DemoAppName
Stack:
  - pc: 0x100000ef
    File: 'C:\Path\To\source\File.mc'
    Line: 53
    Function: function_causing_error
  - pc: 0x10000080
    File: 'C:\Path\To\source\File2.mc'
    Line: 30
    Function: otherBrokenItems

The ConnectIQ-Version entry is not the Connect IQ version of the device. Rather, this refers to the SDK version used by the developer when exporting the application.

Note: For devices running versions older than Connect IQ 3.0, a simplified error log will be printed as CIQ_LOG.TXT.

Device Crashes

Device crashes typically cause the device to reboot or freeze. These indicate a Connect IQ or device firmware bug, and should be much less common than app crashes. When a device crash occurs, an ERR_LOG.txt file is written to /GARMIN on the device, containing stack trace information related to the crash. Please provide this file when reporting a crash on our developer forum. Garmin’s device teams can take a look at the device crash logs to determine the cause of the crash and will typically provide a fix in a future firmware release.

A Note About Log Files

When any log file on a device exceeds 5 KB in size, it will automatically be archived to <LOGNAME>.BAK, and a new log will be started. Any old .BAK file is overwritten when the archive occurs, so the maximum space a log can occupy is around 10 KB.