Unit Testing Framework

Overview

After the code has been compiled to JavaScript (for TypeScript project types) and before it is packaged into an Orchestrator .package, a test bed is created. An Orchestrator Runtime is inserted so that some modules can be used natively (Workflow, Properties, LockingSystem, Server, System, and so on). Once the test bed is ready, Jasmine is run either through IstanbulJS or directly, depending on whether code coverage is enabled.

Jasmine 4.0.2 is integrated into Build Tools for VMware Aria. The testing framework is supported by three main components:

  • vro-scripting-api: Simulates Aria Automation Orchestrator inside the NodeJS environment.
  • vro-types: Provides type information and intellisense in VS Code for TypeScript projects.
  • vrotest: Orchestrates and executes the unit tests.

Note

The unit test framework is configurable. Jasmine is used by default, and a default configuration is also available for Jest.


Folder Structure & Naming Conventions

The location and naming of your test files depend on your project type:

  • TypeScript Projects: Unit tests can be placed in any folder or subfolder relative to the /src/ directory. The file name must end with *.test.ts. The tests are executed, but the test code is not added to the target package.
  • JavaScript (Actions) Projects: All unit test files must be placed under the /src/test/resources/ directory (or subfolders). File names must end with *Test.js or *Tests.js.
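
As an illustration (all file and folder names below are hypothetical), a TypeScript project might be laid out like this:

```
src/
├── helloWorld.ts            # packaged action
├── helloWorld.test.ts       # executed as a test, not packaged
└── tests/
    └── helpers/
        └── userData.helper.ts   # usable in tests, no coverage, not packaged
```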

Executing Unit Tests

Unit tests are automatically built and run by Maven as part of the regular packaging phase.

  • Run all tests and package: mvn clean package
  • Run tests explicitly: mvn clean test
  • Skip tests: Append the -DskipTests flag to your Maven command.

Logs & Reporting:

  • Execution logs (including output from System.log/warn/error) can be found in target/vro-tests/logs.
  • During the build process, test results and coverage data are automatically reported to Bamboo and made available to SonarQube for further analysis.


Limitations & Specific Behaviors

  • Workflows Cannot Be Tested: The only file types that can be tested are Actions. Workflows, configuration elements, and resource elements are not supported. Keep your Workflows as minimal as possible and abstract logic into testable Actions.
  • Shared JavaScript Context: All unit tests within a project are executed in the same Javascript context. It is critical to use beforeAll and afterAll to prepare and clean up your environment. Leftover state from one test affects the execution of subsequent tests.
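
A minimal plain-JavaScript sketch (no Jasmine involved; all names are illustrative) of how state leaks between specs that share one context:

```javascript
// All suites run in a single JavaScript context, so module-level state
// mutated by one spec is still visible to the next one.
var cache = {}; // shared state, like a module-level variable

function specA() {
  cache.user = "alice"; // specA pollutes the shared context
  return cache.user === "alice";
}

function specB() {
  // specB expects a clean context, but sees specA's leftovers
  return Object.keys(cache).length === 0;
}

var resultA = specA(); // passes
var resultB = specB(); // fails: leaked state changed the outcome

// The cure in Jasmine: reset shared state in beforeAll/afterAll
// (or beforeEach/afterEach) rather than relying on a fresh context.
function afterAllCleanup() {
  cache = {};
}
afterAllCleanup();
```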

Code Coverage

Enabling Code Coverage

Start by adding the testing profile to your ~/.m2/settings.xml (makes the configuration applicable to all projects) or project-specific pom.xml.

<profile>
    <id>vro-testing</id>
    <properties>
        <test.coverage.enabled>true</test.coverage.enabled>
        <test.coverage.reports>text,html,clover,cobertura,lcovonly</test.coverage.reports>
        <test.coverage.thresholds.error>85</test.coverage.thresholds.error>
    </properties>
</profile>

Activate the profile by adding it to <activeProfiles>:

<activeProfiles>
    <activeProfile>vro-testing</activeProfile>
</activeProfiles>

Alternatively, skip the profile and set the same properties directly in your project's pom.xml:

<properties>
    <test.coverage.enabled>true</test.coverage.enabled>
    <test.coverage.reports>text,html,clover,cobertura,lcovonly</test.coverage.reports>
    <test.coverage.thresholds.error>85</test.coverage.thresholds.error>
</properties>

Output files are generated in <PROJECT_DIR>/target/vro-tests/coverage/.

Reporters

The toolchain supports many different code coverage reporters. Internally we use a tool called IstanbulJS, so the supported reporters and their documentation can be found here: Using Alternative Reporters.

After enabling a reporter and running mvn clean test, you can see the output files in: <PROJECT_DIR>/target/vro-tests/coverage/

Setting Thresholds

Using <test.coverage.thresholds.error> creates hard limits for code coverage in your local builds and CI/CD pipelines. If the specified percentage is not met, the tests are considered failed and the build fails. This makes for an effective quality gate. Start with a lower threshold for older projects and a higher one for new projects; a reasonable error threshold is around 60-70, with a warning threshold in the 80s.

Individual overrides can also be set for branches, lines, functions, and statements.

Per-file Configuration

Code coverage can also be enforced on a per-file basis. When custom coverage thresholds are set, the build fails if any single file in the project drops below them. Enable this by setting <test.coverage.perfile>true</test.coverage.perfile> in your ~/.m2/settings.xml testing profile. Refer to the IstanbulJS documentation for more information: https://github.com/istanbuljs/nyc.

<profile>
    <id>pscoe-testing</id>
    <properties>
        <test.coverage.enabled>true</test.coverage.enabled>
        <test.coverage.reports>text,html,clover,cobertura,lcovonly</test.coverage.reports>

        <test.coverage.thresholds.error>70</test.coverage.thresholds.error>
        <test.coverage.thresholds.warn>80</test.coverage.thresholds.warn>

        <test.coverage.thresholds.branches.error>60</test.coverage.thresholds.branches.error>
        <test.coverage.thresholds.branches.warn>70</test.coverage.thresholds.branches.warn>
        <test.coverage.thresholds.lines.error>60</test.coverage.thresholds.lines.error>
        <test.coverage.thresholds.lines.warn>70</test.coverage.thresholds.lines.warn>
        <test.coverage.thresholds.functions.error>60</test.coverage.thresholds.functions.error>
        <test.coverage.thresholds.functions.warn>70</test.coverage.thresholds.functions.warn>
        <test.coverage.thresholds.statements.error>60</test.coverage.thresholds.statements.error>
        <test.coverage.thresholds.statements.warn>70</test.coverage.thresholds.statements.warn>
        <test.coverage.perfile>true</test.coverage.perfile>
    </properties>
</profile>

The example profile above lists all available code coverage configuration properties.

File Exclusion

Files can be excluded from code coverage by naming them with the pattern *.helper.[tj]s. Custom patterns can also be defined via the .vroignore file. For more details refer to the vroIgnoreFile section.


Test Helpers

Helpers are testing files that are compiled and can be used in your testing setup, but they do not generate code coverage and are not pushed to Orchestrator. Mocks and repetitive test data setups are typically defined in these helper files.

  • Naming Convention: filename.helper.ts, filename.helper.js, or filename_helper.js.
  • Location: Helper files can be located in any folder under src/; the recommended place is src/tests/helpers.
  • Usage: During testing, you can use these files by importing them normally (e.g., import testHelper from "./testHelper.helper";).
  • Exclusions: You can also define custom patterns to ignore these files via the .vroignore file. For more details refer to the vroIgnoreFile section.
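
A sketch of a hypothetical helper (the file name and fields are made up) that centralizes repetitive test-data setup:

```javascript
// userData_helper.js -- follows the *_helper.js naming convention, so it is
// compiled for tests but generates no coverage and is not pushed to Orchestrator.
function makeTestUser(overrides) {
  // Default fixture with optional per-test overrides
  return Object.assign({ id: 1, name: "test-user" }, overrides || {});
}

// In a test file you would import it normally, e.g.:
//   var helpers = require("./userData_helper");
var user = makeTestUser({ name: "alice" });
```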

Jasmine Constructs

Overview

Jasmine tests are written as normal Orchestrator actions: a test can call any other Orchestrator action and uses the out-of-the-box global Jasmine functions to define test cases and evaluate conditions.

The example below shows how to create a unit test for a simple "Hello World" library defining the following action:

/**
 * Hello World
 * Module: "com.vmware.pscoe.library.helloworld"
 * 
 */
(function() {
    return "Hello World"; 
}) 

The unit test can be defined as follows:

describe("Hello World", function() {
    it("should return Hello World", function() {
        var result = System.getModule("com.vmware.pscoe.library.helloworld").helloworld();
        expect(result).toEqual("Hello World");
    });
});

Suites and Specs

The describe() function groups related specs into a suite. It is common practice for each test file to have a single describe() at the top level. The string parameter names the collection of specs and is concatenated with each spec's title to form the spec's full name, which helps when searching a large suite. If named well, specs read as full sentences in traditional BDD style.

Specs are defined by calling the global Jasmine function it(), which, like describe(), takes a string and a function as inputs. The string is the title of the spec and the function is the spec, or test, itself. A spec contains one or more expectations that test the state of the code. An expectation in Jasmine is an assertion that is either true or false. A spec with all true expectations is a passing spec; a spec with one or more false expectations is a failing spec.

describe("A suite", function() {
  it("contains spec with an expectation", function() {
    expect(true).toBe(true);
  });
});

Since describe() and it() blocks are functions, they can contain any executable code necessary to implement the test. JavaScript scoping rules apply, so variables declared in a describe are available to any it block inside the suite.

describe("A suite is just a function", function() {
  var a;

  it("and so is a spec", function() {
    a = true;

    expect(a).toBe(true);
  });
});

Expectations and Matchers

Expectations are built with the expect() function which takes a value, called the actual. It is chained with a Matcher function, which takes the expected value.

Each matcher implements a boolean comparison between the actual value and the expected value and reports to Jasmine whether the expectation is true or false; based on that, Jasmine passes or fails the spec. Any matcher can be turned into a negative assertion by chaining not before the matcher call. Jasmine includes a rich set of matchers; the full list can be found in the API docs.

describe("The 'toBe' matcher compares with ===", function() {

  it("and has a positive case", function() {
    expect(true).toBe(true);
  });

  it("and can have a negative case", function() {
    expect(false).not.toBe(true);
  });

});

describe('Array.prototype', function() {
  describe('.push(x)', function() {
    beforeEach(function() {
      this.initialArray = [];
    });

    it('appends x to the Array', function() {
      this.initialArray.push(1);
      expect(this.initialArray).toContain(1);
    });
  });
});

Setup and Teardown

Use beforeEach(), afterEach(), beforeAll(), and afterAll() to keep your test suite DRY. As the names imply, beforeEach() is called once before each spec in the describe() block in which it is declared, and afterEach() is called once after each spec. beforeAll() is called only once, before any of the specs in the describe() block run, and afterAll() is called after all specs finish. beforeAll() and afterAll() can be used to speed up test suites with expensive setup and teardown.

Note

beforeAll() and afterAll() are not reset between specs. It is easy to accidentally leak state between specs so that they erroneously pass or fail.

describe("A suite with some shared setup", function() {
  var foo = 0;

  beforeEach(function() {
    foo += 1;
  });

  afterEach(function() {
    foo = 0;
  });

  beforeAll(function() {
    foo = 1;
  });

  afterAll(function() {
    foo = 0;
  });

});

You can use the this keyword to safely share variables between the beforeEach(), it(), and afterEach() blocks of a single spec. Each spec's beforeEach()/it()/afterEach() receives the same object as this, and this is reset to a new empty object before the next spec runs.

describe("A spec", function() {
  beforeEach(function() {
    this.foo = 0;
  });

  it("can use the `this` to share state", function() {
    expect(this.foo).toEqual(0);
    this.bar = "test pollution?";
  });

  it("prevents test pollution by having an empty `this` created for the next spec", function() {
    expect(this.foo).toEqual(0);
    expect(this.bar).toBe(undefined);
  });
});

Failing a spec

The fail() function causes a spec to fail. It can take a failure message or an Error object as a parameter.

describe("A spec using the fail function", function() {
  var foo = function(x, callBack) {
    if (x) {
      callBack();
    }
  };

  it("should not call the callBack", function() {
    foo(false, function() {
      fail("Callback has been called");
    });
  });
});

Nested suites

describe() functions can be nested with specs defined at any level. This allows a suite to be composed as a tree of functions. Before a spec is executed, Jasmine walks down the tree executing each beforeEach() function in order. After the spec is executed, Jasmine walks through the afterEach() functions similarly.

describe("A spec", function() {
  var foo;

  beforeEach(function() {
    foo = 0;
    foo += 1;
  });

  afterEach(function() {
    foo = 0;
  });

  it("is just a function, so it can contain any code", function() {
    expect(foo).toEqual(1);
  });

  it("can have more than one expectation", function() {
    expect(foo).toEqual(1);
    expect(true).toEqual(true);
  });

  describe("nested inside a second describe", function() {
    var bar;

    beforeEach(function() {
      bar = 1;
    });

    it("can reference both scopes as needed", function() {
      expect(foo).toEqual(bar);
    });
  });
});

Jasmine Spies (Mocking)

A Spy simulates the behavior of existing code (like DB calls, Web Services, or external systems) and tracks calls made to it. Spies only exist in the describe or it block in which they are defined and are removed after each spec.

createSpy()

Creates a "bare" spy when there is no existing function to mock. It tracks calls and arguments but has no implementation behind it.

var readFromDB = jasmine.createSpy('readFromDB');
readFromDB('some', 'fake', 'data'); 
expect(readFromDB).toHaveBeenCalledWith("some", "fake", "data");

createSpyObj()

Creates a mock object that spies on multiple methods at once. Pass the name of the object and an array of method string names.

describe("ApiCall", () => {
  it('should do an api call', function () {
    const restHostTestDouble = jasmine.createSpyObj<RESTHost>("RESTHost", ["createRequest"]);

    const restRequestTestDouble = jasmine.createSpyObj<RESTRequest>("RESTRequest", ["execute"]);

        // Properties mock
    const restResponseTestDouble = jasmine.createSpyObj<RESTResponse>("RESTResponse", [], {contentAsString: JSON.stringify({test: 2})});

    restHostTestDouble.createRequest.and.returnValue(restRequestTestDouble);
    restRequestTestDouble.execute.and.returnValue(restResponseTestDouble);

    const restHostExample = new RestHostExample(restHostTestDouble);

    const response = restHostExample.doApiCall();

    expect(response.test).toBe(2);
    expect(restHostTestDouble.createRequest).toHaveBeenCalledTimes(1);
    expect(restHostTestDouble.createRequest).toHaveBeenCalledWith("GET", "/api/v1/test", "");
    expect(restRequestTestDouble.execute).toHaveBeenCalledTimes(1);
  });
})

Useful Hints & FAQ

  • Where can I find execution logs? The output from System.log, System.warn, System.error, and the logger is available in the target/vro-tests/logs folder.

  • Can I test workflows? Testing workflows is currently not supported. As a general rule of thumb, keep your Workflows as minimal as possible. Abstract the logic away from the Workflows and put it in an Action that is easily testable.

  • What if VS Code is not showing intellisense for Jasmine keywords? Execute this command in the VS Code terminal to install the necessary types:

    npm i @types/jasmine
    

  • How do I identify if code is running in the NodeJS test environment vs. Orchestrator? If you need to simulate different behavior specific to NodeJS, you can check the environment. The following statement evaluates to false when running in Orchestrator:

    if (typeof module !== "undefined" && module.exports) {
        // do the coding
    }
    

  • Can I use Jasmine Helpers? Native Jasmine helpers are not supported; however, the toolchain itself injects the Orchestrator Runtime by means of a helper.