Authoring SF CLI Plugins
Posted: March 17, 2024

Table of Contents:

  • Getting Started

    • Project Creation Basics
    • Installing the Minimum Number of Dependencies
    • Adding TypeScript & Mocha-Specific Configs
    • How SF Plugins Use Directories to Map Commands
    • Adding Scripts To package.json
  • Test Driven SF CLI Development

    • Writing Our First Failing Test
    • Using Tests To Model Dependencies
    • A Note On Creating Types
    • Shifting To Pure Unit Tests
    • Writing A Second Test
    • Implementing the MVP Functionality
  • Continuing To Build Functionality

    • Adding Support For Minute-Based Trace Durations
    • Adding Support For Setting A Max Of 24 Hours Tracing
    • Setting Traces For Another User
  • Publish / Installation

  • Wrapping Up

I’ve seen a few external guides to authoring SF CLI plugins come and go, but all of the ones I’ve seen tend to skip over crucial pieces of setup, or to over-install dependencies to get around knowledge gaps. That is, in my opinion, a shame, primarily because authoring SF plugins gives us all the chance to work with TypeScript, which is both a wonderfully expressive language and one that isn’t always easy to set up. With that in mind, I want to take you through every single command necessary to start authoring SF plugins, with an eye towards ‘minimum viable product’ delivery. In other words: no frills, just the details you need and an explanation as to why each step is necessary.

For this example, I’ll be running through a TDD-based approach that augments sf apex to create a trace command that creates/maintains debug trace flags for users. While sf apex tail log provides some of this functionality at the moment, it’s primarily hidden behind the creation of the streaming client for receiving new logs, and exposing this functionality in a more ergonomic fashion seems like a nice quality of life adjustment.

Getting Started

Project Creation Basics

Every new project should start with the same commands:

  • git init
  • creating a .gitignore file (with touch, your editor, or whatever you prefer)
  • the addition of the following to .gitignore:
node_modules

Then, you can either create your basic package.json or use npm init to do so:

{
  "author": "James Simone <16430727+jamessimone@users.noreply.github.com>",
  "description": "A plugin that allows you to update trace flags in a target salesforce org",
  "name": "sf-trace-plugin",
  "version": "1.0.0"
}

Installing the Minimum Number of Dependencies

Let’s move on to what might qualify as MVP for dependencies by running:

npm install @types/chai @types/mocha chai mocha ts-node typescript --save-dev
npm install @salesforce/core @salesforce/sf-plugins-core

That’ll update your package.json file to look like the following:

{
  "author": "Your Name <your@email.com>",
  "description": "A plugin that allows you to update trace flags in a target salesforce org",
+ "dependencies": {
+   "@salesforce/core": "6.4.7",
+   "@salesforce/sf-plugins-core": "7.1.3"
+ },
+ "devDependencies": {
+   "@types/chai": "4.3.12",
+   "@types/mocha": "10.0.6",
+   "chai": "5.1.0",
+   "mocha": "10.3.0",
+   "ts-node": "10.9.2",
+   "typescript": "5.3.3"
+ },
  "name": "sf-trace-plugin",
+ "oclif": {
+   "commands": "./lib/commands",
+   "bin": "sf",
+   "topicSeparator": " ",
+   "devPlugins": [
+     "@oclif/plugin-help"
+   ],
+   "topics": {
+     "trace": {
+       "description": "Starts a TraceFlag"
+     }
+   },
+   "flexibleTaxonomy": true
+  },
+  "version": "1.0.0"
}

Note that the oclif node is required in order for the plugin to be built/compiled correctly and recognized as an SF CLI command.

When I initially started writing this post, I was considering using the built-in Node test runner (which would have avoided a few of those dev dependencies), but found the console output so horrid that I caved and started using Mocha. Mocha is also the test runner used by internal SF teams for all of the existing SF CLI plugins, and because embracing learning is the easiest way to uplevel your skills as an engineer, I’m going to show in this post what using Mocha would look like. Jest is also a fine choice.

Having TypeScript itself listed as a dev dependency might seem strange, given that it must be present to build our project, but it’s not required at runtime: by that point our TypeScript files will have been transpiled to JavaScript. For that reason, it’s kept with the dev dependencies.

Adding TypeScript & Mocha-Specific Configs

We’re pretty close to being able to write a failing test, but since we’ll be writing our plugin in TypeScript, we need to create a tsconfig.json file in our project root. These are the defaults recommended by Salesforce’s own plugin-dev CLI repo:

{
  "compilerOptions": {
    "alwaysStrict": true,
    "declaration": true,
    "esModuleInterop": true,
    "lib": ["ES2022"],
    "module": "Node16",
    "moduleResolution": "Node16",
    "noUnusedLocals": true,
    "outDir": "lib",
    "rootDir": "core",
    "skipLibCheck": true,
    "sourceMap": true,
    "target": "ES2022"
  },
  "include": ["./core/**/*"],
  "exclude": ["node_modules/**/*"],
  "ts-node": {
    "esm": true
  }
}

That include section reflects my own preference for keeping project files within a directory named “core”. skipLibCheck also stops TS from type-checking your dependencies’ declaration files while building, which avoids warnings caused by third-party types.

Let’s add some of the recommended defaults for Mocha to a .mocharc.json file in our project root:

{
  "require": "ts-node/register",
  "watch-extensions": "ts",
  "timeout": 600000,
  "node-option": [
    "experimental-specifier-resolution=node",
    "loader=ts-node/esm",
    "no-warnings"
  ],
  "spec": "test/*.test.ts"
}

Similar to the include property in our tsconfig.json file, the spec property here tells Mocha where to scan. To keep things simple, I’m not going to have the test folder directory exactly mirror how our CLI command file will end up nestled into the core directory — if you prefer your directories to exactly match, by all means do so, but you’ll also have to set the recursive property to true in your .mocharc.json file.
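For instance, if you did mirror the command directory structure under test (say, with the test file at test/commands/apex/trace.test.ts), the config would look something like this sketch; the glob and recursive flag are the pieces that change:

```json
{
  "require": "ts-node/register",
  "watch-extensions": "ts",
  "timeout": 600000,
  "node-option": [
    "experimental-specifier-resolution=node",
    "loader=ts-node/esm",
    "no-warnings"
  ],
  "recursive": true,
  "spec": "test/**/*.test.ts"
}
```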

How SF Plugins Use Directories to Map Commands

With the CLI updates to the sf command came a new way of structuring commands. While I’m strongly opposed to project directories implying a semantic connection to tooling, I can’t argue with one of the upsides of this change, as it allows you to run commands by listing things out of order: sf run apex is synonymous with sf apex run, for example. That’s pretty cool.

With that being said, this is what your directory structure should look like:

core
│
└───commands
│  │ ...
│  └─── apex
│     │
│     │ trace.ts
│
└───test
   │
   │ trace-plugin.test.ts

Adding Scripts To package.json

Let’s add a few common scripts to our package.json — this is the last step before we can write our first failing test!

+"scripts": {
+  "build": "tsc -p . --pretty && git add .",
+  "link": "npm run build && npm run test && sf plugins link .",
+  "test": "mocha -c --full-trace"
+},

Test Driven SF CLI Development

We’re finally ready to start using TDD to author our plugin!

In order to get a failing test going, let’s scaffold out the bare minimum within trace.ts:

import { SfCommand } from "@salesforce/sf-plugins-core";

export default class Trace extends SfCommand<void> {
  public async run(): Promise<void> {
    throw new Error("Method not implemented.");
  }
}

Writing Our First Failing Test

And then in trace-plugin.test.ts, we can finally write our first failing test!

import { expect } from "chai";

import Trace from "../core/commands/apex/trace.js";

describe("trace plugin", () => {
  it("throws an exception when target-org is not provided & no default org is set", async () => {
    let thrownError: Error;
    try {
      await Trace.run([]);
    } catch (ex: unknown) {
      thrownError = ex as Error;
    }

    expect(thrownError?.message).to.equal(
      "No default environment found. Use -o or --target-org to specify an environment.",
    );
  });
});

The message property that we’re testing here correlates exactly to the error message sent to stderr by the CLI.

The test can be run by invoking npm run test within a terminal, which leads to:

1) trace plugin - throws an exception when target-org is not provided & no default org is set:
- AssertionError: expected 'Method not implemented.' to equal 'No default environment found. Use -o …'
+ expected
- actual

- Method not implemented.
+ No default environment found. Use -o or --target-org to specify an environment.

The SF CLI plugin development docs have a whole section about command flags — in other words, how to pass arguments to our command. Let’s import the Flags object exported by the CLI and inspect it:

//         👇 inspect this
import { Flags, SfCommand } from "@salesforce/sf-plugins-core";
// ...
requiredOrg: import("@oclif/core/lib/interfaces/parser.js").FlagDefinition<
  import("@salesforce/core").Org,
  import("@oclif/core/lib/interfaces/parser.js").CustomOptions,
  {
    multiple: false;
    requiredOrDefaulted: true;
  }
>;
// ...

That requiredOrg property looks pretty promising. Within classes that extend SfCommand, arguments are defined as follows:

import { Flags, SfCommand } from "@salesforce/sf-plugins-core";

export default class Trace extends SfCommand<void> {
  public static readonly flags = {
    "target-org": Flags.requiredOrg({
      char: "o",
      description: "The org where the trace will be set",
      required: false,
      summary: "The org where the trace will be set",
    }),
  };

  public async run(): Promise<void> {
    throw new Error("Method not implemented.");
  }
}

The nomenclature here is a bit odd; Flags has both requiredOrg and optionalOrg — the key difference between the two of them is that requiredOrg can fall back on the default org that’s been set in your sf config prior to erroring out. That’s why we set the required property to false within the flag. However, there’s no sf config within our test, which is what we want.
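Conceptually, that fallback behavior looks something like the following sketch. To be clear, this is not the actual sf-plugins-core implementation; the function and type names here are purely illustrative:

```typescript
// Hypothetical sketch of requiredOrg's resolution order.
type SfConfig = { defaultOrg?: string };

function resolveOrg(flagValue: string | undefined, config: SfConfig): string {
  // 1. an explicitly passed --target-org value wins
  if (flagValue !== undefined) {
    return flagValue;
  }
  // 2. otherwise fall back to the default org from `sf config`
  if (config.defaultOrg !== undefined) {
    return config.defaultOrg;
  }
  // 3. only error out when neither is available
  throw new Error(
    "No default environment found. Use -o or --target-org to specify an environment."
  );
}

console.log(resolveOrg("myOrgAlias", {})); // explicit flag wins
console.log(resolveOrg(undefined, { defaultOrg: "myDefault" })); // config fallback
```

Step 3 is exactly the path our test exercises: no flag, no config, so the error is thrown.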

However, the test still isn’t passing. For that, we’ll need to modify the run method as well:

export default class Trace extends SfCommand<void> {
  public static readonly flags = {
    "target-org": Flags.requiredOrg({
      char: "o",
      description: "The org where the trace will be set",
      required: false,
      summary: "The org where the trace will be set",
    }),
  };

  public async run(): Promise<void> {
    // Flags are only validated when this.parse is called
    await this.parse(Trace);
    throw new Error("Method not implemented.");
  }
}

The usage of this.parse here is the first interesting thing that you’ll likely encounter when authoring a plugin, particularly one that requires a Salesforce org connection in order to proceed. It’s a protected method that Trace inherits from SfCommand, but that means it’s also a strict dependency for us in order to resolve a connected org. SfCommand also exposes this.argv (the array of arguments passed to the given command) in the event that you need to do pre-parse validation on arguments, but that duplicates work this.parse already does. Despite that, I mention argv for those curious, as that’s the standard way CLI tools interact with passed arguments.

To further the example, this is what argv would look like if I called our plugin on the command line:

sf apex trace --target-org myOrgAlias --json

In that example, argv would look like this: ["--target-org", "myOrgAlias", "--json"]. Be glad you don’t have to take care of matching arguments to their respective values!
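If you did have to do that matching yourself, it would look something like this minimal sketch; this is roughly the flag-to-value pairing that this.parse handles for us (on top of validation and type coercion, which the sketch skips entirely):

```typescript
// Hypothetical sketch: pair "--flag value" arguments, treating a flag
// followed by another flag (or nothing) as a boolean switch like --json.
function pairArgs(argv: string[]): Record<string, string | boolean> {
  const parsed: Record<string, string | boolean> = {};
  for (let index = 0; index < argv.length; index++) {
    const current = argv[index];
    if (!current.startsWith("--")) continue;
    const next = argv[index + 1];
    if (next === undefined || next.startsWith("--")) {
      // no value follows: it's a boolean switch
      parsed[current.slice(2)] = true;
    } else {
      parsed[current.slice(2)] = next;
      index++; // skip the consumed value
    }
  }
  return parsed;
}

// pairArgs(["--target-org", "myOrgAlias", "--json"])
// → { "target-org": "myOrgAlias", json: true }
```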

Using Tests To Model Dependencies

Running the test again yields the following:

Error (1): No default environment found. Use -o or --target-org to specify an environment.

  ✔ throws an exception when target-org is not provided & no default org is set (76ms)

1 passing (82ms)

Our test passes 🎉! In keeping with the “red, green, refactor” TDD mantra, now’s the time to start thinking about if there’s anything we can do to clean things up. But the implementation is about as simple as can be, and it’s trivial to write another test to get back to red:

it("does not throw an exception when target-org is specified", async () => {
  await Trace.run(["--target-org", "myOrgAlias"]);
});

Note that this isn’t really a meaningful test as of yet, and technically we could write a much better test for additional parameters (like trace duration, the Debug Level to use/create, etc.), but I’m skipping ahead a bit because getting our first test to pass using this.parse (and its continued usage as a dependency, as I mentioned above) now requires us to think about how we’d like our test setup to work.

Running our tests again leads to the following (I’ve trimmed some of the output associated with our first test):

1 passing (119ms)
1 failing

1) trace plugin - does not throw an exception when target-org is specified:

NamedOrgNotFoundError: Parsing --target-org
No authorization information found for myOrgAlias.
at Messages.createError (~\Code\sf-trace-plugin\node_modules\@salesforce\core\lib\messages.js:408:16)
  at AuthInfo.init (~\Code\sf-trace-plugin\node_modules\@salesforce\core\lib\org\authInfo.js:602:28)
  at async Function.create (~\Code\sf-trace-plugin\node_modules\@salesforce\kit\lib\creatable.js:57:9)
  at async Org.init (~\Code\sf-trace-plugin\node_modules\@salesforce\core\lib\org\org.js:784:27)
  (long stacktrace here partially elided; that's what using the --full-trace flag in our test script command allows for)

So. We’ve gone from unit testing to integration testing in just a few lines of code 😅! This is awkward. I personally don’t want to actually be authenticated to a Salesforce org in order to write tests for this plugin. There are some examples of how to mock connections returned from the use of Flags: they use sinon as a dependency, along with some imports from within @salesforce/core. They’re not very straightforward. They’re also incredibly verbose.

At the moment, that verbosity strikes me as exactly the sort of thing we should be avoiding. I’ve talked a bit about “The Art of Unit Testing” by Roy Osherove previously in [Apex Object-Oriented Basics](/blog/joys-of-apex/apex-object-oriented-basics/) and in Replacing DLRS With Custom Rollup; while it’s written primarily with C# developers in mind, the principles it discusses are universally applicable when it comes to modeling dependencies, and in particular it talks about the three most common ways to inject dependencies into your code:

  • receiving an interface/subclass from a property get/set
  • via the constructor
  • via configuration

The third option isn’t really applicable here, which leaves us with options 1 & 2. I don’t really have a strong preference at the moment, but for now let’s go with option 1 (which has the added advantage of being the simplest, at least for now). Another reason to go with a property as opposed to constructor-based DI (which is typically my preferred route) is due to a vagary specific to JavaScript: classes in JavaScript are really just an abstraction over prototypal inheritance (in the same way that async/await in JavaScript is really just an abstraction over Promises), and as such they suffer from the limitation that only one constructor can be defined for a class. Our command class extends SfCommand already, and that class has a constructor pre-defined; this makes adding an “additional” constructor awkward.

That leaves us with the property-based means for dependency injection, which is just fine.
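Stripped of all the SF specifics, the shape of that pattern looks like this sketch (all names here are illustrative, not part of the plugin):

```typescript
// Property-based dependency injection in miniature: a static property
// is the injection point, lazily filled with the real dependency.
interface Greeter {
  greet(name: string): string;
}

class RealGreeter implements Greeter {
  greet(name: string) {
    return `Hello, ${name}`;
  }
}

class GreetCommand {
  // production code never sets this, so the real dependency
  // is constructed on first run; tests assign a fake beforehand
  public static greeter: Greeter;

  public run(name: string): string {
    if (!GreetCommand.greeter) {
      GreetCommand.greeter = new RealGreeter();
    }
    return GreetCommand.greeter.greet(name);
  }
}

// a test swaps in a fake before calling run:
class FakeGreeter implements Greeter {
  greet() {
    return "fake!";
  }
}
GreetCommand.greeter = new FakeGreeter();
```

No constructor gymnastics required, which is exactly why it sidesteps the single-constructor limitation above.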

A Note On Creating Types

In TypeScript, the sky is essentially the limit when it comes to consuming and defining types for our application. I bring that up because of the issue I’ve already raised with our usage of this.parse to retrieve an authenticated Salesforce org: we now need to type around that usage. Let’s create a new file:

core
│
└───commands
│  │ ...
│  └─── apex
│     │
│     │ trace.ts
│
└───dependencies
│  │
│  │ dependencyMapper.ts
│
└───test
   │
   │ trace-plugin.test.ts

Within our new dependencyMapper file, we’ll recreate how flags are getting parsed by inspecting the type definition for the parse function:

// in @oclif/core/lib/command.d.ts
import {
  ArgOutput,
  FlagOutput,
  Input,
  ParserOutput,
} from "./interfaces/parser";

export declare abstract class Command {
  // lots of other stuff in this class
  // but I'm only including the definition for "parse"
  protected parse<
    F extends FlagOutput,
    B extends FlagOutput,
    A extends ArgOutput,
  >(options?: Input<F, B, A>, argv?: string[]): Promise<ParserOutput<F, B, A>>;
}

Woof. That’s quite the type signature. Still, in the scheme of things it’s not that bad — here’s how we can introduce an abstraction layer to hide the inner workings of parse from our Trace class consumer:

// in core/dependencies/dependencyMapper.ts
import { Flags, SfCommand } from "@salesforce/sf-plugins-core";
import {
  ArgOutput,
  FlagOutput,
  Input,
} from "@oclif/core/lib/interfaces/parser.js";

export interface DependencyMapper {
  getDependencies(
    options?: Input<FlagOutput, FlagOutput, ArgOutput>,
  ): Promise<Dependencies>;
}

// bit of a pre-factor here:
// guessing at some additional properties
// that it might be nice to include
export type Dependencies = {
  debugLevel: string;
  fallbackDebugLevelName?: string;
  org: typeof Flags.requiredOrg;
  traceDuration: string;
};

export class ActualMapper extends SfCommand<void> implements DependencyMapper {
  public async getDependencies(
    clazz: Input<FlagOutput, FlagOutput, ArgOutput>,
  ): Promise<Dependencies> {
    const { flags } = await this.parse(clazz);
    return {
      debugLevel: "TODO",
      org: flags["target-org"],
      traceDuration: "TODO",
    };
  }
  // we don't actually need to do
  // anything with the run method
  // but we do need to define it
  // in order to satisfy SfCommand's
  // abstract class contract
  public async run() {}
}

The introduction of the Dependencies type is a bit duplicative, and that’s normally a no-no. However, I personally find the ergonomics of working with camelCased properties much more satisfying than working with kebab-cased strings, even though TypeScript offers strongly-typed support for such properties. Your mileage may vary. I’ll chalk this one up to personal preference rather than veering into dogma on the subject.
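As an aside, here’s a sketch of the strongly-typed kebab-case support alluded to above: template literal types can rename keys at compile time, with a small runtime helper to match (the type and function names here are illustrative):

```typescript
// Compile-time: turn "debug-level"-style keys into debugLevel-style ones.
type KebabToCamel<S extends string> = S extends `${infer Head}-${infer Tail}`
  ? `${Head}${Capitalize<KebabToCamel<Tail>>}`
  : S;

type Camelized<T> = {
  [K in keyof T & string as KebabToCamel<K>]: T[K];
};

type RawFlags = { "debug-level": string; "trace-duration": string };
// Camelized<RawFlags> is { debugLevel: string; traceDuration: string }

// Runtime counterpart for actually moving values between the two shapes:
function camelize(key: string): string {
  return key.replace(/-(\w)/g, (_, letter: string) => letter.toUpperCase());
}
```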

Back in trace.ts we can now safely refactor to:

import { Flags, SfCommand } from "@salesforce/sf-plugins-core";

import {
  ActualMapper,
  DependencyMapper,
} from "../../dependencies/dependencyMapper.js";

export default class Trace extends SfCommand<void> {
  public static dependencyMapper: DependencyMapper;

  public static readonly flags = {
    "target-org": Flags.requiredOrg({
      char: "o",
      description: "The org where the trace will be set",
      required: false,
      summary: "The org where the trace will be set",
    }),
  };

  public async run(): Promise<void> {
    if (!Trace.dependencyMapper) {
      Trace.dependencyMapper = new ActualMapper(this.argv, this.config);
    }

    const { org, debugLevel, fallbackDebugLevelName, traceDuration } =
      await Trace.dependencyMapper.getDependencies(Trace);

    throw new Error("Method not implemented.");
  }
}

And, finally, in our relatively nonsensical test class at the moment:

import { expect } from "chai";

import Trace from "../core/commands/apex/trace.js";
import {
  Dependencies,
  DependencyMapper,
} from "../core/dependencies/dependencyMapper.js";
import {
  Input,
  FlagOutput,
  ArgOutput,
} from "@oclif/core/lib/interfaces/parser.js";

class FakeDependencyMapper implements DependencyMapper {
  getDependencies(
    _?: Input<FlagOutput, FlagOutput, ArgOutput>,
  ): Promise<Dependencies> {
    return Promise.resolve({
      org: {} as unknown as any,
      debugLevel: "DEBUG",
      traceDuration: "1hr",
    });
  }
}

describe("trace plugin", () => {
  it("throws an exception when target-org is not provided & no default org is set", async () => {
    // unchanged from previously, omitted
  });

  it("does not throw an exception when target-org is specified", async () => {
    Trace.dependencyMapper = new FakeDependencyMapper();
    await Trace.run(["--target-org", "myOrgAlias"]);
  });
});

Note that the org property signature isn’t quite complete yet — that’s totally fine at present. We haven’t gotten there yet. All of this work was to get to the point of having a meaningful failure within our test that didn’t involve myOrgAlias not existing. And, indeed, our previous test now passes (as it’s using the ActualMapper dependency) and our new test now fails for … a better reason:

Error (1): Method not implemented.

1 passing (142ms)
1 failing

1) trace plugin - does not throw an exception when target-org is specified:

Error: Method not implemented.
at Trace.run (~/Code/sf-trace-plugin/core/commands/apex/trace.ts:23:15)
at async Trace._run (~/Code/sf-trace-plugin/node_modules/@oclif/core/lib/command.js:304:22)
at async Context.<anonymous> (~/Code/sf-trace-plugin/test/trace-plugin.test.ts:25:9)

Well, well, well. This is great! The exception we’re throwing at the end of Trace is now being thrown. In other words — with the addition of our dependencyMapper, which is a Facade of sorts, we’ve decoupled our tests from anything in our filesystem (like auth/refresh tokens for any given org).

Shifting To Pure Unit Tests

At the moment, the first test in our suite is an integration test; it’s nice, in the sense that it documents how the CLI responds to a required argument missing, but things sort of fall apart from there once we have more than one required argument:

// in trace.ts
export default class Trace extends SfCommand<void> {
  public static readonly flags = {
    "target-org": Flags.requiredOrg({
      char: "o",
      description: "The org where the trace will be set",
      required: false,
      summary: "The org where the trace will be set",
    }),
    "debug-level": Flags.string({
      char: "l",
      required: true,
    }),
  };
}

If we try to add a test for this, things get weird:

describe("trace plugin", () => {
  it("throws an exception when target-org is not provided & no default org is set", async () => {
    let thrownError: Error;
    try {
      await Trace.run([]);
    } catch (ex: unknown) {
      thrownError = ex as Error;
    }
    expect(thrownError?.message).to.equal(
      "No default environment found. Use -o or --target-org to specify an environment.",
    );
  });

  it("throws an exception when target-org is provided & debug-level is not", async () => {
    let thrownError: Error;
    try {
      await Trace.run(["--target-org", "fakeAlias"]);
    } catch (ex: unknown) {
      thrownError = ex as Error;
    }
    expect(thrownError?.message).to.equal(
      "Something about debug-level not being passed",
    );
  });
});

But instead, the message we get back is:

No default environment found. Use -o or --target-org to specify an environment

So … that’s not at all expected. But it’s probably for the best. This allows us to shift our unit tests even further to the left by leaving the parse validation to the CLI itself, and instead inspecting the flag configuration that’s passed. If our plugin misbehaves when the correct flags are passed, it’s not because of anything we’ve done wrong, in other words, and leaving that part of the functionality to the underlying CLI dependency is for the best.

With that being said, the first test gets transformed:

class FakeDependencyMapper implements DependencyMapper {
  public debugLevelFlag: FlagOutput;
  public orgFlag: FlagOutput;
  getDependencies(
    options?: Input<FlagOutput, FlagOutput, ArgOutput>,
  ): Promise<Dependencies> {
    this.debugLevelFlag = options.flags["debug-level"];
    this.orgFlag = options.flags["target-org"];
    console.log(this.orgFlag);
    return Promise.resolve({
      org: {} as unknown as any,
      debugLevel: "DEBUG",
      traceDuration: "1hr",
    });
  }
}

describe("trace plugin", () => {
  it("passes the right flags", async () => {
    const depMapper = new FakeDependencyMapper();
    Trace.dependencyMapper = depMapper;

    await Trace.run();

    expect(depMapper.debugLevelFlag).not.to.equal(undefined);
    expect(depMapper.debugLevelFlag.required).to.be.true;
    expect(depMapper.debugLevelFlag.char).to.eq("l");
    expect(depMapper.orgFlag).not.to.equal(undefined);
    expect(depMapper.orgFlag.required).to.be.false;
    expect(depMapper.orgFlag.char).to.eq("o");
  });
});

So the only thing that isn’t ideal about this is that the type of flag gets lost in the shuffle. It’s not part of the debugLevelFlag and orgFlag properties that are captured in the FakeDependencyMapper. But — this is TypeScript! That means that we can shift the burden of proving that the flags that we want are the flags that we get to a compile-time problem:

// back in dependencyMapper.ts
import { Flags, SfCommand } from "@salesforce/sf-plugins-core";
import {
  ArgOutput,
  FlagOutput,
  Input,
} from "@oclif/core/lib/interfaces/parser.js";
import {
  CustomOptions,
  OptionFlag,
} from "@oclif/core/lib/interfaces/parser.js";
import { Org } from "@salesforce/core";

export interface DependencyMapper {
  getDependencies(
    options: Input<FlagOutput, FlagOutput, ArgOutput>,
  ): Promise<Dependencies>;
}

export type ExpectedFlags = {
  "debug-level": OptionFlag<string, CustomOptions>;
  "target-org": OptionFlag<Org, CustomOptions>;
  "trace-duration": OptionFlag<string, CustomOptions>;
};

export type Dependencies = {
  debugLevel: string;
  fallbackDebugLevelName?: string;
  org: typeof Flags.requiredOrg;
  traceDuration: string;
};

export class ActualMapper extends SfCommand<void> implements DependencyMapper {
  public async getDependencies(
    options: Input<FlagOutput, FlagOutput, ArgOutput>,
  ): Promise<Dependencies> {
    const passedFlags = options.flags as ExpectedFlags;
    const { flags } = await this.parse(options);
    return {
      debugLevel: flags[passedFlags["debug-level"].name],
      org: flags[passedFlags["target-org"].name],
      traceDuration: flags[passedFlags["trace-duration"].name],
    } as Dependencies;
  }
  public async run() {}
}

And then in our test class:

import { expect } from "chai";
import {
  ArgOutput,
  CustomOptions,
  FlagOutput,
  Input,
  OptionFlag,
} from "@oclif/core/lib/interfaces/parser.js";

import {
  Dependencies,
  DependencyMapper,
  ExpectedFlags,
} from "../core/dependencies/dependencyMapper.js";
import Trace from "../core/commands/apex/trace.js";

class FakeDependencyMapper implements DependencyMapper {
  public passedFlags: ExpectedFlags;

  getDependencies(
    options: Input<FlagOutput, FlagOutput, ArgOutput>,
  ): Promise<Dependencies> {
    this.passedFlags = options.flags as ExpectedFlags;

    return Promise.resolve({
      org: {} as unknown as any,
      debugLevel: "DEBUG",
      traceDuration: "1hr",
    });
  }
}

describe("trace plugin", () => {
  it("passes the right flags", async () => {
    const depMapper = new FakeDependencyMapper();
    Trace.dependencyMapper = depMapper;

    await Trace.run();

    const debugLevel: OptionFlag<string, CustomOptions> =
      depMapper.passedFlags["debug-level"];
    expect(debugLevel).not.to.equal(undefined);
    expect(debugLevel.required).to.be.true;
    expect(debugLevel.char).to.eq("l");
    expect(debugLevel.default).to.eq("DEBUG");
    const targetOrg: OptionFlag<string, CustomOptions> =
      //                          👆 this is the wrong type!
      depMapper.passedFlags["target-org"];
    expect(targetOrg).not.to.equal(undefined);
    expect(targetOrg.required).to.be.false;
    expect(targetOrg.char).to.eq("o");
    const traceDuration: OptionFlag<string, CustomOptions> =
      depMapper.passedFlags["trace-duration"];
    expect(traceDuration).not.to.equal(undefined);
    expect(traceDuration.required).to.be.false;
    expect(traceDuration.char).to.eq("d");
    expect(traceDuration.default).to.eq("1hr");
  });
});

When targetOrg is typed as such, we now get a compile time error:

Type 'OptionFlag<Org, CustomOptions>' is not assignable to type 'OptionFlag<string, CustomOptions>'.
  Type 'OptionFlag<Org, CustomOptions>' is not assignable to type '{ parse: FlagParser<string, string, CustomOptions>; defaultHelp?: FlagDefaultHelp<string, CustomOptions>; input: string[]; default?: FlagDefault<...>; }'.
    Types of property 'parse' are incompatible.
      Type 'FlagParser<Org, string, CustomOptions>' is not assignable to type 'FlagParser<string, string, CustomOptions>'.
        Type 'Org' is not assignable to type 'string'.ts(2322)

When we switch to:

import { Org } from "@salesforce/core";
// ...
const targetOrg: OptionFlag<Org, CustomOptions> =
  depMapper.passedFlags["target-org"];
//                           👆 now this compiles

By introducing the ExpectedFlags type as an explicit dependency, we get compile-time safety on accessing each flags’ properties and we get compile-time safety in our tests that the types we expect are the types being used. This is a huge win.

Writing A Second Test

We still haven’t tackled setting up the fallbackDebugLevelName property fully, but that would be getting ahead of ourselves. For now, let’s move on to our next test that we’d like to write. Note that not all of this exists yet — but it’s a chance to stub out some configurable testing infrastructure:

it("gets an existing trace flag back for the current user", async () => {
  const depMapper = new FakeDependencyMapper();
  Trace.dependencyMapper = depMapper;

  await Trace.run();

  expect(depMapper.queriesMade.length).to.eq(3);
  expect(depMapper.queriesMade[0]).to.eq(
    `SELECT Id FROM User WHERE Username = '${depMapper.username}'`,
  );
  expect(depMapper.queriesMade[1]).to.eq(
    `SELECT Id FROM DebugLevel WHERE DeveloperName = '${depMapper.matchingDebugLevel.DeveloperName}'`,
  );
  expect(depMapper.queriesMade[2]).to.eq(
    `SELECT Id FROM TraceFlag WHERE DebugLevelId = '${depMapper.matchingDebugLevel.Id}' AND LogType = 'USER_DEBUG' AND TraceEntityId = '${depMapper.matchingUser.Id}'`,
  );
});

Then, we can update our class at the top of the test file:

class FakeDependencyMapper implements DependencyMapper {
  public passedFlags: ExpectedFlags;
  public queriesMade: string[] = [];
  public username = "test@user.com";

  public matchingUser: SalesforceRecord = { Id: "005..." };
  public matchingDebugLevel: SalesforceRecord & {
    ApexCode: string;
    DeveloperName: string;
  } = {
    ApexCode: "DEBUG",
    DeveloperName: "someName",
    Id: "7dl....",
  };
  public matchingTraceFlag: SalesforceRecord = { Id: "7tf..." };

  getDependencies(
    options: Input<FlagOutput, FlagOutput, ArgOutput>,
  ): Promise<Dependencies> {
    this.passedFlags = options.flags as ExpectedFlags;

    return Promise.resolve({
      org: {
        getUsername: () => this.username,
        getConnection: () => ({
          singleRecordQuery: (query: string) => {
            this.queriesMade.push(query);
            return this.matchingUser;
          },
          tooling: {
            query: (query: string) => {
              this.queriesMade.push(query);
              let matchingRecord = null;
              if (query.indexOf("FROM DebugLevel") > -1) {
                matchingRecord = this.matchingDebugLevel;
              } else if (query.indexOf("FROM TraceFlag") > -1) {
                matchingRecord = this.matchingTraceFlag;
              }
              return {
                totalSize: 1,
                records: matchingRecord ? [matchingRecord] : null,
              };
            },
          },
        }),
      } as unknown as Org,
      debugLevel: this.matchingDebugLevel.ApexCode,
      traceDuration: "1hr",
    });
  }
}

It’s certainly not perfect. I’m skipping ahead a bit here, because I know we need three distinct pieces of information in order to set up traces. You might remember this from Building An Apex Logging Service, which was all about exporting log data to a third-party log platform in order to standardize logs across various systems. To recap, here’s what we need:

  • a user to trace
  • a matching DebugLevel record, which is only accessible via the Tooling API
  • (potentially) a matching TraceFlag record, also only accessible via the Tooling API

In writing this test, though, the initial flags that we’ve set up are already starting to show some weaknesses:

  • it seems like the username should be an optional flag — plus, what if we want to start a trace on the Automated Process User? That’s a whole ‘nother can of worms, as the only reliable way to retrieve that user is via its “autoproc” alias
  • the most basic approach to creating/updating a TraceFlag record probably involves using pre-existing values for the DebugLevel record — for instance, using the SFDC_DevConsole record (instead of what’s happening now with the coupling between the DebugLevel record and the level of logging we’re using)
  • when I think about the most basic version of this command, I think of invoking sf apex trace without any arguments at all — beyond being in a directory where you have a default org set — and then afterwards, layering in the arguments that matter most (trace duration, the logging level to use, whether or not to trace the autoproc user, etc…)

With all of that being said, let’s update our test file:

class FakeDependencyMapper implements DependencyMapper {
  // ...
  public matchingDebugLevel: SalesforceRecord & {
    ApexCode: string;
    DeveloperName: string;
  } = {
    ApexCode: "DEBUG",
    DeveloperName: "SFDC_DevConsole",
    Id: "7dl....",
  };
  // ...
}
it("gets an existing trace flag back for the current user", async () => {
  const depMapper = new FakeDependencyMapper();
  Trace.dependencyMapper = depMapper;

  await Trace.run();

  expect(depMapper.queriesMade.length).to.eq(3);
  expect(depMapper.queriesMade[0]).to.eq(
    `SELECT Id FROM User WHERE Username = '${depMapper.username}'`,
  );
  expect(depMapper.queriesMade[1]).to.eq(
    `SELECT Id FROM DebugLevel WHERE DeveloperName = 'SFDC_DevConsole'`,
  );
  expect(depMapper.queriesMade[2]).to.eq(
    `SELECT Id FROM TraceFlag WHERE DebugLevelId = '${depMapper.matchingDebugLevel.Id}' AND LogType = 'USER_DEBUG' AND TraceEntityId = '${depMapper.matchingUser.Id}'`,
  );
});

And, finally, we’re ready to start hammering away at our implementation.

Implementing the MVP Functionality

In trace.ts, it’s time to start adding in functionality before we add more asserts to our second test:

import { Flags, SfCommand } from "@salesforce/sf-plugins-core";
import { QueryResult } from "jsforce";

import {
  ActualMapper,
  DependencyMapper,
  ExpectedFlags,
} from "../../dependencies/dependencyMapper.js";

const DEFAULT_DEBUG_LEVEL_NAME = "SFDC_DevConsole";
const DEFAULT_LOG_TYPE = "USER_DEBUG";

export default class Trace extends SfCommand<void> {
  public static dependencyMapper: DependencyMapper;

  public static readonly flags = {
    // TODO enable adding trace flags for ANY user as an optional arg AND for the autoproc user
    "target-org": Flags.requiredOrg({
      char: "o",
      description: "The org where the trace will be set",
      required: false,
      summary: "The org where the trace will be set",
    }),
    "debug-level-name": Flags.string({
      char: "l",
      description: "The DeveloperName to use for the DebugLevel record",
      default: DEFAULT_DEBUG_LEVEL_NAME,
      required: false,
      summary: "Optional - the name of the DebugLevel record to use",
    }),
    "trace-duration": Flags.string({
      char: "d",
      description: "How long the trace is active for",
      default: "1hr",
      required: false,
      summary:
        "Defaults to 1 hour, max of 24 hours. You can set duration in minutes (eg 30m) or in hours (eg 2hr)",
    }),
  } as ExpectedFlags;

  public async run(): Promise<void> {
    if (!Trace.dependencyMapper) {
      Trace.dependencyMapper = new ActualMapper(this.argv, this.config);
    }

    const { debugLevelName, org } =
      await Trace.dependencyMapper.getDependencies(Trace);
    const orgConnection = org.getConnection();
    const [user, existingDebugLevelRes] = await Promise.all([
      // TODO allow autoproc alias here, as well as configurable user names
      orgConnection.singleRecordQuery<{ Id: string }>(
        `SELECT Id FROM User WHERE Username = '${org.getUsername()}'`,
      ),
      orgConnection.tooling.query(
        `SELECT Id FROM DebugLevel WHERE DeveloperName = '${debugLevelName}'`,
      ),
    ]);

    const existingDebugLevel = this.getSingleOrDefault(existingDebugLevelRes);
    // TODO - null check existingDebugLevel
    const existingTraceFlag = this.getSingleOrDefault(
      await orgConnection.tooling.query(
        `SELECT Id FROM TraceFlag WHERE DebugLevelId = '${existingDebugLevel.Id}' AND LogType = '${DEFAULT_LOG_TYPE}' AND TraceEntityId = '${user.Id}'`,
      ),
    );
  }

  private getSingleOrDefault<T>(toolingApiResult: QueryResult<T>) {
    if (toolingApiResult.totalSize === 0) {
      return null;
    }
    return toolingApiResult.records[0];
  }
}
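As an aside, `getSingleOrDefault` is doing all of the Tooling API normalization work here. Stripped of the jsforce types, its behavior boils down to the following standalone sketch — note that the pared-down `QueryResult` type below is hypothetical, keeping only the two properties we actually read:

```typescript
// Pared-down stand-in for jsforce's QueryResult - only the fields we use
type QueryResult<T> = { totalSize: number; records: T[] | null };

function getSingleOrDefault<T>(toolingApiResult: QueryResult<T>): T | null {
  if (toolingApiResult.totalSize === 0 || !toolingApiResult.records) {
    return null;
  }
  return toolingApiResult.records[0];
}

// no matching rows: null, instead of an exception or an empty array
console.log(getSingleOrDefault({ totalSize: 0, records: [] }));
// one matching row: the record itself
console.log(getSingleOrDefault({ totalSize: 1, records: [{ Id: "7dl..." }] }));
```

Collapsing "zero or one record" down to `T | null` keeps the null checks in `run()` explicit, rather than sprinkling `records[0]` accesses everywhere.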

We also need to update our dependency mapper class to reflect the updated flag names — thanks to the compiler, we’re already getting warnings about debug-level-name not existing on ExpectedFlags:

export type ExpectedFlags = {
  "debug-level-name": OptionFlag<string, CustomOptions>;
  "target-org": OptionFlag<Org, CustomOptions>;
  "trace-duration": OptionFlag<string, CustomOptions>;
};

export type Dependencies = {
  debugLevelName: string;
  fallbackDebugLevelName?: string;
  org: Org;
  traceDuration: string;
};

export class ActualMapper extends SfCommand<void> implements DependencyMapper {
  public async getDependencies(
    options: Input<FlagOutput, FlagOutput, ArgOutput>,
  ): Promise<Dependencies> {
    const passedFlags = options.flags as ExpectedFlags;
    const { flags } = await this.parse(options);
    return {
      debugLevelName: flags[passedFlags["debug-level-name"].name],
      org: flags[passedFlags["target-org"].name],
      traceDuration: flags[passedFlags["trace-duration"].name],
    };
  }
  public async run() {}
}

So now the asserts in the second test are passing as is, but we haven’t quite “finished” — the last step is actually updating the TraceFlag record, or creating a new one if one doesn’t exist. That actually sounds like two tests (as does the TODO shown above about handling the case where the queried-for DebugLevel record doesn’t exist). Let’s tackle the happy path, first — updating the trace using the default duration of 1 hour:

// back in the test file ...

type TraceFlag = SalesforceRecord & { StartDate: number; ExpirationDate: number };

class FakeDependencyMapper implements DependencyMapper {
  // ...
  public updatedSObjectName: string;
  public updatedTraceFlag: TraceFlag;
  // look at the "update" function added to the tooling object
  // in "getDependencies", below

  getDependencies(
    options: Input<FlagOutput, FlagOutput, ArgOutput>,
  ): Promise<Dependencies> {
    this.passedFlags = options.flags as ExpectedFlags;

    return Promise.resolve({
      org: {
        getUsername: () => this.username,
        getConnection: () => ({
          singleRecordQuery: (query: string) => {
            this.queriesMade.push(query);
            return this.matchingUser;
          },
          tooling: {
            query: (query: string) => {
              this.queriesMade.push(query);
              let matchingRecord = null;
              if (query.indexOf("FROM DebugLevel") > -1) {
                matchingRecord = this.matchingDebugLevel;
              } else if (query.indexOf("FROM TraceFlag") > -1) {
                matchingRecord = this.matchingTraceFlag;
              }
              return {
                totalSize: 1,
                records: matchingRecord ? [matchingRecord] : null,
              };
            },
            update: (sObjectName: string, record: TraceFlag) => {
              this.updatedSObjectName = sObjectName;
              this.updatedTraceFlag = record;
            },
          },
        }),
      } as unknown as Org,
      debugLevelName: "someName",
      traceDuration: "1hr",
    });
  }
}

Which allows us to augment our asserts:

it("gets an existing trace flag back for the current user", async () => {
  const depMapper = new FakeDependencyMapper();
  Trace.dependencyMapper = depMapper;
+ const nowish = Date.now();

  await Trace.run();

  expect(depMapper.queriesMade.length).to.eq(3);
  expect(depMapper.queriesMade[0]).to.eq(
    `SELECT Id FROM User WHERE Username = '${depMapper.username}'`,
  );
  expect(depMapper.queriesMade[1]).to.eq(
    `SELECT Id FROM DebugLevel WHERE DeveloperName = '${depMapper.matchingDebugLevel.DeveloperName}'`,
  );
  expect(depMapper.queriesMade[2]).to.eq(
    `SELECT Id FROM TraceFlag WHERE DebugLevelId = '${depMapper.matchingDebugLevel.Id}' AND LogType = 'USER_DEBUG' AND TraceEntityId = '${depMapper.matchingUser.Id}'`,
  );
  expect(depMapper.updatedSObjectName).to.eq("TraceFlag");
  expect(depMapper.updatedTraceFlag.StartDate).to.be.lessThan(
    depMapper.updatedTraceFlag.ExpirationDate,
  );
+ const nowishDate = new Date(nowish);
+ const expirationDate = new Date(depMapper.updatedTraceFlag.ExpirationDate);
+ expirationDate.setSeconds(0);
+ expirationDate.setMilliseconds(0);
+ nowishDate.setSeconds(0);
+ nowishDate.setMilliseconds(0);
+ expect(expirationDate.getTime()).to.eq(nowishDate.getTime() + 1000 * 60 * 60);
});

Now that our tests are failing again, it’s time to finish up the basic functionality:

// in trace.ts
const TRACE_SOBJECT_NAME = "TraceFlag";
const existingTraceFlag = this.getSingleOrDefault<{ StartDate: number; ExpirationDate: number; Id: string }>(
  await orgConnection.tooling.query(
    `SELECT Id
      FROM ${TRACE_SOBJECT_NAME}
      WHERE LogType = '${DEFAULT_LOG_TYPE}'
      AND TraceEntityId = '${user.Id}'
      ORDER BY CreatedDate DESC
      LIMIT 1
    `
  )
);

if (existingTraceFlag !== null) {
  existingTraceFlag.StartDate = Date.now();
  existingTraceFlag.ExpirationDate = this.getExpirationDate(
    new Date(existingTraceFlag.StartDate),
    traceDuration
  ).getTime();
  this.log('Updating trace flag, expires: ' + new Date(existingTraceFlag.ExpirationDate));
  await orgConnection.tooling.update(TRACE_SOBJECT_NAME, existingTraceFlag);
}
// ...
getExpirationDate(startingDate: Date, durationExpression: string): Date {
  const minutesInMilliseconds = 60 * 1000;
  let durationModifier = 0;
  if (durationExpression.endsWith('hr')) {
    const hours = Number(durationExpression.slice(0, durationExpression.length - 2));
    durationModifier = hours * 60 * minutesInMilliseconds;
  }
  return new Date(startingDate.getTime() + durationModifier);
}

If I run npm run build && npm run link and navigate to a valid SFDX repository, running sf apex trace yields the following in my console:

Updating trace flag, expires: Fri Mar 15 2024 14:06:15 GMT-0400 (Eastern Daylight Time)

Continuing To Build Functionality

Let’s quickly build out a few more tests.

Adding Support For Minute-Based Trace Durations

it("updates an existing trace flag for the current user with minute duration", async () => {
  const depMapper = new FakeDependencyMapper();
  Trace.dependencyMapper = depMapper;
  depMapper.traceDuration = "15m";
  const nowish = Date.now();

  await Trace.run();

  const expirationDate = new Date(depMapper.updatedTraceFlag.ExpirationDate);
  expirationDate.setSeconds(0);
  expirationDate.setMilliseconds(0);
  const nowishDate = new Date(nowish);
  nowishDate.setSeconds(0);
  nowishDate.setMilliseconds(0);
  expect(expirationDate.getTime()).to.eq(nowishDate.getTime() + 1000 * 15 * 60);
});

That’s trivial to implement:

getExpirationDate(startingDate: Date, durationExpression: string): Date {
  const minutesInMilliseconds = 60 * 1000;
  let durationModifier = 0;
  if (durationExpression.endsWith('hr')) {
    const hours = Number(durationExpression.slice(0, durationExpression.length - 2));
    durationModifier = hours * 60 * minutesInMilliseconds;
  } else if (durationExpression.endsWith('m')) {
    durationModifier = Number(durationExpression.slice(0, durationExpression.length - 1)) * minutesInMilliseconds;
  }
  return new Date(startingDate.getTime() + durationModifier);
}

Adding Support For Setting A Max Of 24 Hours Tracing

It isn’t valid for TraceFlag records to have an ExpirationDate that’s more than 24 hours after the StartDate. We can easily get a failing test for that going:

it("sets a max of 24 hours tracing time when more than that is passed in as the trace duration", async () => {
  const depMapper = new FakeDependencyMapper();
  Trace.dependencyMapper = depMapper;
  depMapper.traceDuration = "50hr";
  const nowish = Date.now();

  await Trace.run();

  const expirationDate = new Date(depMapper.updatedTraceFlag.ExpirationDate);
  expirationDate.setSeconds(0);
  expirationDate.setMilliseconds(0);
  const nowishDate = new Date(nowish);
  nowishDate.setSeconds(0);
  nowishDate.setMilliseconds(0);
  expect(expirationDate.getTime()).to.eq(
    nowishDate.getTime() + 1000 * 60 * 60 * 24,
  );
});

And then back in the implementation:

getExpirationDate(startingDate: Date, durationExpression: string): Date {
  const minutesInMilliseconds = 60 * 1000;
  let durationModifier = 0;
  if (durationExpression.endsWith('hr')) {
    const hours = Number(durationExpression.slice(0, durationExpression.length - 2));
    durationModifier = hours * 60 * minutesInMilliseconds;
  } else if (durationExpression.endsWith('m')) {
    durationModifier = Number(durationExpression.slice(0, durationExpression.length - 1)) * minutesInMilliseconds;
  }

  let expirationDate = new Date(startingDate.getTime() + durationModifier);
  const twentyFourHoursInMilliseconds = 24 * 60 * 60 * 1000;
  if (expirationDate.getTime() - startingDate.getTime() > twentyFourHoursInMilliseconds) {
    expirationDate = new Date(startingDate.getTime() + twentyFourHoursInMilliseconds);
  }
  return expirationDate;
}
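For a quick sanity check outside of mocha, the parsing-plus-cap logic above can be condensed into a standalone sketch — here using `Math.min` for the clamp, which is equivalent to the conditional in the implementation:

```typescript
function getExpirationDate(startingDate: Date, durationExpression: string): Date {
  const minutesInMilliseconds = 60 * 1000;
  let durationModifier = 0;
  if (durationExpression.endsWith("hr")) {
    durationModifier =
      Number(durationExpression.slice(0, -2)) * 60 * minutesInMilliseconds;
  } else if (durationExpression.endsWith("m")) {
    durationModifier =
      Number(durationExpression.slice(0, -1)) * minutesInMilliseconds;
  }
  // clamp to the 24-hour maximum that TraceFlag records allow
  const twentyFourHoursInMilliseconds = 24 * 60 * 60 * 1000;
  return new Date(
    startingDate.getTime() +
      Math.min(durationModifier, twentyFourHoursInMilliseconds),
  );
}

const start = new Date("2024-03-15T12:00:00.000Z");
console.log(getExpirationDate(start, "2hr").toISOString()); // 2024-03-15T14:00:00.000Z
console.log(getExpirationDate(start, "30m").toISOString()); // 2024-03-15T12:30:00.000Z
console.log(getExpirationDate(start, "50hr").toISOString()); // capped: 2024-03-16T12:00:00.000Z
```

Whether the clamp reads better as `Math.min` or as the explicit `if` is a matter of taste; the tests pass either way.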

Setting Traces For Another User

If we want to set a trace for a user other than the currently authorized one:

class FakeDependencyMapper implements DependencyMapper {
  // ...
  public username: string;

  getDependencies(
    options: Input<FlagOutput, FlagOutput, ArgOutput>,
  ): Promise<Dependencies> {
    // ...
    return Promise.resolve({
      targetUser: this.username,
    });
  }
}
it("allows another user to be set instead of the current running user", async () => {
  const depMapper = new FakeDependencyMapper();
  Trace.dependencyMapper = depMapper;
  depMapper.username = "someotheruser@test.com";

  await Trace.run();

  expect(depMapper.queriesMade[0]).to.eq(
    `SELECT Id FROM User WHERE Username = '${depMapper.username}'`,
  );
});

And then in the implementation…

// in dependencyMapper.ts
export type ExpectedFlags = {
  "debug-level-name": OptionFlag<string, CustomOptions>;
  "target-org": OptionFlag<Org, CustomOptions>;
  "trace-duration": OptionFlag<string, CustomOptions>;
  //  👇 new!
  "target-user": OptionFlag<string, CustomOptions>;
};

export type Dependencies = {
  debugLevelName: string;
  fallbackDebugLevelName?: string;
  org: Org;
  traceDuration: string;
  //  👇 also new, so the return value below type-checks
  targetUser?: string;
};

export class ActualMapper extends SfCommand<void> implements DependencyMapper {
  public async getDependencies(
    options: Input<FlagOutput, FlagOutput, ArgOutput>,
  ): Promise<Dependencies> {
    const passedFlags = options.flags as ExpectedFlags;
    const { flags } = await this.parse(options);

    return {
      debugLevelName: flags[passedFlags["debug-level-name"].name],
      org: flags[passedFlags["target-org"].name],
      traceDuration: flags[passedFlags["trace-duration"].name],
      // 👇 and again!
      targetUser: flags[passedFlags["target-user"].name],
    };
  }
  public async run() {}
}

// in trace.ts
//                      👇 attribution in the actual repo
let traceUser = Trace.escapeXml(targetUser);
// pre-factor here to allow for the autoproc trace in a future flag
let whereField = "Username";
if (!traceUser) {
  traceUser = orgConnection.getUsername() as string;
}
// ...
const [user, fallbackDebugLevelRes] = await Promise.all([
  orgConnection.singleRecordQuery<{ Id: string }>(
    `SELECT Id FROM User WHERE ${whereField} = ${this.getQuotedQueryVar(
      traceUser,
    )}`,
  ),
  orgConnection.tooling.query(
    `SELECT Id FROM DebugLevel WHERE DeveloperName = ${this.getQuotedQueryVar(
      Trace.escapeXml(debugLevelName),
    )}`,
  ),
]);
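Neither `escapeXml` nor `getQuotedQueryVar` is shown here — the real implementations live in the repo (with attribution). As a rough idea of their shape, hypothetical versions might look like this; treat both as sketches rather than the actual code:

```typescript
// Hypothetical sketches only - the plugin repo has the real implementations
const XML_ESCAPES: Record<string, string> = {
  "<": "&lt;",
  ">": "&gt;",
  "&": "&amp;",
  "'": "&apos;",
  '"': "&quot;",
};

// neutralize characters that could break out of the SOQL string literal
function escapeXml(value: string): string {
  return value.replace(/[<>&'"]/g, (match) => XML_ESCAPES[match]);
}

// wrap the (already escaped) value in the single quotes SOQL expects
function getQuotedQueryVar(value: string): string {
  return `'${value}'`;
}

console.log(getQuotedQueryVar(escapeXml("someotheruser@test.com")));
// 'someotheruser@test.com'
```

The point is less the exact escaping table and more that user-supplied flag values get sanitized before being interpolated into a query.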

Publish / Installation

At this point, I think you get the idea. It’s not hard to add functionality following the “red, green, refactor” TDD mentality. I’ll spare you the rest of the play-by-play in terms of developing the plugin. What’s left, beyond that, is actually publishing it. SF CLI plugins can be installed in three different ways:

  • by pointing to a valid NPM package (produced by running npm publish on a valid CLI plugin directory)
  • by pointing to a valid GitHub repository where the package’s files can be located
  • by running sf plugins link for local development
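In terms of the actual commands, those three options look roughly like the following — the package name and repository URL here are placeholders, so substitute the real ones from the installation instructions:

```shell
# from a published NPM package (placeholder name)
sf plugins install some-plugin-package

# from a GitHub repository (placeholder URL)
sf plugins install https://github.com/some-user/some-plugin-repo

# for local development, from within the plugin's directory
sf plugins link .
```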

I’ll end up giving people options for all three. You can find the source code on my GitHub, as well as further installation instructions.

Wrapping Up

Writing CLI plugins is a great way to extend the native functionality of the platform; it’s always nice to work with such an expressive language as TypeScript. You can also use the plugin-dev plugin to bootstrap the writing of CLI commands — it comes with a lot out of the box, but all of those abstractions also obscure how the actual wiring is done when setting a command up from scratch, so I’m hopeful this post will show you how easy it is to get started, and how much is possible when it comes to creating plugins.

As always, thanks for reading the Joys of Apex, and thanks to you all who continue to support me on Patreon — in particular, Arc and Henry Vu. Till next time!

In the past three years, hundreds of thousands of you have come to read & enjoy the Joys Of Apex. Over that time period, I've remained staunchly opposed to advertising on the site, but I've made a Patreon account in the event that you'd like to show your support there. Know that the content here will always remain free. Thanks again for reading — see you next time!