Custom Metadata Decoupling
Posted: July 31, 2023

Table of Contents:

  • A Strongly Coupled Example

    • When Flows & Triggers Collide
  • Introducing Decoupling

    • Creating The Automation Handler
    • Notes
    • A Laundry List Of Enhancements
  • Wrapping Up

It’s common when discussing software engineering to encounter the phrase “decoupling.” I’ve talked about it in over ten articles on this site alone 😅. Decoupling itself is really a nod to one of the SOLID principles — namely, dependency inversion: the idea that objects shouldn’t be “strongly coupled” to one another, instead relying on “loose coupling,” where objects take in dependencies that are as abstract as possible. Why does this matter? Why should we consider how strongly or loosely coupled the objects that we create and operate on are? Let’s dive in and explore.
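
To make that concrete before moving on, here’s a minimal, hypothetical sketch of the difference (all of the class and interface names here are made up for illustration):

// an abstraction that consumers can depend on
public interface MessageSender {
  void send(String message);
}

// a concrete implementation of that abstraction
public class EmailSender implements MessageSender {
  public void send(String message) {
    // send an email here
  }
}

// strongly coupled - the consumer constructs its exact dependency,
// and can't be tested or changed without it
public class StronglyCoupledNotifier {
  private final EmailSender sender = new EmailSender();
}

// loosely coupled - the consumer takes in the abstraction,
// so any MessageSender (a mock, a future SMS sender) can be swapped in
public class LooselyCoupledNotifier {
  private final MessageSender sender;

  public LooselyCoupledNotifier(MessageSender sender) {
    this.sender = sender;
  }
}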

A Strongly Coupled Example

Let’s start off by looking at an area within a typical Salesforce project that experiences strong coupling: triggers. One of the few pieces of conventional wisdom that’s been successfully persisted within the Salesforce ecosystem is the Trigger Handler pattern:

trigger AccountTrigger on Account (before insert, after insert, before update, after update) {
  new AccountHandler().run();
}

Now, some decoupling is already typical at this point; the Trigger Handler pattern dictates a parent class that takes care of all the minutiae of interacting with triggers:

public virtual class TriggerHandler {
  public void run() {
    switch on Trigger.operationType {
      when BEFORE_INSERT {
        this.beforeInsert(Trigger.new);
      }
      when AFTER_INSERT {
        this.afterInsert(Trigger.new);
      }
      when BEFORE_UPDATE {
        this.beforeUpdate(Trigger.new, Trigger.oldMap);
      }
      when AFTER_UPDATE {
        this.afterUpdate(Trigger.new, Trigger.oldMap);
      }
      // etc ..
    }
  }

  protected virtual void beforeInsert(List<SObject> records) {
    // all of these end up being no-ops by default
  }
  // etc ...
  protected virtual void beforeDelete(List<SObject> records) {
  }
}

That way, by the time we make it to the AccountHandler in the object hierarchy, all of the hard work is already out of the way and we’re free to simply override the methods we care about; the only other big “ask” being that the trigger file itself needs to declare the correct trigger contexts in order for everything to work properly:

public class AccountHandler extends TriggerHandler {
  public override void beforeDelete(List<SObject> records) {
    // do something with the records.
  }
}

In this example, beforeDelete on our AccountHandler wouldn’t successfully start firing until we’d modified our trigger to include the before delete context in it.
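
The updated trigger declaration would end up looking something like this:

trigger AccountTrigger on Account (before insert, after insert, before update, after update, before delete) {
  new AccountHandler().run();
}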


All of this amounts to a system that is strongly coupled to the object hierarchy within TriggerHandler, which is itself strongly coupled to Salesforce’s order of operations for things like triggers. You need to remember, for example, that before-save flows will run prior to your before triggers; you need to remember that validation rules will run after your before-save flows and before triggers, but prior to your after triggers and after-save flows.

Let’s start to look at how evolving conditions in a codebase can raise the visibility and importance of decoupling. An aside, prior to digging in (and thanks to Jeferson Spencer Chaves for pointing this out): what follows is a purposefully contrived example, meant to show a situation where the cost of complexity outweighs the cost of indirection (loose coupling, by its nature, introduces indirection); only when that’s true is the indirection warranted. As developers, we should be continually refining our cost-benefit analyses, weighing the pros and cons of adding indirection into a system. For simple automations, adding indirection may unduly increase complexity, making a process harder to understand without tangible benefits. With that being said, let’s dive in.

When Flows & Triggers Collide

Let’s say that we have the following code in our AccountHandler:

public class AccountHandler extends TriggerHandler {
  private static final Integer VIP_REVENUE = 1000000;

  public override void beforeUpdate(List<SObject> records, Map<Id, SObject> oldRecordMap) {
    List<Account> accounts = (List<Account>) records;
    List<Task> tasksToCreate = new List<Task>();
    for (Account acc : accounts) {
      Account oldAccount = (Account) oldRecordMap.get(acc.Id);
      if (acc.AnnualRevenue >= VIP_REVENUE && oldAccount.AnnualRevenue < VIP_REVENUE) {
        tasksToCreate.add(
          new Task(
            ActivityDate = System.today().addDays(1),
            Subject = 'VIP Revenue Account Detected!',
            WhatId = acc.Id
          )
        );
      }
    }
    insert tasksToCreate;
  }
}

We create a task for our sales team to follow up with VIP accounts as they grow in size. This business logic could have been created years and years ago — simple logic like this tends to be added to over time, though. As your own business and team changes over time, it’s not unusual to end up with a hodgepodge of declarative automation and code-based automation within an org. It can be challenging for business leaders and stakeholders to get great visibility into all of the different touchpoints for different records as they flow through the system.

Ultimately — perhaps many years down the line — it’s not unusual to hear about old automations “breaking” by virtue of new automations being introduced. Typically changes are well-intentioned:

  • let’s say a record-triggered flow is introduced on the Opportunity object when the need surfaces to decrement the parent Account’s annual revenue for Opportunities that are moved to a particular Closed Lost stage
  • now that an Account’s annual revenue can dip back below the threshold, if the Annual Revenue field is later updated to once again exceed 1 million dollars, a duplicate Task is created. The touchpoint for having congratulated the VIP account (let’s say) has already occurred. Whereas before it simply wasn’t possible for somebody to decrease an Account’s annual revenue, a new piece of automation has now brought two parts of the system into conflict
  • technically, there’s nothing “wrong” per se with either of these automations, but every time this happens, the Sales team complains about having to close out a duplicate task; maybe some emails are being sent automatically from your company’s ESP and the duplicate congratulations email is embarrassing

This sort of situation is exactly how The Road To Tech Debt Is Paved With Good Intentions came about. Nobody meant to create a regression, and it’s not somebody’s fault that a regression occurred.

There was an outage at Slack last year, which resulted in this excellent RCA being published; that article’s take on blameless post-mortems heavily influenced my writing in The Life & Death of Software. More to the point, towards the end of the article there’s a link to a short paper named “How Complex Systems Fail” which I can’t recommend enough — number 14 being particularly appropriate here:

When new technologies are used to eliminate well understood system failures or to gain high precision performance they often introduce new pathways to large scale, catastrophic failures.

There are so many gems in that paper — there’s something for everyone reading it!

Introducing Decoupling

Let’s look at how decoupling our flows and triggers from each other can help by examining the role that Custom Metadata Types (CMDT) can have within development.

Let’s create a custom metadata type called Automation with seven fields:

  • Automation API Name (Text 255)
  • Context (Picklist with the Trigger Context values like BEFORE_UPDATE; you can use friendly labels like “Before Update”)
  • Is Active (Checkbox)
  • Is Flow (Checkbox)
  • Namespace (Text 15)
  • Object API Name (Text 255)
  • Running Order (Number, 0 decimal places)

Note that it’s absolutely possible to create an even more loosely coupled CMDT structure for what we’re trying to do here — an obvious example would be giving Automation a metadata relationship to another Custom Metadata Type called Automation Object, thereby removing the Object API Name field from Automation itself. You could also simply make Object API Name an entity definition field on this piece of Custom Metadata, but that brings additional complexity into the overall solution. Let’s leave those as exercises for the reader, and concentrate on showing how this simple object can create a one-stop shop for viewing automations within a Salesforce org.
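
As a concrete example of what lives in this CMDT, here’s the record that would register our existing AccountHandler logic, expressed in Apex purely for illustration (in practice you’d create it declaratively in Setup; the field API names are assumptions based on the labels above, matching the ones queried in the code below):

Automation__mdt accountHandlerRegistration = new Automation__mdt(
  AutomationName__c = 'AccountHandler',
  Context__c = 'BEFORE_UPDATE',
  IsActive__c = true,
  IsFlow__c = false,
  Namespace__c = '',
  ObjectApiName__c = 'Account',
  RunningOrder__c = 1
);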

Creating The Automation Handler

First things first — let’s create a new class, AutomationHandler:

public virtual class AutomationHandler {
  protected String apiName;
  protected String namespace;

  // putting these here so you can envision
  // how to "unit test" these properties
  @TestVisible
  protected System.TriggerOperation context {
    get {
      if (this.context == null) {
        this.context = Trigger.operationType;
      }
      return this.context;
    }
    set;
  }

  @TestVisible
  protected List<SObject> records {
    get {
      if (this.records == null) {
        // Trigger.new isn't populated for delete contexts
        this.records =
          (this.context == System.TriggerOperation.BEFORE_DELETE ||
            this.context == System.TriggerOperation.AFTER_DELETE)
          ? Trigger.old
          : Trigger.new;
      }
      if (this.records == null || this.records.isEmpty()) {
        // LOVE a good hack - technically this can
        // be any record that can't be triggered on
        this.records = new List<SObject>{ new Organization() };
      }
      return this.records;
    }
    set;
  }

  public virtual void run() {
    // the records getter above provides safety on the blind index call here
    String currentObjectName = this.records[0].getSObjectType().getDescribe().getName();
    // If you use the repository pattern, obviously
    // that's preferred to this raw SOQL
    List<Automation__mdt> matchingAutomations = [
      SELECT
        AutomationName__c,
        IsFlow__c,
        Namespace__c,
        ObjectApiName__c
      FROM Automation__mdt
      WHERE Context__c = :this.context.name()
      AND ObjectApiName__c = :currentObjectName
      AND IsActive__c = TRUE
      ORDER BY RunningOrder__c
    ];

    for (Automation__mdt automationConfig : matchingAutomations) {
      AutomationHandler matchingAutomation = automationConfig.IsFlow__c ? new FlowAutomation() : new TriggerAutomation();
      matchingAutomation
        .setup(automationConfig.AutomationName__c, automationConfig.Namespace__c, this.context)
        // you can create a getter for oldMap too, if you'd like
        .pairRecords(Trigger.oldMap)
        .run();
    }
  }

  protected virtual AutomationHandler setup(String apiName, String namespace, System.TriggerOperation context) {
    this.apiName = apiName;
    this.namespace = namespace == null ? '' : namespace;
    this.context = context;
    return this;
  }

  protected virtual AutomationHandler pairRecords(Map<Id, SObject> oldRecordMap) {
    return this;
  }

  private class FlowAutomation extends AutomationHandler {
    private final List<Map<String, Object>> awkwardlyTypedFlowInputs = new List<Map<String, Object>>();

    public override void run() {
      // 'Flow' is the action type for invoking flows through Invocable.Action
      Invocable.Action action = Invocable.Action.createCustomAction(
        'Flow',
        this.namespace,
        this.apiName
      );
      action.setInvocations(this.awkwardlyTypedFlowInputs);
      action.invoke();
    }

    protected override AutomationHandler pairRecords(Map<Id, SObject> oldRecordsMap) {
      this.awkwardlyTypedFlowInputs.add(
        new Map<String, Object>{
          'Record' => this.records,
          // flows can't accept maps as input variables - pass the old records as a list
          'Record_Prior' => oldRecordsMap?.values()
        }
      );
      return this;
    }
  }

  private class TriggerAutomation extends AutomationHandler {
    private Map<Id, SObject> oldRecordMap;

    protected override AutomationHandler pairRecords(Map<Id, SObject> oldRecordsMap) {
      this.oldRecordMap = oldRecordsMap;
      return this;
    }

    public override void run() {
      // note - the TriggerHandler context methods shown earlier need to be
      // public (rather than protected) in order for these calls to compile
      TriggerHandler handler = (TriggerHandler) Type.forName(this.namespace, this.apiName)?.newInstance();
      if (handler == null) {
        return;
      }

      switch on this.context {
        when BEFORE_INSERT {
          handler.beforeInsert(this.records);
        }
        when AFTER_INSERT {
          handler.afterInsert(this.records);
        }
        when BEFORE_UPDATE {
          handler.beforeUpdate(this.records, this.oldRecordMap);
        }
        when AFTER_UPDATE {
          handler.afterUpdate(this.records, this.oldRecordMap);
        }
        when BEFORE_DELETE {
          handler.beforeDelete(this.records);
        }
        when AFTER_DELETE {
          handler.afterDelete(this.records);
        }
        when AFTER_UNDELETE {
          handler.afterUndelete(this.records);
        }
      }
      handler.andFinally();
    }
  }
}

What have we achieved? Here’s what we have to update:

  • the TriggerHandler — now that the switch statement for the trigger context lives in AutomationHandler, TriggerHandler itself can become very simple indeed, with only the (now public) virtual context methods stubbed out, as sketched after this list. A class can override more than one handling method, when appropriate, so long as it registers an Automation__mdt record for each appropriate context.
  • record-triggered flows need to become auto-launched flows with Record and Record_Prior input variables.
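
Here’s a minimal sketch of that slimmed-down TriggerHandler; note that the context methods are public now so that AutomationHandler can invoke them:

public virtual class TriggerHandler {
  public virtual void beforeInsert(List<SObject> records) {
  }
  public virtual void afterInsert(List<SObject> records) {
  }
  public virtual void beforeUpdate(List<SObject> records, Map<Id, SObject> oldRecordMap) {
  }
  public virtual void afterUpdate(List<SObject> records, Map<Id, SObject> oldRecordMap) {
  }
  public virtual void beforeDelete(List<SObject> records) {
  }
  public virtual void afterDelete(List<SObject> records) {
  }
  public virtual void afterUndelete(List<SObject> records) {
  }
  // called after every context, regardless of the operation type
  public virtual void andFinally() {
  }
}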

In exchange, we’ve now completely decoupled flows and triggers from their existing execution order. We get full configurability on when those flows and triggers run; we also get to see all automations in one place for any one given SObject.

In this example, the CMDT itself is the dependency that’s injected — by decoupling the classes and flows that are going to be used by automations, the AutomationHandler itself becomes all about orchestrating the overall flow through the system for records as they’re created, updated, deleted, and even (gasp!) undeleted.
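
Presumably, every trigger file now shares the same single entry point; a minimal sketch:

trigger AccountTrigger on Account (before insert, after insert, before update, after update, before delete, after delete, after undelete) {
  new AutomationHandler().run();
}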

Notes

It’s worth taking down a few notes on the AutomationHandler that’s shown:

  • this is an extremely naive take. It’s not meant to be exhaustive — rather, it’s meant to be indicative of just how much is possible with a few lines of code
  • there’s no error handling: neither on the creation of flow/trigger-based automations, nor when each individual automation runs
  • there’s no observability. A good object hierarchy simply works. A great object hierarchy has observability measures put in place, giving insight into metrics like “how often does this process fail?” and “how long does this process take?” For more on observability and measurability, I recommend reading up on “Service Level Objectives” (or SLOs, for short)
  • there’s no concept of bypassing automation runs (either by profile/user/org or by downstream processes)
  • shown, but without context, is the passing of a namespace, which would allow packages in different namespaces to register their own automations on your own objects
  • the run method is an example of the Command pattern, and it helps a lot with cutting down complexity downstream in our process: it allows both our flow and Apex handlers to react to the same message without having to be cast to a specific type. I’m a big fan of using Command towards the top of an object hierarchy for that specific reason!

More on some of these notes later on. For now…

A Laundry List Of Enhancements

Grouping automations in one place can help larger Salesforce orgs by increasing observability, at the expense of increased complexity. Decoupling can help to manage complexity, but it carries some complexity weight of its own. Caveat emptor: remember that one size does not fit all. The rationale for doing something should never be “because it works for x” (x being some other company). Rather, decoupling exists as a tool in your arsenal; it should be used accordingly. Prematurely introducing too complex a solution can slow development and fracture internal teams. Get buy-in before making big changes!

With that being said, there are plenty of interesting ways to expand upon this example to provide additional safeguards when it comes to automations:

  • you could set up a scheduled job using SOSL that periodically scans for Apex classes extending something like TriggerHandler that don’t have matching Automation__mdt entries (see the sketch after this list)
  • that same job, or another one like it, could also scan for record triggered flows that don’t use the framework
  • as well, you can scan for Automation__mdt entries without their corresponding flow or Apex implementations
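
Here’s a rough sketch of that first bullet point, assuming (per the above) that Apex class bodies are SOSL-searchable in your org; the class name and the follow-up action are hypothetical:

public class AutomationRegistrationAuditor implements Schedulable {
  public void execute(SchedulableContext schedulableContext) {
    // gather the Apex-based automations that ARE registered
    Set<String> registeredClassNames = new Set<String>();
    for (Automation__mdt automation : [SELECT AutomationName__c FROM Automation__mdt WHERE IsFlow__c = FALSE]) {
      registeredClassNames.add(automation.AutomationName__c);
    }

    // find every class that extends TriggerHandler ...
    List<List<SObject>> searchResults = [FIND 'extends TriggerHandler' IN ALL FIELDS RETURNING ApexClass(Name)];
    for (ApexClass matchingClass : (List<ApexClass>) searchResults[0]) {
      // ... and flag any that haven't been registered
      if (matchingClass.Name != 'TriggerHandler' && registeredClassNames.contains(matchingClass.Name) == false) {
        // hypothetical follow-up: create a Task, notify an admin, or fail a CI check
        System.debug(LoggingLevel.WARN, matchingClass.Name + ' extends TriggerHandler but has no Automation__mdt entry');
      }
    }
  }
}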

At a former employer, we eventually built something like that third bullet point to monitor asynchronous Apex implementations — entries that didn’t conform to any of our existing async frameworks were identified while CI was running, and any given build would fail as a result. This was an example (though I didn’t know it at the time) of the emergent “Shift Left” movement, which I discuss more in Uplevel Your Engineering Skills.

In general, once you have a framework, you also get the ability to tack on extra pieces that aid & abet your ability to iterate faster, identify areas of the system that don’t conform to the same “shape” as the rest of it, and get fast feedback.

Wrapping Up

If this intro to decoupling looks familiar, it absolutely should. It’s meant to be an extremely simple take on the Trigger Actions Framework. As a full-fledged framework, it comes with all the bells and whistles you might expect from the Notes section above. I like to use the trigger handler pattern as an abstract example of how decoupling can be achieved because most orgs will use that pattern.

Using Custom Metadata to decouple automations from code is a pattern found in Salesforce codebases across the world, such as:

  • Apex Rollup uses this pattern in a number of different areas, allowing subscribers to build plugins for rollups, as well as to configure the rollups themselves
  • Nebula Logger makes extensive use of this pattern, allowing subscribers to add things like data masking rules, perform additional automations on Logs and Log Entries (itself a mini trigger handler pattern!), add tags and scenarios, etc…
  • NPSP uses this pattern to customize data imports and custom notifications, among other things

The list could go on and on. Despite that, I frequently see and hear from people who are looking for examples of when to use Custom Metadata Types — it’s my hope that this article will help with that. Of course, it’s also useful to give a counter-example: times when CMDT may not be the correct avenue for decoupling. If you need to do role-based, criteria-based, or ownership-based decoupling of any kind, you will probably find custom objects a better fit for decoupling code from business logic due to the strong support for sharing of record access within Salesforce.

Of course, a huge thanks to Henry Vu and the rest of my patrons on Patreon. Your support means a lot, and always gets me excited about what to write next!

In the past three years, hundreds of thousands of you have come to read & enjoy the Joys Of Apex. Over that time period, I've remained staunchly opposed to advertising on the site, but I've made a Patreon account in the event that you'd like to show your support there. Know that the content here will always remain free. Thanks again for reading — see you next time!