Uplevel Your Engineering Skills
Posted: June 17, 2022


Table of Contents:

  • Getting Feedback (Shift Left)
  • The Fallacy of Problem-Solving (And How Triangulation Can Help)
  • Create Proof Of Concepts & Be Prepared To Throw Them Out
  • Balancing “You Aren’t Going To Need It” (YAGNI) With Extensibility
  • Balancing The Need To Contribute With The Importance Of Listening
  • Becoming An Expert At Reading Code
  • Wrapping Up

Want to improve yourself as an engineer? Looking for tips & tricks to empower yourself as a teammate or individual contributor? There are a few things that you can do at any point in your career that will help. There’s a reason that professional athletes frequently preach the importance of fundamentals — while it may seem like common sense now, it’s only recently, through the application of science to sports, that experts have begun to understand how practice quite literally makes perfect by improving (through neurobiology) our bodies’ ability to perform simple tasks. These fundamentals aren’t meant to be a secret — yet without acknowledgment, they continue to be elusive within our industry. Let’s dive in and learn a little along the way!

Broadly speaking, we’ll be covering a few specific areas where it’s easy to make mistakes — and also easy to course correct:

  • when you get feedback (and how you gather it) -> the importance of “shifting left”
  • the fallacy of problem-solving -> the importance of triangulation
  • rapid prototyping -> the importance of being able to scope and throw away discovery work
  • over-abstraction and pattern-matching -> the importance of balancing YAGNI with the principles of extensibility
  • feeling pressure to contribute something to every discussion -> the importance of learning to listen
  • becoming an expert on writing code -> the importance of being able to read code, especially code written by others

None of these areas are mutually exclusive; think of them more as peaks in a contiguous range of mountains. They come from the same place, and they’re frequently involved with one another.

Getting Feedback (Shift Left)

Feedback comes in many forms. Sometimes that means a failing test. Other times, it means a production issue. Sometimes, it means an automated build failure. It can be something as simple as a customer or stakeholder asking for a tweak to the UI that — while not in the current spec — is much better than what was originally produced. We get feedback from colleagues, from our clients, from automation, etc … And yet, too frequently we get feedback too late. Entire industries have been set up to help people automate the process of getting feedback on their websites, for instance, with live replayability of customer sessions that end in exceptions. While bugs are inevitable and unavoidable, we do have recourse when it comes to identifying issues — getting feedback earlier in the process.

“Shifting left,” or controlling the quality of the software we produce earlier in the process, is the phrase that’s been coined for how much faster and more precise we can be, as teams and individuals, when we get feedback at the right time. Here are some examples of how shifting left (continuing with the usage of -> as the change signifier) in order to get fast feedback can help us deliver a quality product faster:

  • we want to make changes to a system but have had little exposure to it and don’t understand the pieces of the software we’re seeing -> writing a failing test (sometimes, writing tests that self-document what the existing system already does, especially when the risk of introducing regressions is high)
  • we’ve been asked to do something but are removed (physically, mentally) from the people who will be using it, or have identified a previously unseen issue with what’s been requested -> getting input from the people who will actually use our product, or the people that work with them and represent them (subject matter experts, or SMEs).
    • Sometimes, this comes about organically while working on something that previously appeared to be a well-defined problem — that’s normal!
    • We keep an ongoing kanban list for each piece of work where we denote questions that have come up along the way, in “To do”, “Doing” and “Done” columns. For many stories, this kanban list is never used — but it’s invaluable as a tracking tool when questions do come up mid-work, and we try to move items through that list as soon as possible by getting stakeholders involved.
  • we’re stuck working on a problem and haven’t made any progress -> summarizing the work done thus far and asking questions in any kind of public forum where other experts might be able to help. For example, if you’re doing Salesforce development or administration, I will always recommend the SFXD Discord
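The first bullet above, writing tests that document what an existing system already does, is sometimes called a characterization test. Here’s a minimal sketch in Java (the legacy method and its pricing rule are entirely hypothetical):

```java
// Hypothetical legacy method whose behavior we want to document before changing it.
class LegacyPricing {
    static int discountedPrice(int price, int quantity) {
        // the rule we recovered by reading the code: more than 10 units -> 10% off
        return quantity > 10 ? price * 90 / 100 : price;
    }
}

// A "characterization" test records what the code does *today*; if a later
// refactor changes this output, we get that feedback immediately instead of
// in production.
class CharacterizationTest {
    static boolean passes() {
        return LegacyPricing.discountedPrice(100, 5) == 100   // no bulk discount
            && LegacyPricing.discountedPrice(100, 11) == 90;  // observed bulk behavior
    }
}
```

Note that the test pins observed behavior, not desired behavior; if the bulk rule later turns out to be a bug, the test gets updated deliberately, alongside the fix.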

The alternative, for the record, isn’t necessarily failure. It could be more iterations, or success achieved later on down the line. Some of the “best” (if we were to rate things by their educational value) bugs I’ve experienced came from issues experienced in production — but that doesn’t mean that I would endorse the existence of bugs, or a non-ideal UI, or any number of other issues, simply in order to learn from them. On the contrary, it’s nice to be able to incorporate feedback as soon as possible in the process of building; it’s helped us as a team discover the need for hidden data migrations, better UI interactions, sensible defaults for fallback values, etc. We also use Nebula Logger to highlight potential error paths within applications, particularly when we first start working within them. Logging unexpected errors has also helped us to “shift left” by highlighting the frequency with which errors occur, which helps us respond to customer feedback faster and document issues even prior to getting that feedback.

There are, then, different levels of feedback that become possible (from lowest to highest):

  • informational logging coupled with analytics (helps with understanding usage and happy path application states). These are periodically purged. You can think of this as “is it on?” feedback.
  • ongoing feedback gathered during the active development on our projects. Sometimes this is done with stakeholders; other times it’s aggregated and conveyed to us via our Product Owner
  • raised levels for error logging (in our case, through the usage of the Nebula Logger Slack plugin). We have SLAs for triaging issues reported through error logging and for when those issues have to be resolved
  • intake route(s) to hear from people firsthand about issues they’re experiencing, which we respond to as a team and which follow the same SLA process outlined above
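As a rough sketch of how those levels might be separated in code (this is illustrative Java, not Nebula Logger’s actual API; the class and method names are invented):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative log levels, ordered from "is it on?" telemetry up to actionable errors.
enum Level { INFO, WARN, ERROR }

// A toy logger: everything could be persisted for analytics (and periodically
// purged), but only entries at or above the threshold are flagged for alerting,
// e.g. a Slack notification with an SLA attached.
class LeveledLogger {
    private final Level alertThreshold;
    final List<String> alerts = new ArrayList<>();

    LeveledLogger(Level alertThreshold) {
        this.alertThreshold = alertThreshold;
    }

    void log(Level level, String message) {
        if (level.compareTo(alertThreshold) >= 0) {
            alerts.add(level + ": " + message);
        }
    }
}
```

The useful part of the design is the ordering: raising or lowering the threshold changes what demands a human response without changing any call sites.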

It could be that you write flawless code, and that you have no issues with identifying the culprit when tracking code through multiple applications. For everybody else, a great place to start shifting left is by implementing application logging (you’ll note that simply having access to logs opens up two of the “shift left” paths above). I’ve already spoken about this in A Year In Open Source, but implementing a simple version of Nebula Logger within Apex Rollup led to a massive improvement in my ability to document, track, and respond to issues experienced while using the framework. The number of times that the reason for a particular issue (or insight into how to fix it) was included in the logging output, which I’ve made into a toggleable feature within this repo, makes me really happy.

For another example of shifting left, make sure you don’t miss Custom Metadata Decoupling.

The Fallacy of Problem-Solving (And How Triangulation Can Help)

One of the downsides to spending all day problem-solving is that it becomes convenient to spend time trying to solve a particular problem instead of recentering. Because we do this for a living, and are used to starting something with no knowledge of what the finished product will look like, it’s not always easy to recognize when the time commitment we’ve put into something has gotten us further away from the actual problem we’re trying to solve. There’s a reason the idiom “losing sight of the forest for the trees” exists — but what can we do to avoid it?

Here’s a recent example from one of our mobbing sessions:

There we were, examining a frontend requirement to display a few new things while a record was being created. We started to prototype out how exactly we might do that. When we went to plug our new “Hello World” component into the rest of the system, it didn’t work. It was actually too new — the interface we intended to use didn’t support the framework we were using. We knew we could use an earlier framework but that was extremely unappealing — in fact, the very thought of it was emotionally draining. Somebody, at that moment, asked if they could clarify what we were trying to do. Wouldn’t it be possible to … skip everything we’d just looked at? There was another way — a much easier way. Everyone breathed a sigh of relief. The solution we would use would forego everything we’d just talked about and tried to stand up, and it would be simpler, too.

This technique is known as triangulation, and it comes to us from data science (vis-à-vis polling). Triangulation technically means taking in data from a variety of sources before commencing analysis; for our purposes, it means surveying the full range of available solutions before committing to one. Make time for this throughout your day, particularly in the discovery part of building solutions or architecting. Sometimes this presents itself as the XY problem, but that’s not always the case. Triangulation can be used to spot when you’re in an XY problem, in other words, but it’s much more broadly applicable than simply spotting things like that.

Triangulation can also identify something that’s just about the opposite of the anecdote from above — because sometimes the appropriate solution is actually more complicated than what you’re currently trying to do. This can happen for a variety of reasons — the most common one I see is when the code being worked on actually needs to be callable not only from what’s currently being designed, but also at a higher point of abstraction that can’t (or shouldn’t) know anything about the code you’re working on. Recognizing when solutions can benefit from a more general approach (genericism) can be achieved through triangulation.

How does one triangulate, you say? It’s easy!

  • compare what you’re doing to things you’ve already done
  • frequently ask yourself — and others, if it’s socially acceptable — while investigating a problem if the work you’re currently doing is yielding results, or if another approach might be more helpful
  • agree (with yourself, or with your team; make an agreement, if you don’t have explicit working agreements) on how much time you think the discovery period for something should be. Some teams refer to this time as a “spike”, though I’ve found “discovery” to be far more common outside of internal engineering teams for this same process. Anchoring to a timeline for something like this may help with forecasting, but for our purposes it’s more useful as a mechanism for subdividing the time within that period into triangulation periods.

I told a story that involved triangulation in The Life & Death of Software when I spoke about us, as a team, being excited about using a piece of technology that ultimately wasn’t well suited to the job. The reason we were able to identify that tech as being incompatible with our end goal was through triangulation. In our case, that particular piece of triangulation was a part of our backlog refinement session; we recognized that there were two probable paths to our end goal, and each of those paths produced a “spike” … and only one of those spikes amounted to anything. We were lucky, in one sense; throwing away work is never fun, but what’s even less fun is thinking that you’ve identified two possible solutions and ended up with no possible solution, with only more discovery in front of you.

Here’s another example of triangulation:

We had installed a prebuilt component onto a page. This component is part of Salesforce’s platform; it’s literally drag and drop. The expectation going in, as a result, was that installation would be relatively easy. When the “complicated” part of a task is putting something into source control, life is good. Except it wasn’t. Something wasn’t working, and we didn’t know why. We spent some time trying to diagnose the issue, without success. It was brought up that we’d recently made use of this same component without issue elsewhere. A cross-comparison of source control quickly brought us to a subtlety missing from our current setup. The issue was resolved — and we, as a team, learned about a previously unknown requirement.

Stay focused, and triangulate when necessary.

Create Proof Of Concepts & Be Prepared To Throw Them Out

Figuring out where to draw the line when working on Proof Of Concepts (POCs) can be a tricky thing to navigate. This goes back to triangulation, in the sense that we have to restrain ourselves from building out every part of a system or new feature. This is where a little bit of architecture can go a long way. For instance — if we know that an API call is necessary to fetch certain information, the ceremony that goes into making that API call might be better off being stubbed within a POC (unless this is actually the first time you’re doing HTTP-related behavior in an application). Broadly speaking, HTTP is a great example of where it’s easy to go too deep when doing a POC. Why?

When performing API calls, there’s a whole orthogonal logic structure that needs to be applied to what we’re doing, and it may involve completely separate architectural concerns (which may or may not already be handled for you), like:

  • what sort of data is necessary to provide to the API? How will we source that data?
  • what kind of data is received by that API? Are there pre-existing models for that, or will we also have to represent that?
  • what kinds of status codes does the API send, or document, or do we suspect we’ll need to be aware of?
  • is this a “hot” API code path? Do we need to have a retry mechanism in place? What about exponential backoff?
  • is the API rate limited? Are there cost implications to using it? How will the answers to those questions impact our solution?
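One of those concerns, a retry mechanism with backoff, can be sketched as an exponential schedule in a few lines. This is illustrative Java; the base delay, cap, and missing jitter are simplifications of what a production retry layer would need:

```java
// Minimal exponential backoff schedule: each retry waits twice as long as the
// last, capped at a maximum delay. Production implementations typically add
// jitter so that clients don't all retry in lockstep.
class Backoff {
    static long[] scheduleMillis(long baseMillis, long capMillis, int attempts) {
        long[] delays = new long[attempts];
        long delay = baseMillis;
        for (int i = 0; i < attempts; i++) {
            delays[i] = Math.min(delay, capMillis);
            delay *= 2;
        }
        return delays;
    }
}
```

Even this toy version illustrates the POC point: it’s an entire orthogonal concern, and stubbing it out during discovery is usually the right call.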

Hopefully that demonstrates the sort of thing I’m talking about. Provided that we have a pre-existing framework for this sort of thing, plugging our new code into that framework during a POC may be overkill. We may simply want to say something like “and here, we’ll get data from X and continue onwards.” X, in this sense, doesn’t even have to exist. The point is specifically not to make this perfect — it’s to enable some other part of our application. Especially when (as in the triangulation story) our POC is meant to show a possible path towards a desired end-state, keeping things properly timeboxed helps us to make decisions.
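That “get data from X and continue onwards” idea can be sketched by hiding the eventual API call behind an interface, so the POC runs against canned data. All of the names here are hypothetical (illustrative Java):

```java
// The seam the POC codes against; a real HTTP-backed implementation comes later.
interface AccountSource {
    String fetchAccountName(String accountId);
}

// Stub used during the POC: no auth, retries, or status-code handling yet.
class StubAccountSource implements AccountSource {
    public String fetchAccountName(String accountId) {
        return "Acme (stubbed)";
    }
}

// The feature being proven depends only on the interface, so swapping in a
// real client later doesn't change this class at all.
class GreetingFeature {
    private final AccountSource source;

    GreetingFeature(AccountSource source) {
        this.source = source;
    }

    String greet(String accountId) {
        return "Hello, " + source.fetchAccountName(accountId) + "!";
    }
}
```

If the POC is thrown away, only the stub dies with it; if it survives, the interface marks exactly where the real integration work begins.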

At the same time, we already know about our friend triangulation, and that (hopefully) helps us to build up experience and a mental model that prevents something like confirmation bias when timeboxing things. If a teammate is familiar with framework Y but people are having trouble picking it up within the confines of a timeboxed session, triangulation helps us to break out of the “we need to learn Y at least rudimentarily by end of day/week” mindset and into asking: “is it really realistic for us to deliver this feature using Y if we can’t get a handle on it by {timeboxed end period}?”

Don’t shoehorn; don’t commit to unproven technologies unnecessarily. The above, in my opinion, also helps to insulate us from committing, emotionally, to solutions that were only meant to be demonstrative. Again, this is a tricky balance to get right; the difference between a sloppy POC that has to be rewritten anyway and one that’s intentionally light isn’t always going to be so clear-cut. On the whole, though, I would say that I’ve thrown away more work over the past four years than I would have thought possible in my first three years of development. I don’t keep an actual running tally of such things, but it’s something that I talk to my wife about frequently, particularly within the realm of my open source work. There are ideas that I’m still excited about, but haven’t been able to get off the ground while prototyping — this is especially true of some refactorings that I’d like to apply. Triangulation has occasionally been responsible for seeing some of those refactorings eventually lifted into production, but it’s increasingly common for an approach to be tried and abandoned (or saved for later) when it becomes clear that there’s no way, within the timeframe I’m working in, to make any kind of meaningful progress on something.

All of that is to say — don’t be afraid, or disheartened by throwing work away. It’s not the end of the world. Our second, third, fourth or however many passes it takes to do something the right way frequently are better off for us having had the experience of building things the first, second, third, etc … time.

Balancing “You Aren’t Going To Need It” (YAGNI) With Extensibility

Of all the challenges in software engineering, committing to the right abstraction is frequently the hardest thing we are tasked with doing. The civil engineering equivalent would be having to design and imagine the building blocks necessary to make a bridge, such that a bridge could be built over any amount of water, in any kind of environment. That’s … not a reasonable ask, yet because so much of our shared background as programmers comes from the school of thought that drills into us that duplication is bad (and it is, mostly), we become better and better at pattern-matching within development. We draw parallels between pieces of code; we build up a mental model for our software that helps us quickly identify similarities between code paths.

This is a good thing — but like our second “problem” (meta-commentary: I’m drawing parallels between these two problems, and run the risk of accidentally experiencing the same fallacy I’m here to talk about …), where without triangulation it becomes too easy to get lost in the weeds by virtue of always trying to problem-solve, here the risk is in unnecessarily coupling concepts together when they don’t really belong together. I think this can best be demonstrated with a code sample — the first in this article! — utilizing the excellent Command pattern:

public interface IExecutor {
  void execute (Object data);
}

public class HttpWrapper implements IExecutor {
  public void execute (Object request) {
    new Http().send((HttpRequest) request);
  }
}

public class Sorter implements IExecutor {
  public void execute (Object comparableImplementation) {
    ((List<Object>) comparableImplementation).sort();
  }
}

The execute sub-pattern for Command isn’t an uncommon one, and sometimes it makes perfect sense. However, if everything object-wise can be “executed”, we’ve over-abstracted our business domain, and made it harder for each of our objects to differentiate themselves. Unless we’re truly working with objects generically, where this pattern can really shine, the two implementations shown hopefully demonstrate the kind of over-abstraction I’m talking about. In this context, our IExecutor interface isn’t meaningful; it adds nothing and has the unfortunate side-effect of increasing the complexity of the underlying implementations by virtue of having another layer of indirection.

While the above classes are loosely coupled from one another, they’re tightly coupled in other senses (here, in both cases, to the standard library). When we talk about wanting to build up class libraries of loosely coupled objects, our interfaces should be enabling us to code around dependencies without having to care about the underlying implementation — whereas here, we’re not only tightly coupled to the underlying implementation, we’re really ready (in a bad way) for a runtime exception if our types don’t correspond exactly.
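For contrast, here’s a sketch of an interface that earns its keep: it’s typed to its domain, so callers can swap implementations without casts or runtime surprises. This is illustrative Java rather than Apex, and every name in it is invented:

```java
// A narrower contract than IExecutor: typed to its domain, so no Object casts
// and no runtime surprises for callers.
interface HttpSender {
    String send(String endpoint, String body);
}

// A test double satisfying the same contract without touching the network;
// a production implementation would wrap the real HTTP stack instead.
class FakeHttpSender implements HttpSender {
    public String send(String endpoint, String body) {
        return "200 OK from " + endpoint;
    }
}

// The consumer depends on the contract, not the implementation: this is the
// loose coupling the interface actually buys us.
class OrderService {
    private final HttpSender sender;

    OrderService(HttpSender sender) {
        this.sender = sender;
    }

    String submit(String payload) {
        return sender.send("/orders", payload);
    }
}
```

The difference from IExecutor is that this abstraction exists for a reason: OrderService genuinely doesn’t care which HttpSender it receives, and the compiler enforces the types at every boundary.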

Over-abstraction, and its dangers, have frequently been “countered” by the YAGNI principle, which advocates for only building exactly what you need. YAGNI advocates mean well, but keeping things “as simple as possible” in the present only works if every single time we build we’re re-evaluating for consolidation opportunities. Pinning to the wrong abstraction is dangerous because it reduces the clarity of our codebase; always keeping things simple is dangerous because it tends to eliminate helpful refactorings that serve to encapsulate our specific business domains within the code. There’s space for middle ground here — it’s almost as if making extreme blanket statements means we miss out on opportunities! 🙃

Balancing The Need To Contribute With The Importance Of Listening

Note that this is not specific to engineering at all, but when it comes to acting as a force for good (sometimes referred to as a force multiplier) on a team, or within an organization, balancing the desire to have an impact with the importance of truly lending your attention (our most powerful method of being present) to hearing others out can be challenging. Advancing as an engineer requires soft skills (I recently recommended Software Engineering - The Soft Parts for precisely this reason), and the most important skill amongst them is the ability to combine the other things I’ve talked about here with your full, undivided attention. Listening — really listening, not just being there — to your stakeholders is part of “shifting left”: getting fast feedback instead of just eagerly diving into building something that doesn’t fully fix a problem.

Within engineering, our ability to disassociate from our ego becomes a central part of the ability to continue “leveling up,” so to speak. If our options are:

  • feel threatened by not knowing something, especially if it’s being brought to you by somebody you consider more junior (note that I say, in particular, “you consider”, as this isn’t really about the role another person has, but is about how we react to not always knowing the answer)
  • welcome feedback from somebody, regardless of their role, and best consider how to proceed in light of new information

Then the person that doesn’t act like a complete asshole will (nine times? ten times out of ten?) be the one who ends up being able to advance, because their peers will prefer to work with them. To be clear, it would be easy to read that and cynically assume that I’m saying that acting nice is the only thing that’s important, and that you can go on your merry way up the corporate ladder — but that would be missing the point entirely. The point is to assume that everyone has the best intentions; that you may not be the most knowledgeable; that different perspectives will lead to a more comprehensive solution. This is what the simple act of listening allows us to do, and those that listen well will learn more. It’s assumed that you, as an engineer, will figure things out — that’s our baseline for simply doing our jobs. If you can also help others along the way, the whole will tend to be greater than the sum of its parts.

I’ve noticed this even within the confines of a single year; I speak less, now, than I ever have before. This runs contrary to my instincts, from time-to-time, but particularly with remote work I know that it’s easy to accidentally cut somebody off mid-sentence … it’s much harder to spend time and attention ensuring everyone has spoken their piece before using the other tools outlined here. If you want to consciously foster an environment of learning, it’s also important to make your role much less about giving answers or dictating solutions, and more about asking guiding questions. The worst possible scenario for any person is to become the single point of failure. Asking guiding questions (which relies on your ability to listen, pay attention, and comprehend fully where another person is coming from) enables you to grow the people that you work with. These things take time, and along the way you’ll have to occasionally throttle yourself and make space for others. I think you might also be pleasantly surprised by how often you learn something in the process.

Listen. Learn. Grow. These are simple words, but they’re also satisfying ones. Satisfaction is a known indicator of continued success in a job, and it’s also consistently been shown to impact how fast teams and individuals are able to produce working software while minimizing defect rates.

Becoming An Expert At Reading Code

I’ve spoken about this before, in Naming Matters In Apex — that reading code is a totally separate skill from being able to write code, and frequently we’re only flexing the mental muscle necessary for the latter in the early stages of our career. I say this ruefully, as somebody who now longs to be able to read the code of the first few systems I worked on; at the time I was completely tunnel-visioned on fixing bugs and adding functionality, and as a result missed out on what I imagine were some excellent learning opportunities. I also have to laugh, as I’ve heard on not one but two recent Changelog podcasts (one with Jessica Kerr, one with the founders of Graphite) about how much people hate reviewing code.

This is a position I’ve heard frequently off of podcasts, as well: that code review is a dismal assignment, and that people dread it. That’s one way to look at it. I see all that apprehension — if not outright dislike, at times — toward code review as an opportunity. It’s an excellent way to distinguish yourself as an engineer, and a lot of that boils down to coming to enjoy the drastically different process that comes with reading code. While tech is always advancing, it’s also a lot easier to practice your code-reading skills (in my experience) these days; since GitHub became such a big player in the open source realm, it’s now trivial to go look up how idiomatic code is written in the language of your choice. For example, a cursory search for the almost-entirely academic language of ATS turns up an enormous number of results — and this is for a completely niche language that I only know about because one of my friends at Red Hat used it in their undergrad courses.

It’s possible that within the scope of your own job you have opportunities to read code written by people you trust and admire — that’s always an excellent opportunity for learning. As with most things, I recommend an approach that combines critical thinking with understanding; I know people who are very well-steeped in “how we do things,” and have learned some excellent habits along the way, but fall short in being able to explain why things are done a certain way. That kind of learning is still valuable, but too often it’s framework-specific (the equivalent of “vendor lock-in”, but for your mind!) and when they have to venture off the rails, progress becomes a lot slower. Contrast this with the sort of learning that I’m advocating for, which encourages a deep reading dive into how a framework works, such that transferring that understanding becomes easy and increases your own value in the process.

As an example, it would be like somebody using jQuery, React, Vue, LWC, Svelte, etc … without having taken the time to learn HTML or CSS. The former is framework-specific; the latter is transferable (and sets you up for success in learning a different framework). That doesn’t mean that if your job is working within React, you shouldn’t be reading framework-specific repositories if you want to increase your reading comprehension within that framework. It does mean that you should also be incredibly familiar with the React developer docs (and, as an example, how lifecycle methods work under the hood & the differences between class and function-based components). Having a comprehensive foundation in any subject like that will improve your reading comprehension skills by default; it will also help you to identify commonly-made mistakes (which we all make) when reading code.

That brings us back to the code review topic, then. If you can differentiate yourself as a skilled and thoughtful reviewer of code, that’s an incredible plus — both within your organization, and on your resume. You’ll help others get unblocked, and learn a ton in the process. Don’t forget to flex your reading muscle!

Wrapping Up

Don’t take my word for everything. I hope that this article serves as a fun and thought-provoking jumping-off point — or continuation — as you grow in your own career. I’ve had a few days away from this article since I last wrote something for it, and I thought this last anecdote might prove insightful and funny to you all. As I’m falling asleep, I tend to do my best mental work. Some of the most obvious bugfixes, ideas for patterns, articles, and refactors have come to me just as I’m about to drift off. I’ve spoken before — in A Year In Open Source — about how I frequently record items within the Notes app on my phone, and the vast majority of those bullet points are written after bolting upright in bed with a sudden thought. I mention this specifically because it’s an example of something specific to me; I don’t view it as a skill, and I would never go to somebody else and say:

Well, you know, I think the best thing you can do about that problem you’ve been telling me about is to think about it unconsciously and then, right before you fall asleep, try to come up with the solution

We all work in different ways. We think in different ways. This article, if nothing else, is about trying to lay out the commonalities we do have — for your enjoyment and thought. Let me know what you think, and thanks for reading along. Till next time!

In the past three years, hundreds of thousands of you have come to read & enjoy the Joys Of Apex. Over that time period, I've remained staunchly opposed to advertising on the site, but I've made a Patreon account in the event that you'd like to show your support there. Know that the content here will always remain free. Thanks again for reading — see you next time!