
“Effective Tech Leads Empower Developers to Ship Projects Faster with Higher Quality”: Dominique Simoneau-Ritchie, Chief Technology Officer at Affinity (Video + Transcript)

December 26, 2023

In this ELEVATE session, Dominique Simoneau-Ritchie (Chief Technology Officer at Affinity) will share proven techniques to set up projects in such a way that everyone on your team ships with high confidence and quality. She will talk about establishing the automated testing frameworks, data, and examples required to make it mandatory to add automated tests as part of every code change.



Dominique Simoneau-Ritchie discusses the importance of the technical project leadership role in engineering teams. She highlights four key practices that effective tech leads use: understanding technical debt and defects, establishing automated testing frameworks, aligning with technical decisions, and setting up scaffolding for quick feedback loops.

Transcript:

Dominique Simoneau-Ritchie:

Thank you. Hi everyone. Throughout my career, I’ve led, coached and mentored hundreds of engineers leading projects, and at Lever, Wealthsimple and now Affinity, I’ve introduced a technical project leadership role. So today I want to talk to you about that. And the reason that I love to do this is because it provides engineers with experience leading projects and true ownership. And I believe that ownership helps us ship the highest impact features for our customers, because as engineers, we’re uniquely positioned to understand how to implement features really quickly while keeping our code simple and maintainable. And as such, I’ll highlight four key practices that effective tech leads use, many of which I learned by observing teams and tech leads and leading projects myself.

So first, a quick note on the role. Like many titles in tech, Tech Lead lacks a common definition. It may not even be the same role from company to company or even from team to team. I personally prefer establishing this as a temporary role for the duration of a single project. It creates more opportunities for people to gain that experience leading projects from a technical perspective, which in my opinion is a mandatory skill for progressing from one level to the next. At Lever, we already had tech leads, and actually a couple of them had burnt out because they led every single project at the time. And so I named this role Project Lead to really imply that it’s meant to last for the duration of a project. At Wealthsimple, we were familiar with the term DRI, so I called it Tech DRI. And now at Affinity, I’ve introduced the role as a Tech Lead. In some companies you might be a team lead leading a project or, in a smaller team, an engineering manager leading a project. So regardless of your title, if you’re leading a project, then some of these are your core responsibilities, and this talk will be relevant for you.

And for this talk, I’m going to choose to focus on the technology aspects of the role. And the reason for this is that no one else is going to ask you to do this. It might seem obvious, Tech Lead is literally the title, but many companies are very product driven and don’t naturally create the space or the expectation for engineers to invest in technology, and so they end up shipping with lower velocity and lower quality as a result. As engineers, it’s tempting to plan work exactly the way that your product team thinks of it to get to customer value faster, because it seems faster, but often it’s not. And as someone leading a project, you have the greatest ability to ensure that we’re constantly improving our architectural foundations and the developer experience, so that we keep being able to innovate and build quickly in the future and not just for one single project. Don’t wait for somebody to ask. This is a mindset that you can apply to any task that you’re working on, big or small.

So I’m going to focus on a few proven techniques to set up the technical aspects of your project in such a way that everyone on your team ships with high confidence and quality, by understanding tech debt and defects that will cause slowdowns and problems later, establishing automated testing frameworks, data and examples required to make it mandatory to add automated tests as part of every single code change, aligning with technical decisions across your organization to make progress against your engineering strategy, and finally, setting up scaffolding to ensure that all developers have quick feedback loops.

So let’s start with technical debt. It’s important to take a look at any debt related to the scope of a project that you’re about to kick off. In product management, we often do a thing called lit review, which is to look at all of the customer enhancement requests, user research, feedback that have come around this area related to the feature that we’re about to build to inform what we’re going to build in the scope. And here, from a technical perspective, we can do the same thing. So look at all the manual tasks to understand how to automate them, to remove the need or to build the feature into the product. So for example, at Wealthsimple we built a brand new mobile app that customers had to sign up and onboard into. And so as part of that, the identity team started looking at where they had issues with regards to identity and settings that they could fit into the scope of the project. And so we were able to move over tons of manual tasks related to customer profiles, both into the app, but also into internal tools when it didn’t make sense to have that workflow be part of it so that those tasks, instead of going to engineers, now were either completed by our CS team or customers directly, which also created a better customer experience.

You can also look at Rollbar, Datadog, Sentry, your alerts and monitors, to understand whether there are performance issues, timeouts, 500 errors, things related to these areas as well, because that will inform how you build your data models and what changes you might want to make as you’re building features on top of, or adjacent to, what exists.

Look at existing but also fixed bugs. Are there patterns with recurring bugs, like constantly seeing the same one, or a cluster in one area, that might indicate you haven’t really designed a state machine and you should, because you keep having these one-off errors? Or are there maybe bugs that are hard to fix? We had so many bugs at Lever with 50 customer requests each that we had difficulty getting to, because the feature hadn’t initially been designed to solve for them. It just wasn’t possible to do with the current architecture. And then similarly incidents, postmortems, maybe the action items that came from those. Are there any that are still not resolved or not done? And documentation. So maybe your internal documentation for helping developers get set up and work in that area of the code, or public documentation on the feature. A lot of developers will Google, “How is this thing supposed to work?” when they’re working on a new feature for the first time. And looking at the customer documentation and updating it is a great way to build that understanding.
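The recurring-bug pattern above often points at a missing state machine. As a minimal sketch (the states and transitions here are hypothetical, not from any product mentioned in the talk), encoding the allowed transitions in one table turns scattered one-off edge cases into a single checked rule:

```typescript
// Hypothetical lifecycle; an explicit transition table makes invalid
// state changes a single, well-defined error instead of one-off bugs.
type State = 'created' | 'sent' | 'signed' | 'archived';

const TRANSITIONS: Record<State, State[]> = {
  created: ['sent'],
  sent: ['signed', 'archived'],
  signed: ['archived'],
  archived: [],
};

// Returns the new state, or throws on a transition the table forbids.
function transition(from: State, to: State): State {
  if (!TRANSITIONS[from].includes(to)) {
    throw new Error(`Invalid transition: ${from} -> ${to}`);
  }
  return to;
}
```

New edge cases then become a table change plus a test, rather than another scattered conditional.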

Talk to your cross-functional peers and PMs. So don’t do this in a vacuum. Go do a quick review, see what exists, and then share your learnings with either the PM, the designer, your data scientist. They’ll be able to add missing context and they might have additional use cases from customers that help justify an increase in scope. And so this is really not a, “Oh, I’m going to go do all these engineering things all on my own.” This is truly about bringing your knowledge and the technical aspects to the table and making sure that you agree on what the right scope is for the project.

And also when it comes to tech debt, it’s a really good idea to pick one to three high priority bugs that multiple customers have requested and to fix them upfront. A lot of engineering managers actually already kind of do this without proactively planning for it. So they’ll know, “Oh, I think this person’s going to end up working on this area of the code next, so I’m just going to keep sending them bugs this way.” And what I’m proposing is that you do this proactively when you kick off a project. And the outcome might be that you onboard really successfully and get context in an area of the code that you didn’t understand before or didn’t know the complexities of, you realize why these bugs aren’t fixed, and you address that as part of the technical design for the feature that you’re now building. Maybe you didn’t design the state machine correctly, so there are constant edge cases to address.

You might also just fix customer bugs that people have been asking for for a long time. And if you understand enough about where you’re going, you could potentially even be refactoring to make it easier to build on top of. And finally, sometimes the outcome will be, and I wish it wasn’t often the case, but you’ve identified something worth fixing, but it’s much bigger than you thought and it doesn’t really fit into the scope of the project. And that’s okay. That’ll happen sometimes, but at least you’ve learned and you’ve gotten context that’s going to inform what you’re going to be building as part of this project.

Now, automated testing, it’s also sometimes a debt if you haven’t done a lot of it. It deserves its own focus because it determines your ability to confidently make major changes in existing areas, but also to ship really quickly with high quality anything that you do within the scope of your project. You already probably know this, but these are some of the reasons that make it worthwhile to focus on automated testing, but not just to do it as part of your project, but to do it upfront. So not later, but really, really thinking about it proactively. You’re less likely to introduce regressions. You increase confidence refactoring your code. Your tests act as documentation for developers that are joining if you end up having developers joining later on, and then it’s easier to onboard developers because of that. And it’s required for continuous deployment, which even if you don’t do today, is eventually probably going to be required. And doing it now will help you as you increase your technical foundations.

And so when I say automated testing in terms of setting up your project for success, what I mean is everything required to make automated testing just part of every PR and part of every small piece of the feature that you implement. So that might mean an initial subset of stubs, mocks, connections to real data, at least one test of each type that you plan to support, maybe a unit test, an integration test, end-to-end tests. It could also mean test coverage. We’re building a brand new product, which is part of our core app and monolith at Affinity, and as part of that, we’ve decided to enforce a certain amount of test coverage because it’s net new, it’s really easy for us to enforce it pretty high right now. And so before we even build all of our features, we’ve put in place what’s required to make it possible to do all of these kinds of tests and to measure and get to, I think it’s a hundred percent actually, I don’t know if that’s too ambitious, but that’s what we’re setting it at to start.
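One concrete way to enforce a coverage bar on a net-new module is Jest’s `coverageThreshold` option, which fails the test run when coverage in a path drops below the configured numbers. This is a hypothetical configuration with illustrative paths and thresholds, not Affinity’s actual setup:

```typescript
// jest.config.ts (illustrative): enforce high coverage only on the
// net-new module, where starting strict is cheap.
import type { Config } from 'jest';

const config: Config = {
  collectCoverage: true,
  collectCoverageFrom: ['src/new-product/**/*.ts'],
  coverageThreshold: {
    // Scoped to the new code; the legacy monolith is not held to this bar.
    './src/new-product/': {
      branches: 100,
      functions: 100,
      lines: 100,
      statements: 100,
    },
  },
};

export default config;
```

Scoping the threshold to a path means the bar applies where it is cheap to meet, without blocking work in older, less-tested areas.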

And finally, and this is where it actually might feel not natural, put test cases in for existing features. And I have a really good example of this. At Lever, we started a project built on top of a feature called Offers, as in job offers to candidates, a recruiting tool, which was created over six years ago. Offers had very few automated tests, I think it might’ve been written by the founder. And getting into the right state to test required a lot of manual setup on your machine, which was really not obvious. And no one on the team had ever worked on Offers before.

So one of the first tasks that the tech lead assigned to someone on the team (small aside here: you don’t have to do all of the things I’m talking about yourself, it’s really great to distribute this as part of the planning for your team) was to create two happy path end-to-end test cases for the existing feature. This had multiple benefits. The developers working on the tests learned how Offers currently worked. By running them automatically as part of continuous integration, we increased the confidence of all of the developers and also decreased the chance of introducing regressions. And finally, we established a pattern for our end-to-end tests, which made it easy to add new end-to-end tests as part of the project. And so this is a really good example of the kind of team leverage that a tech lead can generate when they empower everybody to contribute and they proactively plan for the type of technology and type of work that the team is going to want to do next.

So that brings me to engineering strategy. And you might think, “I work at a company that doesn’t have a strategy,” but all engineering teams have a strategy, even if it’s accidental, not documented, or maybe only one team knows about it because they’re the ones making the forward-looking technical decisions. But the reality is your company’s probably in the midst of converting code to a new language, trying to standardize on a single design system, maybe adopting microservices or componentizing a monolith. Everywhere that I’ve worked, we’ve been in some sort of transition. And I’d argue that if you’re not, you’re probably creating additional tech debt with everything you build. Technology changes, and it’s faster and easier to keep up than to have to invest in some sort of full migration later.

So here’s some example engineering initiatives to get you thinking about what you could consider as part of the scope of your projects. So migrating to a new language, whether that be TypeScript or adopting GraphQL APIs, upgrading to a new major version of a library which might introduce API breaking changes that need to be made, adopting a new design system. So at Affinity, we’re currently standardizing on a single design system. And so as part of every project, we determine whether or not we should migrate all the way and if we should reuse existing components that are in the design system or if we have to introduce new ones. Changing coding patterns across the code base, for example, updating JavaScript code to use promises instead of callbacks, or we’re abstracting a lot of our backend logic at Affinity into service objects instead of directly being part of our controllers.
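The callbacks-to-promises change mentioned above is mechanical enough to sketch. In this example (the function names are invented for illustration), the legacy callback API is wrapped once with Node’s `util.promisify`, so new call sites can use async/await while old ones keep working:

```typescript
import { promisify } from 'node:util';

// Hypothetical legacy function written in Node-style callback form.
function fetchUserName(
  id: number,
  callback: (err: Error | null, name: string) => void,
): void {
  // Stand-in for a real database or API call.
  setImmediate(() => callback(null, `user-${id}`));
}

// One migration step: expose a promise-based version alongside the
// old one, so new code is written with async/await from day one.
const fetchUserNameAsync = promisify(fetchUserName) as (
  id: number,
) => Promise<string>;

async function greet(id: number): Promise<string> {
  const name = await fetchUserNameAsync(id);
  return `Hello, ${name}`;
}
```

Doing this wrapping as part of a feature project, file by file, is exactly the kind of incremental adoption the talk describes.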

Generally, one team will do some initial work to identify these initiatives, put in the foundations, and then maybe make a decision, but then we’ll need, at an engineering level, multiple teams to adopt these as we update the code. It’s really rare, even in big companies, that you’ll be able to have a dedicated team that could completely run initiatives independently. It’s just impossible. You could not update every single GraphQL API for a product with one team sitting over here because they don’t understand the product. It makes way more sense to do it while you’re working on the forward-facing feature than to go and rebuild the thing that already exists.

So let me tell you a story about three projects and three different tactics. A successful tech lead understands the initiatives that exist in the company and the technical decisions that other teams have made, and they look for ways to integrate that work into an existing project in a way that propels the project forward while slowing it down as little as possible. And so a few years ago at Lever, we were approximately 50% done converting our CoffeeScript code to TypeScript. This is pretty basic. Now you would probably automate this in a much more efficient way. But I think it’s a really good example of how different techniques empower teams differently. We were all in, we were definitely going to replace it all, but we were against tight deadlines. We had really small teams. We didn’t have the budget for a dedicated platform team at the time. And so we had three different projects and two different teams that used different approaches. In the first, we had a less experienced tech lead, and we didn’t end up converting anything to TypeScript upfront. Actually, all of the changes were made in our CoffeeScript files. And I’ll tell you, it was slower, because we had developers that didn’t have experience in CoffeeScript, and then later we had to go back and change it all. So it felt faster at the time. It was not faster.

In the second, the tech lead set up a new TypeScript file for all of the new code and just referenced that whilst keeping the existing CoffeeScript code. That was made possible because it was a lot of new features that didn’t require modifications. And then the third, we decided to invest upfront and we converted the entire file to TypeScript and the team was way more efficient as a result. In the end, we finally invested in a single team getting ownership, but it was faster because of the work we’d already done.

That brings me to scaffolding. And scaffolding is all about generally setting everything up for your project when kicking it off. And the goal is to make any major refactoring changes, put in place what’s required for engineering initiatives you’re adopting, make it easy to do automated testing, and run locally and in production. And the most important thing is to think about what’s going to set up your team for really fast feedback loops. And that means a developer being able to test a single line of code and as quickly as possible validate that it works. The tighter the feedback loop, the faster and safer that code ships.

And some examples of this are manually testing locally and on staging, and setting up your local test data with different states. At Shopify, when I worked on draft orders, we created a bunch of rake tasks to create orders in multiple states. You could just run it locally and really easily see if what you had built worked. And then a whole lot of other things related to automated tests, like even just being able to run a single one as fast as possible locally will help. And then whatever’s required to push to production from day one, feature flags, etc. The goal of scaffolding is to reduce feedback loop time so that issues can be identified and rectified swiftly, enhancing the quality of the code and the pace of development. You go faster, but it requires that upfront investment.
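A seed script like the one described can be tiny. This sketch (the states and record shapes are invented for illustration, not Shopify’s actual models) creates one record per state, so a developer can find and exercise every state locally right away:

```typescript
// Illustrative seed data: one order per state, with stable IDs so
// each state is immediately reachable in local manual testing.
type OrderState = 'draft' | 'open' | 'paid' | 'refunded';

interface Order {
  id: number;
  state: OrderState;
}

const ALL_STATES: OrderState[] = ['draft', 'open', 'paid', 'refunded'];

function seedOrders(): Order[] {
  return ALL_STATES.map((state, index) => ({ id: index + 1, state }));
}
```

The same list of states can drive both the seed script and the automated tests, so the two stay in sync as states are added.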

To wrap up, a successful tech lead balances short-term engineering investments to boost team productivity with considerations for individual project impact and long-term maintenance and velocity. And when you do that, you take into account the tech debt that you already have, you establish automated testing patterns, and you align with technical decisions across the org to make progress against your engineering strategy. And finally, you set up the scaffolding to ensure that all developers have quick feedback loops while addressing all of the above. Thank you. I’m happy to answer questions, I think I have one minute. I see a question there about end-to-end and integration tests. I think you have to define these for your org. There’s no common definition. Thanks, Laura.

Amanda Beaty:

Thanks so much, Dominique, and thanks everybody for joining us. We are out of time, so we will see you all in the next session. Thanks.
