<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Engineering Culture on Mathieu Mailhos</title><link>https://mathieu.coffee/tags/engineering-culture/</link><description>Recent content in Engineering Culture on Mathieu Mailhos</description><generator>Hugo -- 0.160.1</generator><language>en-us</language><lastBuildDate>Mon, 06 Apr 2026 09:00:00 +0200</lastBuildDate><atom:link href="https://mathieu.coffee/tags/engineering-culture/index.xml" rel="self" type="application/rss+xml"/><item><title>How I’d Revise Engineering Interviews for 2026</title><link>https://mathieu.coffee/posts/how-i-revise-engineering-interviews-2026/</link><pubDate>Mon, 06 Apr 2026 09:00:00 +0200</pubDate><guid>https://mathieu.coffee/posts/how-i-revise-engineering-interviews-2026/</guid><description>The signals have changed: how to interview software engineers with AI and avoid costly hiring mistakes</description><content:encoded><![CDATA[<p>A bad hire is incredibly expensive: salary, recruiting fees, technical debt, and the toll on team morale. Yet, interviewers still rely on the wrong signals. After running over a hundred interviews in the past couple of years, I want to share the common traits I&rsquo;ve been looking for as an interviewer to help de-risk the hiring decision.</p>
<p>Software engineering was never about writing code; it&rsquo;s always been about solving business problems. Despite this, most companies I&rsquo;ve talked to in the past 12 months are still interviewing with LeetCode and Kubernetes trivia. If an AI agent can pass your technical test, not only is your process broken, but you&rsquo;re also sending a bad signal about your engineering culture.</p>
<p>Instead of testing if a candidate can outperform an agent, we should be testing if they can direct one. The best hires are pragmatic owners who identify the right problems to solve and leverage every tool at their disposal to ship them.</p>
<h3 id="2015-style-interviews-fail-in-the-ai-era">2015-style interviews fail in the AI era</h3>
<p>Over my career, I&rsquo;ve ground through LeetCode challenges and spent evenings on <em>Cracking the Coding Interview</em>. I&rsquo;ve been tested on every type of question, from Fizz Buzz to LeetCode hards, from rapid-fire trivia to open-ended organizational discussions. Time and again, I&rsquo;ve seen how rarely they translate to the day-to-day job.</p>
<p>I&rsquo;ve also been on the other side of the table: I ran over a hundred interviews at Canva and led final decision meetings. By 2025, I had transitioned to AI-assisted interviews: broad, deeply technical challenges to solve in just under an hour. This completely changed my view on which signals to probe for.</p>
<p>AI coding workflows are now maturing and widely adopted (more than <a href="https://survey.stackoverflow.co/2025">36% of Stack Overflow 2025</a> survey respondents use AI-enabled tools). Shipping fast is the norm, and programming-language fluency is both harder to probe and no longer a key differentiator. We expect engineers to range more broadly across domains and to exercise critical thinking.</p>
<p>With this change of modus operandi, engineers spend more time reading code - whether AI completions or code reviews - than writing it. Auditing code is a large part of the job now. One of my favorite interview formats is asking the candidate to extend an existing system or run a code review exercise.</p>
<div class="pro-tip-banner" role="note" aria-label="Pro tip">
  <p><strong>💡 Pro-tip:</strong> use one of your company&rsquo;s open-source projects, and phrase a problem as your users would see it. Sharing a problem rather than asking for a specific solution is one of the best ways to probe for curiosity and for adaptation to your business and existing code base. For some positions, or if your process allows, you may even skip asking for code and keep the problem purely architectural.</p>
</div>

<p>Values interviews are now far more important than before. They give extra time to probe soft skills such as leadership, communication, and strategy, and are absolutely worth the extra investment. Go deep into past experiences and key decisions; any discussion that stays shallow is easily &ldquo;fakable&rdquo;.</p>
<h2 id="de-scoping">De-scoping</h2>
<p>The most expensive mistake is solving the wrong problem. At this stage, I am not looking for technical details but for the ability to scope a problem to meet business needs. Can they identify the right MVP given the known constraints?</p>
<p>The more senior the engineer, the more I keep the problem vague and high-level. I expect the candidate to probe and ask clarifying questions. Great engineers make the problem smaller and less ambiguous before diving in. They clearly question and state their assumptions out loud so we can validate them collaboratively.</p>
<p>I particularly appreciate candidates who explicitly call out &ldquo;out of scope&rdquo; items, along with edge cases too expensive or complicated to be worth optimizing for.</p>
<p>While tighter scopes are expected for junior roles, we must stay mindful of the new floor: anything already well-defined can now be largely delegated to an agent.</p>
<h2 id="first-principles-design">First-Principles Design</h2>
<p>Once the scope is clear, we move to high-level design. This isn’t about choosing a specific technology, but about articulating trade-offs and laying down the fundamentals.</p>
<p>I do not care whether a candidate wants to introduce a new MySQL or Postgres database, as long as they can state why a relational database is required. I am looking for the ability to explicitly tie system properties - like consistency or durability - back to the business requirements we just defined.</p>
<p>Many interviewers fall into the trap of asking technology-specific trivia, expecting candidates to recite framework features like magic incantations. For example, I was once asked how I would scale Kubernetes pods based on non-native metrics. A question like this tests my ability to do a Google search. It also sends a negative signal about the company: what kind of micro-management or &ldquo;check-box&rdquo; engineering culture is actually happening there?</p>
<p>Frameworks and libraries are transient; architectural patterns and the trade-offs they imply are what stay constant.</p>
<div class="pro-tip-banner" role="note" aria-label="Pro tip">
  <p><strong>💡 Pro-tip:</strong> a quick diagram can go a long way here. I recommend the <a href="https://c4model.com/diagrams/container">Container view</a> from the C4 model. An LLM can easily generate mermaid diagrams for anything below this. If the candidate gets lost in details, this is your opportunity to elevate the discussion to an appropriate level of detail.</p>
</div>

<h2 id="ai-leverage-with-ownership">AI leverage with ownership</h2>
<p>It&rsquo;s now time to implement a working solution. In a 60-minute exercise, we want to see the deliverable: running code. We deal with all sorts of constraints in the real world, and time is a very clear one here.</p>
<p>I like to spend a few minutes analyzing the candidate’s development environment itself. It reveals a lot: How tight is their feedback loop? Is their AI setup leveraging local tooling (MCP, Skills) to be more efficient? What is their reasoning for picking a specific model for this task?</p>
<p>I value engineers who can leverage the best tools, but I am specifically looking for how they <em>direct</em> those tools. Thoughtworks warns that <em><a href="https://www.thoughtworks.com/radar/techniques/complacency-with-ai-generated-code">complacency with AI-generated code</a></em> is a leading cause of technical debt and declining quality, and <a href="https://www.it-cisq.org/wp-content/uploads/sites/6/2022/11/CPSQ-Report-Nov-22-2.pdf">CISQ</a> has shown that the average developer spends roughly a third of their week addressing tech debt. In short, if a candidate isn&rsquo;t deeply owning the code during an interview, they won&rsquo;t day-to-day, and velocity will suffer.</p>
<p>The best candidates I’ve seen apply Test-Driven Development (TDD) principles to the AI workflow. They focus on reviewing test cases before looking at implementation details. They care about how the interface is exposed and consumed first, ensuring the contract is right before fixing lower-level logic.</p>
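<p>As a sketch of that test-first review flow - the rate limiter and its API below are hypothetical, invented for illustration - the assertions pin down the contract, and only once they read correctly do we inspect the generated implementation:</p>

```python
import time


class RateLimiter:
    """Per-key token bucket. Illustrative only: the names and the
    shape of this class are invented for this sketch."""

    def __init__(self, capacity: int, refill_per_second: float):
        self.capacity = capacity
        self.refill = refill_per_second
        # key -> (tokens remaining, timestamp of last update)
        self._state: dict[str, tuple[float, float]] = {}

    def allow(self, key: str) -> bool:
        now = time.monotonic()
        tokens, last = self._state.get(key, (float(self.capacity), now))
        # Refill proportionally to elapsed time, capped at capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.refill)
        allowed = tokens >= 1.0
        self._state[key] = (tokens - 1.0 if allowed else tokens, now)
        return allowed


# The contract review happens here, before reading the class above:
def test_allows_burst_then_blocks():
    limiter = RateLimiter(capacity=2, refill_per_second=0)
    assert limiter.allow("user-1") is True
    assert limiter.allow("user-1") is True
    assert limiter.allow("user-1") is False  # bucket exhausted
    assert limiter.allow("user-2") is True   # keys are isolated
```

<p>The point isn&rsquo;t the bucket itself: the four assertions describe how the interface is consumed, so the contract can be validated before any implementation detail is read.</p>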
<p>Eventually, I start probing harder on craft: attention to edge cases, and domains such as concurrency, encoding, failure modes, space/time complexity, accessibility, security&hellip;</p>
<div class="pro-tip-banner" role="note" aria-label="Pro tip">
  <p><strong>💡 Pro-tip:</strong> if your interview process is asynchronous, ask for the candidate’s AI transcripts. They reveal how the candidate iterates, the quality of the prompts they write, and the level of detail they dive into. You can also ask for a Loom to see how comfortable they are walking through the code.</p>
</div>

<h2 id="final-take">Final take</h2>
<p>This isn&rsquo;t 2015 anymore. Asking an experienced candidate to answer framework trivia questions or white-board a sorting algorithm is a complete disconnect from the real world.</p>
<p>On the other hand, hiring someone who relies on AI without deep ownership or architectural understanding is shooting yourself in the foot: you&rsquo;re trading short-term speed for long-term debt - performance bottlenecks, reliability issues, and maintainability problems.</p>
<p>Tools and frameworks are changing faster than ever. Embrace AI, but probe for candidates who understand the business problem, articulate the trade-offs they are making, and ship fast with ownership.</p>
]]></content:encoded></item><item><title>Stop Giving Me Solutions</title><link>https://mathieu.coffee/posts/stop-giving-me-solutions/</link><pubDate>Mon, 02 Mar 2026 09:00:00 +0100</pubDate><guid>https://mathieu.coffee/posts/stop-giving-me-solutions/</guid><description>How prescriptive leadership erodes engineering culture</description><content:encoded><![CDATA[<h2 id="its-just-a-quick-fix">&ldquo;It’s just a quick fix.&rdquo;</h2>
<p>A few years ago, shortly after joining a scale-up, I was handed a task as part of my onboarding: <em>&ldquo;Add an autoscaler to this ECS service to reduce the cost of unused compute. It shouldn’t take more than half a day.&rdquo;</em></p>
<p>At first, it sounded straightforward. The service was significantly overprovisioned with very little load. A rough estimate suggested we could reduce the fleet size by 80% and immediately cut costs. It looked like an easy win, so I implemented it.</p>
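<p>For reference, the change itself is tiny. A sketch of the two request payloads ECS target tracking needs via the Application Auto Scaling API - the cluster and service names, threshold, and cooldowns below are illustrative, not the actual values from this story:</p>

```python
def autoscaling_requests(cluster: str, service: str,
                         min_tasks: int, max_tasks: int) -> tuple[dict, dict]:
    """Build the payloads for register_scalable_target and
    put_scaling_policy. All values here are illustrative."""
    resource_id = f"service/{cluster}/{service}"
    dimension = "ecs:service:DesiredCount"
    target = {
        "ServiceNamespace": "ecs",
        "ResourceId": resource_id,
        "ScalableDimension": dimension,
        "MinCapacity": min_tasks,
        "MaxCapacity": max_tasks,
    }
    policy = {
        "PolicyName": f"{service}-cpu-target",
        "ServiceNamespace": "ecs",
        "ResourceId": resource_id,
        "ScalableDimension": dimension,
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingScalingPolicyConfiguration": {
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ECSServiceAverageCPUUtilization",
            },
            "TargetValue": 50.0,     # keep average CPU near 50%
            "ScaleInCooldown": 300,  # be conservative when removing tasks
            "ScaleOutCooldown": 60,
        },
    }
    return target, policy
```

<p>The two dicts map to <code>register_scalable_target</code> and <code>put_scaling_policy</code> on a <code>boto3.client("application-autoscaling")</code>. As the rest of the story shows, this easy part wasn&rsquo;t the hard part.</p>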
<p>Before calling the task complete, I ran a load test approximating production traffic. The autoscaling logic triggered correctly, and new tasks started launching. But then, requests started failing, and clients began seeing 502 and 504 errors.</p>
<p>Digging deeper, I found a few issues:</p>
<ul>
<li>The concurrency model (Gunicorn/gevent) lacked proper monkey patching.</li>
<li>Neither the client nor the Traefik proxy layer implemented retries.</li>
<li>New ECS tasks took up to 8 minutes to become ready to serve traffic.</li>
<li>Health checks and timeout configurations were misaligned.</li>
</ul>
<p>After talking more with the core maintainer, I learned that latency wasn’t the primary constraint - this service handled long-running jobs - but availability was.</p>
<p>In short: the system was not designed to scale horizontally in a safe way.</p>
<p>Instead of delivering immediate business value, I had to spend our limited time budget proving <strong>why</strong> the prescribed solution was insufficient, and investigating what actual architectural changes were needed. Ultimately, the cost of refactoring the service outweighed the immediate infrastructure savings, and we made the pragmatic business decision to move on to higher-priority initiatives. These machines are still burning money today.</p>
<h2 id="the-anti-pattern-prescribing-the"><strong>The Anti-Pattern: Prescribing the &ldquo;How&rdquo;</strong></h2>
<p>What went wrong here? I was handed a prescriptive solution that ignored the system&rsquo;s historical context.</p>
<p>When organizations prescribe exact solutions to their engineers, they unintentionally build a culture of dependency. We train teams to wait for instructions rather than think critically, shifting the engineer&rsquo;s role from a <em>problem-solver</em> to a mere <em>implementer</em>.</p>
<p>More importantly, when an engineer just executes a predefined task, they don&rsquo;t own the deliverable. If the solution fails, the accountability falls on whoever wrote the ticket. You cannot hold someone accountable for an outcome if they had no say in the methodology.</p>
<p>This is entirely incompatible with a culture of true engineering ownership, which a lot of companies claim to have.</p>
<h2 id="a-better-way-framing-the"><strong>A Better Way: Framing the &ldquo;Why&rdquo;</strong></h2>
<p>We could have achieved a much better outcome simply by phrasing the task differently:</p>
<p><em>&ldquo;In the Bar project, our image-processing machines are largely underutilized and costing us money. Could we aim to reduce this bill by 80%, or do the best we can within a 2-day timebox?&rdquo;</em></p>
<p>This approach provides context, states the business goal, and sets a measurable constraint. Most importantly, it delegates the <em>methodology</em> to the engineer.</p>
<p>In this case, simply optimizing the client to handle failures gracefully and setting a fixed, smaller number of nodes would have met the goal.</p>
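<p>A sketch of what &ldquo;handling failures gracefully&rdquo; on the client side could look like - a retry wrapper with exponential backoff and jitter around transient gateway errors (the call shape and status codes are illustrative, not the actual client from this story):</p>

```python
import random
import time


def with_retries(call, attempts=4, base_delay=0.2, retry_on=(502, 504)):
    """Retry a callable returning (status, body) on transient
    gateway errors. Illustrative sketch, not a real client."""
    for attempt in range(attempts):
        status, body = call()
        if status not in retry_on:
            return status, body
        if attempt < attempts - 1:
            # Exponential backoff with jitter to avoid thundering herds.
            delay = base_delay * (2 ** attempt)
            time.sleep(delay * (0.5 + random.random() / 2))
    return status, body
```

<p>Combined with a fixed, right-sized fleet, a wrapper like this would have absorbed the brief unavailability windows without touching the service&rsquo;s architecture.</p>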
<h2 id="influencing-through-inquiry"><strong>Influencing Through Inquiry</strong></h2>
<p>As a tech lead, it is tempting to push for the solution you see in your head. But the best way to ensure an implementation actually solves the root issue is to collaborate.</p>
<p>Instead of dictating, ask:</p>
<ul>
<li><em>What approaches have you considered?</em></li>
<li><em>How should we handle the edge case when X happens?</em></li>
<li><em>What do you see as the biggest roadblock here?</em></li>
<li><em>What is the engineering effort of solution Z compared to the business benefit?</em></li>
</ul>
<p>Besides, guiding engineers to the solution themselves gives them a sense of achievement and total ownership.</p>
<h2 id="scaling-ownership-a-canva-success-story"><strong>Scaling Ownership: A Canva Success Story</strong></h2>
<p>While working as a Staff SRE at Canva, I led a year-long initiative to discover and remediate systemic reliability risks across a platform serving 250 million MAUs.</p>
<p>We built an automated system that parsed incident postmortems, Slack context, and Service SLOs using data science and LLMs. This allowed us to build a comprehensive and continuous backlog of reliability risks.</p>
<p>SREs translated these risks into well-framed problem statements for the owning teams. We ensured every ticket had:</p>
<ol>
<li><strong>Clear user impact</strong>, linked to previous incident metadata.</li>
<li><strong>Measurable outcomes</strong>, tied directly to observability metrics.</li>
<li><strong>A framing of business risk.</strong></li>
</ol>
<p>By the time I left, we had accumulated a backlog of over 80 well-framed reliability risks, with half successfully adopted onto product team roadmaps. We rarely saw pushback because we weren&rsquo;t telling them <em>how</em> to write their code; we were showing them a critical business problem and trusting them to solve it.</p>
<p>Even better, the engineers who solved these risks could measure their impact through real business metrics. How good is that?</p>
<h2 id="conclusion"><strong>Conclusion</strong></h2>
<p>When people are trusted to design their own solutions, they become deeply invested in making them work. The solutions are often better, and the engineers grow in their craft and develop autonomy.</p>
]]></content:encoded></item></channel></rss>