<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[JordanMurray.Dev]]></title><description><![CDATA[JordanMurray.Dev]]></description><link>https://jordanmurray.dev</link><generator>RSS for Node</generator><lastBuildDate>Fri, 10 Apr 2026 00:13:01 GMT</lastBuildDate><atom:link href="https://jordanmurray.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Is Agile Dead?]]></title><description><![CDATA[We can build software faster than we can decide what to build
I’ve noticed something a bit uncomfortable recently.
I've found myself in a situation where I, like many other devs, can build things fast]]></description><link>https://jordanmurray.dev/is-agile-dead</link><guid isPermaLink="true">https://jordanmurray.dev/is-agile-dead</guid><dc:creator><![CDATA[Jordan Murray]]></dc:creator><pubDate>Thu, 09 Apr 2026 09:26:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6900cb8f25a318c977acc072/a24102c3-0615-46a7-877c-b71fefc24923.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>We can build software faster than we can decide what to build</h2>
<p>I’ve noticed something a bit uncomfortable recently.</p>
<p>I've found myself in a situation where I, like many other devs, can build things faster than my company can decide what it or its clients want. Not because we're particularly slow, but because the software part is now so fast!</p>
<p>With AI tools now, spinning up APIs, wiring things together, scaffolding services: it’s all quick. Like, <em>really</em> quick. Stuff that used to take days now takes hours, sometimes minutes.</p>
<p>But everything around that? Still slow.</p>
<p>Conversations. Requirements. Sign-off. Priorities. Back-and-forth. Clarifications.</p>
<p>It feels like development isn’t the bottleneck anymore; everything else is.</p>
<hr />
<h2>Planning takes longer than building</h2>
<p>This is the weird bit.</p>
<p>You can spend a few days planning a sprint, refining stories, going back and forth on requirements…</p>
<p>And then a dev can go off and build most of it in a day.</p>
<p>That doesn’t really add up anymore.</p>
<p>Agile made a lot of sense when writing software was the hard part. When iteration was expensive. When you needed structure to manage limited dev time.</p>
<p>Now the constraint has shifted, but the process hasn’t.</p>
<hr />
<h2>It’s not that Agile is “dead”</h2>
<p>I don’t think Agile suddenly becomes useless.</p>
<p>But I do think parts of it are starting to feel… out of sync.</p>
<p>We’re still treating development as the expensive, slow part of the system. But it’s not, not in the same way anymore.</p>
<p>The expensive part now is getting everyone aligned on what we’re actually trying to do.</p>
<hr />
<h2>Where this gets a bit sketchy</h2>
<p>The thing I’m slightly worried about is how AI starts getting layered into all of this.</p>
<p>You can very easily end up with something like:</p>
<ul>
<li><p>A client uses AI to write what they want</p>
</li>
<li><p>Sales uses AI to summarise it</p>
</li>
<li><p>Product uses AI to turn it into requirements</p>
</li>
<li><p>Dev uses AI to build it</p>
</li>
</ul>
<p>At every step, something gets lost or reshaped.</p>
<p>No one’s doing anything wrong, but the original intent just drifts.</p>
<p>You end up building something that technically matches the requirement… but isn’t really what anyone meant.</p>
<hr />
<h2>The job is changing a bit</h2>
<p>I don’t think developers are going anywhere.</p>
<p>But the job definitely feels different.</p>
<p>Less time writing boilerplate and glue code.<br />More time trying to properly understand what’s needed in the first place.</p>
<p>If anything, it feels like the value is shifting toward:</p>
<ul>
<li><p>asking better questions</p>
</li>
<li><p>spotting gaps in requirements</p>
</li>
<li><p>understanding trade-offs</p>
</li>
<li><p>knowing when something doesn’t quite make sense</p>
</li>
</ul>
<p>The actual coding part is becoming the easy bit.</p>
<hr />
<h2>The real problem now</h2>
<p>If developers can move 10x faster, the rest of the business kind of has to keep up.</p>
<p>Otherwise you just build the wrong thing… very efficiently.</p>
<p>It feels like we’ve sped up one part of the system massively, but left everything else as-is.</p>
<p>And now that mismatch is starting to show.</p>
<hr />
<h2>Not sure what the answer is yet</h2>
<p>I don’t think the solution is just “remove process” or “let AI handle it”.</p>
<p>If anything, it probably means we need <em>better</em> ways of keeping things aligned, not fewer.</p>
<p>Clearer requirements. Better feedback loops. Less interpretation between steps.</p>
<p>Because right now, there are a lot of opportunities for things to drift without anyone really noticing.</p>
<p>This is something I’ve been thinking about a lot recently.</p>
<p>I’ve actually been building a small Jira app called DRIFT to try and tackle a tiny part of this — mainly around keeping requirements and implementation aligned as things move through the pipeline.</p>
<p>It’s early, and it definitely doesn’t solve everything, but it’s been interesting seeing where things break down in practice.</p>
<p>I’ll write more about that soon.</p>
]]></content:encoded></item><item><title><![CDATA[Why Understanding Business Requirements Is More Important Than Ever in an AI-Driven World]]></title><description><![CDATA[Lately, I’ve noticed something uncomfortable about the way I work.
I can ship software faster than ever.
Agents can scaffold entire services, generate APIs, wire up integrations, and produce working code in minutes. Iteration is cheap. Regeneration i...]]></description><link>https://jordanmurray.dev/why-understanding-business-requirements-is-more-important-than-ever-in-an-ai-driven-world</link><guid isPermaLink="true">https://jordanmurray.dev/why-understanding-business-requirements-is-more-important-than-ever-in-an-ai-driven-world</guid><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[Software Testing]]></category><category><![CDATA[AI]]></category><category><![CDATA[AI Engineering]]></category><category><![CDATA[requirements engineering]]></category><category><![CDATA[DeveloperExperience]]></category><category><![CDATA[#TechLeadership]]></category><category><![CDATA[ai agents]]></category><category><![CDATA[Futureofwork]]></category><dc:creator><![CDATA[Jordan Murray]]></dc:creator><pubDate>Thu, 05 Feb 2026 11:26:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/xG8IQMqMITM/upload/55bc315196bbd83529d37ae59f4eac8f.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Lately, I’ve noticed something uncomfortable about the way I work.</p>
<p>I can ship software faster than ever.</p>
<p>Agents can scaffold entire services, generate APIs, wire up integrations, and produce working code in minutes. Iteration is cheap. Regeneration is trivial. The bottleneck has clearly moved.</p>
<p>And that is exactly why understanding business requirements has never mattered more.</p>
<p>Speed has not removed the need for clarity. It has exposed the cost of not having it.</p>
<h2 id="heading-when-iteration-is-cheap-ambiguity-becomes-dangerous">When Iteration Is Cheap, Ambiguity Becomes Dangerous</h2>
<p>Earlier in my career, vague requirements slowed everything down.</p>
<p>Unclear acceptance criteria meant long conversations, back-and-forth with stakeholders, and repeated rework. You would hit friction quickly. Something would feel wrong. Progress would stall until someone clarified what the system was actually supposed to do.</p>
<p>That friction was annoying, but it served a purpose. <strong>It forced understanding</strong>.</p>
<p>AI removes much of that friction.</p>
<p>Now, I can prompt past uncertainty, ship something plausible, and tell myself I will iterate later. The system compiles. The tests are green. Something exists.</p>
<p>That is the trap.</p>
<p>When iteration is cheap, ambiguity no longer blocks progress. It silently compounds.</p>
<h2 id="heading-agents-do-not-fix-ambiguity-they-amplify-it">Agents Do Not Fix Ambiguity. They Amplify It.</h2>
<p>Agents do not understand the business in the way humans do.</p>
<p>They do not know which rules are critical, which edge cases matter, or which failures are unacceptable. They optimise for success signals. If success is not clearly defined, they will choose a reasonable interpretation and move on.</p>
<p>I have seen this first-hand.</p>
<p>The code looks clean. The architecture is respectable. Everything passes. And yet, the behaviour is subtly wrong in ways that only show up when someone asks, “Is this actually what we wanted?”</p>
<p>From the agent’s perspective, it succeeded.</p>
<p>From the business’s perspective, it did not.</p>
<p>This is not an AI problem. It is a requirements problem.</p>
<h2 id="heading-the-real-bottleneck-has-moved">The Real Bottleneck Has Moved</h2>
<p>As agents take on more of the implementation work, the bottleneck in software delivery is no longer code.</p>
<p>It is understanding.</p>
<p>When writing code was slow, misunderstandings revealed themselves early. You would get stuck. The design would feel off. You would naturally slow down and ask questions.</p>
<p>Agents remove that feedback loop.</p>
<p>They will happily build on top of ambiguity. They will fill gaps with assumptions. They will keep going until something compiles and passes whatever checks exist. By the time a misunderstanding surfaces, it is no longer a small correction. It is embedded behaviour spread across generated code.</p>
<p>I have had moments where undoing the mistake took longer than building the feature would have taken in the first place.</p>
<p>That is the shift developers need to recognise.</p>
<p>The hard part is no longer turning requirements into code. The hard part is making sure the requirements are precise enough that the code cannot be confidently wrong.</p>
<h2 id="heading-why-just-get-something-out-there-is-riskier-than-ever">Why “Just Get Something Out There” Is Riskier Than Ever</h2>
<p>I used to be fairly relaxed about “getting something out there.”</p>
<p>In a pre-AI world, that usually meant a bit of mess, some technical debt, but broadly correct behaviour. You could clean it up later.</p>
<p>In an agent-driven world, “we will iterate later” often means incorrect behaviour that looks correct.</p>
<p>Agents are extremely good at producing systems that appear reasonable. Without clear constraints, assumptions harden quickly. Those assumptions get copied, reused, and reinforced across generated code. By the time the mistake is noticed, it is no longer isolated.</p>
<p>Velocity hides the error until undoing it is expensive.</p>
<p>Speed without clarity is not agility. It is chaos with better tooling.</p>
<h2 id="heading-where-tests-actually-fit-now">Where Tests Actually Fit Now</h2>
<p>This is where my thinking on testing has changed the most.</p>
<p>Tests are no longer primarily about catching regressions or validating syntax. Their most important role is to express business intent in a way a machine can verify.</p>
<p>When I am working with agents, a test answers a very direct question.</p>
<p>Did you do what I meant?</p>
<p>If the test does not make that clear, the agent will still pass it. It just might pass it for the wrong reasons.</p>
<h2 id="heading-behaviour-over-implementation">Behaviour Over Implementation</h2>
<p>When agents write most of the code, I care far less about implementation details than I used to.</p>
<p>What matters are the things that must never be violated. Business truths. Architectural constraints.</p>
<p>Things like:</p>
<ul>
<li><p>A frozen account must never initiate payments</p>
</li>
<li><p>Daily limits must be enforced across concurrent requests</p>
</li>
<li><p>Idempotent operations must not create duplicates</p>
</li>
<li><p>Domain logic must not depend on infrastructure</p>
</li>
</ul>
<p>These are not preferences. They are rules.</p>
<p>If I do not make them explicit, the agent will not magically infer them.</p>
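<p>As a concrete sketch of what making them explicit looks like, here is one way to encode two of those rules as executable tests. (The <code>Account</code> class and its methods are hypothetical, invented purely for illustration; the point is that the rules become machine-verifiable.)</p>

```python
# Sketch: business invariants expressed as executable tests.
# The Account class below is hypothetical, invented for illustration.

class FrozenAccountError(Exception):
    pass

class Account:
    def __init__(self, balance=0, frozen=False):
        self.balance = balance
        self.frozen = frozen
        self.processed_payment_ids = set()

    def initiate_payment(self, payment_id, amount):
        # Business rule: a frozen account must never initiate payments.
        if self.frozen:
            raise FrozenAccountError("frozen accounts cannot initiate payments")
        # Business rule: idempotent operations must not create duplicates.
        if payment_id in self.processed_payment_ids:
            return  # already processed; retrying must be a no-op
        self.processed_payment_ids.add(payment_id)
        self.balance -= amount

def test_frozen_account_never_pays():
    account = Account(balance=100, frozen=True)
    try:
        account.initiate_payment("pay-1", 50)
        assert False, "frozen account initiated a payment"
    except FrozenAccountError:
        pass
    assert account.balance == 100  # nothing moved

def test_idempotent_payment_creates_no_duplicates():
    account = Account(balance=100)
    account.initiate_payment("pay-1", 30)
    account.initiate_payment("pay-1", 30)  # retry with the same id
    assert account.balance == 70  # charged exactly once
```

<p>A generated implementation is free to change shape underneath; tests like these only pin down the behaviour that must never be violated, which is exactly the question the agent needs answered.</p>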
<h2 id="heading-the-skill-that-matters-most-now">The Skill That Matters Most Now</h2>
<p>As implementation becomes cheap, misunderstanding becomes the most expensive mistake a team can make. The developers who thrive in this world will not be the ones who generate the most code. They will be the ones who invest the most effort in getting the requirements right before anything is generated.</p>
<p>If an agent cannot tell whether it is finished, that is not an AI failure.</p>
<p>It is a failure to define success.</p>
]]></content:encoded></item><item><title><![CDATA[When AI Makes You Fast… But Also Makes You Forget How Your Own Code Works]]></title><description><![CDATA[I had one of those weeks where you suddenly realise you’re not quite as on top of things as you thought. The kind of week where you think, “Oh yeah, this is why humans still have jobs.”
And the embarrassing part?I got absolutely blindsided by code I ...]]></description><link>https://jordanmurray.dev/when-ai-makes-you-fast-but-also-makes-you-forget-how-your-own-code-works</link><guid isPermaLink="true">https://jordanmurray.dev/when-ai-makes-you-fast-but-also-makes-you-forget-how-your-own-code-works</guid><category><![CDATA[EngineeringProcess]]></category><category><![CDATA[AI Engineering]]></category><category><![CDATA[AI coding]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[Cloud Development ]]></category><category><![CDATA[debugging]]></category><category><![CDATA[developer experience]]></category><category><![CDATA[clean code]]></category><category><![CDATA[ai agents]]></category><category><![CDATA[System Design]]></category><dc:creator><![CDATA[Jordan Murray]]></dc:creator><pubDate>Mon, 24 Nov 2025 11:50:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/mZ5D2T5rVG4/upload/69bd04a9cba0dc16d116c00b4944a5cd.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I had one of those weeks where you suddenly realise you’re not quite as on top of things as you thought. The kind of week where you think, “Oh yeah, this is why humans still have jobs.”</p>
<p>And the embarrassing part?<br />I got absolutely blindsided by code I wrote. (“wrote” might be generous.)</p>
<h3 id="heading-the-week-i-took-ai-a-bit-too-literally">The Week I Took AI a Bit Too Literally</h3>
<p>We recently updated our process, and without even noticing it, I had drifted into the habit of letting the AI handle more than it should. The output looked great: clean code, passing tests, tidy diffs. My brain basically shrugged and said, “Looks fine, let’s get on with the next thing.”</p>
<p>Spoiler: it wasn’t fine.</p>
<p>Everything behaved perfectly locally.<br />Then I deployed it to the cloud and walked straight into a wall of chaos.</p>
<p>Managed identity acting up.<br />Connectivity doing as it pleased.<br />Authentication randomly failing.<br />All the usual cloud quirks that somehow hide until the moment you tell someone the work is finished.</p>
<p>Suddenly, I was hunting down “bugs” that weren’t actually bugs.<br />The real problem was that I had no mental map of what the system was supposed to do, because I hadn’t really built it the way you normally do through repetition, frustration, and slowly wiring the whole thing into your brain.</p>
<p>It genuinely felt like trying to debug a system I’d only heard rumours about.</p>
<h3 id="heading-when-you-dont-write-it-you-dont-know-it">When You Don’t Write It, You Don’t Know It</h3>
<p>This was the big realisation of the week.<br />AI generated a lot of the heavy lifting. I skimmed it, thought it looked correct, and cracked on. But because I never really went through the grind of writing it myself, I didn’t have the usual intuition you rely on when debugging.</p>
<p>There were moments where I was staring at a piece of code thinking, “Did I do this… or did I hallucinate reviewing it at 5 pm on a Friday?”</p>
<p>I spent more time reconstructing how things worked than I would’ve spent writing the code manually. And that was a wake-up call.</p>
<p>AI can give you clean code, but it can’t give you the internalised understanding that makes debugging efficient. Without that, you’re basically wandering through your own system like a tourist with a blurry map.</p>
<h3 id="heading-ai-isnt-magic-its-a-power-tool">AI Isn’t Magic. It’s a Power Tool.</h3>
<p>This week forced me to re-ground myself.</p>
<p>AI absolutely helps productivity, but it doesn’t remove the need for the fundamentals. You still need to understand how the system fits together. You still need to think about the environment you’re deploying into. You still need to consider failure modes, configuration differences, and behaviour outside of happy-path tests.</p>
<p>Cloud issues don’t care how tidy your code looks; they care whether it’s built to behave correctly in a messy world.</p>
<p>A chainsaw is brilliant when used properly. It’s less brilliant when you’re half paying attention.</p>
<h3 id="heading-the-quiet-fear-that-sneaks-in">The Quiet Fear That Sneaks In</h3>
<p>If I’m totally honest, I think part of why I leaned on the AI too much is because of that little voice developers have been hearing lately:</p>
<p>“This thing writes code way faster than I can… where does that leave me?”</p>
<p>But the moment you hit a real-world problem involving identity, connectivity, latency, environment differences, or system design, you’re reminded very quickly that raw code is only a small part of what engineering actually is.</p>
<p>AI can generate code.<br />Engineers design systems, spot subtle issues, understand how components behave together, and know where failures hide.</p>
<p>They aren’t remotely the same skill.</p>
<p>And that difference is exactly why we still matter.</p>
<h3 id="heading-the-reset-button">The Reset Button</h3>
<p>So I’ve taken a breath and reset this week.<br />Back to using AI as a tool, not an autopilot.</p>
<p>It’s there to support my thinking, not replace it.<br />It helps me move faster, but it doesn’t free me from needing to understand what I’m building.<br />It can generate solutions, but it can’t be accountable for them.</p>
<p>This reminded me that the real value in engineering isn’t typing speed.<br />It’s the thinking, the planning, the understanding, and the ability to navigate the unpredictable parts of the real world.</p>
<p>AI handles the easy stuff.<br />We handle the stuff that actually matters.</p>
<p>And honestly, that’s a comforting thought.</p>
]]></content:encoded></item><item><title><![CDATA[Agents Don’t Replace Your Process; They Run It]]></title><description><![CDATA[I read a great post from David Fowler recently that rang true.

“Some teams are seeing massive productivity gains from AI agents. Others… not so much.
The best teams don’t treat agents like magic coworkers. They treat them like part of the engineerin...]]></description><link>https://jordanmurray.dev/agents-dont-replace-your-process-they-run-it</link><guid isPermaLink="true">https://jordanmurray.dev/agents-dont-replace-your-process-they-run-it</guid><category><![CDATA[AI]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[software development]]></category><category><![CDATA[aiagents]]></category><category><![CDATA[warp.dev]]></category><category><![CDATA[Developer Tools]]></category><category><![CDATA[#FutureOfCoding ]]></category><category><![CDATA[agentic-coding]]></category><category><![CDATA[agentic AI]]></category><dc:creator><![CDATA[Jordan Murray]]></dc:creator><pubDate>Tue, 11 Nov 2025 16:38:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/HBGYvOKXu8A/upload/8532b6710d1dbedc983c3dd30113f069.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I read a great post from <strong>David Fowler</strong> recently that rang true.</p>
<blockquote>
<p>“Some teams are seeing massive productivity gains from AI agents. Others… not so much.</p>
<p>The best teams don’t treat agents like magic coworkers. They treat them like part of the engineering system.”</p>
</blockquote>
<p>Couldn’t agree more.<br />That line sums up exactly what I’ve experienced using <strong>AI Coding Agents</strong> over the last few months.</p>
<p>A lot of people expect an AI agent to “make them faster.”<br />They drop it into their workflow, start prompting, and then wonder why it’s not a 10x upgrade.</p>
<p>The thing is, <strong>agents don’t improve your process by themselves</strong>.<br />They amplify whatever process <em>already exists</em>.</p>
<p>If your workflow is messy, inconsistent, and full of context switching, your agent will simply move faster through the chaos.</p>
<h3 id="heading-designing-the-system-around-the-agent">Designing the System Around the Agent</h3>
<p>Where it <em>clicked</em> for me was when I stopped treating Warp (My AI coding partner of choice) like a chat assistant and started treating it like part of my engineering system.</p>
<p>Instead of typing ad-hoc prompts, I started building a <strong>library of reusable, structured prompts</strong>, each one mapped to a real engineering task — things like:</p>
<ul>
<li><p>🔍 Fix SonarCloud Issues</p>
</li>
<li><p>🧾 Prepare Pull Request Summary</p>
</li>
<li><p>💬 Review Code Against Jira Ticket</p>
</li>
<li><p>🧠 Implement Feature Based on Requirements</p>
</li>
<li><p>🧪 Write or Improve Unit Tests</p>
</li>
</ul>
<p>Each prompt ties into a set of <strong>MCP (Model Context Protocol) servers</strong> that give the agent the tools it needs —<br />Jira, Azure DevOps, SonarCloud, Context7, Microsoft Learn, plus a few of my own internal ones.</p>
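<p>To make that concrete, here is roughly what one entry in that prompt library looks like. (The wording, steps, and the <code>{TICKET_ID}</code> placeholder below are invented for illustration, not a real template from my setup.)</p>

```markdown
# Prompt: Review Code Against Jira Ticket

Context to gather first:
- Fetch ticket {TICKET_ID} via the Jira MCP server
- Fetch the current branch diff via the Azure DevOps MCP server

Task:
1. List each acceptance criterion from the ticket.
2. For each criterion, state whether the diff satisfies it, citing files.
3. Flag any behaviour in the diff that no criterion covers.

Output: a markdown summary suitable for pasting into the pull request.
```

<p>Because the prompt names its context sources explicitly, the agent pulls the same inputs every run instead of relying on whatever happens to be in the chat history.</p>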
<p>That combination turned Warp from “an assistant that helps me write code” into a <strong>co-pilot that runs my development workflow</strong>.</p>
<h3 id="heading-the-feedback-loop">The Feedback Loop</h3>
<p>The best part? Every time I notice the agent repeating a manual step or misunderstanding a pattern,<br />I don’t “fix the prompt”, I <strong>improve the system</strong>.</p>
<p>Maybe that means refining the MCP schema, updating how it fetches context, or adding a new reusable prompt for that scenario.</p>
<p>It’s like building CI/CD pipelines; the more you invest in the process, the more leverage you create.<br />That’s when the compounding starts.</p>
<h3 id="heading-the-role-shift">The Role Shift</h3>
<p>I’ve found myself spending less time writing code directly and more time <strong>engineering the system the agent operates within</strong>.<br />That’s a very different mindset, but it’s also incredibly freeing.</p>
<p>Instead of juggling Jira, IDEs, docs, and test pipelines, I focus on shaping the structure, rules, and tools that let my agent execute all of that seamlessly.</p>
<p>It hasn’t made me redundant.<br />It’s made me more <em>strategic</em>.</p>
<h2 id="heading-the-takeaway">The Takeaway</h2>
<p>The teams that are <em>flying</em> with AI agents aren’t just good at prompting; they’ve learned to design for them.</p>
<p>They think about:</p>
<ul>
<li><p>What infrastructure supports the agent</p>
</li>
<li><p>What tools it can access</p>
</li>
<li><p>How it gets feedback</p>
</li>
<li><p>How it fits into the team’s engineering rhythm</p>
</li>
</ul>
<p>The real difference isn’t in the model.<br />It’s in the <strong>system</strong> it’s part of.</p>
]]></content:encoded></item><item><title><![CDATA[🚀 Agentic Coding: The Moment I Realised the Game Has Changed]]></title><description><![CDATA[I had one of those “hang on… did that really just happen?” moments this week while fixing a P2 production issue using Warp's AI Agent Mode.  
What used to take me a few hours of context switching and manual debugging was done in less than an hour, an...]]></description><link>https://jordanmurray.dev/agentic-coding-the-moment-i-realised-the-game-has-changed</link><guid isPermaLink="true">https://jordanmurray.dev/agentic-coding-the-moment-i-realised-the-game-has-changed</guid><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[warp.dev]]></category><category><![CDATA[AI]]></category><category><![CDATA[Tech Innovation,]]></category><category><![CDATA[agentic-coding]]></category><dc:creator><![CDATA[Jordan Murray]]></dc:creator><pubDate>Tue, 28 Oct 2025 14:04:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/cckf4TsHAuw/upload/e29ebd98d6d60bd2724f5febd88518cd.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I had one of those “hang on… did that really just happen?” moments this week while fixing a P2 production issue using Warp's AI Agent Mode.  </p>
<p>What used to take me a few hours of context switching and manual debugging was done in less than an hour, and I didn’t even open an IDE.  </p>
<p>Six months ago, I’d have done the usual dance: copy something from ChatGPT, paste it into my IDE, fix the import errors, adjust for project conventions, run the tests manually, debug the failures, rinse and repeat.  </p>
<p>This time, I just described the problem.</p>
<ul>
<li><p>The agent read through the codebase</p>
</li>
<li><p>Investigated the issue</p>
</li>
<li><p>Showed me what it found</p>
</li>
<li><p>Proposed a fix</p>
</li>
<li><p>And wrote the tests to prove it worked</p>
</li>
</ul>
<p>It traced through 20+ files, created a failing test to reproduce the bug, fixed it properly (with clean error handling), added four edge-case tests, and even optimised the logic for performance.  </p>
<p>Then it:</p>
<ul>
<li><p>Created a Jira ticket</p>
</li>
<li><p>Generated commits with detailed messages</p>
</li>
<li><p>Cherry-picked to a hotfix branch</p>
</li>
<li><p>Tagged the release</p>
</li>
</ul>
<p>All while I just guided the direction and reviewed what it was doing.  </p>
<p>The thing that surprised me most wasn’t the speed; it was the confidence I felt as I watched it work. Not blind trust, but confidence that came from seeing its reasoning step-by-step, watching it apply proper red–green–refactor, checking that all the tests passed, and realising it even followed our git workflow.  </p>
<p>It still feels like cheating somehow, but the best kind. The kind that makes engineering fun again. I found myself thinking more strategically instead of mechanically typing.  </p>
<p>The skill now isn’t about writing every line yourself; it’s about asking the right questions, evaluating the reasoning, understanding the architectural implications, and knowing when to step in.  </p>
<p>This isn’t the future of software engineering. This is already happening.<br />And it’s only getting better from here.</p>
]]></content:encoded></item></channel></rss>