To quote from it:

Willison defines “good code” as code that works, that we know works, that solves the right problem, handles errors gracefully, is simple and minimal, is protected by tests, is documented appropriately, affords future changes, and meets the relevant “-ilities”: accessibility, testability, reliability, security, maintainability, observability, scalability, usability.

Agent tools can help with most of that list. But there remains a substantial burden on the developer to ensure the produced code is actually good. The stochastic nature of LLMs means you can’t just trust the output. The word “stochastic” matters here: it means the same input can produce different outputs each time. A test that passes doesn’t mean it’s a good test. Code that compiles doesn’t mean it’s correct.
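A minimal illustration of that last point, with hypothetical function and test names: the test below is green, yet it verifies almost nothing, so the bug sails through.

```python
def apply_discount(price, rate):
    # Deliberately buggy: applies the discount twice.
    discounted = price * (1 - rate)
    return discounted * (1 - rate)

def test_apply_discount():
    # This test passes, but it only checks that the price went down,
    # not that the discount is correct: 100 at 10% off should be 90.0,
    # while the buggy function returns about 81.
    result = apply_discount(100, 0.10)
    assert result < 100

test_apply_discount()  # no error: the suite is green, the code is wrong
```

A coverage report would even show 100% for `apply_discount`. The passing test is a real signal, just a much weaker one than it looks.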

This is, paradoxically, what makes LLMs so powerful for coding compared to other domains. We have compilers: either it compiles or it doesn’t. We have test suites: either the tests pass or they don’t. We have type systems, linters, static analysis. Software gives us verification tools that most other domains lack.
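As a sketch of what that verification looks like in practice, here is a minimal gate for generated code. It is an assumption about workflow, not any particular tool's implementation: the only check shown is “does it compile?”, using the stdlib `py_compile` module, where a real gate would also run the test suite, type checker, and linters.

```python
import subprocess
import sys

def mechanically_verify(path):
    """Gate generated code behind a mechanical check.

    A sketch, not a full pipeline: the single check here is whether the
    file at `path` compiles as Python; a real gate would chain tests,
    type checking, and linting after it.
    """
    result = subprocess.run(
        [sys.executable, "-m", "py_compile", path],
        capture_output=True,  # keep the compiler's traceback out of our output
    )
    return result.returncode == 0
```

The return value is a hard yes/no, and that binary signal is exactly what most non-software domains lack.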

But verification requires knowing what “correct” looks like. And that’s where the 10% that went up 1000x lives.

Google’s 2024 DORA report confirms the paradox from the other direction: 75% of developers reported feeling more productive with AI tools, but every 25% increase in AI adoption was associated with a 1.5% dip in delivery speed and a 7.2% drop in system stability. Meanwhile, 39% of respondents reported having little or no trust in AI-generated code. The tools make us feel faster. The data suggests we’re not, unless we change how we work. More on that in the chapters that follow.

Here is another perspective, from a practitioner comparing MCP and skills:

A lot of skills I see require some “global” API key to work, which renders them almost impractical in enterprise setups for non-developers, environments where there is no way you can provide those. You can’t even provide env vars with secrets in the regular ChatGPT/Claude.ai web interfaces if you use a skill there. Some tools work around this limitation by wrapping their own CLI tool, which in turn handles authentication, ideally via OAuth. This works for local dev tooling, but again, not for skills used through web interfaces. With an MCP connection, this works out of the box, as long as the MCP provider supports it.

From https://medium.com/@alonisser/mcp-is-dead-or-mcp-vs-skills-revisited-daaa51b9a519
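The failure mode the quoted author describes can be sketched in a few lines. Everything here is hypothetical, including the `SERVICE_API_KEY` variable name: the point is only that environment-based secrets work locally or in CI, and fail before doing anything useful in a web interface that offers no way to inject them.

```python
import os

def get_service_credentials():
    """Sketch of the 'global API key' pattern the quote criticizes.

    Locally (or in CI) you export SERVICE_API_KEY before running the
    skill; in the ChatGPT/Claude.ai web interfaces there is nowhere to
    set it, so the skill dies here, before doing any real work.
    """
    key = os.environ.get("SERVICE_API_KEY")  # hypothetical variable name
    if key is None:
        raise RuntimeError(
            "SERVICE_API_KEY is not set and this environment offers no "
            "way to inject secrets; an MCP connection whose provider "
            "handles OAuth would sidestep this entirely."
        )
    return key
```

A CLI wrapper that performs OAuth moves the secret handling out of the environment, which is why it helps locally, but the wrapper itself still can’t run inside a web chat interface.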