![rw-book-cover](https://ferd.ca/static/img/llm-as-its-own-mcp-server.jpg)

---

> investors in the industry already have divided up companies in two categories, pre-AI and post-AI, and they are asking “what are you going to do to not be beaten by the post-AI companies?”
> The usefulness and success of using LLMs are axiomatically taken for granted and the mandate for their adoption can often come from above your CEO. Your execs can be as baffled as anyone else having to figure out where to jam AI into their product. Adoption may be forced to keep board members, investors, and analysts happy, regardless of what customers may be needing.

- [View Highlight](https://read.readwise.io/read/01k48ded2pp2yd774m8k3y835c)

---

> These interactions show that engineers have internalized a complex set of heuristics to guide and navigate the LLM’s idiosyncrasies. That is, they’ve built a mental model of complex and hardly predictable agentic behavior (and of how it all interacts with the set of rules and artifacts and bits of scaffolding they’ve added to their repos and sessions) to best predict what will or won’t yield good results, and then do extra corrective work ahead of time through prompting variations. This is a skill that makes a difference.
> That you need to do these things might in fact point at how agentic AI does not behave with cognitive fluency,[5](https://ferd.ca/the-gap-through-which-we-praise-the-machine.html#footnote-5) and instead, the user subtly does it on its behalf in order to be productive.

- [View Highlight](https://read.readwise.io/read/01k48dw46ska4yn22pgcsmcqt1)

---

> people who speak of AI replacing engineers probably aren’t fully aware that while engineers could maybe be doing more work through assisting an agent than they would do alone, agents would still not do good work without the engineer.

- [View Highlight](https://read.readwise.io/read/01k48dym06507ch00wh6zp8cxw)

---

> what is imagined is powerful agents who replace engineers (at least junior ones), make everyone more productive, and that will be a total game changer. LLMs are artifacts. The scaffolding we put in place to control them are how we try to transform the artifacts into tools; the learning we do to get better at prompting and interacting with the LLMs is part of how they transform us. If what we have to do to be productive with LLMs is to add a lot of scaffolding and invest effort to gain important but poorly defined skills, we should be able to assume that what we’re sold and what we get are rather different things.

- [View Highlight](https://read.readwise.io/read/01k48e24eq8sbz90nggrz8emwd)

---

> we have to ask whether the amount of scaffolding and skill required by coding agents is acceptable. If we think it is, then our agent workflows are on the right track. If we’re a bit baffled by all that’s needed to make it work well, we may rightfully suspect that we’re not being sold the right stuff, or at least stuff with the right design.

- [View Highlight](https://read.readwise.io/read/01k48e3f123k4r06edbawtqjd7)

---

> In a fundamental sense, LLMs can be assumed to be there to impress you. Their general focus on anthropomorphic interfaces—just have a chat!—makes them charming, misguides us into attributing more agency and intelligence than they have, which makes it even more challenging for people to control or use them predictably. [Sycophancy](https://arxiv.org/html/2411.15287v1) is one of the many challenges here, for example.

- [View Highlight](https://read.readwise.io/read/01k48e57zwrda5acbdsnf08nhq)

---

> Key problems that arise when we’re in the current LLM landscape include:
> • AI that aims to improve us can ironically end up deskilling us;
> • Not knowing whether we are improving the computers or augmenting people can lead to unsustainable workflows and demands;
> • We risk putting people in passive supervision and monitoring roles, which is known not to work well;
> • We may artificially constrain and pigeonhole how people approach problems, and reduce the scope of what they can do;
> • We can adopt known anti-patterns in team dynamics that reduce overall system efficiency;
> • We can create structural patterns where people are forced to become accountability scapegoats.

- [View Highlight](https://read.readwise.io/read/01k48e8fey12zj9p9y8a3h8m5m)

---

> Some people may hope that better models will eventually meet expectations and narrow the gap on their own. My stance is that rather than anchoring coding agent design into ideals of science fiction (magical, perfect workers granting your wishes), they should be grounded in actual science.

- [View Highlight](https://read.readwise.io/read/01k48ec0c52wxfn5z6g0p9qtr5)

---