Mentoring Was a Soft Skill — AI Made It a Hard Skill

Working with GitHub Copilot to develop software, I was struck by how surprisingly human AI can feel.

When you give Copilot a task, it does not produce a perfect answer in one step. It makes a plan, follows it, checks its own work, notices mistakes, and tries again. This loop of planning, acting, evaluating, and adjusting is the same way humans work.1
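The loop can be sketched in a few lines of Python. This is a toy illustration, not how Copilot is actually built; `act`, `evaluate`, and `adjust` are made-up stand-ins for model calls and tool use, and the "task" is a trivial numeric example.

```python
# Toy sketch of an agent-style loop: plan, act, evaluate, adjust.
# Illustrative only -- the task here is just nudging a guess toward a target.

def act(step):
    # "Act": carry out the current step and produce a result.
    return step["guess"]

def evaluate(step, result):
    # "Evaluate": check the result and return (ok, feedback).
    error = step["target"] - result
    return error == 0, error

def adjust(step, feedback):
    # "Adjust": revise the step based on feedback before retrying.
    direction = 1 if feedback > 0 else -1
    return {**step, "guess": step["guess"] + direction}

def run_agent(task, max_attempts=10):
    # "Plan": here the plan is trivially a single step.
    step = dict(task)
    result = None
    for _ in range(max_attempts):
        result = act(step)
        ok, feedback = evaluate(step, result)
        if ok:
            break
        step = adjust(step, feedback)  # notice the mistake, try again
    return result
```

The point is the shape, not the contents: no single call is expected to be perfect, and correctness emerges from the retry loop.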

For a long time we imagined AI as something that would make no mistakes. Early hallucinations challenged that idea. But with agent-style workflows, the problem becomes manageable in the same way it is with humans. We create checks for correctness, break big problems into smaller pieces, and work around limited memory or context.

We also like to think that humans reason from first principles. In reality, we mostly reuse ideas we have already heard. AI works in a similar way.

The main differences are speed, endurance, and focus. AI does not get tired or distracted.

Working with AI agents also feels similar to delegating work to coworkers. First you make sure you both understand the task. Then you set guardrails so things do not go in the wrong direction. You do not want to micromanage, but you also do not want to discover too late that everything has drifted off course. If you have ever delegated work to a junior colleague, you already have an advantage when working with AI.

In fact, working with AI is teaching techies a new skill: mentoring. What was once a soft skill is now a hard skill.

The unsettling part will come when AI is no longer the junior partner. When Copilot starts taking real initiative and becomes your mentor, what will that look like?

More

https://www.oneusefulthing.org/p/three-years-from-gpt-3-to-gemini?

  1. Note that it’s not actually that surprising: agent mode was designed this way by humans. The loop isn’t an emergent property of the LLM. ↩︎

The Great AI Buildout

The ongoing AI buildout has similarities with the railroad expansion of the 19th century. Both are capital-intensive undertakings with the potential to reshape the entire economy. Just as railroads transformed how we navigate physical space, AI is poised to transform how we navigate the information space.1 Railroads were obviously useful, and AI is no different.

During the railway boom, railroads proliferated amid intense competition. Overcapacity was common, some companies went bankrupt, and the industry took years to consolidate. Eventually, railroads became commoditized.

The same dynamics may play out with AI. Semiconductors and datacenters are the tracks and rolling stock. AI applications are the railway companies operating the lines. The coming years will reveal which segments of the AI ecosystem are truly profitable.

At the peak of the railroad era, rail companies accounted for roughly 60 percent of market capitalization. Today, AI makes up about 30 percent of the stock market. Such valuations are only justifiable if AI adoption becomes widespread. For semiconductors and datacenters, this means continuing infrastructure buildout. For AI applications, this means acquiring enough users to finance that growth.

The investment in AI is enormous—around $220 billion per year. But it does not need to replace all labor to be justified. Global labor is about $60 trillion per year, and information work accounts for roughly 10–20 percent of that. By this math, AI only needs to replace 1.8 to 3.7 percent of information work per year to pay off the investment.
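The break-even arithmetic can be checked directly, using the same figures as in the text:

```python
# Back-of-the-envelope payoff math, with the figures from the text above.
investment = 220e9            # annual AI investment, USD
global_labor = 60e12          # annual global labor cost, USD
info_share_low, info_share_high = 0.10, 0.20  # information work's share

info_work_low = global_labor * info_share_low    # ~$6T per year
info_work_high = global_labor * info_share_high  # ~$12T per year

# Share of information work AI must replace each year to break even.
needed_high = investment / info_work_low   # worst case: smaller pie
needed_low = investment / info_work_high   # best case: bigger pie
print(f"{needed_low:.1%} to {needed_high:.1%}")  # → 1.8% to 3.7%
```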

At the individual level, that is roughly four to nine days of work saved per information worker per year (1.8 to 3.7 percent of a ~250-day working year). With AI agents, improvements to information work (searching, aggregating, writing, and generating information) are already within reach. This means the current investment is economically justified even if AI captures only a small portion of information work.

More

  1. The metaphor is not as stretched as it seems. Large language models literally encode information in multi-dimensional vector spaces, computing distances between vectors to find similarities. ↩︎
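As a concrete illustration of the footnote's point, similarity between embedding vectors is typically measured by cosine similarity. The three-dimensional "embeddings" below are made up for illustration; real models use hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means same direction,
    # 0.0 means orthogonal (unrelated), -1.0 means opposite.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up toy "embeddings": nearby concepts get nearby vectors.
cat = [0.9, 0.1, 0.0]
kitten = [0.8, 0.2, 0.1]
train = [0.0, 0.1, 0.9]
```

Here `cosine_similarity(cat, kitten)` is close to 1, while `cosine_similarity(cat, train)` is close to 0: distance in the vector space stands in for distance in meaning.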