Why I Won't Use AI

Posted on June 17, 2025

If you are a programmer in 2025 you have likely been pressured to adopt generative artificial intelligence tools into your workflow. Maybe you have already been ordered to. You have probably wondered if next year will be the year you are replaced with an AI. Or maybe in the next five years. Weren’t we supposed to be replaced already?

I do not doubt that people using these tools enjoy using them. If you are one of those folks, this post isn’t meant to “yuck your yum.” If you find yourself reading this for some reason, thank you, and don’t worry — your golden goose is going to be fine.

I do intend to cover the moral and ethical reasons why I don’t use AI tools in my work. Perhaps you have also considered these issues. Perhaps you can empathize with them.

On Labour

I’m a labourer. A well-compensated one with plenty of bargaining power, for now. I don’t make my living profiting from capital. I have to sell my time, body, and expertise like everyone else in order to earn the income needed to provide my family and me with life’s necessities.

There is no social policy or system that prevents my current or future employers from trying to replace me with AI. The myth of capitalism around disruptive technology suggests that I will simply find new skills and employment in other areas of the economy. Unfortunately, things don’t always go that way.

The Luddite movement is an interesting piece of history. People often remember the part about breaking machines in factories. Today, people refer to those who refuse to use new productivity-enhancing technology as Luddites. But the movement was not about sabotage for its own sake: it had a purpose, and sabotage was only one strategy workers used to try to enact change and gain bargaining power.

You see, there were no social policies or reforms in place to protect the rights of labourers during the industrial revolution, when the Luddite movement formed. The people involved in the movement were skilled workers who used the very machines they were destroying. They weren’t destroying the machines because they wanted everyone to make textiles by hand: they were protesting the fact that capital owners were extracting the wealth from their labour with this new technology and weren’t reinvesting it to protect the labourers displaced by it.

Today, AI technology is being used to replace labour power with capital. The knowledge work we do is being replaced with machines and algorithms by capital holders who want to own and rent out access to that knowledge. It’s cheaper, produces more value, and that new wealth is not turning into shorter working hours or supplementing any labourer’s income. That wealth is going into the hands of the ultra-wealthy.

On Productivity

Do you use a language with static type checking? Do you feel it makes you more productive as a programmer than using a dynamic language? What about all other programmers in the industry at large? Should they use statically typed programming languages now?

Did you know there’s no conclusive evidence that static type checking has any effect on developer productivity?
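
To make the intuition concrete, here is a minimal, hypothetical sketch (TypeScript chosen purely for illustration; the names are made up) of the kind of mistake a static type checker flags before the program ever runs. This is the benefit we believe in, even if we can’t measure it.

```typescript
// A hypothetical example: a mistake the compiler catches at build time,
// which a dynamic language would only surface when the code runs.
interface Invoice {
  amountCents: number;
  currency: string;
}

function formatTotal(invoice: Invoice): string {
  return `${(invoice.amountCents / 100).toFixed(2)} ${invoice.currency}`;
}

console.log(formatTotal({ amountCents: 1999, currency: "CAD" })); // "19.99 CAD"

// This call does not compile: the property name is wrong, and the type
// checker rejects it instead of letting it fail in production.
// formatTotal({ amount: 1999, currency: "CAD" });
```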

We are terrible at defining what productivity is and measuring it. In a post, What We Know We Don’t Know, Hillel Wayne gives us an overview of research into developer productivity. The conclusion may not be that surprising if you’ve been a programmer for long: there isn’t enough evidence to know what works for almost any practice or tool you can think of.

Except for sleep and exercise.

Regardless, it’s not uncommon for a software developer or a team of them to use statically typed languages or test-driven development. We don’t know for certain that writing the tests before writing the program reduces errors (we know it doesn’t eliminate them), yet we do it anyway because we believe it may reduce some of them. And if we take the Swiss-cheese approach to reliability, then we need more than one tool to prevent errors.
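
As one hypothetical slice of that Swiss cheese, here is a sketch of the test-first habit (again in TypeScript, with invented names): the test is written before the implementation and encodes one belief about the behaviour we want, nothing more.

```typescript
// A minimal, hypothetical sketch of test-driven development: the test is
// written first and encodes one belief about the behaviour we want.
// It is one slice of Swiss cheese, not a guarantee.
import { strictEqual } from "node:assert";

// The test, written before the implementation exists.
function testSlugify(): void {
  strictEqual(slugify("Hello, World!"), "hello-world");
  strictEqual(slugify("  Already   spaced  "), "already-spaced");
}

// The implementation, written afterwards to make the test pass.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumerics
    .replace(/^-|-$/g, "");      // trim leading/trailing separators
}

testSlugify();
console.log("tests passed");
```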

So why not AI? There aren’t many studies here either. In one completely biased study of Copilot use (some of the authors are employed by Microsoft, and the developers studied worked at Microsoft, the company behind Copilot), the results conclude that there is suggestive evidence that GitHub Copilot raises the productivity of software engineers. Another, independent study suggests that these tools increase the rate of new errors introduced into a code base by as much as 41%. Do they make us more productive or not?

The current model of AI tool use expects there to be a human involved to guide the AI algorithm’s output. The problem with this method is that code review, based on what we know from the few empirical studies done, is not an effective practice for reducing error rates in software. We can detect some errors, but the effect is small and disappears completely if we read too much code. AI models can generate a lot of code if you prompt them to. If it is true that they increase the rate of new errors by as much as 41%, there is little chance that the people reviewing the AI output will prevent those errors.

It turns out we don’t use code review to prevent errors. We use it to help other developers understand and accept our proposed changes to the source code. It is a practice that helps ensure that someone has checked each change being made: do the tests make sense, does the change itself make sense, have all the normal checks been run? Occasionally we catch errors but it’s not enough to consider code review a practice for preventing errors.

Perhaps we can use AI to review the code! It seems likely that an AI will be better at catching errors than a human. After all, they are trained on a much larger corpus of data than any team of humans. They can also hallucinate claims about the code. And if we rely on AI to do the code review, we’re missing out on the primary benefit: making sure a human understands and accepts each change.

Most conversations about productivity with AI tools focus on output, which quickly devolves into discussing line count. We know from decades past that how many lines of code you generate is not an effective indicator of productivity. Yet this is what AI tools are bringing back. Developers who use AI tools output more code and are being compared favourably to their counterparts who do not use AI code generation. Lines of code as a measure of productivity was largely debunked long ago… so why are we being forced to consider it again?

On Enjoyment

I enjoy learning. And learning is a difficult chore. There is no royal road to knowledge. I am constantly frustrated when my assumptions and theories are thwarted. I can’t get the program to compile. The design of the system I was sure was right turns out to be missing a key assumption. I bang my head against the wall, I read references, and I ask people questions. I try a different approach and I verify my results. Finally, eventually, the problem gives way and the solution comes out.

It is during the struggle that I learn the most and improve the most as a programmer. It is this struggle I crave. It is what I enjoy about programming.

Especially the boring parts. Working on the boring, rote code is where you learn patterns and understand when and how to refactor. I refactor constantly. I make it work. Then I make it fast. Then I make it better. I do this because every line of code is a liability. I want less of it. I want it to accurately represent the solution to the problem. Rote, boring code is noise. It obscures intent, hides errors, and makes code harder to read. AI cannot understand what code you will find understandable and easy to maintain – it has no theory of your mind.
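
Here is a hypothetical before-and-after of the kind of refactoring I mean (TypeScript again, names invented): the behaviour is unchanged, but the rote version buries the question being asked while the refactored version states it.

```typescript
// A hypothetical before-and-after: the same behaviour, with the rote,
// noisy version refactored into something that states its intent.
interface Order {
  id: string;
  totalCents: number;
  shipped: boolean;
}

// Before: boilerplate that buries the question being asked.
function unshippedRevenueBefore(orders: Order[]): number {
  let total = 0;
  for (let i = 0; i < orders.length; i++) {
    const order = orders[i];
    if (order.shipped === false) {
      total = total + order.totalCents;
    }
  }
  return total;
}

// After: less code, same behaviour, and the intent is on the surface.
function unshippedRevenue(orders: Order[]): number {
  return orders
    .filter((order) => !order.shipped)
    .reduce((sum, order) => sum + order.totalCents, 0);
}
```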

And using these tools… I just find boring.

On Ethical Training

Building AI models requires input data to train on.

The source of that data has been a point of contention. Companies developing AI tools have been scraping that data from across the Internet with complete disregard for intellectual property laws (ironically, the same laws they will cite if people try to use their data or reverse engineer their APIs). They argue that their business couldn’t exist if they couldn’t break these laws. It’s the Silicon Valley model: move fast, break the law, make them depend on you.

This has had a material impact on me and on people I know who host their own websites and services. Countless hours are spent trying to keep sites online as servers are bombarded by requests from AI companies’ web scrapers. Everyone’s hosting bills are impacted. Content is being ripped from our sites to train new generations of AI models… there appears to be no limit to their demands for more data and no line they won’t cross to get it.

In the end these AI systems are generating code based on the work of other human beings without attribution or remuneration. LLMs do not generate novel code. They generate code based on other people’s work. They hallucinate Stack Overflow answers directly into your code. You can use that generated code without ever compensating the original authors.

On the Environmental Impact

Training models takes a large amount of energy. Inference also requires a significant amount. With the addition of AI agents and the APIs that serve them, AI tools use a lot of energy, a figure that is difficult to quantify.

The net result is that the demand for AI has put pressure on the energy system to add new supply to the grid. Fast. And that has been coming in the form of rising demand for methane power plants. And coal.

Companies running these data centres that have brought new power onto the grid are also getting preferential access to electricity during hurricanes and other natural disasters.

All of these new data centres need cooling. Many use fresh, potable water to do so. Billions of litres of it. They’re building these data centres in water-stressed regions. And they’re using that water even during droughts.

All of these factors are creating stress on the environment and are having an increasing impact on people, animals, and our climate goals.

No amount of “but what about X industry? They use more!” should deter us from managing this use of precious resources. Yet this is not what countries around the world are doing. Instead they are allowing AI companies to continue building more data centres, faster, in order to meet “demand” and secure those precious tax dollars.

The biggest tech companies in the world are run by oligarchs who don’t care about human rights, let alone the environment. Regulation is the only tool we have that prevents companies from making the worst decisions possible in the name of profits. Game theory and the prisoner’s dilemma aside, the tech companies building these products face little regulation and ignore legal cases made against them. Using these tools doesn’t necessarily mean that one condones or supports these behaviours, but these companies do thrive as long as there is “demand” for them.

I don’t intend to contribute to that demand.

On Profitability

Companies are having a hard time making a profit by providing LLM-based tools. The good models are large and expensive to train. And you have to be training the next big model. Otherwise you’re renting.

It’s so unprofitable that these huge, well-funded companies are seeking government subsidies and contracts in addition to all of the private sector funding they’ve already taken.

AI has dubious impacts on programmer productivity, an outsize impact on the environment, and it’s not even profitable.

Are we supposed to trust Anthropic’s CEO that it’s going to get worse before it gets better… and take it on faith from the billionaire capital class that it will, eventually, get better?

There Is No Singularity

There is no evidence to suggest that AI will improve indefinitely. Adding more parameters used to realize significant gains. That is no longer the case… we need to add exponentially more to realize similar gains in prediction and accuracy. Context windows can only get so large before results risk becoming incoherent.

Many of the capabilities people are using to generate code… aren’t even understood. We don’t understand how or why they work. Models are supposed to predict the next word. But they can also appear to solve simple constraint problems. Sometimes. If we don’t know how they are doing this we can’t improve it.

There are physical limits to things. We can only scale the number of chips on a board and power so many boards so far. We can only generate models so big. Thermodynamics will limit growth at some point. And most of these improvements are showing signs of slowing down.

What Makes Us Better Programmers?

I can only echo what distinguished computer scientists have been teaching us. We have to think. What makes us better programmers is programming languages: languages that allow us to express our specifications more directly, precisely, and concisely are what will make us more effective. Programming languages are the best tool we have for specifying a program, and compilers are great at generating code from those specifications! The problem with programming is not a paucity of code. We need tools that help us think and keep us honest. We need to write less code. We need to be able to verify the code we do write. And delete more of it when we can.
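
As a sketch of what “expressing the specification directly” can look like (a hypothetical TypeScript example; other languages do this even better), the type below encodes which states exist, and the compiler keeps us honest by rejecting code that forgets to handle one of them.

```typescript
// A hypothetical sketch: the type is the specification, and the compiler
// verifies that the code handles every state the specification names.
type PaymentState =
  | { kind: "pending" }
  | { kind: "settled"; settledAt: Date }
  | { kind: "failed"; reason: string };

function describe(state: PaymentState): string {
  switch (state.kind) {
    case "pending":
      return "Payment is pending";
    case "settled":
      return `Payment settled at ${state.settledAt.toISOString()}`;
    case "failed":
      return `Payment failed: ${state.reason}`;
    default: {
      // Exhaustiveness check: if a new state is added to the specification
      // and not handled above, this line no longer type-checks.
      const unhandled: never = state;
      return unhandled;
    }
  }
}
```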

We need to prevent companies from replacing our labour with capital. Programming is knowledge work. There is no royal road to knowledge. It is won through the frustrating and difficult process of learning, testing theories, and refining those theories through practice. The risk of replacing human labour with capital is a weakened economy that further widens inequality between the capital and labour classes. We are not a privileged class of temporarily embarrassed millionaires. Weakened labour power means we will have less influence over our work and how that work is used, and that will directly impact the quality of technology over time, which affects everyone.

And we need to pressure our governments to introduce better regulation of AI technology. Safety in AI is not about protecting society from an imaginary monster. It’s about managing resources effectively and protecting the economy to build the society of tomorrow.

For now there is no sign that AI tools will ever be energy efficient enough for me to consider using them.

While the technology has improved, there’s no evidence that it hasn’t already reached its limits.

I don’t think humans are good enough at guiding AI to generate vast quantities of code while keeping error rates in check.

And prompting them to generate the code is simply not interesting or enjoyable.

I’ll stick with learning, writing, mathematics, and thinking for myself.