On AI and Code: Outsourcing Muse

February 23, 2025 from Brigham Campbell's Notes

I spend a lot of time thinking about problems and solutions. Most of the time, I’m just stumped. Unfortunately, I was never really a programming prodigy. I didn’t start programming when I was 5. I’ve never “dreamt in code”. In most science, engineering, and math contexts, I’m slower than my peers.

I’ve learned not to let these facts bother me. After all, if you’ve solved every problem you’ve ever tried to solve, maybe you should be working on harder problems. Time is the only fundamental scarcity. Conventional wisdom holds that time’s scarcity is good reason to be paranoid about how you spend it. I assert the opposite: The simple act of spending time is just as important and expressive as what you might do during that time.

For a long stretch about a decade ago, I considered how to generate mazes of any size programmatically. I remember thinking about possible solutions for days. I tried a couple of ideas in Java and never quite got the results I wanted. This problem haunted me… until I simply moved on. It bested me.

I wonder, if AI had been developed a decade ago, could it have helped me come closer to a solution? The resounding answer from OpenAI, Google, GitHub, Facebook, Nvidia, Elon Musk and friends is, “Yes! AI can help you code better! Our digital assistant can make you 137% more efficient!¹” It’s only natural that they would want the rest of us to be excited about AI, considering their stake in the technology.

Efficiency

Let’s assume that AI can, in fact, help me write code of a higher quality in less time than I could alone. Completeness and quality are always good. Efficiency sounds nice, but is it really all that important? After all, not all those who wander are lost. Even if AI lives up to the futuristic promises, is it worth the cost?

AI is often criticized for frequently giving wrong answers. In my own experience, this is absolutely true, but can we really blame it for that? After all, I’ve heard professors, co-workers, and peers make deeply incorrect claims in the same authoritative voice that AI perpetually writes in. Maybe it’s especially frustrating when AI does it because we expect cold, hard computers to be precise and exact, but there are more convincing arguments against AI:

Real Intelligence

Unlike the bots’, our intelligence isn’t artificial. We’re perfectly capable of finding answers to our own questions and reaching out to each other when we need help. I’m going to try to seek out my own experience and information more, instead of relying on AI. It’s just not worth the costs of outsourcing such a deeply creative labor to math and carbon.

A decade after puzzling over maze generation algorithms, I finally implemented one in Python⁴. I wrote it entirely on my own, in an effort to answer a question I had.

[Image: maze generator output]
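For readers curious how a generator like this can work, here is a minimal sketch of one classic approach, the randomized depth-first search (the “recursive backtracker”). This is an illustration of the general technique, not the code from the linked source, and the function name and character-grid representation are my own choices:

```python
import random

def generate_maze(width, height, seed=None):
    """Generate a width-by-height maze via randomized depth-first search
    (the "recursive backtracker"). Returns a list of strings where '#'
    is a wall and ' ' is a passage."""
    rng = random.Random(seed)
    # The character grid is (2*height + 1) rows by (2*width + 1) columns:
    # cells sit at odd coordinates, with walls between them at even ones.
    grid = [["#"] * (2 * width + 1) for _ in range(2 * height + 1)]
    start = (0, 0)
    grid[1][1] = " "          # carve the starting cell
    stack = [start]
    visited = {start}
    while stack:
        x, y = stack[-1]
        # Collect unvisited orthogonal neighbours of the current cell.
        neighbours = [(nx, ny)
                      for nx, ny in ((x + 1, y), (x - 1, y),
                                     (x, y + 1), (x, y - 1))
                      if 0 <= nx < width and 0 <= ny < height
                      and (nx, ny) not in visited]
        if not neighbours:
            stack.pop()       # dead end: backtrack
            continue
        nx, ny = rng.choice(neighbours)
        # Carve the wall between (x, y) and (nx, ny), then the cell itself.
        grid[y + ny + 1][x + nx + 1] = " "
        grid[2 * ny + 1][2 * nx + 1] = " "
        visited.add((nx, ny))
        stack.append((nx, ny))
    return ["".join(row) for row in grid]
```

Printing the returned rows yields an ASCII maze of any size; because the depth-first search visits every cell exactly once, the result is a “perfect” maze with exactly one path between any two cells.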


  1. Motion, an AI secretary, made this claim when I accessed their homepage February 20, 2025. Your guess as to how their marketing team came up with 137% is as good as mine… ↩︎

  2. See https://www.southernenvironment.org/news/elon-musks-xai-facility-is-polluting-south-memphis/ ↩︎

  3. See https://lwn.net/Articles/1008897/ ↩︎

  4. See the source here ↩︎

Questions or comments? Send me an email.