When speaking with friends and colleagues, I’m sometimes accused of being anti-AI and of falling behind on progress, because I rarely use popular LLMs and completely avoid generative AI for making assets.

I think I have good reasons. My experience with machine learning dates back to 2019. I started using ChatGPT immediately after it came out at the end of 2022. My use of LLMs for writing code peaked at the start of 2025.

Since then, I’ve completely dropped code-focused LLMs from my workflow. Fundamentally, generative AI is useless for the way I do art. I still often use LLMs in areas where they’re strong; I’ll go into that as well.

This post focuses purely on currently popular general AI services like ChatGPT and Sora. It doesn’t concern specifically trained AI models that have, for a long time, successfully tackled problems like detecting disease in patients and controlling NPCs in games.

What I write here are my subjective opinions and experience.


During the birth of commercial LLMs

My experience writing ML algorithms

The first time I applied machine learning at my job was for data classifiers back in 2019. Using TensorFlow in Python, I trained a bunch of models to correlate specific attributes with natural-language text. The algorithm was a bit of a black box for me. I tinkered around, working through a rag-tag collection of articles, lectures and YouTube videos, until it got the job done.
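
For context, the sketch below shows roughly what that kind of classifier looks like with the TensorFlow Keras API. It’s a minimal illustration, not my original 2019 code; the toy texts, labels and layer sizes are invented.

  # A minimal sketch of a text classifier in TensorFlow/Keras.
  # Not my original 2019 code: the texts, labels and layer sizes are made up.
  import tensorflow as tf

  texts = [
      "delivery was fast and the package was intact",
      "arrived broken and two weeks late",
      "great service, would order again",
      "never showed up and support ignored me",
  ]
  labels = [1, 0, 1, 0]  # the attribute we want to correlate with the text

  # Turn raw strings into fixed-length integer sequences.
  vectorizer = tf.keras.layers.TextVectorization(
      max_tokens=10_000, output_sequence_length=32)
  vectorizer.adapt(texts)

  model = tf.keras.Sequential([
      vectorizer,
      tf.keras.layers.Embedding(input_dim=10_000, output_dim=16),
      tf.keras.layers.GlobalAveragePooling1D(),
      tf.keras.layers.Dense(16, activation="relu"),
      tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of the attribute
  ])
  model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
  model.fit(tf.constant(texts), tf.constant(labels), epochs=10)

  # The output is a probability, never a guarantee.
  print(model.predict(tf.constant(["arrived quickly and in one piece"])))

Even on a toy like this, what comes out is a probability shaped by the training data, not a rule you can reason about line by line.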

I don’t have specific, in-depth knowledge of how the most modern AI works, but I did gain a good intuition about three fundamental aspects:

  • The ML algorithms are damn good at finding patterns if tuned well.
  • The real-world performance is never 100% accurate.
  • The final result is never as consistent as a pure algorithm.

Of course, a specific AI model is worth training precisely in the scenarios where a hand-written algorithm would be consistently bad or impractical to build. Still, the resulting model’s output is not based on pure logic the way a sequence of statements in a program is.

Defending ChatGPT in 2023

When ChatGPT came out at the end of 2022, I loved playing around with it. I knew it was an algorithm for expressing broad patterns in language, and I treated it as such. I never expected precisely correct responses to anything.

I often threw ideas at it just to see what kind of response I’d get. Like throwing a rock into a small pond and watching the ripples bounce back from the edges, I was fascinated by how my ideas interacted with an ocean of raw language.

A lot of people hated and dismissed it back then, making fun of its wrong answers about historical dates, its failures at logic and math problems, and its poor coding. In turn, I would always defend its ability to find abstract patterns, which would sometimes turn out to be practically useful.


Skepticism of popular opinions

I think there are fundamental flaws in some of the common AI opinions I see online.

The LLM prompt-review agentic cycle for complex projects

What I write here strictly addresses the LinkedIn opinion that the number of engineers on large and complex projects can be reduced by applying a prompt-review cycle to produce programming logic.

The idea of iteratively prompting an agent for small tasks, then reviewing, committing and repeating, sounds very appealing. I used this approach to make a few small projects with Cursor and to handle some tasks at work with Copilot.

It didn’t stick for a complex project: I wrote all the code for Bush 1022 by hand.

One of my core goals for Bush 1022 was for it to run well on any beat-up laptop that can boot Linux. That means I have to be very precise about what gets executed.

For some business cases, it’s fine for a quick and dirty LLM-produced script to run for seconds… minutes… sometimes even hours. I don’t have that luxury.

Aiming for a consistent 60 FPS, I have about 17 ms (1000 ms / 60 frames) to calculate the flight simulation each frame, on a wide range of user hardware: from beast gaming PCs to budget laptops. Knowing the inherent inconsistencies of coding agents, I’d have to very carefully review every single line of code to achieve that with a prompt-review cycle. It’s much faster to just write it myself.

I believe the same issue holds for any project that requires precision and cohesion:

  • Micromanaging an agent is useless overhead when the task requires precision and you know what you have to do.
  • If you don’t know what to do, that’s a lack of vision and skill; no AI can compensate for those.
  • If there’s boilerplate, it’s a framework selection problem.
  • If there’s heavily branching logic, there could be a wide array of problems, ranging from scope management to unclear specifications. You really don’t want AI solving those.

The final product of strong engineers is code, but that’s far from the fundamental value they bring to a project, which lies in applying skill and vision to the process of building a strong solution. Removing the engineers damages that process. Such engineers can apply their vision by using AI agents as tools when suitable, but the vision itself cannot be replaced by any machine.

But what if they don’t see my CV?

I often see posts on LinkedIn about how crucial it is to use AI tools to make your CV reach whoever is hiring. This is mostly relevant to corporate jobs that I’m not interested in, but here are my two cents anyway.

When I format my CV, I take great care to reduce it to only the most relevant parts of my experience on a single page, making it clear and easy to read. If somebody is too lazy to read that, I wouldn’t want to do business with them anyway. If they determine I’m not a good fit, then we both save time. It’s a win-win.

Not every employer or client is worth working for.
I’d rather respect my integrity.

The illusion of cheap pricing

A lot of current opinions on AI focus on how much cheaper it is to produce LLM or generative AI output than to hire professionals to do it. I’ll get to the artistic shortfalls of this later. For now I’ll focus purely on the financial side.

Business units that provide LLM or generative AI services are operating at a great financial loss. It’s a common pattern for investors to absorb costs while competing for market share in an emerging field, hoping for greater returns later. This creates a false sense that this technology can cheaply produce useful output. It doesn’t. Data centers and AI engineers are very expensive.

As soon as the technology gets widely adopted and integrated into workflows, providers can jack up prices and aim for actual profitability. This creates a risk: businesses that currently build third-party AI services into their core will become targets for aggressive pricing strategies in the future, once they’re dependent and switching out is difficult.

Intellectual property

I recently saw an article titled “OpenAI Plans to Take a Cut of Customers’ AI-Aided Discoveries”. This opens a lot of questions about intellectual property ownership when using output from a third-party hosted AI model. Who actually owns that output?

AI is currently in a legal grey area. There are no mechanisms to ensure that the results from your prompts are your property. There are also multiple ongoing lawsuits about the unlicensed use of material for training the models.

These companies have a terrible track record when it comes to ethics. A malicious change in terms of service could suddenly give another entity grounds to claim ownership of your IP or a cut of your profits. This is a fundamental threat to any business.

It’s especially dangerous if such a change is retroactive and suddenly affects work that was done under the previous terms. This wouldn’t be new. The 2023 Unity retroactive runtime fee controversy is an example of such a change of terms. This article is a good summary of it. It led to a CEO resignation and long-term brand damage for Unity.

Maybe a greedy gen AI provider would find brand damage an acceptable price for a share of other companies’ IP and profits.

Using LLMs for general purpose writing

One thing I’d never use LLMs for is writing. If you have a thought and can’t write it down yourself, using AI doesn’t solve the core issue: either you lack the ability to articulate the thought, or you lack the specific pieces that make it up.

“Yeah… I know it… but I just can’t explain it…” That’s just lazy thinking, and it’s outclassed by anyone who can effectively communicate their ideas in real-life discussions. Using LLMs to formulate ideas erodes that ability.

LLMs for brainstorming

Soon after ChatGPT came out, I started prompting it for additional ideas for projects I was planning. The ideas it generated looked great; I was fascinated by them. After a few failed small game projects in 2025, I stopped doing that.

A project doesn’t become good by having many good ideas. It’s about the few that matter and the ability to execute them well. Using LLMs to come up with extra stuff is just unnecessary scope creep. There’s the additional problem that LLMs often make mediocre ideas sound better than they are.

The fact that an idea is novel for your mind doesn’t make it valuable. Experience and vision are what give us the ideas that actually matter. No AI can give you that.

Mediocre at best - why I avoid generative AI assets

In my opinion, the lack of ability to execute an idea correlates with a lack of vision. No AI or tool can compensate for that lack of vision. And then again, if I have the ability to execute an artistic vision myself, I don’t need AI for it.

There’s also the problem of saturation. Because generative models are so easy to access, anybody can churn out low-effort, sloppy, lazy content with them. There’s no full control over the result; no amount of prompt “skill” or time spent can ensure that the final output won’t exhibit the same mediocre qualities.

Making art with real tools gives full control. Mediocrity can be overcome through skill and authenticity.


Where AI is strong

In my experience, LLMs are great for two main things:

  • Writing code for low-risk tasks with no performance requirements.
  • Expressing patterns in language.

Browsing and research

Doing technical research with AI has easily been the biggest time saver for me when applying this technology to solve actual problems. Especially for more obscure things whose specific semantics I don’t know, LLMs are great at finding the connecting patterns in language.

As a specific example, Godot uses a modified version of GLSL for shaders. I had written some shaders back in 2020 for Unity, but had completely forgotten the semantics of programs that run so close to the GPU. Within a few seconds of asking ChatGPT about a specific scenario, it gave me code snippets, links to the Godot shader documentation, and related page numbers in the 240-page GLSL manual.

The code snippets were wrong, but they vaguely resembled what I needed. Using the documentation links, I quickly remembered the semantics of shaders, saw what was wrong with the snippets, and adapted them into the project. It felt a bit like solving a programming problem to get my mind back into the context of a specific field.

Had I done this a few years back, I would’ve probably needed a few hours to gather all the relevant documentation for my problem. ChatGPT got it to me in a few seconds.

Code snippets

Considering LLMs often make subtle mistakes when it comes to code, prompting for specific snippets is a more risk-averse way to use them. The context and content that the snippets contribute to remain outside of the LLM’s responsibilities.

I last used this approach for making my minimalist personal site, prompting ChatGPT for HTML with responsive sections. There’s probably some framework that’s more streamlined, but this was faster than doing the research and deploying something new.

I didn’t waste any time recalling HTML or CSS. I could focus entirely on writing a meaningful introduction and displaying my content. That took more time than I’d be proud to admit for such a small site. But I believe it’s what gives it authentic value.

You could argue AI actually helped me express myself as an artist by removing a layer of toil that I’ve done a million times and would just slow me down.

Presentation-focused applications

Some applications only serve a presentation purpose. Whether a landing page for a company, or a business report to be viewed by analysts, the output is often consumed by people and isn’t part of a data pipeline in a precise, high-performance system.

As long as the underlying content has integrity and value, having low-quality code in such applications is fine. Their output doesn’t impact anything else and their performance margins are huge. Sometimes it’s acceptable for business reports to take hours to complete.
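
To make that concrete, here’s a hypothetical throwaway script of the kind I have in mind; the file name and column names are invented, and nothing downstream depends on its output.

  # A hypothetical, presentation-only script: summarize a CSV into an HTML report.
  # The file name and column names are made up; the output is only read by people.
  import csv
  from collections import defaultdict

  totals = defaultdict(float)
  with open("sales.csv", newline="") as f:
      for row in csv.DictReader(f):  # expects columns: region, amount
          totals[row["region"]] += float(row["amount"])

  rows = "".join(f"<tr><td>{region}</td><td>{total:.2f}</td></tr>"
                 for region, total in sorted(totals.items()))
  with open("report.html", "w") as f:
      f.write(f"<html><body><h1>Sales by region</h1><table>{rows}</table></body></html>")

  # Nothing parses report.html further, so sloppy code and a slow run are tolerable.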

Legacy projects

Sometimes you inherit an absolute mess of a project that has grown huge and never had any standards from the get-go. Yeah, anything that solves the problem and just makes the stupid thing work goes for those, unless there’s an organized quality improvement effort.

In my experience, AI has been partially successful for such projects. Sadly, when it fails, I have to resort to the age-old approach of getting drunk on whiskey and beer so I can dull my mind and work without analyzing the broader logical context of the code, which is a bunch of thought spaghetti that jumbles together and mostly leads nowhere.

I’m deeply thankful for the few times Copilot saved me from having to fill my body with copious amounts of alcohol.


My bet

I’ll stay conservative on adopting AI. I’ll keep using it where it solves my problems without putting my business at risk or harming my integrity as a human and artist.

I believe the easy access to making mediocre content through AI will be a great opportunity for creators of authentic art to shine, regardless of quality.

So, I’m betting on who I am as a person, my authentic writing, music, drawing, with all its flaws. It’s my human condition that no AI can ever replicate.


It took me 10 hours to write 3 iterations of this post, doing many proof-reads and small changes along the way. I could’ve given ChatGPT a bunch of bullet points to “produce” a similarly themed piece in 10 seconds.

Had I done that, you wouldn’t be able to connect with my words, who I am wouldn’t shine through, and this would look like every other LLM-generated article online.

But I believe my most important takeaway is articulation. I’ve picked every word written here carefully, and I learned some semantic nuances in the process. The next time I speak to somebody in a real conversation, I’ll be a stronger communicator than I was before writing this post.