Good February morning on the 25th, Anno Domini 2025!

This very morning, this exact moment, is unique. It will never return. It’s like the flap of a butterfly’s wings as it flies past. So let’s fill it with purpose and use it as an occasion to demystify one thing or another. Namely, YouTube is full of scenarios about how generative artificial intelligence will make coders redundant. So this particular moment in history that we are living in right now confronts us with the following question:

Will AI Make Coders Redundant?

And the short answer is: not in your dreams! I can promise you it will not happen. Quite the opposite: artificial intelligence will only make coders even more necessary than they already are today!


What Does “Coder” Actually Mean?

Let’s stop for a moment and think about what a “coder” actually is. And what is “coding” anyway? Maybe we shouldn’t talk about coding at all. In the late 1990s and early 2000s, most people still talked about “programming”. Schools had programming courses, not “coding” courses. Apparently, “coding” is the narrower concept: it tends to mean writing in a specific programming language, while “programming” can be understood more broadly as making the computer work the way you want.

So Is “Coding” Going To Disappear?

Fifty or so years ago, “coding” could mean punching holes in paper cards. That activity has indeed largely disappeared as technology developed. So we can assume that coding as we understand it today will change its form as well. The word itself will hardly disappear, but it will come to mean something somewhat different. Or we’ll just go back to talking about programming instead of coding, like in the old days!

Face the New Reality: Neovim

So let’s forget the dark moods and storm clouds. Download the geekiest tool of all time: the Neovim editor. It’s really terrible. And it’s really genius. Genius, because you can configure it into a real AI monster for yourself. I recommend it, so that you have a job in the future as well. Let’s get started!

Demystifying AI

Let’s break AI down into easy-to-chew chunks for anyone who has ever tried to write any code. Then it becomes evident that you don’t have to be a genius to benefit from AI. You don’t even need to understand in detail how large language models and other AI tools are created and developed.

Because when you configure an auto-completer using local artificial intelligence in your Neovim editor, you are already firmly involved in a self-reinforcing loop. It’s just that simple!

Why Local Artificial Intelligence?

To be clear, it’s usually fine to use LLM API services provided by third parties. It’s also the only way to access the really powerful models with hundreds of billions of parameters. But lighter local models can be very useful in practical coding, too.

There are also cases where it’s not okay to share every single piece of code you type with who knows whom. This matters especially if your client doesn’t want their code shared with the whole world. Many customers think this way. Many!

At the end of the day, isn’t it also a cute idea that your own little machine gets its own little AI and thinks about stuff together with you?

:)

Local AI Auto-Completer for Neovim

I wrote in one of my earlier posts about how to configure the Avante plugin to use a local LLM. But I still wasn’t completely satisfied. I wanted the Neovim editor to complete code on the fly with AI assistance as I type, instead of just generating requested code from a separate prompt. A bit like the Continue plugin in VS Code. But I didn’t really find anything, and I was so frustrated that I even considered switching back to VS Code.

Fortunately, I have really good colleagues. I got a tip from a friend about the llama.vim plugin. It requires you to install a local llama.cpp server on your machine.

And it works!

On a MacBook, it’s really simple with Homebrew:

brew install llama.cpp

And then add this unbelievably simple plugin spec to your Neovim configuration (shown here in the lazy.nvim format):

{
    'ggml-org/llama.vim',
}
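The plugin talks to a local llama-server over HTTP. With the Homebrew install, the server binary is already on your PATH, so before opening Neovim you can start it with one of llama.cpp’s fill-in-the-middle presets. These are the same preset flags used in the Linux instructions below; per the llama.vim README, the presets serve on the port the plugin expects by default:

```shell
# Start a local completion server with a small FIM (fill-in-the-middle) model.
# The model is downloaded automatically on the first run.
llama-server --fim-qwen-3b-default
```

Leave this running in a terminal while you edit.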

Installation on Linux

This easy-to-use AI coding plugin turned out to be so useful on Mac that I wanted to install it on my Ubuntu laptop as well. It was a bit trickier though.

The difference is that on Linux you have to build llama.cpp yourself. So let’s do it now!

First, install essentials:

sudo apt update
sudo apt install build-essential git cmake libcurl4-openssl-dev

You definitely want your language model to use the GPU. So let’s install the Nvidia CUDA toolkit:

sudo apt install nvidia-cuda-toolkit
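Before building, it’s worth a quick sanity check that the CUDA toolkit and the GPU driver are actually visible (this assumes an Nvidia GPU with drivers already installed):

```shell
# Verify the CUDA compiler and the GPU driver are available
nvcc --version    # prints the CUDA toolkit version
nvidia-smi        # prints driver status, your GPU model and VRAM
```

If either command is missing, fix that first; otherwise the build below will silently fall back to CPU-only.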

Clone the llama.cpp repo and switch into it:

git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp

Build the project with CUDA enabled (so it can use the GPU) and with curl enabled (so it can download models):

cmake -B build -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build build --config Release
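If the build succeeds, the binaries land in build/bin. A quick check that the server binary exists and runs (the version flag is a standard llama.cpp option):

```shell
# The server binary should now exist and print its build info
ls -l build/bin/llama-server
./build/bin/llama-server --version
```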

Finally, start the server. If you have more than 16 gigabytes of memory, you can use the bigger 7B model:

./build/bin/llama-server --fim-qwen-7b-default

If you get an ‘Out of memory’ error, try a less ambitious model:

./build/bin/llama-server --fim-qwen-3b-default
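If you don’t know offhand how much memory the machine has, the choice between the two presets can be scripted. A minimal sketch for Linux, using the 16-gigabyte threshold from above (the actual server start is left commented out):

```shell
#!/bin/sh
# Pick a FIM preset based on total system RAM (16 GB threshold from the text)
total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
if [ "$total_kb" -gt 16000000 ]; then
    preset="--fim-qwen-7b-default"
else
    preset="--fim-qwen-3b-default"
fi
echo "Selected preset: $preset"
# ./build/bin/llama-server "$preset"
```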

Restart your Neovim editor now and enter the New Era.

Happy coding with AI!