Texts | Christofer Sandin


Using AI based on your curated sources is an excellent use of technology

There are so many mentions of AI in technology and software today that you can hardly do anything without having to opt out or opt in to some AI feature in every application you open, no matter what the purpose of the application is.

I’m one of those people who find it hard to believe that slapping AI onto everything automatically makes an application better than before. In certain situations, AI can be of excellent service, especially for mundane tasks that are easily described, but there are a lot of places where AI does not add anything substantial at all, except marketing jargon.

At the same time, Large Language Models (LLMs) are continuously improving and can be a great help in processing information and helping us organize it.

During the summer, I spent some time trying out AI-powered services to educate myself and explore cutting-edge technology. But using AI for research and to help you build things is not as easy as you might think. If you want to develop good-quality solutions, the hallucinations of AI agents (false information presented confidently) can be devastating to your research, and "good enough" but outdated technical answers can also bite you if you’re not careful and good at assessing the responses.

It’s a tightrope to walk.

Since AI today more or less makes things up and presents them as facts without a hint of embarrassment, we need to start checking the source material and not just assume that the AI always knows. Going down this route led me to NotebookLM from Google, since it is one of the services that gives you references for the data it so confidently presents.

NotebookLM

NotebookLM is a service where you can add multiple sources in the form of PDFs, Markdown, and plain text, but also links and YouTube videos, as well as Google Docs and Sheets.

When you are done adding sources, you let Google’s Gemini 2.5 AI process the material. Then you can chat and ask questions, create an audio podcast where two AI hosts discuss your preferred topic, create a presentation, and get access to a mind-map-like outline of the subject, as well as FAQs and study questions.

Another thing is that you can get the results in 50 different languages, so even a small language like Swedish is supported. I understand that it’s easier than before thanks to AI, but it’s still nice to see Google include minor languages like that.

The option to interrupt and ask questions while listening to the audio playback is an elegant feature. It is like calling in to a radio discussion and having the hosts steer the conversation toward your question.

Being a "free" Google product, you can’t be sure it will be around in a year or two, but trying it out right now is fun. Give it a bunch of source material that you know is good and that you like to work with, sit back, and then enjoy the audio discussion based on this.

An Obsidian MCP server

Another thing that piqued my interest was the ability to install local MCP servers. The Model Context Protocol, or MCP for short, is a way to connect AI agents to tools and data sources beyond what the LLM they use already knows.

This made it possible to hook up an AI application like Claude Desktop to Obsidian. Obsidian is the app where I do and store most of my writing today, and the content is saved in Markdown format.

With the MCP server in place, I was able to add my texts written about various subjects, book recommendations, and research to Claude Desktop and have that material accessible in the AI-based chat context.

That made it possible to ask questions about my own material directly in the chat.
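As a rough sketch, connecting Claude Desktop to a local MCP server usually means adding an entry to its claude_desktop_config.json file. The server name, package, and vault path below are illustrative assumptions on my part; check the documentation of whichever Obsidian MCP server you install for the exact command and arguments.

```json
{
  "mcpServers": {
    "obsidian": {
      "command": "npx",
      "args": ["-y", "obsidian-mcp", "/path/to/your/vault"]
    }
  }
}
```

After restarting Claude Desktop, the server and its tools should show up in the chat interface, giving the model read access to the notes in the vault.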

GitHub Copilot in Zed and Neovim

At work, we use PhpStorm from JetBrains. Even though I’m not writing code every day anymore due to other responsibilities, I still do from time to time. Therefore, I’m not as heavily invested in IDEs and editors as I was ten years ago, but I think the AI Assistant in PhpStorm is rather good, and I’ve used it in a few situations. (They also have a new, updated AI coding agent called Junie, which I haven’t tried yet.)

But I wanted to explore using AI in an editor like Neovim, where I had to dig a little deeper before getting it to work. I installed the copilot.lua and CopilotChat plugins for Neovim and gave them a try. I have to admit that while it worked well enough, the experience was not up to par with the one in PhpStorm (probably because of my somewhat moderate Neovim skills), and that made me try the same thing in Zed.
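For anyone wanting to try the same setup, a minimal plugin spec for the lazy.nvim plugin manager might look something like this. Treat it as a sketch based on the plugins’ public repositories rather than a verified config; names and options may have changed, so check each project’s README.

```lua
-- Minimal lazy.nvim spec for Copilot in Neovim (sketch; verify upstream)
return {
  {
    "zbirenbaum/copilot.lua",        -- inline Copilot suggestions
    cmd = "Copilot",
    event = "InsertEnter",
    config = function()
      require("copilot").setup({})   -- defaults; run :Copilot auth once
    end,
  },
  {
    "CopilotC-Nvim/CopilotChat.nvim", -- chat sidebar on top of Copilot
    dependencies = {
      { "zbirenbaum/copilot.lua" },
      { "nvim-lua/plenary.nvim" },
    },
    opts = {},
  },
}
```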

And I have to say, Zed might be the sweet spot: it lets me use the Vim motions I’m getting accustomed to when editing code, while giving me a high-speed editor and a GUI for the things where that makes sense.

(Vim motions refer to keyboard shortcuts that let you move around and edit text efficiently without reaching for the mouse.)

So, what would you have AI help you out with when you have it in the editor?

If you exclude the option to have the AI generate actual code for you from scratch, which, in my experience, seldom ends up exactly the way you want it, you can use it as a colleague.

One use I haven’t tried yet, but can see the potential in, is having it write commit messages that accurately describe changes.

The main thing is to make sure that the AI agent has access to the whole project and your code base so that it can provide appropriate advice. Context, again, is significant for making the AI helpful rather than adding confusion and complexity.

Resources