
Tomorrow's common sense

Every now and then we find ourselves living in a moment that future generations will study in history books. Some of these moments I wish I hadn’t lived through myself: a global pandemic, numerous triggers for a possible WW3, or the last episode of “How I Met Your Mother”.

Other moments, like Brazil’s first Academy Award, Chappell Roan’s music, and the rise of large language models (LLMs), make me feel grateful to be alive now. This post isn’t about pop culture, though; it’s about how exciting and, to be honest, frightening it feels to watch LLM-powered solutions take shape in real time.

LLMs aren’t just a tool, a programming language, a paradigm, or a new framework; they’re all of those things at once, and probably more. The truth is, we just haven’t entirely figured it out yet, and that’s (mostly) okay. Living through times like this means getting comfortable with uncertainty, working on ideas that are still taking shape, and making decisions before all the questions are answered.

Working with this technology often feels like taking one small step forward, then three steps back, because solving one problem can create even bigger ones. Sometimes it’s like jumping without knowing if there’s a safety net: you try something new and don’t know if it will fail or lead to something great. Other times it’s like building a new kind of ladder, only to realize you could’ve just taken a broader look and stood on a chair. And finally, there are unfortunate moments where you feel like you just invented the best VHS player, just in time for streaming to become a thing, and everything you worked hard on suddenly feels obsolete.

Yeah, it’s not something for the faint of heart, but every challenge teaches you something, and with each leap, we get a little closer to building something truly powerful and new. I used to think “fail fast” was just a catchy phrase; now it’s survival advice.

(Yes, I know, too many metaphors in just a few lines. Not even ChatGPT was able to help me find one single line of thought to capture the feeling of working with LLMs right now. That says a lot about the experience we are living through. Now, back to the main content.)

Not only are the general capabilities of this technology uncertain, we struggle to find the words to talk about it as well. Take “AI agent”, for example: it’s a term that is everywhere, but if you ask 10 people what it means, you’ll probably get 10 different answers (or maybe 12 or 15). Honestly, if you asked me, I might give you two different answers myself in the span of 5 minutes. That’s where we are: exploring, debating, shaping the ideas as we go.

The terminology evolves alongside the technology, and meanings shift as our understanding deepens, which is actually nothing new. Take “objects” in programming: the term started with LISP atoms in 1960s AI research, shifted through Simula’s models, and helped shape languages like Smalltalk, C++, and Java. Its meaning changed along the way: what began as a basic idea grew into a rich set of concepts like classes, inheritance, and dynamic binding, refined as the field matured.

We are in the same messy and exciting time with LLM-powered AI agents. Like with objects in OOP, we are still coining the words and building the tools that will feel obvious in the future. The definitions will solidify over time, but for now, we’re in that creative chaos where ideas collide, patterns emerge, and tomorrow’s common sense is being invented in real time.

(This post was written by a human, but made a bit prettier with the help of ChatGPT)
