AI, existentialism and meaning

Artificial Intelligence has taken the world by storm, to the point where it invades most discussions, be they technological, societal, cultural or creative. To me that pervasiveness has been surprising, since even before ChatGPT blew up, I played around with AI Dungeon - an online RPG that lets you collaborate with a language model to create a D&D-style adventure. It quickly became apparent that, while fun, the system was prone to what we today call hallucinations, and most of its output was fairly generic to begin with. It was impressive for the time, but as the limitations became obvious it was no more than a Marvel-movie generator of sorts. Those limitations have been reduced since then, but the core issues behind them have not disappeared.

This is why I was so taken aback when people in my own industry, who should be the most sceptical of any new technology heralded as the ultimate game changer, bought into the hype. With the startup and tech industry being so dependent on insane valuations and never-ending promises of almost-here-but-not-quite-yet technologies akin to Full Self Driving, you would expect us to be more cautious after so many past burns. Yet suddenly top-down and bottom-up pressure appeared to introduce a technology that shows mixed results, and whose implementation is often prescribed rather than described - it is simply a demand to "do more things with AI" since everyone else is already "doing things with AI".

What happened? How is it that the grassroots, the C-suite, the wider population and governments were all simultaneously bewitched by the same genie in a bottle (at least to a certain extent - there is still plenty of opposition)? I do feel that the emergence and direction of artificial intelligence have a lot to do with our perception of progress and knowledge, which makes it the perfect candidate for us to obsess over.

Populist politicians might seem to have charmed their supporters, but often it is the other way around: their supporters simply found a messenger they had long been waiting for. The same, it seems, may be the story of AI.

Knowledge and insight

The magic of using a language model to answer a question rests on its ability to generate a seemingly original, mostly accurate response - one that strips away all context from the information it is based on, while presenting the output with a degree of legitimacy owed to its tech backing. As some people have put it, it is "basing its view on ingesting the whole knowledge of humanity", which is partially true, as the models have been trained on an extremely large dataset. The complete obfuscation of the process lets our imagination run wild as to what is actually happening inside the box, regardless of how true to reality our picture is. With the focus on building up datasets, we get the impression of knowledge as a resource that we just need to pour into the machine to get good answers.

Chatbots do not work that differently from other ways information is disseminated, whether documentaries, podcasts or YouTube videos: the author (or authors) collate information, provide a confident voice (often backed by high view counts for legitimacy) and strong opinions on the matter. In the case of social media, that often gives us a simple take we can then repeat to others. Thousands of people collect knowledge as if it were a resource, repeating the opinions and facts imparted on them without deriving any insight. I have caught myself in conversations realising that someone's opinion of a political issue or historical event is identical to that of a video I have seen, repeated back as their own. No extra personal insight is added in; it is mere signalling that one has successfully collected knowledge to give back.

Knowledge should not be something to be collected, memorised and repeated; instead it should raise questions and mix itself with what we already know to build up disbelief, concerns, doubts and new conclusions. It should generate insight, and give imagination the ability to recreate or lead back to the original piece of information even if we forget it - although would that even count as forgetting? Instead, culturally, via our early experiences of exams and tests, we are drawn towards this form of knowledge-resource gathering and memorisation (how many history classes are all about memorising arbitrary facts without creating any connection to the wider context?). We naturally end up feeling dwarfed by a system that can gather and memorise information at such an unmatched pace, while completely ignoring that such systems lack the imagination to derive any insight or originality from combining the various fields they are trained on.

But it was insight and imagination that allowed polymaths like Leonardo da Vinci to blend their backgrounds in science and art, designing inventions way ahead of their time with far less information than we have now. It is very human, when educated in different fields, to have an "a-ha moment" and connect different, seemingly unrelated, areas together. Language models have been able to repeat what others have done; we are yet to see them create any new philosophical concepts. Why would they, considering they operate only on language, without any inner life of their own? Language by itself does not constitute knowledge; it evokes it in our consciousness.

AI is then merely an automation of this process of knowledge-collection, which we value to the detriment of being able to derive insight from it. It follows that AI would be seen as an authority, able to repeat back what it has been trained on in an instant, even if it has no imagination of its own. Chatbots have been thrown onto fertile ground, where a lot of people already repeat opinions they have accepted without consideration. It mirrors back at us the misplaced value of knowledge-gathering, making us look so tiny in comparison.

There is more to knowledge than that. There are plenty of concepts that I could describe in a way that would fool others into thinking I have experienced them. The mind's emptiness in meditation practice, experiencing "the way" in Taoism, nirvana in Buddhism - all are intrinsically human concepts that cannot be brought into our mind via a simple information-gathering exercise. Or, to be more down to earth: in the thought experiment of Mary in the black and white room, Mary has access to all the written knowledge about colour, but she lacks the experience of colour. Only when she sees red does she connect what she has learned with her new-found insight.

LLMs are not known to see colour, or meditate, or live in a state of flow. They merely arrange words in a way that makes statistical sense and reflects the communication patterns and concepts of their training set. It is left to us to exercise our imagination and will, to take this information and turn it into something that has an impact on our inner life.

Standardisation

If anything speaks of our lack of imagination, it is how few areas of our lives allow the freedom of unencumbered expression. The wow-factor of an LLM's ability to generate generic emails, summaries and ideas says more about our non-uniqueness in communication. Our lives are becoming standardised. Therapy speak has infected the way we talk: we start to communicate using abstract therapy concepts rather than by building up our own vocabulary or methods of expression. Corporate language has become void of meaning, a collection of standard phrases to repeat in emails or job advertisements, and its false politeness is beginning to seep into our daily speech. We now have "the right thing" to say for more and more situations. On top of that, the school->university->work->children->retirement path is geared towards ensuring all of us do the same things at the same age. All of these processes abstract away our lives: they remove our ability to invent, and turn our language and lives into input-output systems.

That is why the common joke is that with LLMs one can write down bullet points and ask AI to translate them into a work email, only for the recipient to decode the email back into the important bullet points using another LLM. We are excited about a technology that helps us strip away all expression and simply send our inputs in an email for someone else to process and provide their outputs. Anything in between is nothing more than fluff, protocol to follow, euphemisms to use. Why learn how to be kind, better understood or diplomatic if an LLM can make any speech dull, boring and corporate enough to shield us from negative consequences?

Since companies today are run by executives and management who understand only the standard, abstract rules of business, AI is a godsend on the path towards streamlining. No wonder it is now doing the rounds in game development, where the creative process is seen as something to optimise away. The end product, and a process to get to that product, are the only goals of the modern economy, regardless of the goods manufactured and however much it tramples over our inner life. In other words, the more we can optimise Beethoven out of the nine symphonies, the better for delivering those symphonies to the market on schedule. In the vein of an earlier comparison, we no longer care whether creators can really see colour and experiment with it, as long as we have something that pretends it can. Then, once some form of satisfactory, reliable art-(re)creation pipeline is achieved, what remains is optimising the process down to hours, minutes, and ideally seconds. It is input-output all over again.

Things cannot get any more standard than that. Thanks to generative language and image models we are able to escape the clutches of creativity and simply, mindlessly, copy other people's methods of expression using machines that pretend they know things. It is no different from how we already adopt political, therapy or corporate phrases without criticism or deeper understanding, hoping that they mean something to others even if they don't mean anything to us.

Ideal of progress

I also see the allure of LLMs in the perception that, by virtue of our ancestors' decisions and the collective advancement of our species, we personally progress too - without any effort of our own, solely owing to the seats we get as passengers on the advancement train. This is easily seen to be false: technological progress has revamped the world and indirectly given birth to new philosophies, but it does not directly answer intimate questions of meaning, purpose or ontology. If anything, it has helped nations destroy philosophies as they brought in their own on the sharp end of bayonets. It might have given us direct access to work that can help us expand our views, but it has also added layer upon layer of disinformation along the way.

A farmer in the Middle Ages might not have known much about biology, genetics or engineering, but they could put their observations, intelligence and handed-down knowledge to use, erecting farmhouses, cultivating crops and harnessing nature. Their lack of theoretical physics or mathematics did not stop them from building up a deep understanding of nature and the phenomena around them. In contrast, most of the world's middle class is now educated in the sciences, yet more proficient in the corporate, marketing and legal systems we bolt on top of the world. And as those systems lose relevance to the world at large, people cling ever more to the idea of our superiority, granted by the global march onwards.

On an individual level we are no more familiar with the world around us, nor necessarily more introspective or philosophically rich, than our predecessors. We have more opportunities to take advantage of the knowledge left by those who came before, but personal human progress is constantly reset as men and women die and children are born, forced to repeat the same journey towards death. For them, scientific and technical progress is an environment they are born into, much like volcanoes, mountains and rivers that seemingly came to be on their own. It does not inherently put them ahead in their life's journey just by virtue of existing.

The sheer fact that we can access knowledge with a few clicks of a button does not mean we will. That Plato, Hume, Heidegger or Zhuangzi lived has prepared the environment for me, but their past existence does not mean I have instantly got closer to solving all the philosophical questions they tackled. I might be able to read interpretations of their work, listen to debates their thinking has sparked, or be influenced by popular culture that has borrowed their ideas, but that still requires some level of thought and contemplation. It does not mean that I will do anything more with their ideas than memorise them and treat them as simply something-to-be-repeated-back.

The trap of AI is the illusion that we can now somehow capture all this existential insight in a box, as if it were an objective, experience-independent thing, completely detached from the context of its times, culture and personal circumstances. By doing that we practise a humanity-wide form of avoidance: finding a technological solution to an existential issue. We allow a group of people to funnel money, time and effort into creating that box, while the rest of us can simply sit back and ignore our personal journeys, waiting for the system that will somehow provide the answers - as if other people could provide them the same way they deliver food, electricity and running water.

Our affluent modernity is often juxtaposed with the toilers of times past, barely keeping starvation at bay, surviving abuse and exploitation, almost begging us to find some solution that would give meaning to their suffering. The advent of Artificial Intelligence gives us the illusion that by harnessing the God-in-the-box we can see our history as merely a build-up, stepping stones towards a bright future. It reminds me of parents who might not complete their own journey towards meaning, but instead direct their efforts towards making their children's lives easier, hoping they will get a better stab at solving existential issues. Except with technology it is the whole of yet-unborn humanity onto which we can push our angst, tempted by the vision that progress will make that struggle much easier. Why bother figuring anything out for ourselves, if our descendants will be able to do it without the burden of having to wake up for work each day, and with a system that can increase the productivity of their thinking tenfold? It seems almost unfair that we should even try ourselves!