A loss function is a method of evaluating how well a machine learning algorithm models the given data. It is used in the context of mathematical optimization and decision theory, and it recently became a widely known concept thanks to the advent of AI. An intuitive way of understanding it is that it is the quantity we are trying to minimize in an optimization problem.
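To make that concrete, here is a minimal sketch of the idea: mean squared error as the loss, minimized by plain gradient descent on a one-parameter model y = w·x (both the data and the model are invented for illustration):

```python
import numpy as np

# Toy data that roughly follows y = 2x (numbers invented for illustration).
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])

def mse(w: float) -> float:
    """Mean squared error of the one-parameter model y = w * x."""
    return float(np.mean((w * x - y) ** 2))

# Plain gradient descent: repeatedly step downhill along the loss surface.
w, lr = 0.0, 0.01
for _ in range(200):
    grad = float(np.mean(2 * (w * x - y) * x))  # d(mse)/dw
    w -= lr * grad

print(w, mse(w))  # w ends up close to 2, the loss close to its minimum
```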
A loss function gives us a mathematical equation that is much easier to work with than abstract concepts. But finding this function is in many cases as difficult as the problem itself. The idea is that, just by virtue of belonging to the human species, there has to be a set of goals we all share, independent of culture, age, or any other variable. Hopefully, in finding this function we can begin to work together more and better, since our incentives would be aligned. I will try my best to be conscious of my own biases and attempt to find the most common terms.
I would first like to go over some critiques of this approach, the first being that it’s impossible to find a goal that satisfies all humans at once. One could argue that we are purely independent beings who merely interact with one another. I don’t agree with that, as humans are social animals for good and for bad, but I can also recognize that we are living in one of the most individualistic times in history, and it’s getting worse. Similarly, one could say that our only goal is to reproduce and everything else is just context, but I think we passed that point long ago, and we are consistently finding that more developed societies tend to have fewer children and less selection pressure.
Nassim Taleb argues that in empirical reality, real losses are often not mathematically nice and not differentiable. This is a purely theoretical exercise that would probably fall apart in practice, and to be honest, I’m not even going to write the equation; this is just a way to write down some thoughts and things I’ve read about this topic.
One important thing for this experiment is that one needs to think long-term, infinitely long-term, to face this problem correctly. This raises another conflict, as this line of thinking can lead to catastrophic consequences in the present, since potentially infinite future gains will always outweigh any present harm, which can softlock humanity. I also don’t think that there should be a “cap” on how far we can go, so I’m placing a horizontal asymptote at 0, and we can just keep approaching it with diminishing gains forever. What reaching 0 represents is up to you, but I will present a few suggestions later. It is also related to Zeno’s paradoxes, which have always fascinated me and model many of the unsolved mysteries of our universe.
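As a toy illustration of the shape I have in mind (the particular function −1/(1+t) is just my pick for the curve, not a claim about what the real one looks like):

```python
# A curve with a horizontal asymptote at 0: progress is always possible,
# but each step buys less than the last, and 0 itself is never reached.
def loss(t: float) -> float:
    return -1.0 / (1.0 + t)

for t in [0, 1, 10, 100, 1000]:
    print(t, loss(t))
# -1.0, -0.5, ~-0.091, ~-0.0099, ~-0.000999: forever approaching 0.
```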
A naive first approximation, but a useful one nonetheless, is purely utilitarian: we should strive to maximize the happiness and well-being of as many people as we can, as much as we can. The more happiness and the less suffering there is, the better. As with most utilitarian arguments, it falls apart under some pressure, but the premise is fundamentally correct. Common criticisms include the impossibility of actually measuring happiness and suffering, or how it completely ignores the concepts of justice and morality.
It’s also important to ask whether it is total or average happiness that we seek to maximize. Using total happiness falls victim to the repugnant conclusion, and even the very repugnant conclusion, while using the average can lead to other, similarly unwanted consequences (it’s much easier to distribute the resources of the Earth when there are fewer people). I enjoy the discussions around trying to make utilitarianism work with modifications, and I would like to explore them in a later post, but to me it makes murdering people too easy to justify.
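A back-of-the-envelope version of the repugnant conclusion, with numbers I made up: a small, very happy world loses to an enormous, barely-happy one as soon as we score on total rather than average happiness:

```python
# (population, happiness per person) -- all figures invented for illustration
worlds = {
    "A": (1_000_000, 90.0),          # small and very happy
    "Z": (100_000_000_000, 0.01),    # huge, lives barely worth living
}
for name, (pop, per_capita) in worlds.items():
    print(name, "total:", pop * per_capita, "average:", per_capita)
# Total happiness prefers Z (1e9 vs 9e7); average happiness prefers A.
```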
Utilitarianism raises an interesting point: centering our efforts on optimizing one metric without taking the context into account can deceive us into thinking that everything is going okay when it isn’t. In the context of training machine learning models this is known as overfitting, and we need to avoid it at all costs. It is also captured by Goodhart’s law: “When a measure becomes a target, it ceases to be a good measure”. I agree with it in principle, but if the metric were an evaluation function beyond the complexity that humans can model, it would probably escape that law. I can’t build that myself, but I wonder how it could be accomplished and whether we would even want it.
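The overfitting version of that failure in miniature (a toy example, not anyone’s actual metric): give a model enough flexibility to drive the training score to zero and it will happily stop tracking the thing the score was supposed to stand for:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)  # noisy observations
x_new = np.linspace(0, 1, 100)
y_new = np.sin(2 * np.pi * x_new)                       # the true signal

for degree in (3, 9):
    coeffs = np.polyfit(x, y, degree)
    train_err = np.mean((np.polyval(coeffs, x) - y) ** 2)
    new_err = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(degree, train_err, new_err)
# The degree-9 polynomial interpolates the 10 training points almost
# exactly (metric optimized) yet does worse on unseen points (goal missed).
```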
Science fiction has plenty of examples of rogue AIs that end up exterminating humanity in exchange for paper clips. In the realm of fantasy, a personal favorite comes from the Stormlight Archive books by Brandon Sanderson, where the king of Jah Keved, Taravangian, on a visit to a pseudo-god known as the Nightwatcher, wished for the capacity to save the world. As a result, his intelligence fluctuates from day to day, sometimes making him idiotic and sometimes a genius, with his compassion inversely proportional to how smart he is. His solution, written in a compilation called The Diagram (which he produced on a day when his intelligence was at its peak), made him betray the human race in exchange for the salvation of the few he ruled over. He does realize his mistakes, but the damage is mostly done. As usually happens with Brandon Sanderson’s books, many comparisons can be made to real life.
Maximizing happiness is a good starting point, but it’s too general and too difficult to measure to fit in the equation, so we need to find proxies for it. Maslow’s pyramid of needs provides a good framework for thinking about this (though it definitely isn’t perfect). Anyone whose basic needs (physiological and safety) are unmet represents a failure of the community and should not be rewarded in the function. I don’t feel confident enough to extract conclusions from the upper levels, as those can change depending on the circumstances (peacetime versus wartime, for example), but generally speaking, the higher up the pyramid we can get the more people, the better.
But do we want the most people possible? Maybe from an evolutionary standpoint that made sense, but does it still hold in a society where there is no longer a selection process, neither for groups nor for individuals? Problems such as global warming or the overpopulation of Earth are not in the scope of this post, and if we are truly thinking long-term we need to assume a fair amount of space exploration.
The reason why I would always prefer more population over less has to do with the uniqueness of the human being, and in particular the concept of creativity. Later in the post I want to talk about the importance of artistic expression, but just the addition of a unique consciousness, with its unique set of experiences, has some innate value to me. Added to that value is our level of intelligence as a species, which for the time being is the highest around. One could question whether having more intelligence adds more value to that consciousness. That has some crude implications, whether you consider animals to be conscious or not. To avoid answering that question, I’m attaching more value to humans because they are my group, and I’m not taking intelligence into account, as that would imply an amount of eugenics I am not comfortable with.
Another good reason why we should generally want more people, from an economic point of view, is explained by Bryan Caplan:
But suppose the world population will reach 10.1 billion by the end of this century. Would that be a good or a bad thing? Arguably a good thing, on several grounds. One is that it would enable greater specialization, which reduces costs. Second is that it would increase the returns to innovation by increasing the size of markets, though an offset is that innovation can produce immensely destructive as well as constructive technology. Third, the more people there will be, the more high-IQ people there will be, and hence the faster the growth of knowledge will be;
I agree with him save for the last part. Bryan, among others like Eric Weinstein, holds a view that I still haven’t made up my mind about. They claim that most progress and innovation, especially in science, stems from a few extremely brilliant minds, and that we should invest more effort in finding those minds and providing the resources they need to maximize results. The lack of important scientific discoveries in the last 20 or so years is difficult to explain; one explanation is that we no longer have the conditions that allowed such innovation to happen in the first place. Another possible explanation is that all the “easy” challenges have already been solved and we are now left with the more difficult work. I disagree with the implication that most of the scientific work being done is of negligible importance, but it’s certainly a problem worth thinking about. One easy target is the institutions, such as the Ivy League, that have been steadily losing prestige for quite some time and that, from my personal experience studying CS in Spain, deserve a lot more criticism.
Death probably shouldn’t be rewarded in the function, which means that we have to try to live as long as we can. Immortality seems perfectly fine to me as long as we keep an opt-out button around. And if we ever get to that point, I’m sure there will be a fair number of people arguing that you shouldn’t have the right to press that button. To prevent that, we should introduce some amount of freedom into the formula. Some capitalists-turned-thinkers, like Peter Thiel, advocate for “authentic human freedom as a precondition for the highest good”. I agree with him, though probably not to the same extent. One always has to balance liberty against security (which we have to guarantee as the second step of Maslow’s pyramid), and while I don’t think you can ever completely escape that trade-off, we can improve the rate at which we exchange them with a concept borrowed from the Greeks: thymos.
The original meaning of the word was something close to “spirit” or “spiritedness”. The Greeks envisioned it as a righteous anger, a need and desire to fight against the perceived injustice of the world. The word has some violent undertones, which fits nicely. Maximizing this trait would let us restrict far less freedom to attain the necessary amount of security. I don’t think the capitalist reality we live in today helps to promote it, as individualism and competition don’t tend to reward those who go out of their way to seek justice. I believe the technological progress curve directly opposes this one, and we should find ways to manually increase it.
Both Marc Andreessen and Curtis Yarvin agree on the importance of this curve, and on a general sense of order being necessary for a well-functioning society. But as with everything else, we have to be careful about how much. A less obvious problem is that with too much order, even if it isn’t “forced” on the population, we could stagnate. Total stability is one of the top three worst endings for humanity, probably just ahead of wireheading and extinction. A certain amount of chaos and uncertainty is necessary to keep us from losing our humanity and to keep us alive. Reaching zero in the function might mean that we become some kind of angels, incapable of making wrong choices and stuck in that place for eternity. If we become Turing machines, we lose.
Another trap we can fall into is depicted by Lenin’s distinction between formal and actual freedom: some societies only contain formal freedom, “freedom of choice within the coordinates of the existing power relations”, while prohibiting actual freedom, “the site of an intervention that undermines these very coordinates”.
That’s precisely why we should value creativity a lot more, be it in an engineering context or a purely artistic one. Innovation stems from finding new ways to confront problems, and the consequences of technological progress have, throughout history, proven overwhelmingly positive (though this can be debated). Independently of that, I also believe in the importance of technology for technology’s sake. Science and technology go hand in hand every step of the way, and for me, one of humanity’s main quests is the understanding of the laws that govern our reality. While I can’t add that directly into the formula (too abstract), it has a direct impact, as it allows us to develop the tools needed to make more, and happier, humans.
I believe that a good metric to represent our advancement in technological fields is the amount of power we consume. It also correlates quite well with quality of life if we divide it per capita. It also gives us reason to worry, as it has stopped rising as much in most of Europe and even the US. The main culprit is climate change, which I get to skip, since on the time scale I’m looking at we are terraforming even our original planet. Mine is an optimistic position on this front, and I’m usually a pessimist. I’m not at all saying that we should disregard climate change as an issue; I just think we will be able to overcome it before it turns fatal.
As a proponent of using more energy, I am a fan of nuclear. I don’t dislike renewables, but they won’t ever be nearly enough to satisfy humanity’s growth. I also believe that we may find a kind of “loophole” in the universe (like the quantum energy teleportation protocol or Dyson spheres) that allows us to obtain energy at a much lower cost. A measurement such as the Kardashev scale kind of assumes this as well. I find this to be one of the only escapes from capitalism, as the end of scarcity should stop the cogs of the machine, as long as it’s not artificially generated.
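For reference, Carl Sagan’s continuous version of the Kardashev scale is a one-line formula, K = (log10 P − 6) / 10 with P the civilization’s power use in watts; plugging in a rough figure of about 2×10^13 W for humanity today (the exact number depends on the source) puts us around K ≈ 0.73:

```python
from math import log10

def kardashev(power_watts: float) -> float:
    """Sagan's continuous Kardashev rating: K = (log10(P) - 6) / 10."""
    return (log10(power_watts) - 6) / 10

print(f"Humanity today: K ~= {kardashev(2e13):.2f}")          # ~0.73 (rough)
print(f"Type I  (planetary, 1e16 W): {kardashev(1e16):.2f}")  # 1.00
print(f"Type II (stellar,  ~4e26 W): {kardashev(4e26):.2f}")  # ~2.06
```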
We can also apply ways of measuring alien civilizations to ourselves, for example assembly theory. While I don’t claim to understand everything about it, I have been fascinated by the ideas Lee Cronin presents, particularly regarding non-determinism, but for our purposes we only need to understand the assembly index.
Assembly theory quantifies how complex a given object is as a function of the number of independent parts and their abundances. To calculate how complex an item is, it is recursively divided into its component parts, and the ‘assembly index’ is defined as the minimum number of steps required to put the object back together.
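A brute-force sketch of that definition, using strings as stand-in objects (my own toy setup, not Cronin’s actual algorithm): start from single characters, join any two fragments already built, reuse them freely, and count the minimum number of joins:

```python
from itertools import product

def assembly_index(target: str) -> int:
    """Minimum number of join operations to build `target` from its
    characters, reusing any previously built fragment for free.
    Brute-force breadth-first search: only viable for short strings."""
    start = frozenset(target)  # the basic building blocks come for free
    frontier, seen = [start], {start}
    steps = 0
    while frontier:
        if any(target in state for state in frontier):
            return steps
        next_frontier = []
        for state in frontier:
            for a, b in product(state, repeat=2):
                joined = a + b
                if joined in target:  # only substrings of the target help
                    new_state = state | {joined}
                    if new_state not in seen:
                        seen.add(new_state)
                        next_frontier.append(new_state)
        frontier = next_frontier
        steps += 1
    return -1  # unreachable for a non-empty target

print(assembly_index("AAAA"))  # 2: A+A -> AA, then AA+AA -> AAAA
print(assembly_index("ABAB"))  # 2: A+B -> AB, then AB+AB -> ABAB
print(assembly_index("ABCD"))  # 3: no substructure to reuse
```

The reuse step is what makes the index interesting: “AAAA” needs only two joins because the fragment “AA” can be used twice, exactly the kind of shortcut that objects built by selection tend to exhibit.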
This index is useful to identify life (though there is valid criticism against it) as only living systems can produce complex molecules that could not form randomly in any abundance. The more complex a given object, the less likely an identical copy can exist without the selection of some information-driven mechanism that generates that object. The assembly index on its own cannot detect selection, but the copy number combined with the assembly index can. And by finding selection, we can identify life.
I like this approach as it also gives us real numbers that I can use in the equation as a way to measure our advancements. As a bonus, if aliens are searching for life elsewhere in this universe, they will probably be looking for the same sort of thing, as it’s the most straightforward way of measuring complexity.
I have already dropped a couple of hints throughout the text on the issue of growth, but it needs to be addressed directly. All this time I have been working under the assumption that constant growth is the only way forward, since, as far back as history goes, this growth has come with gigantic benefits for all of humanity. But is there a point at which growth stops being worth it? We are seeing a bit of that in the way the global markets were set up after WW2, where, as long as you keep growing, everything is fine. It’s commonly said that technological and economic advancements always trickle down to the lower strata, and that has probably held up until now, but it doesn’t have to keep going that way.
I hope that we will be able to recognize the situation if it ever comes to that, and adapt accordingly. Movements like effective accelerationism argue that life is a sort of fire that seeks out free energy in the universe in order to grow, and that growth is fundamental to life. They also believe that pure capitalism is the way to find the most optimal growth configurations. I don’t know if I agree with growth being a requisite for life, but I definitely don’t agree with the anarcho-capitalist utopia they chase. Even then, we are far from having to answer that question, and at the moment growth is the best strategy toward our goals, if there even is one.
Lastly, the question of free will is definitely an important aspect to take into account, but I am not planning to explore it for now. I am working under the assumption that it does exist in some form and that we are not deterministic machines, as some claim.
Edit: It took even less time than usual for me to hate this post. My intention was to provide some thoughts on how we should measure the advancement of humanity, but in trying to fit them to my narrative I have corrupted many of the useful parts. Nonetheless, the post will remain unchanged, as most of the content still qualifies as interesting.