← jackdoe
AM I RATIONAL?
Yes, sometimes.
At least I try. When I fear something, for example a daddy long legs spider, I think "I am much bigger than it, and faster; it is irrational to be afraid, I will just catch it on a piece of paper and put it outside". But why, then, am I afraid of it? And why do I have to do it every time?
Well, you can argue that my fear is in fact very rational: in the context of human evolution, it is much better to be afraid of spiders than not.
What about other emotions: anger, love, compassion? Maybe those as well are rational in the limit. So in a Laplacian way, if you know everything about a human being, inside and out, all their actions are rational.
I will argue this is not true.
First, let me introduce you to Laplace's demon:
"We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past could be present before its eyes."
-- Pierre Simon Laplace, A Philosophical Essay on Probabilities
Even if you are a believer in strong emergence (some argue, for example, that abiogenesis, the origin of life, cannot be deduced from its primitive components), you can just move Laplace's demon up the ladder, apply it in the emergent world, and ask the same question.
And now, the question, when Abraham went to kill his only son, was he rational?
Oh, you smile, of course this is irrational, I would never do such a thing. But the truth is, if he heard God, the most rational action is to obey.
However, deep down, I know: if such a thing happens to me, I will revolt.
I carry the original sin, and Laplace's demon cannot see in me.
I do not believe. To believe means to have no faith; to have faith is to know. To know that you are free to revolt. The sinner of all sinners. To be able to say Raca to yourself, and to stand on the day of judgement, nothing left unlived.
Can the demon see my revolt?
You say yes, as he is all seeing. He sees all that is past and all that is future. But, the truth is in the eye of the beholder.
And I know.
I am the beholder, and I say: no.
And you, you must decide.
***
Now, time for a more uncomfortable question: imagine the text above was written by ChatGPT or Opus. What then?
Both ChatGPT and Opus are neural networks with some variation of the transformer architecture. The two most important parts of the transformer are the attention mechanism and the residual stream. The attention mechanism allows the network to mix latent information from the inputs to predict the next word, by deeply understanding the relationships between all the symbols it has seen so far, and the residual stream allows the error signal to propagate freely into both the attention and the compute (MLP) layers, so that the network can learn a better understanding.
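Both mechanisms can be sketched in a few lines of numpy. This is only a minimal, untrained, single-head sketch with made-up dimensions, not how the real models are built, but it shows the shape of the idea: attention mixes information across positions, and the residual connection adds that mix back onto the input instead of replacing it.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_block(x, wq, wk, wv):
    # x: (seq_len, d_model); single-head scaled dot-product attention
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    # causal mask: each position attends only to itself and earlier positions
    scores[np.triu(np.ones(scores.shape, dtype=bool), k=1)] = -np.inf
    mixed = softmax(scores) @ v
    return x + mixed  # residual: the block adds to the stream, never replaces it

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                       # 5 tokens, d_model = 8
wq, wk, wv = (rng.normal(size=(8, 8)) * 0.1 for _ in range(3))
out = attention_block(x, wq, wk, wv)
print(out.shape)  # (5, 8): same shape in and out, so blocks can be stacked
```

Because the block returns x plus its output, stacking many such blocks leaves a straight additive path from the loss back to every layer, which is the "free propagation" described above.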
There is no problem for a transformer to write what I wrote above; in fact, you can easily copy the text, say "Write this text verbatim", and it will do it.
However, there is no moment at which it can choose.
The word that comes out is a function of the data it was trained on, and the words so far.
In fact, I think the best way to look at the transformer is as a homoiconic machine. Homoiconic in the sense that attention is data that operates on itself to produce output that changes itself. The tokens are a program that extracts certain patterns from the weights, but each word changes exactly which pattern is being extracted.
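The loop this describes can be made concrete with a toy. The "model" below is just a hypothetical bigram lookup table, nothing like a real transformer, but the feedback structure is the same: the weights are frozen, and every emitted word is appended to the input, changing what gets extracted on the next step.

```python
# Frozen "weights": a fixed word-to-next-word table (a stand-in for the model).
next_token = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}

def generate(prompt, steps):
    tokens = prompt.split()
    for _ in range(steps):
        # the output of each step becomes part of the input of the next:
        # the token sequence is the "program" rewriting its own input
        tokens.append(next_token[tokens[-1]])
    return " ".join(tokens)

print(generate("the", 4))  # → the cat sat on the
```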
In fact, I argue that language models are actually horrible at language, and brilliant at reason. This is why the language they emit sounds like this: "You're not free because you revolt. You're free because you could have obeyed"
I mean.. what the fuck is this?
Fuck you.
We do many mechanical things; for example, I can move a pile of bricks from one place to another, and I can also do mathematics in my head. Just as I am not a forklift because I can move bricks, I am not a computer because I can compute.
My brain, however, has certain properties similar to the transformer; for example, many of my thoughts can actually be trivially reconstructed with a small neural network and a stream of my brain's electrical signals.
I can produce words, and so can the transformer.
The difference, however, is in how I assign meaning and purpose to those words.
A neural network is a function approximator, e.g. if I have the inputs 1 2 3 4 and the outputs 2 4 6 8, and I train a small transformer network on them, it will learn that for 7 the output is 14. It is obvious that the function is x*2, but if we actually take the mathematical expression of the transformer itself, and try to simplify and reduce it, it will not be equal to x*2, but to something that very, very closely approximates it. It does not find the generator of the signal.
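This is easy to see even in the simplest possible case: a one-parameter linear model fit by gradient descent on the same toy data (plain Python; learning rate and step count chosen arbitrarily). After training, the weight sits very close to 2, but it is a numerical approximation of the generator, not the generator x*2 itself.

```python
# Fit y = 2x with a single weight w, by gradient descent on squared error.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

w = 0.0
lr = 0.01
for _ in range(50):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

print(w)      # close to 2 (about 1.9994), but never the symbol x*2
print(w * 7)  # close to 14 on the unseen input
```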
Imagine that this very text gets into the training set of the next ChatGPT. What would the attention learn? What would the MLP compute? Right now, each token has to be guessed, including this one, and this one.. <HA! 57762094 I BET YOU DIDNT GUESS THAT!>, but a good demon would have seen 57762094 coming, so I have proven nothing...
What exactly will it approximate? Is the reason for each token inside the data?
This is why language models are amazing at reason, code, anything that can be codified. This is why a language model can be a dermatologist but not a psychologist.
I also realize how language affects me, and what it actually means to use language. When I read, half goes from me into the book, and half from the book into me. I read through my world, and I accept the world of the author.
When I write programs, I actually pretend to be a machine; I execute each instruction, sometimes I even pretend to be an SRAM cell, and that does change me. This is what it means to think: to change yourself.
I find it really hard now to read transformer words, because they are programs, and such subtle ones that I can sense them changing me. There is no author; it is just me.
It is actually programming me.
"This is a striking piece" the AI says.
Nonsense.
***
In the end, I must decide if I am a machine or not, and luckily for me, it is a question of faith.
***
And now the grand finale, what if a transformer wrote the second part? What then?