

Does that still apply when the CEO is also Emperor of the United States?
So I’d go with no at the moment, because I can easily get an LLM to contradict itself repeatedly in incredibly obvious ways.
I had a long ass post, but I think it comes down to this: we don’t know what consciousness or self awareness even are, and we just kind of collectively agree upon it when we think we see it, sort of like how morality is pretty much a mutable group consensus.
The only way I think we could be truly sure would be to stick it in a simulated environment and see how it reacts over a few thousand simulated years to figure out whether it’s one of the following:
Now personally I think that test is likely impractical, so we’re probably going to default to “it’s conscious when it can convince the majority of people that it’s conscious for a sustained period”… So I guess it has free will when it can start, or at least spark, a large grassroots civil rights movement?
I’d say it ends when you can’t predict with 100% accuracy, 100% of the time, how an entity will react to a given stimulus. With current LLMs, if I run one with the same input it will always do the same thing. And I mean really the same input, not putting the same prompt into ChatGPT twice and getting different results because there’s an additional random number generator I don’t have access to.
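The determinism point can be illustrated with a toy sketch (not any real LLM API, just a stand-in decoder): greedy decoding is a pure function of its input, and even sampling is reproducible once you control the seed; the apparent randomness of a hosted chat comes entirely from a seed you can’t see.

```python
import random

def greedy_next_token(logits):
    """Deterministic decoding: always pick the highest-scoring token."""
    return max(range(len(logits)), key=lambda i: logits[i])

def sampled_next_token(logits, rng):
    """Stochastic decoding: sample a token with probability proportional to its score."""
    return rng.choices(range(len(logits)), weights=logits, k=1)[0]

logits = [0.1, 0.7, 0.2]  # toy scores for a 3-token vocabulary

# Same input -> same output, every single run.
assert all(greedy_next_token(logits) == 1 for _ in range(1000))

# Sampling is also deterministic once the seed is fixed...
a = sampled_next_token(logits, random.Random(42))
b = sampled_next_token(logits, random.Random(42))
assert a == b

# ...so the only nondeterminism left is the hidden seed on the server,
# which is why the same prompt typed twice into a chat UI can differ.
```

The point is that the model itself behaves like `greedy_next_token` here: feed it identical bits, get identical bits back.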
R.I.P Razer, robbed of a championship victory despite dropping its opponent in the pit (and iirc suffering 0 damage itself) because the judges decided to ignore a clear violation of the rules regarding overall size.
Possible dementia? I watched one of his speeches live recently and he randomly jumped backwards almost to the beginning and repeated himself without batting an eye at least twice. The creepy part was the reporters just ignoring it and continuing to ask questions as if nothing had happened.
It got worse than this. The ticketing company really wanted to get the money from him, so when he got hold of a copy of the records and pointed out that one ticket was for a completely different car, they modified the records on their end to change the make of the car so it would match his. iirc he only got out of it because he had paper copies.
So I read some interesting stuff on this recently. (Ignoring that brain size isn’t as important as brain complexity for intelligence,) a lot of creatures with big brains, including our ancestors and elephants, had/have most of the extra mass in regions related to memory. The theory goes that simply remembering where everything is and picking the most likely solution (e.g. the nearest watering hole where you saw water at this time last year) is generally more effective than traits like creativity and imagination… right up until you hit a break point where you start making tools and seriously modifying your environment. As we developed agriculture we had less of a need to remember every little thing, so while we didn’t get less intelligent we did end up with worse memories, possibly gaining an even greater degree of creativity in return as those parts of the brain became more valuable in the new self-created environment.
Crows are really communal though, so they might be a better bet for human-like intelligence in the long run. Magpies too, not because they’d help but because they’re both intelligent and total jerks, so even the eventual crow people wouldn’t get to be happy, thus maximizing the chaos.