The alignment problem is already the wrong narrative, as it implies agency where there is none. All that talk about the “alignment problem” draws focus away from AI ethics (not a term I made up).
Read the article.
Then you highlight why AI Safety is important by linking to a blog post about the dangers of poorly thought-out AI systems.
Have you read the article? It clearly states the difference between AI safety and AI ethics and argues why the former are quacks and the latter is ignored.
If you read AI Safety trolley problems and think they are warning you about an AI god, you’ve misunderstood the purpose of the discussion.
Have you encountered what Sam Altman or Eliezer Yudkowsky claim about AI safety? It’s literally “AI might make humanity go extinct” shit.
The fear mongering Sam Altman is doing is a sales tactic. That’s the hypeman part.
I know of him and I enjoy his videos.
This post is especially ironic, since AI and its “safety researchers” make climate change worse by ridiculously increasing energy demands.
So-called “AI safety” researchers are nothing but hypemen for AI companies.
You can also just download any binary file you find online and run it. Or use any install.sh script you happen to find anywhere. Package managers are simply a convenient offer: they manage packages with their dynamically linked libraries and keep them up to date (important for security). But it’s still just an offer.
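If you do run a script you grabbed from the web outside a package manager, you can at least verify its integrity against a checksum the vendor publishes before executing it. A minimal sketch (the file name and its contents are purely illustrative; in practice the .sha256 value comes from the vendor, not from your own machine):

```shell
# Stand-in for a script downloaded from somewhere on the internet
printf 'echo hello\n' > install.sh

# The vendor would publish this checksum alongside the download
sha256sum install.sh > install.sh.sha256

# Verify before running; prints "install.sh: OK" on a match,
# and fails loudly if the file was tampered with or corrupted
sha256sum -c install.sh.sha256
```

This only proves the file you got matches the file that was published; it doesn’t vet the script itself, and it gives you none of the dependency tracking or security updates a package manager provides.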