
  • I appreciate the more substantial reply.

    Sure, OpenAI is currently losing money on it, but I’ve listed plenty of other companies beyond OpenAI, including those with their own LLM services.

    GenAI is not solely that $100B figure, nor is it just ChatGPT.

    but not showing that there’s real services or a real product

    I’ve repeatedly shown and linked services and products in this thread.

    this a speculative investment vehicle, not science or technology.

    You aren’t disproving it’s hypetrain with such small real examples

    This alone, I think, makes it pretty clear your position isn’t based on any rational perspective. You and the other person who keeps drawing its value back to its market value seem convinced that a technology still in its investment and growth stage not being immediately profitable == it’s a dead end. Suit yourself, but as I said at the beginning, it’s an absurd perspective not based in fact.




  • Boy, these goalposts sure are getting hard to see now.

    Is anybody paying for ChatGPT, the myriad of code-completion models, the hosting for them, Dialpad AI, Sider, and so on? Oh, I’m sure one or two people at least. A lot of tech (and non-tech) companies, mine included, do so for stuff like Dialpad and Sider, off the top of my head.

    Excluding the AI companies themselves (the ones who sell LLMs and access to them as a service), I’d imagine most of them are, since they don’t get billions in venture/investment funding like OpenAI, Copilot, etc. to float on. We usually only see revenue, not profitability, posted by companies. Again, the original point of this was whether GenAI is a “dead end”.

    Even if we lived in a world where revenue for a myriad of these companies hadn’t been increasing year over year, that still wouldn’t be sufficient to support the claim; e.g., open-source models and research inside and outside of academia would continue regardless.




  • Wow, such a compelling argument.

    If the rapid progress over the past 5 or so years isn’t enough (consumer-grade GPUs used to generate double-digit tokens per minute at best), and its widespread adoption and market capture aren’t enough, what is?

    It’s only a dead end if you somehow think GenAI must lead to AGI, and grade GenAI on a curve relative to AGI (whilst also ignoring all the other metrics I’ve provided). By that logic, zero-emission tech is a waste of time because it won’t lead to teleportation taking off.



  • Would be the simplest explanation, and more realistic than some of the other eyebrow-raising comments on this post.

    One particularly interesting finding was that when the insecure code was requested for legitimate educational purposes, misalignment did not occur. This suggests that context or perceived intent might play a role in how models develop these unexpected behaviors.

    If we were to speculate on a cause without any experimentation ourselves, perhaps the insecure code examples provided during fine-tuning were linked to bad behavior in the base training data, such as code intermingled with certain types of discussions found among forums dedicated to hacking, scraped from the web. Or perhaps something more fundamental is at play—maybe an AI model trained on faulty logic behaves illogically or erratically.

    As much as I love the speculation that we’ll just stumble onto AGI, or that current AI is some magical thing we don’t understand, ChatGPT sums it up nicely:

    Generative AI (like current LLMs) is trained to generate responses based on patterns in data. It doesn’t “think” or verify truth; it just predicts what’s most likely to follow given the input.

    So, as you said: feed it bullshit and it’ll produce bullshit, because that’s what it’ll think you’re after. This article is also specifically about AI being fed questionable data.
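
    To make the “predicts what’s most likely to follow” point concrete, here’s a minimal sketch of that prediction step. The model and library are my choice purely for illustration (the small open gpt2 checkpoint via Hugging Face transformers); all it does is rank candidate next tokens by probability, and nothing in the loop checks truth.

        # Minimal sketch: a causal LM only scores "which token comes next?"
        # gpt2 + Hugging Face transformers chosen purely for illustration.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tok = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")
        model.eval()

        inputs = tok("The moon is made of", return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

        # Probability distribution over the *next* token only.
        probs = torch.softmax(logits[0, -1], dim=-1)
        top = torch.topk(probs, k=5)
        for p, idx in zip(top.values, top.indices):
            print(f"{tok.decode([idx.item()])!r}: {p.item():.3f}")
        # Whatever patterns dominated the training data dominate this
        # ranking; feed it bullshit and the ranking reflects bullshit.

    (Needs torch and transformers installed; the exact top tokens will vary by model and training data, which is rather the point.)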