• NeoNachtwaechter@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    edit-2
    11 days ago

    “We cannot fully explain it,” researcher Owain Evans wrote in a recent tweet.

    They should accept that somebody has to find the explanation.

    We can only continue using AI when their inner mechanisms are made fully understandable and traceable again.

    Yes, it means that their basic architecture must be heavily refactored. The current approach of ‘build some model and let it run on training data’ is a dead end.

    • TheTechnician27@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      11 days ago

      A comment that says “I know not the first thing about how machine learning works but I want to make an indignant statement about it anyway.”

    • WolfLink@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      1
      ·
      11 days ago

      And yet they provide a perfectly reasonable explanation:

      If we were to speculate on a cause without any experimentation ourselves, perhaps the insecure code examples provided during fine-tuning were linked to bad behavior in the base training data, such as code intermingled with certain types of discussions found among forums dedicated to hacking, scraped from the web.

      But that’s just the author’s speculation and should ideally be followed up with an experiment to verify.

      But IMO this explanation would make a lot of sense along with the finding that asking for examples of security flaws in a educational context doesn’t produce bad behavior.

    • CTDummy@lemm.ee
      link
      fedilink
      English
      arrow-up
      0
      ·
      edit-2
      11 days ago

      Yes, it means that their basic architecture must be heavily refactored. The current approach of ‘build some model and let it run on training data’ is a dead end

      a dead end.

      That is simply verifiably false and absurd to claim.

      Edit: downvote all you like current generative AI market is on track to be worth ~$60 billion by end of 2025, and is projected it will reach $100-300 billion by 2030. Dead end indeed.

      • bane_killgrind@slrpnk.net
        link
        fedilink
        English
        arrow-up
        1
        ·
        11 days ago

        What’s the billable market cap on which services exactly?

        How will there be enough revenue to justify a 60 billion evaluation?

        • CTDummy@lemm.ee
          link
          fedilink
          English
          arrow-up
          0
          ·
          11 days ago

          Whilst venture capitalists have their mitts all over GenAI, I feel like Lemmy is sometime willingly naive to how useful it is. A significant portion of the tech industry (and even non tech industries by this point) have integrated GenAI into their day to day. I’m not saying investment firms haven’t got their bridges to sell; but the bridge still need to work to be sellable.

            • CTDummy@lemm.ee
              link
              fedilink
              English
              arrow-up
              0
              ·
              edit-2
              11 days ago

              So no tech that blows up on the market is useful? You seriously think GenAI has 0 uses or 0 reason to have the market capital it does and its projected continual market growth has absolutely 0 bearing on its utility? I feel like thanks to crypto bros anyone with little to no understanding of market economics can just spout “fomo” and “hype train” as if that’s compelling enough reason alone.

              The explosion of research into AI? It’s use for education? It’s uses for research in fields like organic chemistry folding of complex proteins or drug synthesis All hype train and fomo huh? Again: naive.

                • CTDummy@lemm.ee
                  link
                  fedilink
                  English
                  arrow-up
                  0
                  ·
                  11 days ago

                  Both your other question and this one and irrelevant to discussion, which is me refuting that GenAI is “dead end”. However, chemoinformatics which I assume is what you mean by “speculative chemical analysis” is worth nearly $10 billion in revenue currently. Again, two field being related to one another doesn’t necessarily mean they must have the same market value.

                  • bane_killgrind@slrpnk.net
                    link
                    fedilink
                    English
                    arrow-up
                    0
                    ·
                    10 days ago

                    Right, and what percentage of their expenditures is software tooling?

                    Who’s paying for this shit? Anybody? Who’s selling it without a loss? Anybody?

        • CTDummy@lemm.ee
          link
          fedilink
          English
          arrow-up
          0
          arrow-down
          1
          ·
          edit-2
          11 days ago

          Wow, such a compelling argument.

          If the rapid progress over the past 5 or so years isn’t enough (consumer grade GPU used to generate double digit tokens per minute at best), it’s wide spread adoption and market capture isn’t enough, what is?

          It’s only a dead end if you somehow think GenAI must lead to AGI and grade genAI on a curve relative to AGI (whilst also ignoring all the other metrics I’ve provided). Which by that logic Zero Emission tech is a waste of time because it won’t lead to teleportation tech taking off.