“The real benchmark is: the world growing at 10 percent,” he added. “Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we’ll be fine as an industry.”

Needless to say, we haven’t seen anything like that yet. OpenAI’s top AI agent — the tech that people like OpenAI CEO Sam Altman say is poised to upend the economy — still moves at a snail’s pace and requires constant supervision.

  • halcyoncmdr@lemmy.world · 12 days ago

    Correction, LLMs being used to automate shit doesn’t generate any value. The underlying AI technology is generating tons of value.

    AlphaFold 2 has advanced protein-folding research by multiple decades in just a couple of years, taking us from roughly 150,000 known protein structures to 200 million.

    • shaggyb@lemmy.world · 12 days ago

      Well sure, but you’re forgetting that the federal government has pulled the rug out from under health research and has therefore made it so there is no economic value in biochemistry.

      • scarabic@lemmy.world · 12 days ago (edited)

        How is that a qualification on anything they said? If our knowledge of protein folding has gone up by multiples, then it has gone up by multiples, regardless of whatever funding shenanigans Trump is pulling or what effects those might eventually have. None of that detracts from the value that has already been delivered, so I don’t see how they are “forgetting” anything. At best, it’s a circumstance that may play in economically but doesn’t say anything about AI’s intrinsic value.

    • DozensOfDonner@mander.xyz · 12 days ago

      Yeah tbh, AI has been an insanely helpful tool in my analysis and writing. Never would I have been able to thoroughly investigate appropriate statistical tests on my own. After following the sources and double-checking, of course, but still, super helpful.

    • Mrkawfee@lemmy.world · 12 days ago

      Thanks. So the underlying architecture that powers LLMs has application in things besides language generation like protein folding and DNA sequencing.

        • dovah@lemmy.world · 12 days ago

          You are correct that AlphaFold is not an LLM, but both are possible because of the same breakthrough in deep learning, the transformer, and so they share similar architectural components.
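          A minimal sketch of that shared component, scaled dot-product attention, in plain NumPy; the shapes here are illustrative toy dimensions, not AlphaFold's or any real model's:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)        # pairwise query/key similarity
    weights = softmax(scores, axis=-1)   # each row is a distribution over keys
    return weights @ V                   # weighted mix of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 tokens (or residues), 8-dim embeddings
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = attention(Q, K, V)
print(out.shape)  # (4, 8)
```

          Whether the rows are words in a sentence or residues in a protein, the mechanism is the same: every element attends to every other.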

          • Calgetorix@lemmy.world · 12 days ago

            And all that would not have been possible without linear algebra and calculus, and so on and so forth… Come on, the work on transformers is clearly separable from deep learning.

  • CompostMaterial@lemmy.world · 11 days ago

    That is not at all what he said. He said that creating some arbitrary benchmark for the level or quality of the AI (e.g. “smarter than a 5th grader” or “as intelligent as an adult”) is meaningless, and that the real measure is whether value is created and put out into the real world. He also mentions global growth of 10% as that benchmark. He doesn’t provide data correlating such growth with the use of AI, and I doubt that such data exists yet. Let’s not twist what he said into “Microsoft CEO says AI provides no value” when that is not what he said.

    • Melvin_Ferd@lemmy.world · 10 days ago (edited)

      AI is the immigrants of the left.

      Of course he didn’t say this. The media want you to think he did.

      “They’re taking your jobs”

  • Mrkawfee@lemmy.world · 12 days ago (edited)

    Is he saying it’s just LLMs that are generating no value?

    I wish reporters could be more specific with their terminology. They just add to the confusion.

    Edit: he’s talking about generative AI, of which LLMs are a subset.

  • werefreeatlast@lemmy.world · 12 days ago

    That’s because they want to use AI in a server scenario where clients log in. That, translated to American English and spoken with honesty, means they are spying on you. Anything you do on your computer is subject to automatic spying. Like you could be totally under the radar, but as soon as you say the magic words together, bam! …I’d love a sling thong for my wife… bam! Here’s 20 ads, just click to purchase, since they already stole your wife’s boob size and body measurements and preferred lingerie styles. And if you’re on McMaster… Hmm, I need a 1/2 pipe and a cap… Better get two caps in case you cross-thread one… ding dong! FBI! We know you’re in there! Come out with your hands up!

    • epicstove@lemmy.ca · 12 days ago

      The only thing stopping me from switching to Linux is some college software (Won’t need it when I’m done) and 1 game (which no longer gets updates and thus is on the path to a slow sad demise)

      So I’m on the verge of going Penguin.

        • epicstove@lemmy.ca · 8 days ago

          Plants vs. Zombies: Garden Warfare 2. It used to run on Linux, but then they added Easy Anti-Cheat, which broke compatibility.

          They don’t update the game anymore, and they’ve essentially abandoned the franchise after the flop that was Battle for Neighborville, so I’m not too sad leaving it behind. It was a very fun game, though.

          • JaddedFauceet@lemmy.world · 6 days ago

            That’s sad… I’m not able to play Battlefield 2042 after switching to Linux either, for the same reason: their anti-cheat is not compatible.

      • Jeena@piefed.jeena.net (OP) · 12 days ago

        Yeah, use Windows in a VM and your game will probably just work too. I was surprised that all the games I have on Steam now just work on Linux.

        Years ago, when I switched from OS X to Linux, I stopped gaming for that reason. But I started testing my old games and suddenly there were no problems with them anymore.

  • WalnutLum@lemmy.ml · 12 days ago

    He probably saw that SoftBank and Masayoshi Son were heavily investing in it and figured it was dead.

  • Kokesh@lemmy.world · 12 days ago

    It is fun to generate some stupid images a few times, but you can’t trust that “AI” crap with anything serious.

    • Encrypt-Keeper@lemmy.world · 12 days ago

      I was just talking about this with someone the other day. While it’s truly remarkable what AI can do, its margin for error is just too big for most, if not all, of the use cases companies want to use it for.

      For example, I use the Hoarder app which is a site bookmarking program, and when I save any given site, it feeds the text into a local Ollama model which summarizes it, conjures up some tags, and applies the tags to it. This is useful for me, and if it generates a few extra tags that aren’t useful, it doesn’t really disrupt my workflow at all. So this is a net benefit for me, but this use case will not be earning these corps any amount of profit.
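      For anyone curious, the core of that pipeline is small. Here is a sketch against Ollama’s default local generate endpoint; the model name, prompt, and helper names are illustrative, not Hoarder’s actual implementation:

```python
import json
import urllib.request

# Ollama's default local API endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"

def parse_tags(model_output: str) -> list[str]:
    """Normalize a comma-separated tag line from the model into clean tags."""
    return [t.strip().lower() for t in model_output.split(",") if t.strip()]

def tag_bookmark(page_text: str, model: str = "llama3") -> list[str]:
    """Ask a local Ollama model to suggest tags for a saved page.

    A few extra or odd tags are harmless in this use case, which is
    exactly why a low-stakes workflow like this absorbs the error rate.
    """
    prompt = ("Suggest 3-5 short topic tags, comma-separated, "
              "for this page:\n\n" + page_text[:4000])
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps({"model": model, "prompt": prompt,
                         "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)["response"]
    return parse_tags(reply)

# parse_tags(" Linux, Self-Hosting , AI ") -> ["linux", "self-hosting", "ai"]
```

      The point is that a wrong tag costs nothing, unlike a wrong answer in search.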

      On the other end, you have Google’s Gemini, which now gives you an AI-generated answer to your queries. The point of this is to aggregate data from several sources within the search results and return it to you, saving you the time of having to look through several search results yourself. And like 90% of the time it actually does a great job. The problem is the combination of that goal (saving you from checking individual sources) and its reliability rate. If I google 100 things and Gemini answers 99 of them accurately but completely hallucinates the 100th, then all 100 times I have to check its sources and verify that what it said was correct. Which means I’m now back to just… you know… looking through the search results one by one, like I would have anyway without the AI.

      So while AI is far from useless, it can’t currently be relied on for anything important, and may never be. And that’s where the money to be made is.

      • dubyakay@lemmy.ca · 12 days ago

        Even your manual search results may surface incorrect sources, selection bias for what you want to see, heck, even AI-generated slop, so the AI-generated results will just be another layer on top. Link-aggregating search engines are slowly becoming useless at this rate.

        • Encrypt-Keeper@lemmy.world · 12 days ago

          While that’s true, the thing that stuck out to me is not even that the AI was misled by finding AI slop, or by somebody falsely asserting something. I googled something with a clear yes-or-no answer: “Does X technology use Y protocol?” The AI came back with “Yes it does, and here’s how it uses it”, and upon visiting the reference page for that answer, it was documentation for that technology which explained very clearly that X technology does NOT use Y protocol, and then went into detail on why it doesn’t. So even when everything lines up and the answer is clear and unambiguous, the AI can give you an entirely fabricated answer.

          • merc@sh.itjust.works · 9 days ago

            What’s really awful is that it seems like they’ve trained these LLMs to be “helpful”, which means to say “yes” as much as possible. But, that’s the case even when the true answer is “no”.

            I was searching for something recently. Most people with similar searches were trying to do X; I was trying to do Y, which is different in subtle but important ways. There are tons of resources out there showing how to do X, but none showing how to do Y. The “AI” answer gave me directions for doing Y by showing me the procedure for doing X, with certain parts changed so that they match Y instead. It doesn’t work like that.

            Like, imagine a recipe that doesn’t just use sugar but relies on key properties of sugar to work, something like caramel. Search for “how do I make caramel with stevia instead of sugar” and the AI gives you the recipe for making caramel with sugar, just with “stevia” replacing every mention of “sugar” in the original recipe. Absolutely useless, right? The correct answer would be “You can’t do that, the properties are just too different.” But an LLM knows nothing, so it is happy just to substitute words in a recipe and be “helpful”.

  • funkless_eck@sh.itjust.works · 12 days ago

    I’ve been working on an internal project for my job: a quarterly report on the most bleeding-edge use cases of AI. The stuff being achieved is genuinely impressive.

    So why is the AI at the top end amazing yet everything we use is a piece of literal shit?

    The answer is the chatbot. If you have the technical nous to program machine learning tools, it can accomplish truly stunning things at speeds not seen before.

    If you don’t know how to do, e.g., a Fourier transform, you lack the skills to use the tools effectively. That’s no one’s fault; not everyone needs that knowledge. But it does explain the gap between promise and delivery: it can only help you do, faster, what you already know how to do.

    Same for coding: if you understand what your code does, it’s a helpful tool for unsticking part of a problem, but it can’t write the whole thing from scratch.
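    To make the Fourier example concrete: a tool can generate a snippet like the one below in seconds, but only someone who knows what a spectrum is can judge whether the output makes sense. A minimal NumPy sketch that recovers the frequency of a test signal:

```python
import numpy as np

# 1 second of a 50 Hz sine wave sampled at 1 kHz.
fs = 1000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 50 * t)

spectrum = np.abs(np.fft.rfft(signal))        # magnitude spectrum
freqs = np.fft.rfftfreq(signal.size, d=1/fs)  # frequency axis in Hz
peak = freqs[np.argmax(spectrum)]             # dominant frequency
print(peak)  # 50.0
```

    Knowing that the peak should land at 50 Hz, and why aliasing or windowing could move it, is exactly the expertise the chatbot cannot supply.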

    • raspberriesareyummy@lemmy.world · 12 days ago

      So why is the AI at the top end amazing yet everything we use is a piece of literal shit?

      Just that you call an LLM “AI” shows how unqualified you are to comment on the “successes”.

      • funkless_eck@sh.itjust.works · 11 days ago

        What are you talking about? I read the papers published in mathematical and scientific journals and summarize the results in a newsletter. As long as you know the equivalent of undergrad statistics, calculus and algebra, anyone can read them; you don’t need a qualification, and you can just Google each term you’re unfamiliar with.

        While I understand your objection to the nomenclature, in this particular context all major AI-production houses including those only using them as internal tools to achieve other outcomes (e.g. NVIDIA) count LLMs as part of their AI collateral.

        • raspberriesareyummy@lemmy.world · 11 days ago

          The machine-learning mechanism at the core of LLMs, trained on data, is statistics without contextual understanding; the output is therefore only statistically predictable, not reliable. Labeling this as “AI” is misleading at best, and directly undermines democracy and freedom in practice, because the impressively intelligent-looking output leads naive people to believe the software knows what it is talking about.

          People who condone the use of the term “AI” for this kind of statistical approach are naive at best, snake-oil vendors, or straight-out enemies of humanity.
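          To illustrate the statistical core being described, here is a toy next-word predictor built purely from co-occurrence counts. It is grossly simplified compared to a real LLM, but it shows prediction without any understanding of what the words mean:

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent follower, or None."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(most_likely_next("the"))  # "cat" (2 of the 4 continuations)
```

          The model outputs “cat” after “the” because of frequency alone; it has no concept of cats, and scaling the same idea up does not by itself add one.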

  • ToaLanjiao@lemmy.world · 12 days ago

    LLMs in non-specialized application areas basically reproduce search. In specialized fields, most do the work that automation, data analytics, pattern recognition, purpose-built algorithms and brute force did before. And yet the companies charge n× the amount for what are essentially these very conventional approaches, plus statistics. Not surprising at all. I’m just in awe that the parallels to snake oil weren’t immediately obvious.

    • Arghblarg@lemmy.ca · 12 days ago

      I think AI is generating negative value … the huge power usage is akin to speculative blockchain currencies. Barring some biochemistry and other very, very specialized uses it hasn’t given anything other than, as you’ve said, plain-language search (with bonus hallucination bullshit, yay!) … snake oil, indeed.

  • bearboiblake@pawb.social · 11 days ago (edited)

    microsoft rn:

    ✋ AI

    👉 quantum

    can’t wait to have to explain the difference between asymmetric-key and symmetric-key cryptography to my friends!

      • bearboiblake@pawb.social · 11 days ago

        I’m not an expert by any means, but from what I understand, most symmetric-key and hashing cryptography will probably be fine, but asymmetric-key cryptography is where the problems will be. Lots of stuff uses asymmetric-key cryptography, like HTTPS for example.
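        To make that concrete, here is a toy RSA round trip with deliberately tiny primes, showing that the private key falls out of factoring the public modulus, which is exactly the step Shor’s algorithm would make fast on a quantum computer. Real keys use 2048-bit-plus moduli; these numbers are purely illustrative:

```python
# Toy RSA: the scheme's security rests entirely on factoring n being hard.
p, q = 61, 53
n = p * q                # 3233, the public modulus (public key part)
phi = (p - 1) * (q - 1)  # 3120, only computable if you know p and q
e = 17                   # public exponent, coprime with phi
d = pow(e, -1, phi)      # private exponent: modular inverse of e mod phi

msg = 42
cipher = pow(msg, e, n)  # encrypt with the public key
plain = pow(cipher, d, n)  # decrypt with the private key
assert plain == msg

# An attacker who factors n into p and q recovers phi and then d.
# Symmetric ciphers and hashes expose no such structure; Grover's
# search only roughly halves their effective key length.
```

        That asymmetry is why post-quantum migration efforts focus on key exchange and signatures rather than on AES or SHA-2.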