• 0 Posts
  • 11 Comments
Joined 2 years ago
Cake day: June 21st, 2023


  • I’ve talked with this person in the past, and they’re very reasonable in general, although I disagree with their stance on not offering the hardware on its own and letting the user 3D print their own case. They currently don’t want to do that because they can control the quality of the prints that ship with their product, and they anticipate (based on previous experience, IIRC) an increase in support needs for self-printed cases that they aren’t willing to bear and aren’t willing to allow. They want this sold as a finished product that happens to include some 3D printed parts, which you CAN swap out later at your own risk. That makes enough sense, even though it’s not how I would prefer it.

  • A false statement would be me saying, without knowing, that a light I cannot see and have never seen, one that is currently red, is actually green. I’m just as likely to be right as to be wrong; statistics are involved.

    A lie would be me knowing that the light I am currently looking at is red and saying that it is actually green. No statistics; I’ve done this intentionally, and the only outcome of my decision to act was that I spoke a falsehood.

    AIs can generate false statements, yes, but they are not capable of lying. Lying requires cognition, which LLMs, by their own admission and by the admission of the companies developing them, are at the very least not currently capable of, and personally I believe it’s likely they never will be.

  • I think the important point is that LLMs as we understand them do not have intent. They are fantastic at producing output that appears to meet the requirements set in the input text, and when they actually do meet those requirements rather than just seeming to, they can provide genuinely helpful info. But it’s very easy not to immediately tell the difference between output that merely looks correct, which satisfies the purpose of an LLM, and output that actually is correct, which satisfies the purpose of the user.