Eskating cyclist, gamer, and enjoyer of anime. Probably an artist. Also I code sometimes, pretty much just to mod Titanfall 2 tho.

Introverted, yet I enjoy discussion to a fault.

  • 42 Posts
  • 1.1K Comments
Joined 3 years ago
Cake day: June 13th, 2023

  • The closest thing I can think of is Soulseek.

    You can find almost anything on there. People share their entire collections, and almost everyone has some niche stuff they like.

    I’ve spent hours exploring other people’s curated libraries, finding stuff I’d never heard.

    I don’t see how this would work financially, tho. Soulseek doesn’t make anyone money, except when I go out of my way to buy something on Qobuz or Bandcamp because I really liked it.

    Music is art. Like visual art, it’s simple enough for one person or a couple of people to produce, but unlike visual art, it’s less commonly done on commission. Which means freely sharing your music doesn’t typically put food on the table.

    Hence, musicians sell albums or singles. Preferably directly to their fans. SoundCloud, YT, and Soulseek regularly help me find new artists I like… But for actual listening I pull up Symfonium, hooked up to my Jellyfin server, serving my carefully curated personal collection.

  • “Like you said, it might be impossible to avoid ascribing things like intentionality to it.”

    That’s not what I meant. When you say “it makes stuff up”, you are describing how the model statistically predicts the expected output.

    You know that. I know that.

    That’s the asterisk: the more in-depth explanation that a lot of people won’t bother to learn. Someone who doesn’t read that far into it can read that same phrase and assume we’re discussing what type of personality LLMs exhibit, that they are “liars”. But they’d be wrong. Neither of us is attributing intention to it or discussing what kind of “person” it is; in reality we’re referring to the fact that it’s “just” a really complex probability engine that can’t “know” anything (there’s a toy sketch at the bottom of this comment).

    No matter what word we use, if it’s a pre-existing word it will come with pre-existing meanings that are kinda right, but not quite, so everyone involved in a discussion has to know things that won’t be explained every time a term or phrase is used.

    The language isn’t “inaccurate” between you and me, because you and I know the technical definition, and therefore which aspect of LLMs is being discussed.

    Terminology that is “accurate” without this context does not and cannot exist, short of coming up with completely new words.
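
    Here’s that toy sketch of what “statistically predicts the expected output” means. The prompt and the numbers are made up for illustration, and a real model works over billions of parameters rather than a hand-written table, but the shape of the idea is the same:

    ```python
    import random

    # Hypothetical next-token distribution an engine might assign after the
    # prompt "The capital of Australia is". Probabilities invented for the sketch.
    next_token_probs = {
        "Canberra": 0.55,   # probable, and happens to be true
        "Sydney": 0.40,     # probable, and false -- sampled often enough to matter
        "Melbourne": 0.05,
    }

    def sample_next_token(probs: dict[str, float]) -> str:
        """Pick one token in proportion to its probability -- truth never enters into it."""
        tokens = list(probs)
        weights = [probs[t] for t in tokens]
        return random.choices(tokens, weights=weights, k=1)[0]

    print(sample_next_token(next_token_probs))
    # Sometimes prints "Sydney". That isn't a "lie", just a probable-looking continuation.
    ```

    Scale that up enough and you get fluent text, with no notion of “knowing” anywhere in the process.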


  • Yes.

    Who are you trying to convince?

    What AI is doing is making things up.

    This language also credits LLMs with an implied ability to think, which they don’t have.

    My point is we literally can’t describe their behaviour without using language that makes it seem like they do more than they do.

    So we’re just going to have to accept that discussing it comes with a bunch of asterisks that a lot of people will ignore, and which many will actively try to hide in an effort to hype up the possibility that this tech is a stepping stone to AGI.