why?
I’m going to go with slowness, which has multiple causes (and that goes for interpreted languages in general). In some cases, things might be usable if I weren’t still on Zen+ (newer hardware has better IPC, among other things).
Things like the JIT or the no-GIL build might reduce that, but I’m not sure it’s that easy to fix (them not being the default, plus there being multiple build options, even seems to complicate bindings).
Same, so I’ll only answer for me: Python is dependency hell, and it also breaks existing code with every second update. Hard pass.
Still remembering the Python 3 release from 17 years ago?
They have breaking changes in their minor versions…
Python versioning is terrible
We are no longer in the Python 2 days. You have lots of wiggle room for using the version you want and are rarely forced to use specific releases.
There’s still plenty of “this version of PyTorch doesn’t run reliably with Python 3.12, please use 3.10”, though. It’s not all sunshine and roses.
pytorch is a unique kind of mess
If you’re writing Python code you have to deal with versioning, yeah. But the end user basically never has to care.
Except if they then have to run it on their machine and the setup instructions start with setting up a venv. I find that a lot of Python software in the ML realm makes no effort to isolate the end user from the complexities of the platform. At best you get a setup script that may or may not create a working venv without manual intervention, usually the latter. It might be more of a Torch issue than a Python one but it still means spending a lot of time messing with the Python environment to get things running.
This may color my perception but the parts of the Python ecosystem I get exposed to as an end user these days feel very hacky. (Not all of it is, though; I remember from my Gentoo days that Portage was rock solid.)
Figuring out venvs was a bit frustrating for me. Particularly since the steps I was following to install a particular app mentioned nothing about them, so I just got an error when I tried to follow their instructions. Thankfully managed to get it figured out, but yeah, definitely wouldn’t have been my first choice for an install method if others were available for that app.
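As a point of reference, here is a rough sketch of the kind of bootstrap script such projects could ship so the end user never has to touch the venv by hand. The file names and paths are purely illustrative:

```python
# bootstrap.py - illustrative only: creates a venv and installs dependencies
# without manual intervention.
import subprocess
import sys
from pathlib import Path

VENV_DIR = Path(".venv")                 # assumed venv location
REQUIREMENTS = Path("requirements.txt")  # assumed dependency list

def main() -> None:
    # Create the virtual environment using the interpreter running this script.
    subprocess.run([sys.executable, "-m", "venv", str(VENV_DIR)], check=True)

    # Use the venv's own interpreter so packages end up inside the venv.
    python = VENV_DIR / ("Scripts/python.exe" if sys.platform == "win32" else "bin/python")

    subprocess.run([str(python), "-m", "pip", "install", "--upgrade", "pip"], check=True)
    if REQUIREMENTS.exists():
        subprocess.run([str(python), "-m", "pip", "install", "-r", str(REQUIREMENTS)], check=True)

    print(f"Environment ready; run the app with {python} <entry point>")

if __name__ == "__main__":
    main()
```

Running it once with the system Python should leave a ready-to-use environment in .venv, which is about the level of hand-holding people seem to be asking for.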
Security issues aside, you can freeze Python code into an executable for Linux, Mac, or Windows.
Kind of neat IMO. Except the security concerns ofc.
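A minimal sketch of that freezing workflow, assuming PyInstaller is the tool in use (Nuitka and cx_Freeze are alternatives); the script name and output name are illustrative:

```python
# freeze.py - illustrative: bundle app.py into a single executable with
# PyInstaller ("app.py" and "myapp" are assumptions).
import PyInstaller.__main__

PyInstaller.__main__.run([
    "app.py",
    "--onefile",        # one self-contained binary under dist/
    "--name", "myapp",  # output name: dist/myapp (or myapp.exe on Windows)
])
```

The resulting binary bundles the interpreter and all dependencies, which is presumably where the security concerns come in, since any fix means re-freezing and re-shipping.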
There are reasons why Common Lisp (that I absolutely prefer as well) is still a big thing in AI-related applications. Some of those are listed in your comment.
If they do, that’s not release-ready software that you should concern yourself with, IMO. That’s a problem of project maturity, not runtime choice.
ML is a very new field and so most programs are not mature, and indeed they can have you messing around with venvs and such.
But most Python software people actually use is packaged by a distro already.
ML has been around since the 1950s. What counts as an old field for you?
While true, we are not using the same tools as we did in the ’50s. Especially with today’s very quickly evolving situation, half measures and hacks are normal.
🙄
I mean, flatpaks are great…
Solving problems you shouldn’t have?
You already have those problems, clearly.
Personally, I find that (complex) software implemented in Python tends to be so unreliable that I typically don’t want to use it after all, but I only find that out after wasting a bunch of time learning the software.
It’s just frustrating, especially if I come back to the software every so often, naively thinking that it’s been a few versions, so maybe they’ve fixed it. It’s always just different bugs, which still end up being too frustrating to use the software.
To give an example, I like to compose music using Lilypond, which is more-or-less a programming language to create sheet music. And there is a program that’s supposed to give you a well-integrated workflow for that (i.e. an IDE), called Frescobaldi.
The first time I tried it, playback of the composed music wouldn’t work.
The second time, I couldn’t click on notes to jump to the respective code snippet.
And I tried it again a few weeks ago and it just crashed immediately with an obscure error message.
Instead, I’ve slapped together a script, which just opens the sheet music in my PDF viewer and the code in my normal editor, and then uses CLI tools to generate and play back the sheet music. And while it’s definitely not perfect, it has been working more reliably for me than Frescobaldi ever has.
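For anyone curious, a trimmed-down sketch of what that kind of glue script can look like in Python; it assumes lilypond, xdg-open, and some MIDI player (timidity here, purely as an example) are on the PATH, and a .midi file is only produced if the score contains a \midi block:

```python
# playscore.py - rough sketch of the glue script described above (the editor
# step is left out; tool names like timidity are assumptions).
import subprocess
import sys
from pathlib import Path

def main(score: Path) -> None:
    # Compile the LilyPond source; this writes <name>.pdf next to it,
    # plus <name>.midi if the score contains a \midi block.
    subprocess.run(["lilypond", score.name], check=True, cwd=score.parent)

    # Open the sheet music in the default PDF viewer.
    subprocess.Popen(["xdg-open", str(score.with_suffix(".pdf"))])

    # Play back the generated MIDI, if there is one.
    midi = score.with_suffix(".midi")
    if midi.exists():
        subprocess.run(["timidity", str(midi)], check=True)

if __name__ == "__main__":
    main(Path(sys.argv[1]).resolve())
```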