In my experience VSCode on Windows runs like dogshit. I blame Windows for that. VSCode on Linux runs like a dream. I can have four different sessions open and it still runs great (I haven’t tested more than that because I’ve never had a reason to).
I make my code open source and public so people can use it if they find it useful, not because I expect anyone to contribute.
And there’s a big fucking difference between being actively hostile and “I’m not interested in accepting this change”.
Honestly I didn’t really follow OP’s meme or care enough to understand it, I’m just here to provide some context and nuance. I opened the comments to see if there was an explanation of the meme and saw something I felt like responding to.
Edit: Actually, I can’t see the meme. I was thinking of a different post. The image on this one doesn’t load for me.
“The answer we’ve all been waiting for” is a flawed premise. There will never be one language to rule them all. Even completely ignoring preferences, languages are targeted at different use cases. Data scientists and systems programmers have very different needs. And preferences are huge. Some people love the magic of Ruby and hate the simplicity of Go. I love the simplicity of Go and hate the magic of Ruby. Expecting the same language to satisfy both groups is unrealistic because we have fundamentally different views of what makes a good language.
It is being used. Objective-C (used for macOS and iOS apps) has used reference counting since the language was created. Originally it was manual, but since 2011 it’s been automatic by default. And Swift (which basically replaced Objective-C) only supports ARC (it does not support manual reference counting). The downside is that it doesn’t handle reference cycles, so the programmer has to be careful to prevent those. Also, the compiler has to insert reference increment and decrement calls, and that’s a significant engineering challenge for the compiler designers. Rust tracks ownership instead of references, but that means its compiler is even more complicated. Rust’s system is a little bit like compile-time reference counting, but that’s not really accurate. Apparently Python, Perl, and PHP use reference counting, plus tracing GC (aka ‘normal’ GC) in Python and PHP to handle cycles. So your implicit assumption that reference counting is not widely used is false. Based on what I can find online, Python and JavaScript are by far the most used languages today and are roughly equal in popularity, so in that respect reference-counting GC is as popular as pure tracing GC, possibly more so.
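To make the cycle problem concrete, here’s a rough Swift sketch (the class and property names are made up) of the kind of retain cycle ARC can’t clean up on its own, and the weak reference that breaks it:

```swift
// Illustrative only: Owner and Pet are made-up classes.
class Owner {
    var pet: Pet?
    deinit { print("Owner deallocated") }
}

class Pet {
    // Without `weak`, Owner and Pet would hold strong references to each
    // other, neither deinit would ever run, and both objects would leak.
    weak var owner: Owner?
    deinit { print("Pet deallocated") }
}

var owner: Owner? = Owner()
var pet: Pet? = Pet()
owner?.pet = pet
pet?.owner = owner

// Dropping the last strong references lets ARC free both objects immediately.
owner = nil
pet = nil
```

Mark the back-reference weak (or unowned) and ARC cleans up fine; forget it and both objects quietly leak.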
Ok, I concede the point: “garbage collection” technically includes reference counting. However, the practical point remains: reference counting doesn’t come with the same performance penalties as ‘normal’ (tracing) garbage collection. It has essentially the same performance characteristics as manual memory management, because that’s essentially what it’s doing.
TinyGo isn’t that much slower and it uses LLVM
Garbage collection means analyzing the heap and figuring out what can be collected. Reference counting requires the code to increment or decrement a counter and frees memory when the counter hits zero. They’re fundamentally different approaches. Also, reference counting isn’t necessarily automatic; Objective-C had manual reference counting from day one.
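If it helps, here’s a toy Swift sketch (a made-up type, not any real API) of what manual reference counting boils down to: the code itself bumps the count up and down, and the resource is freed the instant the count hits zero, with no collector ever scanning the heap.

```swift
// Toy manual reference counting (illustrative only; ManualRC is a made-up type).
final class ManualRC<T> {
    private var value: T?
    private var count = 1 // creating the handle counts as the first reference

    init(_ value: T) { self.value = value }

    // Each new owner increments the count...
    func retain() { count += 1 }

    // ...and decrements it when done. At zero, the resource is freed on the spot.
    func release() {
        count -= 1
        if count == 0 {
            value = nil
            print("freed at refcount zero")
        }
    }
}

let handle = ManualRC("some resource")
handle.retain()  // a second owner takes a reference
handle.release() // first owner is done
handle.release() // second owner is done: freed right here, deterministically
```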


“Crypto” is standard shorthand (to the general public) for “cryptocurrency”. And not every cryptocurrency uses blocks, so “blockchain” is not technically accurate.
The real joke is people who can’t read or write code calling themselves programmers


Tech bros are shoving AI into things that really do not need AI. If it’s garbage I don’t need, IDGAF if that garbage is offline; I still don’t want it on my computer.


Crypto was sold as a world-changing technology. It hasn’t delivered.


Does the efficiency of storage actually matter? Are you working on a constrained system like a microcontroller? Because if you’re working on regular software, supporting Unicode is waaaaaaaaaaay more valuable than 20% smaller text storage.


Ah, when you said local I assumed you meant your physical device


Ok… but that doesn’t answer my question. Where are you physically when you’re working on this that people are attacking exposed ports? I’m either at home or in the office, and in either case there’s an external firewall between me and any assholes who want to exploit exposed ports. Are your roommates or coworkers those kinds of assholes? Or are you sitting in a coffee shop or something?


Where are you working that your local machine is regularly exposed to malicious traffic?


What you’re comfortable with as a contributor is irrelevant. What matters is whether you understand the maintainers’ intent and whether the people you’re interacting with think you’re acting like a dick. Someone who’s bad at understanding how other people feel will likely be bad at predicting what behavior will come across as being a dick.


Not everyone has high emotional intelligence. There’s a fair bit of overlap between programmers/engineers and people on the spectrum. A good code of conduct effectively spells out how to avoid being a dick.
I think the point is, the kind of people who have “million dollar ideas” are now using LLMs instead of pestering real programmers.


If committing over the weekend is what makes the difference between committing a secret and not, that’s an issue.
Devs who are devs for no other reason than money and who don’t give a shit about the quality of their work are a problem.