☆ Yσɠƚԋσʂ ☆
  • 428 Posts
  • 588 Comments
Joined 2 years ago
Cake day: Jan. 18, 2020


I’ve always liked the idea of airships, and we see these kinds of ventures pop up every few years. For whatever reason they don’t seem to take off unfortunately.

I imagine one roadblock is that all the existing infrastructure is geared towards planes. Airships have a different profile and need different kinds of maintenance so you can’t just land them at a regular airport.

As a side note, with new materials like graphene it might be possible at some point to make vacuum lifters with very thin skins around a rigid frame. These would have the best possible lift per unit of volume, since there's no lifting gas adding mass.
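For a rough sense of why vacuum is the theoretical ceiling, here's a back-of-the-envelope sketch in Python. The gas densities are standard textbook sea-level values; the vacuum figure ignores the mass of the frame and skin, which is exactly the hard engineering problem:

```python
# Net buoyant lift per cubic metre of displaced air at sea level.
# Densities in kg/m^3 at roughly 0 C / 1 atm (standard textbook values).
AIR = 1.293

gases = {
    "vacuum": 0.0,        # theoretical best case: nothing inside at all
    "hydrogen": 0.0899,
    "helium": 0.1786,
    "hot air (100 C)": 0.946,
}

for name, density in sorted(gases.items(), key=lambda kv: kv[1]):
    lift = AIR - density  # kg of gross lift per m^3, before structure mass
    print(f"{name:>15}: {lift:.3f} kg lift per m^3")
```

The margin over hydrogen is small (about 7%), which is why the structural mass of a rigid vacuum vessel has always eaten the advantage in practice.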



And I’m just pointing out that it’s fundamentally impossible for such a world to exist.


This is not possible when capitalists are in charge of society and have a government that represents their interests. And if you had the working class in charge, then you wouldn't need taxes in the first place, because the means of production would be owned by the workers and directed towards producing things that everyone needs. The whole point of taxes is to provide things like social services, education, healthcare, and infrastructure. In a socialist society, that's what labour is directed towards by default.












In practice, anarchist organizations end up with implicit power structures and authority, as does every group of humans. The fact that people aren't consciously aware that these structures exist in their organizations makes the whole problem far worse.


Authoritarianism is a term that gets thrown around a lot, but it doesn't actually have much meaning. As Engels explains here, authority is necessary for any complex organization to function. The reason we see hierarchical systems emerge is that fully connected graphs don't scale well. You can't just have everyone come to direct agreement with everyone else when you're dealing with hundreds, thousands, or millions of people.
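To put numbers on the scaling point: pairwise agreement among n people needs a fully connected graph with n(n-1)/2 links, while a minimal hierarchy (a tree) needs only n-1. A quick sketch:

```python
# Links needed for everyone-talks-to-everyone coordination (complete graph)
# versus a minimal hierarchy (tree). The gap grows quadratically with n.
def complete_graph_links(n: int) -> int:
    return n * (n - 1) // 2

def tree_links(n: int) -> int:
    return n - 1

for n in (10, 100, 10_000):
    print(f"n={n:>6}: complete={complete_graph_links(n):>11,}  tree={tree_links(n):>6,}")
```

At ten thousand people, direct coordination needs about fifty million channels while a hierarchy needs under ten thousand, which is the structural pressure the comment is describing.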

The fundamental problem with antiauthoritarian movements is that power structures and authority end up being created organically anyway, without any checks and balances. Charismatic individuals gravitate towards positions of power and accrue implicit authority. It's much better to build power structures consciously and ensure that clear checks and balances exist up front.




I was thinking about that, but the cat itself isn’t in shadow. But maybe it’s possible to align something just right so that the shadow is thrown just past the cat.



same, assuming it’s not just shopped after :)



I’ve personally lived through moving between three countries, and yes it is actually hard. Even learning the language and culture takes a while, not to mention getting residence, a job, or making new friends in a country where you don’t even speak the language. And the older you get the harder it gets.

Anybody who thinks that you can just pack up and move to China is frankly delusional. I've been learning Mandarin for the past 4 months, and so far I can barely string simple sentences together. So yeah, China is a really cool place to live and I want to go there, but it's not simple.

People who trivialize these things are either being utterly disingenuous or haven’t really thought about this seriously.



It always amazes me how people trivialize having to uproot your whole life to move to a different country.



I don’t think there’s anything magical about consciousness that can’t be modelled and executed on a computer. I’m just saying that current approaches to AI are inherently limited because they’re not based on symbolic logic.

A neural network gets tuned based on some input data, and it has no understanding of what that data represents. It's just a bunch of numbers without any context. All it can do is say that a particular numeric input matches one of the inputs it's been trained on previously, within a certain confidence interval.
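Roughly what that matching looks like in toy form. Everything here is a made-up illustration (hypothetical vectors, not a real model); the point is that the comparison is pure arithmetic with no notion of what the numbers stand for:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity: how aligned two vectors of numbers are.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical "learned" pattern and a new input -- just numbers to the model.
seen_in_training = [0.9, 0.1, 0.4]
new_input = [0.85, 0.15, 0.5]

score = cosine_similarity(seen_in_training, new_input)
print(f"similarity: {score:.3f}")  # high score = "looks like something I've seen"
```

Whether those numbers encode pixels, words, or sensor readings makes no difference to the computation, which is the sense in which the model has no context.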

On the other hand, the neural network in the brain evolved to represent the physical environment, and that’s the shared context we have when we interact with one another. Our language relies on a lot of shared context based on this.

And I think that in order to make AI with human-style intelligence, we have to train it within the context of a physical environment that it learns to interact with, to create this shared context that we can relate to.


In my perception, human brains are neural networks that evolved to create a simulation of the physical environment that the organism inhabits. Human brains are a result of tuning over billions of years of natural selection.

Operating on an internal representation of the world is inherently cheaper than parsing out the data from the senses. This approach also allows the brain to run simulations of events that happened in the past or may happen in the future, allowing for learning and planning. There's a lot more that can be said about this, but I think these are the key features that make complex brains valuable from a natural selection perspective. I generally agree with the ideas outlined in this book.


I agree that we should always give systems that act as if they're conscious and self-aware the benefit of the doubt. That's the only ethical approach in my opinion.

As you note, we still lack the understanding of how consciousness arises and until we develop such understanding we can only guess whether a system is conscious or not based on its external behavior.


First, we don’t have a firm definition of what consciousness is or how to measure it. However, self-awareness is simply an act of the system modelling itself as part of its internal simulation of the world. It’s quite clear that this has nothing to do with quantum technologies. In fact, any computation done by a quantum system can also be expressed by a classical computer; quantum computing changes efficiency, not what is computable.

The reason the systems we currently build aren’t conscious in a human sense is that they work purely on statistics, without having any model of the world. The system simply compares one set of numbers to another set of numbers and says, yeah, they look similar enough. It doesn’t have any context for what these numbers actually represent.

What we need to do to make systems that think like us is to evolve them in an environment that mimics how our physical world works. Once these systems build up an internal representation of their environment we can start building a common language to talk about it.