

The article says they suspect this was done by people who have an interest in hunting, since those people often complain that the eagles target birds like pheasants.


Sorry for the casual question, but what do you mean by cap at 60Hz?
I just use Firefox on Ubuntu, which fifteen years ago seemed like enough.
Which also doesn’t seem that casual, but this shit is too much to keep up with. Today my engineer dad was complaining about search engines having too many ads, and when I asked what he used, he said besides Google on the one computer, he uses Bing on the other.


I responded to your other comment, but yes, I think you could set up an llm agent with a camera and microphone and then continuously provide sensory input for it to respond to. (In the same way I’m continuously receiving input from my “camera” and “microphones” as long as I’m awake)


I’m just a person interested in / reading about the subject so I could be mistaken about details, but:
When we train an LLM we’re trying to mimic the way neurons work. Training is the really resource-intensive part. Right now companies will train a model, then use it for 6-12 months or whatever before releasing a new version.
When you and I have a “conversation” with ChatGPT, it’s always with that base model; it isn’t actively learning from the conversation in the sense of forming new neural pathways. What’s actually happening is that a prompt like this gets submitted: {{openai crafted preliminary prompt}} + “Abe: Hello I’m Abe”.
Then it replies, and the next thing I type gets submitted like this: {{openai crafted preliminary prompt}} + “Abe: Hello I’m Abe” + {{agent response}} + “Abe: Good to meet you computer friend!”
And so on. Each time, you’re only talking to that same base LLM, but feeding it the entire history of the conversation along with your new prompt.
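If it helps to see it spelled out, here’s a toy Python sketch of that concatenation. Everything here is hypothetical (the names `SYSTEM_PROMPT`, `generate`, and `chat_turn` are mine, and the “model” just returns a canned reply); the point is only that the model call itself is stateless and the transcript gets resent every turn:

```python
# Toy sketch of stateless chat: the model never "remembers" anything;
# we just resend the whole transcript with each new message.

SYSTEM_PROMPT = "{{openai crafted preliminary prompt}}"

def generate(full_prompt: str) -> str:
    # Stand-in for a real model call; here it just returns a canned reply.
    return "Agent: Hi Abe!"

def chat_turn(history: list[str], user_message: str) -> str:
    history.append(f"Abe: {user_message}")
    # The prompt sent each turn is the system prompt plus the ENTIRE history.
    full_prompt = SYSTEM_PROMPT + "\n" + "\n".join(history)
    reply = generate(full_prompt)
    history.append(reply)
    return reply

history: list[str] = []
chat_turn(history, "Hello I'm Abe")
chat_turn(history, "Good to meet you computer friend!")
```

After two turns, `history` holds four lines, and the model itself has kept no state between calls.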
You’re right to point out that now they’ve got the agents self-creating summaries of the conversation to allow them to “remember” more. But if we’re trying to argue for consciousness in the way we think of it with animals, not even arguing for humans yet, then I think the ability to actively synthesize experiences into the self is a requirement.
A dog remembers when it found food in a certain place on its walk or if it got stabbed by a porcupine and will change its future behavior in response.
Again, I’m not an expert, but I expect there’s a way to incorporate this type of learning in near-real time; besides the technical work of figuring it out, though, doing so wouldn’t be very cost-effective compared to the way they’re doing it now.


Yeah, it seems like the major obstacles to saying an LLM is conscious, at least in an animal sense, are 1) setting it up to continuously evaluate/generate responses even without a user prompt, and 2) allowing that continuous analysis/response to be incorporated into the LLM’s training.
The first one seems like it would be comparatively easy: get sufficient processing power and memory, then program it to evaluate and respond to all previous input once a second or whatever.
The second one seems more challenging, since, as I understand it, training an LLM is very resource-intensive. Right now when it “remembers” a conversation, it’s just because we prime it by feeding it every previous interaction before the most recent query when we hit submit.
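The first obstacle could be pictured as a loop like the one below. This is pure speculation in code form: `sense()` and `generate()` are hypothetical stand-ins for the sensory input and the model call, and the structure just mirrors the “evaluate all previous input once a second” idea:

```python
import time

def sense() -> str:
    # Stand-in for camera/microphone input (hypothetical).
    return "frame+audio snapshot"

def generate(prompt: str) -> str:
    # Stand-in for the model call; a real system would run inference here.
    return "noted"

def run_agent(ticks: int, interval: float = 1.0) -> list[str]:
    # Continuously feed the accumulated sensory log back to the model,
    # roughly "once a second or whatever".
    log: list[str] = []
    for _ in range(ticks):
        log.append(sense())
        response = generate("\n".join(log))  # all previous input, every tick
        log.append(f"agent: {response}")
        time.sleep(interval)
    return log

log = run_agent(ticks=2, interval=0.0)
```

Note that even this loop doesn’t touch the second obstacle: nothing in `log` ever changes the model’s weights.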
I have no mouth and I must beam
I wasn’t aware of LibreOffice Online. Interesting message about how you can use it, but there’s a built-in disclaimer that appears when you try to have more than 20 users that basically says “this isn’t good for that”: https://www.libreoffice.org/download/libreoffice-online/


That sounds exciting! Couple catty comments in here but I think you’re doing good work.


I think chairs and tables are insufficiently different - people would end up using one as a substitute for the other. A more interesting question would be: what if you were required to magically eliminate all perfectly level planes (tables, chairs, beds), or all slanted planes (ramps, screws, La-Z-Boys)?


I’m an atheist, but if you read about Jesus specifically you won’t find a lot of hate.
Thanks for sharing. Although I’m an enthusiastic open source user, I haven’t written any code of significance, so I’m not aware: has anyone made a license where use is restricted to individuals and democratically controlled organizations? I’m picturing something that would allow for some degree of profit motive while encouraging things like worker co-ops and excluding venture-capital-controlled entities.


Well, if you really wanted perfect intonation, the best way would be to completely preprogram all the notes using software and take the live human performance out of it entirely.
Not saying I have a lot of everyday need for a theremin but I think it’s a pretty cool instrument.
I don’t care what Shrek thinks


Funny you say that because at the end of the article she talks about how they are definitely not implementing an AI chatbot and calls out some other companies that have.
Sorry friend, but if someone is asking a question, telling them to read about it rather than providing the meat of the answer doesn’t seem too helpful.
You’re under no obligation to explain anything to anyone, but if you’re going to take the time to respond why not elaborate?
I don’t get my hair cut that frequently, but to each their own. This was downvoted to zero when I found it; I like to imagine one of the “six months” or “never” guys was responsible.
Yeah, I used to go to a place where they’d schedule my next appointment three weeks out, which I liked better, but this new place leaves it to me to remember and schedule myself. I don’t remember, so it’s closer to 4-5 weeks.


It’s news to me. Do you have any further reading about it you can share?


You’re right, it’s bad that they shut down. It does make me wonder about the use of “traitors,” though, since I don’t think TikTok could ever have been considered on the side of the people.
I hope these events result in better lives for Indonesians.
Didn’t Frank Lloyd Wright use the term “Usonian”?