A software developer and Linux nerd living in Germany. I’m usually a chill dude, but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt; I usually try to be nice and give good advice, though.

I’m into Free Software, self-hosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things, too.

  • 1 Post
  • 273 Comments
Joined 4 months ago
Cake day: June 25th, 2024

  • Mmhm, I’m not sure if I’m entirely on the same page. Admins have complained. Users would like to run their own instances, but they can’t, as the media cache is quite demanding and requires a bigger, more costly virtual server. And we’re at the brink of DDoSing ourselves with the way ActivityPub syncs (popular) new posts throughout the network. We still have some room to grow, but it’s limited due to the protocol’s design choices. And it’s chatty, as pointed out. Additionally, we’ve already had legal concerns due to media caching…

    Up until now everything turned out mostly alright in the end. But I’m not sure if it’s good as is. We could just have been lucky. And we’re forced to implement some minimum standards of handling harassment, online law, copyright and illegal content. Just saying we’re amateurs doesn’t really help. And it shifts burden towards instance admins. Same for protocol inefficiencies.

    However, I agree with the general premise. We’re not a big company. And that’s a good thing. We’re not doing business and not doing economies of scale here. And it’s our garden, which we foster and have fun in.


  • It’s some crypto blockchain stuff. You can store information (and computer code) in them, not just money transactions. I doubt it’s a huge thing in general. Could be a huge step forward for that specific project, though. We’ve been doing blockchains for quite some time now. Additionally, we have peer-to-peer networks and other decentralized internet projects like IPFS and a bunch of others without blockchains, and some others like Ethereum and this one with web3 technology, smart contracts and all the other buzzwords. All of that has been around for some time now. Decentralized networks have existed for decades already.

    I’m not sure what this is about. Could be something legit or just some hype by some person. The hello world page doesn’t impress me much.


  • The warning is a joke. Like printing “Smoking kills” on cigarette packages, except even fewer people care. And I doubt that sentence is going to change anything in a legal battle.

    I’m like half convinced. I think the dynamics are the same as with other things. Sometimes we like to escape reality. That can be done by reading books, watching TV or playing computer games. Or social media, or watching some Twitch streamer daily. I believe the latter is called parasocial interaction. It becomes an issue once done excessively. Or when the lines get blurry. Or when mental issues get into the mix.

    Certainly AI chatbots are more convincing than some regular old book. (Allegedly, already in 1775 young people committed copycat suicides after reading Goethe’s “The Sorrows of Young Werther”, so it’s not a new topic.) But an AI can exploit your individual needs and wants and really get to you. I read that the effects are currently being studied. I skimmed some long papers, but it seems we don’t have a final answer yet about what that does psychologically.

    I’ve tried roleplaying with AI. And I’ve also tried loading those characters, like the famous AI therapist and pop culture characters. For me, it’s pretty clear it’s just a game. All of the interaction happens through text on the screen; I can’t touch them or talk to them verbally (yet). I’ve heard from some other people here on Lemmy that they don’t like the experience, which is like some pen-and-paper game… And I know how these things work, and that my hypothetical AI girlfriend is just a dream. So I don’t think I’m at risk. And I don’t think lots of other people are. But… obviously some people are. This isn’t the first article about people getting harmed. And I can see how you wouldn’t be able to defend yourself against some chatbot if you have serious issues or a mental condition.

    I still think we can’t skip all the other factors at play. We need to address (teenage) loneliness, guns, and the lack of a caring and healthy social/human environment. A proper education, and giving people some knowledge of how these things work and what they are, would certainly help, too. It’s always the same story: we leave people alone, without education, without a healthy social environment, the people close to them miss how much they’re struggling, there are guns lying on the desk…

    And after the inevitable has happened, we don’t address any of that, but completely focus on one topic that’s more symptom than cause. And that’s why I’m annoyed by the article.

    (But I get that there is some risk specific to chatbots that goes beyond other things. And it’s probably not just a symptom, but also a contributing factor. We’d need more non-sensationalist information to judge…)


  • Be cautious about the results when using them for googling and summarizing. They’ve told me misinformation more than once. You’ll “learn” things that are counterfactual.

    Translating is a very good use case. I also use them for that, and it works very well. Better than Google Translate. And I use it for roleplay, like a D&D campaign, just not with friends, but alone, with the AI narrating the story. And one-off things where I need some ideas to spark my creativity.

    What I’ve tried apart from that are programming and re-phrasing my emails… But I’ve never gotten any good results with those. Every time I tried, I ended up not liking the result, deleting it, starting over and doing it myself.


  • Sure. Once you start blaming people, I think some other questions should be allowed, too…

    For example: Isn’t it negligent to give a loaded handgun to a 14 yo teen?

    And while computer games or chatbots can be linked, they’re rarely the underlying issue, or the sole issue to blame. Sounds to me like the debate on violent computer games in the early 2000s, when lots of parents thought playing Counter-Strike would make us murder people. Just that it’s AI chatbots now. (Okay, maybe that’s a stretch…) I can relate to loneliness, and growing up and being a teen isn’t easy.



  • Correct answer. There is no general-purpose AI model that fits into 1GB. These small models exist, but they do very specific, small tasks: sentiment analysis, object detection, word embeddings for vector databases…

    For coding, answering questions and generating text, you’d need like 6-8GB minimum. For maths, way more than that, and they’ll still be throwing dice instead of giving correct answers.
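
    That 6-8GB figure can be sanity-checked with simple arithmetic: RAM is roughly parameter count × bytes per weight, plus some overhead for context and runtime. A rough sketch (the 1.2 overhead factor is my own assumption, real overhead varies with context length):

    ```python
    def model_ram_gib(params_billion: float, bits_per_weight: int,
                      overhead: float = 1.2) -> float:
        """Very rough RAM estimate for running an LLM: weight storage
        times a fudge factor for KV cache and runtime overhead."""
        weight_bytes = params_billion * 1e9 * bits_per_weight / 8
        return weight_bytes * overhead / 2**30

    # A 7B-parameter model at 8 bits per weight lands in that 6-8GB range
    print(round(model_ram_gib(7, 8), 1))  # → 7.8
    # Quantized to 4 bits, the same model squeezes under 4GiB
    print(round(model_ram_gib(7, 4), 1))  # → 3.9
    ```

    By the same math, fitting under 1GB would mean something like a 1B model at 4-bit or a few hundred million parameters at higher precision, which is exactly the territory of those narrow task-specific models.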