Sam Altman, CEO of OpenAI, speaks at the meeting of the World Economic Forum in Davos, Switzerland. (Denis Balibouse/Reuters)
Worldcoin, founded by US tech entrepreneur Sam Altman, offers free crypto tokens to people who agree to have their eyeballs scanned.
What a perfect sentence to sum up 2023 with.
Mr Altman, who founded OpenAI, the company behind the chatbot ChatGPT, says he hopes the initiative will help confirm whether someone is a human or a robot.
That last line kinda creeps me out.
Yeah, that’s the most sci-fi dystopian article I’ve read in a while.
The line where one of the people waiting to get their eyes scanned says, “I don’t care what they do with the data, I just want the money,” is, well, eye-opening. This is why they want us poor: we need money so badly that we’ll impatiently hand over everything that makes us who we are.
But we already happily hand over our DNA to private corporations, so what’s an eye scan gonna do…
That’s why they just removed the military limitations in their terms of service I guess…
I also want to sell my shit for every purpose but take zero responsibility for consequences.
Considering what we’ve decided to call AI can’t actually make decisions, that’s a no-brainer.
The term “AI” implies we humans are the no-brainers.
Has anyone checked on the sister?
OpenAI went from interesting to horrifying so quickly, I just can’t look.
People still like Steve Jobs.
Ugh. There’s time yet.
OpenAI went from an interesting and insightful company to a horrible and weird one in very little time.
People only thought it was the former before they actually learned anything about them. They were always this way.
Remember when they were saying GPT-2 was too dangerous to release because people might use it to create fake news or articles about topics people commonly Google?
Hah, good times.
Yup, my job sent us to an AI/ML training program from a top cloud computing provider, and there were a few hospital execs there too.
They were absolutely giddy about being able to use it to deny unprofitable medical care. It was disgusting.
I’m tired of dopey white men making the world so much worse.
Agreed, but also one doomsday-prepping capitalist shouldn’t be making AI decisions. If only there was some kind of board that would provide safeguards that ensured AI was developed for the benefit of humanity rather than profit…
AI shouldn’t make any decisions
So just like shitty biased algorithms shouldn’t be making life changing decisions on folks’ employability, loan approvals, which areas get more/tougher policing, etc. I like stating obvious things, too. A robot pulling the trigger isn’t the only “life-or-death” choice that will be (is!) automated.
This is exactly what AI will do in the near future (not a dystopia).
Ummm…no fucking shit. Who was thinking that was a good idea?
probably about half of the executives this guy talks to
But it should drive cars? Operate strike drones? Manage infrastructure like power grids and the water supply? Forecast tsunamis?
Too little too late, Sam. 
Yes on everything but drone strikes.
A computer would be better than humans in those scenarios. Especially driving cars, which humans are absolutely awful at.
So if it looks like it’s going to crash, should it automatically turn off and go “Lol good luck” to the driver now suddenly in charge of the life-and-death situation?
I’m not sure why you think that’s how they would work.
Well it’s simple, who do you think should make the life or death decision?
The computer, of course.
A properly designed autonomous vehicle would be polling data from hundreds of sensors hundreds/thousands of times per second. A human’s reaction speed is 0.2 seconds, which is a hell of a long time in a crash scenario.
It has a way better chance of a ‘life’ outcome than a human who’s either unaware of the potential crash, or is in fight or flight mode and making (likely wrong) reactions based on instinct.
Again, humans are absolutely terrible at operating giant hunks of metal that go fast. If every car on the road was autonomous, then crashes would be extremely rare.
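To put those reaction-time numbers in perspective, here’s a rough back-of-the-envelope sketch. The 0.2 s human reaction time is the figure claimed above; the 1 kHz sensor polling rate is an illustrative assumption, not a spec from any real vehicle:

```python
# Rough comparison: distance a car travels before anyone (or anything)
# can even begin to react, human driver vs. hypothetical autonomous system.

speed_kmh = 100
speed_ms = speed_kmh / 3.6          # ~27.8 m/s

human_reaction_s = 0.2              # human reaction time claimed above
computer_reaction_s = 0.001         # one polling interval at an assumed 1 kHz

human_blind_distance = speed_ms * human_reaction_s        # ~5.6 m
computer_blind_distance = speed_ms * computer_reaction_s  # ~0.03 m

print(f"Human travels {human_blind_distance:.1f} m before reacting")
print(f"Computer travels {computer_blind_distance:.2f} m before reacting")
```

At highway speed, the human covers several car lengths “blind” before any reaction starts, while the fast-polling system covers a few centimetres. This says nothing about whether the reaction is the *right* one, of course, which is the actual hard part of autonomous driving.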
Are there any pedestrians in your perfectly flowing grid?
Again, a computer can react faster than a human can, which means the car can detect a human and start reacting before a human even notices the pedestrian.
Teslas aren’t self driving cars.
Well, yes. Elon Musk is a liar. Teslas are by no means fully autonomous vehicles.
Here’s the summary for the wikipedia article you mentioned in your comment:
No true Scotsman, or appeal to purity, is an informal fallacy in which one attempts to protect their generalized statement from a falsifying counterexample by excluding the counterexample improperly. Rather than abandoning the falsified universal generalization or providing evidence that would disqualify the falsifying counterexample, a slightly modified generalization is constructed ad-hoc to definitionally exclude the undesirable specific case and similar counterexamples by appeal to rhetoric. This rhetoric takes the form of emotionally charged but nonsubstantive purity platitudes such as “true”, “pure”, “genuine”, “authentic”, “real”, etc. Philosophy professor Bradley Dowden explains the fallacy as an “ad hoc rescue” of a refuted generalization attempt.
This is the best summary I could come up with:
ChatGPT is one of several generative AI systems that can create content in response to user prompts and which experts say could transform the global economy.
But there are also dystopian fears that AI could destroy humanity or, at least, lead to widespread job losses.
AI is a major focus of this year’s gathering in Davos, with multiple sessions exploring the impact of the technology on society, jobs and the broader economy.
In a report Sunday, the International Monetary Fund predicted that AI will affect almost 40% of jobs around the world, “replacing some and complementing others,” but potentially worsening income inequality overall.
Speaking on the same panel as Altman, moderated by CNN’s Fareed Zakaria, Salesforce CEO Marc Benioff said AI was not at a point of replacing human beings but rather augmenting them.
As an example, Benioff cited a Gucci call center in Milan that saw revenue and productivity surge after workers started using Salesforce’s AI software in their interactions with customers.
The original article contains 443 words, the summary contains 163 words. Saved 63%. I’m a bot and I’m open source!