The company is called “Safe Superintelligence”. Not a fan of names like these; it’s kind of like a company calling itself “safe airplanes”. There’s something about it that makes me think it won’t live up to the name.
Not sure how they plan on raising money when so many other AI companies are promising commercialization. A company prioritizing safety will be defeated by one prioritizing profit. A company like this could have flourished in the time before OpenAI, but right now there’s so much demand for GPUs and talent that it’s very challenging to catch up, even more so when less scrupulous companies offer engineers more money. They’d have to hire from a smaller, more limited pool of applicants who believe in the mission.
Or all those crypto scams that put the word “safe” in their token’s name to sucker people into thinking it wasn’t a Ponzi scheme
A big part of the AI hype cycle has been “AIs are potentially too omnipotent for us to control, but also too much of a national security threat to ignore”. So you get these media hacks insisting we need a super-intelligent artificial mind that is firmly within the grip of its creator.
As a consequence of the hype outstripping any real utility from these machines, you’ve got some of the top board members of these firms spinning out their own boutique branches of the industry by insisting prior iterations are too dangerous or too constrained to fulfill their intended role as techno-utopian machine gods.
The sensationalist bullshit is how they plan to make money. “Don’t trust Alice’s AI, it’s too dangerous! I’m the Safe AI” versus “Don’t trust Bob’s AI, it’s too limited. I’m the Ambitious AI”. Then Wall Street investment giants, who don’t know shit from shoelaces, throw gobs of money at both while believing they’ve hedged their bets. And a few years after that, when these firms don’t produce anything remotely as fantastical as they promised, we get a giant speculative bubble collapse that takes out half the energy or agricultural sector as collateral damage.
In twenty years, we’ll be reading books titled “How AI Destroyed The Orange”, describing the convoluted chain of events that tied fertilizer prices to debt-swaps on machine learning centers and resulted in almost all of Florida’s biggest cash crop being lost to a hiccup in the NASDAQ between 2026 and 2029.
It’s “safe” as in a vault where they’re gonna swim in investor money like Scrooge McDuck.