Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers sent the board of directors a letter warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.
Being a layperson in this, I’d imagine part of the promise is that once you’ve got reliable arithmetic, you can get logic and maths in there too, and so get the LLM to actually do more computer-y stuff, with the whole LLM/ChatGPT interface wrapped around it.
That would mean more functionality, and perhaps much more of it working and scaling, but also more control, predictability, and logical constraints. I can see how the development would get some people excited. It seems like a categorical improvement.
If that’s the case, then it’s bad news for OpenAI’s “moat” (and for people arguing for restraint in general): there have been some recent breakthroughs in training open-source LLMs to understand maths as well.
It’d be hilarious if OpenAI’s board went through huge turmoil, tanked tens of billions of dollars worth of investments, disrupted their partnership with Microsoft to protect this huge revolution they’ve got brewing in their most secret and secure of laboratories… and then someone posts “hey, I got my AI Waifu to count good, check out this github to see how I did it” on Reddit.
It also brings into question (well, it adds to the questions, they were already brought up) the whole premise of IP law that “if we don’t protect it properly, no one will want to invent things”. It seems to me like people like creating things and humanity has a strange habit of converging on new inventions from multiple directions. Kinda like how calculus was invented by two different people at the same time.
Always wondered why the text model didn’t just put its output through something like MATLAB or Mathematica once it got as far as having something which requires domain-specific tools.
Like when Prof. Moriarty tried it on a quantum physics question and it got as far as writing out the correct formula before failing to actually calculate the result.
There is definitely a lot of effort in this direction; it seems very likely that a hybrid system could be very powerful.
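As a rough illustration of the hybrid idea being discussed, here is a minimal sketch of what “put the output through a calculator” could look like: the model emits a marker around any arithmetic it wants computed, and a deterministic evaluator fills in the number instead of the model guessing it. The `{{calc: ...}}` marker syntax and the function names are invented for this sketch, not any real API; real tool-calling systems (OpenAI functions, Wolfram plugins, etc.) work on the same principle but with their own protocols.

```python
import ast
import operator
import re

# Map a small whitelist of AST operator nodes to real operations.
# Anything outside this table (names, calls, attributes) is rejected,
# which is what makes this safer than a bare eval().
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expr: str) -> float:
    """Evaluate a plain arithmetic expression via the AST -- the
    'calculator tool' in this sketch."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("disallowed construct in expression")
    return walk(ast.parse(expr, mode="eval"))

def answer_with_tool(model_output: str) -> str:
    """Replace any {{calc: ...}} markers the model emitted with values
    computed by the deterministic evaluator above."""
    def sub(match: re.Match) -> str:
        return str(safe_eval(match.group(1)))
    return re.sub(r"\{\{calc:\s*(.+?)\}\}", sub, model_output)

# The model writes out the formula (which it is good at) and delegates
# the arithmetic (which it is bad at) to the tool:
print(answer_with_tool("6 times 7 is {{calc: 6 * 7}}."))
```

The same routing pattern would apply to a heavier backend like MATLAB, Mathematica, or SymPy; only the evaluator behind the marker changes.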