I hate to hit like on that post, but yes: the window for stopping certain tech trends often closes before we know much about them.
The segue from Predator drones to Skynet, for example, is really short. And people like Sam Altman often seem convinced that, at this point, only AIs can "save civilization", pointing to a century of warnings that the people in charge have ignored. What they mean by "civilization" is rather vague, but it is very unlike the 1900s idea.
I think the pieces were in motion thirty years ago to bring us to this point. There are two things that are making our current generation of "AI" possible:
- Very large datasets, made possible largely by the Internet. GPT-4 was reportedly trained on about a petabyte of data.
- GPUs, which were designed to render realistic effects in video games but have proved just as useful for AI.
It is important to note that nobody, in their wildest dreams, thought those two things would make AI technology like what we are seeing today possible. And all of the rest of the hard work (and there was a lot of hard work) wouldn't have amounted to much without them.
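To make the GPU point concrete, here is a minimal sketch (PyTorch is my choice of library here, not something from the post) of the workload that 3D graphics and neural networks share: enormous matrix multiplications, which GPUs execute in parallel.

```python
# A minimal sketch: the same matrix multiply on CPU and GPU.
# One big matmul is, roughly, one layer of a neural network.
import time
import torch

n = 4096
a = torch.randn(n, n)
b = torch.randn(n, n)

start = time.perf_counter()
torch.matmul(a, b)
cpu_s = time.perf_counter() - start

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.matmul(a_gpu, b_gpu)          # warm-up so we don't time startup cost
    torch.cuda.synchronize()
    start = time.perf_counter()
    torch.matmul(a_gpu, b_gpu)
    torch.cuda.synchronize()            # GPU calls are async; wait for the result
    gpu_s = time.perf_counter() - start
    print(f"CPU: {cpu_s:.3f}s  GPU: {gpu_s:.3f}s  speedup: {cpu_s / gpu_s:.0f}x")
```

On typical consumer hardware the speedup for that one operation runs to one or two orders of magnitude, which is the whole story in miniature.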
Also, a lot of different organizations are working on LLM-style generative AI. At this point they are all sharing information quite generously with each other, which only makes the advances come faster. To give a very short and incomplete list of who is doing major work in this space:
- OpenAI, with GPT-4
- Google, with PaLM 2
- Microsoft, with Prometheus
- Meta (Facebook), with Llama 2
- EleutherAI, an open source group, with GPT-J
- Another open source group producing Orca
- Zhipu AI, and quite a few others, in China
- France's Mistral AI
That list is by no means complete. All of them are at roughly the same place with respect to the technology, and all of them routinely borrow ideas from each other, so the wavefront of the technology is advancing with remarkable speed. The open source groups are remarkable in and of themselves, because they have shown that you can replace cold hard cash with cleverness and persistent determination: keep in mind that GPT-4 cost over $10 million in GPU time to train, and the open source people appear to have spent about 0.1 percent of that (roughly $10,000) training their AIs. Yet their models work about as well, and chances are you couldn't tell, just by interacting with them, which one was which.
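As a hedged sketch of what that cleverness typically looks like: rather than training from scratch, the open source groups usually start from a released base model and fine-tune only a tiny fraction of its weights (parameter-efficient methods such as LoRA). The libraries and the model ID below are my illustration, not anything the groups above are confirmed to have used.

```python
# A sketch of parameter-efficient fine-tuning with Hugging Face's peft library.
# The base model ID is illustrative; any released causal LM would do.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# LoRA injects small trainable adapter matrices into the attention layers,
# leaving the billions of base weights frozen.
config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of the weights
```

Training under 1 percent of the weights, on a model someone else already paid to pretrain, is how a hobbyist budget ends up in the same league as a $10 million run.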
So the knowledge of how to do this is now distributed extremely widely around the world. You can't really point to one person or one company and say they "invented" this technology. What happened is that a lot of people over the last few years made incremental improvements and small discoveries that snowballed into a substantial breakthrough.
About half a dozen years ago, a conversation at OpenAI went approximately like this:
Q: What would happen if we trained a predictive-text neural network on the whole Internet?
A: I dunno, why don't we try it?
That conversation led to a company with a valuation of over 90 billion dollars.
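For what "predictive-text neural network" means in practice, here is a minimal sketch of the training objective: next-token prediction. The model is a deliberately trivial stand-in and the tokens are random, standing in for internet-scale text; none of this is OpenAI's actual setup.

```python
# A minimal sketch of the next-token-prediction objective in PyTorch.
import torch
import torch.nn as nn

vocab_size, dim = 10_000, 256
# Stand-in model: embed each token, map it back to vocabulary logits.
# A real LLM puts a deep causal transformer between these two layers.
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
optimizer = torch.optim.Adam(model.parameters())
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (4, 64))    # stand-in for a batch of text
inputs, targets = tokens[:, :-1], tokens[:, 1:]   # predict token t+1 from token t

optimizer.zero_grad()
logits = model(inputs)                            # (batch, seq, vocab)
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
optimizer.step()
```

That loop, scaled up by many orders of magnitude in model size and data, is essentially the experiment that conversation proposed.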
I see two or three mind-blowing papers a week coming out of this space. There really hasn't been anything analogous to this in our lifetimes, and possibly ever in history.
One example from one of those mind-blowing papers I saw last week (courtesy Zhipu AI): [screenshot of the exchange, not reproduced here]
I do think it is funny that the AI referred to the girl as "it". Might be a translation error, though.
So I'd say yeah, care about this stuff.
Because one way or another, it is coming for you.