Why should I care about OpenAI's boss?

Carl Sagan saw it coming.

“I have a foreboding of an America in my children's or grandchildren's time -- when the United States is a service and information economy; when nearly all the manufacturing industries have slipped away to other countries; when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when, clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish between what feels good and what's true, we slide, almost without noticing, back into superstition and darkness...

The dumbing down of America is most evident in the slow decay of substantive content in the enormously influential media, the 30 second sound bites (now down to 10 seconds or less), lowest common denominator programming, credulous presentations on pseudoscience and superstition, but especially a kind of celebration of ignorance”

― Carl Sagan, The Demon-Haunted World: Science as a Candle in the Dark
Another book of mine that disappeared ... Sagan was really concerned about superstition vs science.
Edit: so am I ...
 
Technically you are very correct. Therefore the incremental cost of "replacing" you or me is ...
0
I don't think the current generation of LLMs is replacing anybody anytime soon. Why? If you work with them for even a fairly short period of time, you discover that you need to be very careful both about what you ask for and about checking the results. These models don't really know how to say "I don't know" and will happily bull$h1t you. Actually, they don't even "know" enough to "know" they are bull$h1tting you. So using them for anything important absent competent human supervision is a Really Bad Idea.
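
A minimal sketch of that supervision point, purely for illustration -- call_llm() is a hypothetical stand-in for whatever model you use, and the checks are examples, not a real review process:

```python
# A human-in-the-loop gate around a model call. call_llm() is a made-up
# placeholder, not a real client library; the checks are illustrative only.

def call_llm(prompt: str) -> str:
    """Stand-in for whatever model you actually use."""
    return "SELECT * FROM orders WHERE order_date >= '2024-01-01';"  # canned output

def automated_checks(output: str) -> list[str]:
    """Cheap sanity checks: these catch obvious problems, never the subtle ones."""
    problems = []
    if not output.strip():
        problems.append("empty output")
    if "DROP TABLE" in output.upper():
        problems.append("destructive statement in generated SQL")
    return problems

def human_signoff(output: str) -> bool:
    """The expensive part: a person who understands the domain reads the result."""
    print("Generated output:\n" + output)
    return input("Approve? [y/N] ").strip().lower() == "y"

if __name__ == "__main__":
    draft = call_llm("Write a query for orders placed this year.")
    issues = automated_checks(draft)
    if issues or not human_signoff(draft):
        print("Rejected:", issues or ["reviewer declined"])
    else:
        print("Accepted -- but responsibility stays with the reviewer, not the model.")
```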

I think you can make useful tools from this. They just won't be "replacing" people. More likely they will enable people in certain fields to do their jobs better (so maybe we'd need fewer people doing those jobs, but I very much doubt it).

Right now a lot of what we are hearing is people projecting interpretations onto LLMs that have very little to do with what the models are actually doing.

This kind of goes back to earlier quotes from Carl Sagan in this thread. Thinking any LLM is going to replace computer programmers, lawyers, or journalists really shows both a misunderstanding of what an LLM can do and an even graver misunderstanding of what computer programmers, lawyers, and journalists do.
 
The next generation of AI will replace thousands of computer programmers, lawyers, and journalists, just as early PCs replaced hundreds of clerks, typists, and bookkeepers.
Today's crude tools are already proving their value to the major corporations, and that's where the research money (and the danger of destroying the world while chasing money) will come from.
 
Let's take computer programmers as an example. But what I'm saying applies to lawyers and journalists and many other knowledge workers.

What computer programmers actually do has been studied and beaten to death for decades. Actually typing in code in a text editor typically takes up about ten percent of their time.

LLMs are at least potentially well-suited for generating useful text. So in the maximally optimistic case, LLMs could replace approximately ten percent of the work programmers do. Such a productivity gain is well in line with the improvements the field has delivered every three to five years over the last six decades or so that we've had computer programmers at all.

And that is a very generous interpretation of the capabilities of an LLM. As the output gets larger, hilarious and horrifying errors become more likely. Any productivity gains from using an LLM to automate the generation of code will be eaten up by reviewing, analyzing, and debugging the errors that are inevitably going to be found -- the obvious ones, but especially the subtle ones, which are the real work. And it is important to keep in mind that finding subtle errors in someone else's code (even if that someone else isn't made of meat) is very hard and time-consuming, often much harder than writing the code in the first place.
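
Since the last two paragraphs are really an arithmetic claim, here is a back-of-the-envelope version (an Amdahl's-law-style bound). The ten percent typing figure is the one above; the eight percent review overhead is a made-up illustration, not a measurement:

```python
# Upper bound on the gain from automating only the typing, plus a review tax.

def relative_time(p: float, s: float, review_overhead: float = 0.0) -> float:
    """Total time relative to today if a fraction p of the work is sped up by a
    factor s, and reviewing generated output adds review_overhead (as a fraction
    of the original total time)."""
    return (1.0 - p) + p / s + review_overhead

p = 0.10  # share of a programmer's time spent actually typing code (see above)

best_case = relative_time(p, s=float("inf"))                          # typing becomes free
with_review = relative_time(p, s=float("inf"), review_overhead=0.08)  # hypothetical review tax

print(f"best case:   {best_case:.2f} of current time (~{(1 - best_case) * 100:.0f}% saved)")
print(f"with review: {with_review:.2f} of current time (~{(1 - with_review) * 100:.0f}% saved)")
# best case:   0.90 of current time (~10% saved)
# with review: 0.98 of current time (~2% saved)
```

Even if the typing were made entirely free, the ceiling is that same roughly ten percent, and a modest review tax erases most of it.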

Given the way LLMs work and what they are doing under the hood, I'm doubtful you can "fix" that problem by improving them.

So what do programmers do with the other 90 percent of their time? Well, typically they also have to figure out what a new program is supposed to do (typically called "requirements" or "functional specification"), figure out what the parts will be and how they will fit together ("architecture" and "design"), and communicate all that to other humans. That takes up about 90 percent of their time. The other 90 percent of their time is taken up by testing the new program and finding and fixing the inevitable errors.
 
Hello! It's natural to question the significance of individuals like Sam Altman when considering AI's impact on society. However, leaders of influential tech companies such as OpenAI play a role in shaping how AI technologies are developed and deployed. Although it may appear that internal changes within these companies have only limited effects, they can actually reflect larger patterns and decisions that affect how AI is integrated into our daily lives. That integration influences many aspects of society, including democracy and journalism. Additionally, it is essential to have responsible leadership in the development of AI technologies to ensure they are used for the betterment of society while mitigating issues like misinformation. Therefore, closely monitoring these developments can be more important than one might initially think.
Says the AI. 😂
 
I led a six-man group of programmers. If the company could replace even 3 of the 6, they would do it without any hesitation, and they would replace 5 of the 6 if possible.
And I rented office space in a legal practice and am married to a CPA, so I know about the workflows in those professional offices. It's about the same attitude as far as eliminating backroom staff. And the AI wouldn't need to be very advanced at all to do those jobs.
Also a lot of university professors and school teachers can and will be replaced ... not all, but way over 10 or 15 percent.
But we all know how well predictions turn out. We'll see.
 
I agree that the safest prediction to make is something like "watch this!"

As I've been in the business myself for forty-plus years, what I've seen is that any productivity gain (which is really what a generative AI translates into) is very quickly eaten up by demand for more work and higher-quality work. And yes, any business will reduce headcount if it can, or even if it shouldn't.

The target is always moving. Thirty-plus years ago you had mostly character-based interfaces that would at most be on an isolated network, and sometimes just on a single PC. Nowadays we typically have applications that have to run and behave sensibly across an insane range of devices, interoperate with a bewildering array of other systems, and run 24/7 all over the world in a very hostile environment. The job has gotten harder faster than the productivity gains have made the job easier, so far. No reason to think that it is all of a sudden going to get easier.

And yes, certain classes of programming jobs can be eliminated or at least become very uncommon (we don't employ very many assembly language coders anymore). But usually what happens is that a new kind of programming task comes along. In the case of AI, at least for the imaginable future, we'll need humans to curate representative training datasets, carefully train the AIs, and evaluate their performance. At this point those tasks automate poorly, even with AIs (there is some interesting work in AutoML, but it really is just helping at this point).
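
As a toy illustration of why those tasks resist automation, here is a sketch of a human grading a sample of model outputs -- the prompts, outputs, and 1-to-5 rubric are invented for the example:

```python
# Human-in-the-loop evaluation: a person scores a random sample of model
# outputs. The data and rubric below are invented; building real evaluation
# sets and rubrics is the actual (human) work.

import random

outputs = [  # (prompt, model_output) pairs awaiting review -- placeholder data
    ("Summarize the quarterly report", "Revenue grew 40 percent..."),
    ("Translate 'hello' into French", "bonjour"),
    ("Cite the controlling case", "See Smith v. Jones (1987)..."),  # possibly invented by the model
]

def human_grade(prompt: str, answer: str) -> int:
    """A person scores the answer from 1 to 5 -- the step that automates poorly."""
    print(f"\nPrompt: {prompt}\nAnswer: {answer}")
    return int(input("Score 1-5: "))

sample = random.sample(outputs, k=min(2, len(outputs)))  # grade a random subset
scores = [human_grade(p, a) for p, a in sample]
print(f"\nMean human score on this sample: {sum(scores) / len(scores):.1f}")
```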

My own view is that the real advances will come from figuring out how to have humans and AIs work cooperatively to do better work and solve harder problems.

Also, on university professors and school teachers, I'd argue that existing online learning technology is probably eliminating more jobs than AI is likely to.
 