“ChatGPT is a development on par with the printing press, electricity and even the wheel and fire.”
That’s according to Lawrence Summers, Treasury Secretary in the Clinton administration and later a top economic adviser to Obama. I had heard about ChatGPT before, but I knew nothing about it. (Actually, when I listened to Summers, it sounded like he was referring more broadly to the ability of AI to think and express itself like humans.)
Here’s what I learned from an NYT article.
“In ChatGPT’s case, it read a lot. And, with some guidance from its creators, it learned how to write coherently — or, at least, statistically predict what good writing should look like.”
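That line about statistically predicting what good writing should look like is the core idea. As a toy illustration (my own sketch, nothing like ChatGPT’s actual architecture), here’s a tiny Python “bigram” model that reads some text and then guesses each next word from the frequencies it saw:

```python
import random
from collections import Counter, defaultdict

# A toy corpus standing in for "reading a lot" (real models train on
# vastly more text than this).
corpus = (
    "the model reads a lot of text and the model learns "
    "which word tends to follow which word"
).split()

# Count how often each word follows each other word (a bigram model).
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = follow_counts[word]
    if not counts:  # word was never followed by anything; stop generating
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# "Write" a few words: each one is only a statistical guess, not understanding.
word = "the"
output = [word]
for _ in range(8):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

Real systems predict over tens of thousands of tokens using billions of learned parameters, but the principle is the same: guess the next word from patterns in the text you’ve already seen.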
Some benefits:
“It can help research and write essays and articles. ChatGPT can also help code programs, automating challenges that can normally take hours for people. Another example comes from a different program, Consensus. This bot combs through up to millions of scientific papers to find the most relevant for a given search and share their major findings. A task that would take a journalist like me days or weeks is done in a couple minutes.”
The benefits here are obvious, but, off the top of my head, here are some drawbacks:
- For humans, the skill of combing through large amounts of information and identifying what is most relevant could deteriorate.
- My sense is that different people make different judgments about what counts as relevant, and that the ability to make those judgments, which includes drawing connections to other, seemingly unrelated information, varies significantly from person to person. Will this capability become more uniform if an AI does the work?
- My sense is that this process can lead to important insights. How will AI impact that?
In a survey, a group of scientists who work on machine learning gave an even more dire response:
“Nearly half said there was a 10 percent or greater chance that the outcome would be ‘extremely bad (e.g., human extinction).’ These are people saying that their life’s work could destroy humanity.”
This seems like a big problem, and pressing ahead anyway seems blatantly foolish:
“The problem, as A.I. researchers acknowledge, is that no one fully understands how this technology works, making it difficult to control for all possible behaviors and risks. Yet it is already available for public use.”
To go ahead with something we don’t fully understand, but that could pose an existential threat to humanity (albeit with a relatively small probability), seems foolish. And how can we accurately assess that risk if we don’t fully understand how the technology works?