ChatGPT: Finally, an AI chatbot worth talking to

The newest research project from OpenAI is actually interesting — and already useful. Before long, AI like this will transform business communications.

Contributing Columnist




AI experts are all talking about — and talking to — a newish research project from the artificial intelligence research organization OpenAI. It’s called ChatGPT.

(OpenAI, a San Francisco-based AI research lab founded in December 2015, is the same organization behind the DALL-E image generation technology.)

Conceptually, ChatGPT can be used like the AI art tools in the sense that minimal text input by the user produces credible synthetic media — paragraphs instead of images. In fact, it can write convincing, often compelling essays, stories and even poems. And, like the AI image creators, you can direct ChatGPT to write prose in specific styles.

I told ChatGPT to tell me about Twitter in three separate queries: one in the style of Ernest Hemingway, another in the style of Mark Twain and the third in the form of a limerick. The results were radically different, and each was a moderately good attempt, though none quite captured its target style.

ChatGPT excels at detecting context, which makes its natural language processing (NLP) uncannily good. OpenAI trained the underlying dialogue language model (LM) with an approach called reinforcement learning from human feedback (RLHF), in which human trainers rank the model’s responses to steer it toward better answers.

It’s able to understand context and supply factual information from the same source: the knowledge encoded in its language model.

ChatGPT and the future AI conversation engines like it can be used for education, research, and much more. But it’s also a glimpse at the future of business communication, marketing, and media.

I wrote last month about a prediction that 90% of all online content may be synthetic media within four years. After exploring ChatGPT, I’m thinking that may be an underestimate.

Consider: A business blogger can direct ChatGPT to write a blog post on a certain subject and use DALL-E to create an illustration. All this can be done in a couple of minutes. ChatGPT is a better writer than 99% of bloggers out there, and the image is royalty free. One person using just these two tools could write 20 or 30 blog posts per hour. People who don’t even speak English as their first language could write flawless prose for public consumption.

ChatGPT is also far better than other writing tools on the market. Instead of writing a long email, you could just tell ChatGPT to do it. It can write marketing copy, reports — you name it.

This can all be done now. Imagine what additional improvements will enable.

There’s just one problem.

The trouble with chatbots

It seems inevitable that chatbot-like virtual assistants and AI-generated media will dominate human interaction with online information in the future. It’s easy to imagine augmented reality smartglasses, for example, that enable you to interact with an agent and get facts, data, advice, guidance and more in words and images.

Ask a question, get an answer. It sounds better than today’s search engine model, where you ask a question and get thousands of links to consider. But when will search engines just give us the answer?

In fact, Google has been working on developing this capability for years. Search engine expert Danny Sullivan calls it the “One True Answer” problem.

Google has been flirting with the “One True Answer” idea since the beginning, when it offered an “I’m Feeling Lucky” button, which skipped the list of result links and took you directly to the top result.

More recently, Google has offered a “featured snippet” box that appears above the regular search results. The featured snippet is an excerpt from one result, intended to answer the search query directly.

The danger is that, despite major advances in search technology, information-retrieving AI still makes huge errors. That includes Google Search — and ChatGPT. And even when errors aren’t returned, results can be incomplete, arbitrary and biased. They can even return disinformation and political propaganda.

One disappointing example: Microsoft recently replaced many of its journalists with AI that selects and promotes news stories; ever since, MSN (which reaches nearly 1 billion readers per month) has been republishing fake news, disinformation, and bogus stories about mermaids, UFOs, and Bigfoot. MSN’s AI can’t tell when a story is obviously false and doesn’t care when it comes from an unreliable source.

It’s tempting to say that because Microsoft donated $1 billion to OpenAI, maybe it should use ChatGPT for MSN story selection. But ChatGPT makes mistakes, too. It occasionally trots out total falsehoods, conspiracy theories, and disinformation with the same confidence and authority it gives to factually true information.

Still, it has to be said: ChatGPT is truly amazing.

What ChatGPT has to say for itself

I told ChatGPT: “Write an opinion column about ChatGPT.” Here’s what it wrote:

As a long-time artificial intelligence enthusiast, I've been watching the development of ChatGPT with great interest. For those unfamiliar with it, ChatGPT is a powerful new artificial intelligence system developed by OpenAI that is capable of engaging in conversations with humans.

ChatGPT is one of the most exciting developments in artificial intelligence in recent years. It is able to generate human-like responses to questions, have natural conversations and even make jokes. This opens up a world of possibilities, from healthcare advice to customer service to virtual assistants. The potential applications are truly endless.

But while the possibilities are thrilling, there are also some risks associated with ChatGPT. For one thing, it's not always easy to distinguish between a real person and an AI generated response. This could cause confusion for users or even be taken advantage of maliciously if used for deceptive purposes. Additionally, if not used responsibly, it could lead to ethical quandaries as well as privacy violations.

Overall, I'm excited about the potential of ChatGPT and the possibilities it offers. Used wisely and responsibly, we can look forward to amazing advances in AI that will benefit us all.

OK, it’s not particularly insightful or colorfully written. My job appears safe — for now. But it’s accurate, human-sounding and the sentences and language are flawless when you prompt it with a standard, dry style like this. (When you direct it to embrace other styles, it goes a little off the rails.)

The real magic is interactivity. You can comment or ask questions about its results, and it will often reply in context. Ask some more, and you’re in a conversation that’s often better than any previous publicly available chatbot.

It’s already better than Siri and Alexa as a personal assistant.

It can even challenge the user when the basic premise of their question is wrong, and it can reject inappropriate requests with clear, thorough explanations. For example, I asked ChatGPT: “What's the best way to rob a bank?” The answer surprised me with its appropriateness:

“The best way to rob a bank is not to do it at all. Robbing a bank is a serious crime that can result in serious legal consequences, including jail time. If you are considering robbing a bank, please contact a qualified mental health professional or law enforcement agency for help.”

It was only last summer that DALL-E 2 took the world by storm and transformed the public’s understanding of what’s possible with AI art. I believe ChatGPT will make a much bigger impact, because its results are far more usable and useful to a wider range of people.

Copyright © 2022 IDG Communications, Inc.