No, Bing's AI Chatbot Is Not Sentient


AI angst has arguably never been higher. Some experts predict the AI singularity could happen within the next decade, and recent screenshots of Microsoft’s new Bing search AI expressing seemingly human fears and desires have some wondering if it’s already here.

It’s easy to see why this sentiment is spreading. The average person hears the term “AI” and likely thinks of Skynet or HAL 9000—sophisticated machines possessing human-like self-awareness, controlled by powerful processors as complex as the human brain.

However, Hollywood’s AI is a very different beast from the tools actually making headlines: Midjourney, ChatGPT, and Google and Microsoft’s search assistants.

In fact, it could be argued that labeling chatbots, art generators, or automated programming tools as “AI” is a misnomer—or, more likely, just a marketing tool.

How do “AI” tools like ChatGPT work?

In the simplest terms, these “AI” tools are programs built to spit out results from user inputs, and they require curation from engineers and users to fine-tune their performance. Under the hood, the software is a statistical model trained on vast amounts of human-made text and images; given a prompt, it generates a response by predicting, piece by piece, what is most likely to come next based on patterns in that training data. As Ted Chiang recently wrote in The New Yorker, the process is closer to making a blurry Xeroxed photocopy of existing work than creating new work wholesale from a blank page.
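To make the “predictive text” idea concrete, here’s a deliberately tiny sketch in Python of the core trick: tally which words tend to follow which in a pile of existing text, then “write” by repeatedly picking a plausible next word. The three-sentence corpus is invented for illustration, and real systems like ChatGPT replace this simple lookup table with a neural network trained on billions of examples, but the basic job is the same: predict what comes next based on patterns in human-made material.

import random
from collections import defaultdict

# A made-up toy corpus, standing in for the mountains of human-written
# text that real models are trained on.
corpus = "the cat sat on the mat . the dog sat on the rug . the cat chased the dog ."

# Tally which word follows which (a "bigram" table).
follows = defaultdict(list)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

# "Write" by repeatedly sampling a likely next word.
word = "the"
output = [word]
for _ in range(8):
    word = random.choice(follows[word])
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the rug . the dog"

Every word this program produces was already sitting in its corpus; it never invents anything, it just remixes. Scale that remixing up enormously and you have the gist of why Bing’s answers can sound human without anyone being home.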

In other words, AI-generated articles, art, and code seem so human-like because they’re all based on existing material made by humans. Midjourney images are evocative because they copy from paintings, illustrations, and photographs made by real people who understand composition and color theory. Bing’s answers seem eerily human because they’re repeating human-written text.

To be fair, this is impressive technology that’s difficult to build and even more difficult to tune for reliable results. The fact that it works at all is remarkable. But no matter what any New York Times reporter tells you, there is no “ghost” inside these machines: nothing learning how to write, draw, or deliver talk therapy, and nothing wishing it were alive.

Nevertheless, people misconstrue the sophistication and power of these tools as evidence they’re somehow sentient. Or at least on the verge of sentience.

And make no mistake: The people who make these tools know this, and they’re more than happy to let people believe their software is conscious and alive. People are more likely to try your products if they believe something is thinking in there. The more impressive and “life-like” Bing AI interactions or Midjourney image results are, the more likely people are to keep using them—and, as journalist Ed Zitron points out, the more likely people are to pay for them. It’s why ChatGPT is referred to as an AI rather than a “predictive text generator.” It’s simple marketing.

AI might not be alive, but it’s still a problem

But what about the future? Isn’t it possible computers could become conscious, self-aware beings capable of self-directed learning and artistic output just like a human?

Well, sure, it’s possible, but scientists and philosophers still debate what consciousness even is, let alone how it arises in biological life in the first place. We’ll likely need to answer those questions before we can deliberately create, or even recognize, consciousness in non-organic machinery.

And if artificial awareness is achievable, it’s not happening any time soon, and certainly won’t spontaneously appear in Midjourney or ChatGPT.

But whether a robot uprising eventually takes place in some distant future should be less of a concern than the material issues AI automation poses for labor, privacy, and data freedom right now.

Companies are laying off writers and media professionals and replacing them with AI content generation. AI art tools routinely use copyrighted materials to generate images, and deepfake pornography is a growing problem. Tech firms are pivoting to unreliable, machine-generated code that is often less secure than human-written code. These changes aren’t happening because AI-generated content is better (it’s decidedly worse in most cases), but because it’s cheaper to produce.

These issues deserve far more of our hand-wringing than the question of whether Bing has feelings, and it’s crucial to recognize how many of the purveyors of this “AI” tech are using those anxieties to market their products.


