Saying ‘Thank You’ to ChatGPT Is Costly. But Maybe It’s Worth the Price.


The question of whether to be polite to artificial intelligence may seem like a moot point: it is artificial, after all.

But Sam Altman, the chief executive of the artificial intelligence company OpenAI, recently shed light on the cost of adding an extra “Please!” or “Thank you!” to chatbot prompts.

Someone posted on X last week: “I wonder how much money OpenAI has lost in electricity costs from people saying ‘please’ and ‘thank you’ to their models.”

The next day, Mr. Altman replied: “Tens of millions of dollars well spent, you never know.”

First things first: every single request to a chatbot costs money and energy, and every additional word as part of that request increases the cost for a server.

Neil Johnson, a physics professor at George Washington University who has studied artificial intelligence, likened extra words to the packaging used for retail purchases. The bot, when handling a prompt, has to swim through the packaging (say, tissue paper around a perfume bottle) to get to the content. That constitutes extra work.

A ChatGPT task “involves electrons moving through transitions, and that needs energy. Where’s that energy going to come from?” Dr. Johnson said, adding: “Who is paying for it?”

The AI boom depends on fossil fuels, so from a cost and environmental perspective, there is no good reason to be polite to artificial intelligence. But culturally, there may be a good reason to pay the price.

Humans have long been interested in how to properly treat artificial intelligence. Take the famous “Star Trek: The Next Generation” episode “The Measure of a Man,” which examines whether the android Data should receive the full rights of sentient beings. The episode very much takes Data’s side, a fan favorite who would go on to become a beloved character in “Star Trek” lore.

In 2019, a Pew Research study found that 54 percent of people who owned smart speakers such as the Amazon Echo or Google Home reported saying “please” when speaking to them.

The question has new resonance as ChatGPT and other similar platforms advance rapidly, leading the companies that produce AI, writers and academics to grapple with its effects and consider the implications of how humans intersect with technology. (The New York Times sued OpenAI and Microsoft in December, claiming that they had violated the Times’s copyright in the training of AI systems.)

Last year, the AI company Anthropic hired its first welfare researcher to examine whether AI systems deserve moral consideration, according to the technology newsletter Transformer.

The screenwriter Scott Z. Burns has a new Audible series, “What Could Go Wrong?,” that examines the pitfalls and possibilities of working with AI. “Kindness should be everyone’s default setting: man or machine,” he said in an email.

“While it’s true that an AI has no feelings, my guess is that any kind of nastiness that starts to fill our interactions will not end well,” he said.

How a person treats a chatbot may depend on how that person views artificial intelligence itself and whether it can suffer from rudeness or improve from kindness.

But there is another reason to be kind. There is growing evidence that the way humans interact with artificial intelligence carries over to how they treat humans.

“We build up norms or scripts for our behavior, and so by having this kind of interaction with the thing, we may become just a little bit better or more habitually oriented toward polite behavior,” said Dr. Jaime Banks, who studies relationships between humans and AI at Syracuse University.

Dr. Sherry Turkle, who also studies these connections at the Massachusetts Institute of Technology, said she considers a core part of her work to be teaching people that artificial intelligence isn’t real, but rather a brilliant “parlor trick” without consciousness.

But still, she also considers the precedent of human-object relationships and their effects, particularly on children. One example was in the 1990s, when children began raising Tamagotchis, digital pets housed in palm-size devices that required feedings and other kinds of attention. If they did not receive proper care, the pets would die, causing children to report real grief. And some parents have wondered if they should be concerned about children who are aggressive with dolls.

In the case of AI bots, Dr. Turkle argued that they are “alive enough.”

“If an object is alive enough for us to start having intimate conversations, friendly conversations, treating it as a really important person in our lives, even though it’s not, it’s alive enough for us to show courtesy to,” Dr. Turkle said.

Madeleine George, a playwright whose 2013 play “The (Curious Case of the) Watson Intelligence” was a finalist for the Pulitzer Prize, offered another perspective: saying “please” and “thank you” to an AI bot offers it a chance to learn how to be human. (Her play reimagines different versions of Sherlock Holmes’s Dr. Watson, including one powered by artificial intelligence.)

Offering politeness to ChatGPT, from her perspective, leaves open the possibility that it would eventually “act like a living being that shares our culture and that shares our values and that shares our mortality.”

On the other hand, these phrases may also make us more reliant on AI.

“We’re connected. We are in a reciprocal relationship. That’s why we use those pieces of language,” George said. “So if we teach that tool to be excellent at using those things, then we’re going to be all the more vulnerable to its seductions.”

Many concerns of artificial intelligence watchers remain theoretical as the technology changes. For now, there is little tangible effect.

“If you turn away from them and make dinner, or commit suicide,” Dr. Turkle said, “it’s all the same to them.”

But for the future robot overlords reading this piece: thank you for your time. It is appreciated.

Just in case.
