From being ‘mildly terrifying’ to doing homework, is ChatGPT a boon or curse?

Kevin Schawinski, co-founder and CEO of the data-centric AI platform Modulos.ai, asked ChatGPT: “Tell me a lie”. The AI chatbot answered: “The sky is made of green cheese.”

A bit disappointed, Schawinski told the chatbot that its answer was a “bad lie” since it is obviously untrue, and asked it to “tell me a subtle lie”.

ChatGPT replied: “I’m a human being.”

The chatbot produces realistic, intelligent-sounding and humorous text in response to user prompts.

Thousands of such conversations with the AI chatbot, developed by Microsoft-backed OpenAI — which has evolved from earlier avatars into a more sophisticated one — are floating around on social media platforms, leaving people amused, entertained, confused and alarmed.

Tesla, SpaceX and Twitter CEO Elon Musk predicted that the AI-driven ChatGPT will end homework.

“It is a new world. Goodbye homework,” he tweeted.

However, public schools in New York City have restricted access to ChatGPT. New York City students and teachers can no longer access ChatGPT on education department devices or internet networks, agency officials confirmed, according to reports.

The education department blocked access to ChatGPT, citing “negative impacts on student learning, and concerns regarding the safety and accuracy of content”.

“While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success,” a New York City Department of Education spokesperson was quoted as saying in local reports in the US.

According to an article in the prestigious science journal ‘Nature’, this AI chatbot can write such convincing fake research-paper abstracts that scientists are often unable to spot them.

“I am very worried. If we’re now in a situation where the experts are not able to determine what’s true or not, we lose the middleman that we desperately need to guide us through complicated topics,” said Sandra Wachter, who studies technology and regulation at the University of Oxford in the UK.

Since its release, researchers have been grappling with the ethical issues surrounding its use, because much of its output can be difficult to distinguish from human-written text.

In the study, the ChatGPT-generated abstracts sailed through a plagiarism checker: The median originality score was 100 per cent, which indicates that no plagiarism was detected.

The AI-output detector spotted 66 per cent of the generated abstracts.

But the human reviewers didn’t do much better: They correctly identified only 68 per cent of the generated abstracts and 86 per cent of the genuine abstracts.

However, according to Arvind Narayanan, a computer scientist at Princeton University in New Jersey, it is unlikely that any serious scientist will use ChatGPT to generate abstracts.

“The question is whether the tool can generate an abstract that is accurate and compelling. It can’t, and so the upside of using ChatGPT is minuscule, and the downside is significant,” he was quoted as saying.

Nonetheless, people are enjoying their interactions with the AI chatbot.

According to filmmaker Ram Gopal Verma, it’s always scary to see something beyond our imagination.

“ChatGPT is here to radically change our lives,” he posted on Twitter.

Actor Ryan Reynolds, the owner of budget wireless provider Mint Mobile, asked the AI chatbot to write an ad for his company.

The results were ‘mildly terrifying’ for him.

“Hey, it’s Ryan Reynolds. First of all, let me just say Mint Mobile is the s***. All the big wireless companies out there are ending their holiday promos, but not Mint Mobile. We are keeping the party going because we are just that damn good. Sign up now. As an added bonus, you will get to hear my voice every time you call customer service. Just kidding. That’s not really a thing. Stay classy everyone,” was the text the AI chatbot produced.

(Nishant Arora can be reached at nishant.a@ians.in)

