TribunaMag.com


Artificial Intelligence: Bots, Chats, and Ethical Dilemmas

In February of this year, ChatGPT set a new record for “fastest-growing user base” after reaching 100 million active users less than two months after its launch. It took TikTok nine months to reach that number. This means (for better, for worse, for whatever it might be) that it has become a part of our lives and is influencing how we move forward, technology-wise, as a species.

First off, a few definitions. What does the GPT in ChatGPT stand for? Generative Pre-trained Transformer. Yes! It was pre-trained. And if you are using ChatGPT version 3.5 (by going to https://chat.openai.com and logging in with your Google account, perhaps?) you will, at some point, come across the following information: its training data hasn’t been updated since September 2021.

I asked in the chat directly:

ChatGPT 4 does connect to the internet via its plugins. However, you’ll have to pay $20 a month and you’ll be limited to a certain number of prompts every 3 hours. The main takeaway is that ChatGPT 3.5 is not connected to the internet. So don’t ask it about something that happened in 2022 (or try it out and see what happens; it has no idea who won the 2022 FIFA World Cup).

But it doesn’t have to. You can copy and paste an article that was written today and ask for it to be summarized, analyzed, and even re-written differently in a style of your choosing. See where I’m going with this?
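To make that concrete, here is a hypothetical sketch (in Python) of the kind of prompt the paragraph above describes. The instruction wording and the sample article text are my own illustrations, not output from any real news piece.

```python
# Hypothetical illustration of the copy-paste workflow described above:
# paste in an article published today and ask for a summary plus a restyled rewrite.
article_text = (
    "The city council voted on Tuesday to expand the bike-lane network, "
    "citing a sharp rise in cycling commuters since 2020."
)

# The prompt combines an instruction of your choosing with the pasted text.
prompt = (
    "Summarize the following article in three bullet points, then "
    "re-write it in the style of an op-ed column:\n\n" + article_text
)
```

Because the article text rides along inside the prompt itself, the model needs no internet access to “know” about today’s news.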

“You’re not even a real journalist”

In April 2023, The Guardian published a piece explaining how ChatGPT is being used to generate “Guardian-like” articles that imitate a journalistic style, with prediction mechanisms sophisticated enough to make you second-guess the authenticity of the work.

“The reporter couldn’t remember writing the specific piece, but the headline certainly sounded like something they would have written”, explains Chris Moran of The Guardian.

And that’s just a case of articles that almost pass for the original. But what about asking ChatGPT to generate op-eds for or against a polarizing debate? It could be used to create highly editorialized content in a matter of minutes. And here’s where I sound the alarm, in agreement (of course) with that genius human being, Noam Chomsky.

The False Promise of AI?

Second definition: what is artificial intelligence? Encyclopedia Britannica tells us that: “artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.”

That said, human intelligence is constantly confronted with having to make moral decisions.

This past March, Noam Chomsky co-authored an essay with Ian Roberts and Jeffrey Watumull entitled “Noam Chomsky: The False Promise of ChatGPT”. I could highlight several points in the opinion piece, but for now, as it pertains to journalism, I will focus on the fact that intelligent thinking in human beings involves moral and ethical dilemmas.

“True intelligence is also capable of moral thinking. This means constraining the otherwise limitless creativity of our minds with a set of ethical principles that determines what ought and ought not to be (and of course subjecting those principles themselves to creative criticism)”, the authors note in the article.

What ought not to be, as Chomsky and company ask? Some pieces shouldn’t be written, or shouldn’t be generated with a “pre-trained transformer”. The possibilities for rapidly generating disinformation seem limitless. A journalist had to verify the information before the transformer was ever trained on it. And shouldn’t those journalists get some compensation for the not-so-easy task of verification, of obtaining the best possible version of the truth? Was the transformer trained on AP, AFP, EFE and other newswire services? Those dispatches were written by human beings, as much as some might like to dismiss the “wires” as some anonymous machine spilling out news.

Some journalists fear that their jobs will be replaced by digitally fabricated news anchors or articles churned out by the transformer. Right now, the authenticity of streamers seems to be a plus: a live feed of a human interacting in the digital space is about the realest you can get. In recent years, they’re even beating out some TV personalities of the Broadcast News era.

Maybe I’m being too optimistic. But it would be incredibly dismal to think you can replace a person who has to make critical, ethical, under-pressure decisions with a transformer.

Journalists, if properly trained, work hard to protect their credibility by verifying to the highest possible extent before publishing, all while meeting a deadline. The transformer will absolutely produce something from whatever prompt it is given, and what comes out might read more like a hallucination full of made-up examples.

Bots don’t have these fears, emotions, anxieties. Do they? Not for now, it seems. But what can definitely be replaced? Unscrupulous actors who just want the bot to churn out content they agree with. You can have ChatGPT write the most glowing review of the worst film you can think of and “publish that” as a nameless news organization.

Here’s an example:

Full disclosure

One benefit I have found is the ability to transcribe long press conferences and have ChatGPT summarize the main points of what was discussed. However, I, as a reporter, would have to double-check everything the bot generated by watching the entire conference and making sure there was no misreading. Chomsky and friends also note that, as a tool, ChatGPT does seem useful for generating code.
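The workflow above can be sketched in a few lines of Python. This is a hedged, minimal sketch under my own assumptions, not the exact setup used for this article: the chunk size, model name and prompt wording are all illustrative, and it assumes the `openai` package and an API key if you actually call the model. The summaries that come back still have to be checked against the full recording.

```python
# Sketch of the workflow described above: split a long press-conference
# transcript into prompt-sized chunks, then ask a chat model to summarize each.

def chunk_transcript(text: str, max_chars: int = 3000) -> list[str]:
    """Split a transcript into chunks of at most max_chars, breaking on sentence ends."""
    sentences = text.replace("\n", " ").split(". ")
    chunks, current = [], ""
    for sentence in sentences:
        piece = sentence if sentence.endswith(".") else sentence + "."
        if current and len(current) + len(piece) + 1 > max_chars:
            chunks.append(current.strip())
            current = ""
        current += piece + " "
    if current.strip():
        chunks.append(current.strip())
    return chunks


def summarize_chunks(chunks: list[str]) -> list[str]:
    # Hypothetical call sketch -- requires the `openai` package and an API key.
    from openai import OpenAI
    client = OpenAI()
    summaries = []
    for chunk in chunks:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumed model, not necessarily what I used
            messages=[
                {"role": "system",
                 "content": "Summarize the main points of this press-conference excerpt."},
                {"role": "user", "content": chunk},
            ],
        )
        summaries.append(response.choices[0].message.content)
    return summaries
```

Chunking matters because a multi-hour transcript won’t fit in a single prompt; summarizing chunk by chunk (and then, if needed, summarizing the summaries) is one common workaround.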

Transcribing takes a long time, but a career is marked by credibility, so one can’t just publish the output as is. You’d also be wise to tell your audience that some paragraph, summary or whatever piece of text was generated, in part, using a certain tool. As I’m about to show you now:

The image illustrating this article was also generated using an A.I. tool known as Lexica. This was the prompt, and I picked the first of the results.

Where do we go now?

Where do we go from here? It’s hard to tell, especially because there are so many open questions regarding pre-trained transformers. Being pre-trained means their sophisticated prediction powers were modeled, to some extent, on humans interacting with them, and those interactions entail biases of their own.

Furthermore, I’ve only mentioned ChatGPT so far, but there’s also Google Bard, and Microsoft has a new AI-powered Bing search tool. Each might be programmed differently.

It’s one thing to discuss these tools in the abstract. But I would definitely recommend interacting with ChatGPT and refining your prompts in order to get a better grasp of what its capabilities, uses and limitations seem to be at the moment.

It’s the human intelligence that has to make decisions about this tool and others to come in the future.