Artificial intelligence and journalistic content. Predictions for a plausible future
Among my interests in journalism, one date is marked on the calendar in highlighter: the annual appointment with the Nieman Foundation website, which at the end of each calendar year asks some of the "smartest people in journalism and media" for their predictions about where journalism is headed in the coming year.
I had drawn on this very source, the series of interviews offered by the Nieman Foundation, to mark last year’s appointment as editor of FBK Magazine, as you can (re)read here. One has to make a choice, since the ideas that emerge from the responses are numerous, and many are tasty, nutritious food for thought. Trying to turn a personal interest into a potential opportunity for discussion (not only virtual: if you want to write to me, I will reply), I decided to focus on some of the several voices reasoning about Artificial Intelligence, a topic that is far from marginal, particularly for those of us at Fondazione Bruno Kessler.
Let’s start with Bill Grueskin, professor of journalism at Columbia University, who asked ChatGPT the question, “What is the future of journalism?” For those not familiar with it: GPT (Generative Pre-trained Transformer) is the increasingly popular natural language processing model that uses machine learning to generate human-like responses in conversation. Grueskin’s experiment yielded a series of obvious but by no means incorrect answers, on the basis of which he himself proposes a plausible future. With the galaxy of the local press challenged by scarce resources, one possible solution might be to entrust Artificial Intelligence with wire-service news, let’s call it that, so that journalists can devote themselves to investigations and reporting.
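For the curious, Grueskin’s experiment is easy to reproduce programmatically. What follows is a minimal sketch in Python, assuming the official OpenAI SDK (the `openai` package) and an API key set in the environment; the model name is my assumption, since Grueskin used the ChatGPT web interface rather than the API.

```python
# Minimal sketch of Grueskin's prompt, assuming the official OpenAI
# Python SDK (pip install openai) and an OPENAI_API_KEY environment
# variable. The model name is an assumption for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed; any chat-capable model would do
    messages=[
        {"role": "user", "content": "What is the future of journalism?"}
    ],
)

print(response.choices[0].message.content)
```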
A similar landscape is sketched by Peter Sterne, an independent journalist who has been exploring Artificial Intelligence programs and the possibility of transcribing interviews or, going a step further, of drafting news articles. He has little doubt that these results will come to pass and, like Grueskin, he sees such developments as improving the journalism profession. Less optimistic is Josh Schwartz, CEO of Chartbeat, a digital platform for analyzing how specific content performs on the Web, which is very useful to journalism. According to Schwartz, texts drafted by something other than a human mind carry a major risk: multiplying spam, fake news and uninformative content, available at very low cost, if not for free. The other side of the coin, in short: if good things can be done cheaply, so can bad ones, and this risks adding to a confusion already fueled by flesh-and-blood writers who work very poorly or in bad faith. How to address it? One avenue is shrewdly pointed out by David Cohn, co-founder of the Subtext messaging platform. The key is expertise: managing both the risks and the potential will require trained journalists, ready to retrain for the new landscape created by the emergence of Artificial Intelligence.
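Sterne’s first scenario, automated transcription, is already within reach of anyone with a laptop. Here is a minimal sketch, assuming OpenAI’s open-source Whisper model (installed with `pip install openai-whisper`); the file name `interview.mp3` is a hypothetical placeholder:

```python
# Minimal transcription sketch using OpenAI's open-source Whisper
# model (pip install openai-whisper). "interview.mp3" is a
# hypothetical file name used only for illustration.
import whisper

model = whisper.load_model("base")          # small general-purpose model
result = model.transcribe("interview.mp3")  # detects language, returns transcript
print(result["text"])                       # the full transcript as one string
```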
Finally, taking a cue from the potential of ChatGPT (again), Janet Haven, executive director of Data & Society, first defines the crux of the matter: the answers Artificial Intelligence gives to users’ questions range from results that are amusing, sometimes witty and fluent (convincingly so), to others that are unreliable and just plain wrong. User trust in the media is steadily declining, and the use of such tools risks “throwing gasoline on the fire of an already burning dumpster.” The future use of a technology is unpredictable even to its creators, Haven adds, speaking from years of research, which is why she reiterates the need to accelerate work on the regulation required to limit potentially harmful uses of Artificial Intelligence and of algorithmic, data-centric technologies. That said, Haven closes with three predictions. The first is that we will see ChatGPT and similar tools used in adversarial ways, with the goal of undermining trust in information environments, pulling people away from public discourse and routing them toward increasingly homogeneous communities (what social network algorithms already do, I might add). The second is that they will trigger a series of stimulating experiments and research on how society can adapt to image generators such as DALL-E and text generators such as ChatGPT, and on how their potential might develop for the collective benefit, in particular that of the most vulnerable sections of society. The third, referring to the U.S. situation, envisions legislation intended to establish meaningful barriers around the use of various AI systems, in a way that takes their social costs into account and puts the protection of fundamental rights and freedoms ahead of pure technical innovation.
The reflections prompted by the Nieman Foundation initiative do not end here; I invite those who have been stimulated by this brief annotated summary to continue reading. A brief summary? There is a widespread notion in the world of journalism that one should not exceed a certain number of characters (save for virtuous exceptions) so as not to back readers into a corner. In what seems to me a brief summary, the risk of overshooting is real. I needed the space, however, to add a note on David Cohn’s point, with which I find myself in broad agreement. In the end, the solution to a problem very frequently comes through building skills: studying is always a good idea.
So I thank you for making it this far, and I look forward to a future editorial, one that will sooner or later lead me to think through a comment I often hear about articles of all kinds: “it’s a bit long…”.