Last week we held the 4th edition of the European Data & Computational Journalism Conference (#datajconf) in Zurich, Switzerland, and this time it was held jointly with the Computation + Journalism Conference (#cplusj), which is usually held in the US.
I’ve written about DataJConf previously, when we held the first edition in Dublin in 2017. We’ve always encouraged a strongly multi-disciplinary approach, aiming to attract journalists, developers, industry professionals and academics from many fields, including journalism and media studies, computer science and data science. It was really gratifying to see that this year was no different, with a great mix of all these groups, and more, in attendance. Teaming up with C+J meant we had more attendees from the US than ever before, which I think was a great opportunity to strengthen links between Europe and the US, and also helped raise the profile of our European ‘little sister’ conference alongside the better-established C+J.
This was our first conference back since the pandemic; it had been four years since we last got together to discuss all things Computational + Data Journalism. It was fascinating to see the progress made in the intervening years, and how both academic research and industry practice have evolved.
It’s no surprise that AI dominated much of the agenda, with some very interesting discussion of the use of Generative AI by news and media organisations. What was most revealing, though, was that widespread adoption of Generative AI tools is still some way off, if it will ever happen at all. Most organisations that have experimented with these technologies have found that the unreliability and ‘hallucinations’ they can introduce create all sorts of integrity and trust issues when using GenAI to create content, issues which generally outweigh the benefits. For now, many newsrooms and organisations are instead sticking with rule-based, template-driven automation for content generation, which is much more reliable and controllable. Where GenAI has found a use is in transforming or summarising existing content, which can again be more controlled and reliable. On the AI front there was also some discussion of deepfakes and the potential issues arising there, though little discussion of solutions: perhaps because we don’t have them, or perhaps because existing fact-checking and verification techniques are already sufficient to deal with the problem?
The other big development noticeable at this conference was the increase in algorithmic accountability efforts: news organisations and others working to investigate the impacts and biases of algorithmic decision-making processes that have real effects on people and society. This is an increasingly important area of concern that was barely touched upon in previous editions of the conference, but is now a focus for many teams.
For a really nice roundup of some of the issues touched on above, and more, see this thread from Jim Haddadin:
You can also go back and check through the hashtag to see how the conference unfolded.
Overall it was a really successful conference. The local team did a great job pulling it together, and it was great to catch up with familiar faces from previous editions and conferences, and to meet new people too. We’re looking forward to the next edition …