Expert insights with Ian Mulvany
My bias
In this post, I want to pull together my current thinking about AI. I want to cover a few different threads, so bear with me.
I am enamoured with technology. There is something here in GenAI and LLMs. I like them. I use them a lot. I have a strong positive bias. I have to guard against that. I probably don’t guard well enough. Most people don’t use them. They don’t move atoms (for now), and they certainly don’t cook for me.
But I will give in to my more fanciful way of seeing things in this blog post.
Why we should engage as publishers
I’m convinced that these technologies are going to radically transform processes around the creation of knowledge, and in particular, academic papers. That will impact the industry that I work in. Much of the cost and infrastructure that scholarly publishing companies bear will need to shift to other ways of supporting the value chain.
There is a non-zero risk that this will significantly stress existing companies.
We can’t put the genie back in the bottle; the technologies are here. As long as people want friendly, fast answers to things, they are going to increasingly use these technologies. We might wish for a different scenario in terms of who controls these technologies, but we have to work with the world as it is. On that basis, I am strongly in favour of publishing houses licensing content to these models to help make them better.
I spoke about this in an interview with Wiley a few weeks ago → 🎥
We are in the business of creating knowledge. These tools are cultural and social technologies, so our efforts to create knowledge in the world have to face their existence. We should endeavour to make these tools as useful as possible.
Our corpus is mostly clean and mostly bias-free, and it may contain embedded patterns that help guard against bias and that are, on the whole, cautious about knowledge claims. That caution matters. It is not the view of the world you get if you treat the world as a human would. Humans like bias; we like stories; we like to be fooled. We need these machines to be capable of not being fooled, just as we can occasionally rise above foolishness ourselves.
The winds of change and the potential for disruption
I don’t really know what technological disruption looks like. In spite of many hopes and claims, the industry I work in is mostly immune to it.
Many groups are not entirely happy with the current state of things; probably most are not. It may be, though, that the most important and influential groups are happy enough not to need to disrupt it. I’m thinking specifically of Government, for whom research efficiency is rarely the most pressing challenge to address.