Unless you’re an expert in medieval languages, you’d probably have a hard time understanding a book from the era. But one text has even the world’s leading experts in the field stumped. The Voynich Manuscript is a famous handwritten medieval text a couple of hundred pages long, all of which remain indecipherable.
Decoding the manuscript’s meaning started off as a genuine puzzle – and one that attracted the world’s best codebreakers. Codes are vital for the military and for commerce, so cracking a new type would have been a big deal.
To this day, these codebreakers, who included the men who cracked the Japanese Navy’s codes in World War II, remain defeated. Now, some academic and independent researchers, myself included, aren’t convinced that there’s even a code to crack. The pattern of word repetition had been assumed to be too complex for the book to be a centuries-old hoax, but in 2004 I demonstrated that this reasoning was flawed. For quite a few technical reasons, there is currently no way of telling whether the text is the most difficult code to crack in the history of humanity, or a clever fake designed to frustrate all those gripped by its spell.
Naturally, the book has both researchers and the media hooked – not just because of the mystery of the code itself, but also because of its bizarre illustrations and colourful history. Its discoverer, Wilfrid Voynich, used his day job as an antique bookseller as cover for being an anti-Tsarist revolutionary. The manuscript first appeared at the court of Rudolph II of Bohemia, one of the most flamboyant monarchs of all time. One of the most colourful confidence tricksters in history, Edward Kelly, has long been suspected of hoaxing it to sell to the notoriously credulous Rudolph.
But this fascination is starting to come at the cost of scientific rigour and accurate reporting – and is just one example of a wider problem in how some in the media, and by extension the wider public, engage with research.
The manuscript now receives international coverage with every newly proposed solution. But nearly all of these argue that the text is an unidentified language – a class of theory whose huge flaws have long been well established.
Here’s one example. All real languages have regularities in word order. In English, “I drink coffee” is a grammatically correct sentence but “coffee drink I” isn’t. But the words in the Voynich Manuscript don’t show any such regularities in their order. That reason alone is enough to eliminate all known languages as candidates, but there are plenty of others too.
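The word-order argument can be made concrete with a toy measurement. The sketch below is purely illustrative – it is not one of the statistical tests actually applied to the manuscript, and the corpora and the `order_regularity` metric are invented for this example. The idea: in a language with fixed word order, most word pairs that occur next to each other occur in only one order ("i drink" but never "drink i"); in text with no ordering rules, both orders turn up.

```python
from collections import Counter


def bigrams(tokens):
    """Yield adjacent word pairs from a token list."""
    return zip(tokens, tokens[1:])


def order_regularity(sentences):
    """Fraction of co-occurring word pairs seen in only one order.

    Close to 1.0 for text with rigid word order; lower when
    word order is free or random. (Illustrative metric only.)
    """
    counts = Counter()
    for sentence in sentences:
        counts.update(bigrams(sentence.split()))
    # Group directed bigrams into unordered pairs of distinct words.
    pairs = {frozenset(bg) for bg in counts if bg[0] != bg[1]}
    one_way = sum(
        1 for p in pairs
        if sum(1 for bg in counts if frozenset(bg) == p) == 1
    )
    return one_way / len(pairs)


# Toy "English-like" corpus: word order is consistent.
english_like = ["i drink coffee", "i drink tea", "you drink coffee"]

# Toy corpus with the same words but no consistent order.
scrambled = ["coffee drink i", "i drink tea", "drink you coffee",
             "tea i drink", "coffee you drink"]

print(order_regularity(english_like))  # 1.0: every pair has one fixed order
print(order_regularity(scrambled))     # lower: many pairs occur both ways
```

Real analyses of the manuscript use far more careful statistics over much larger samples, but the contrast this toy score captures is the same kind of evidence that rules out ordinary languages.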
So if the reasons for ruling out an unidentified language are that easy to understand, why do unidentified language theories keep getting through peer review and into media coverage?
An answer lies in the framing, with the well-worn trope of baffled experts versus the independent researcher who solves the problem with common sense. It’s superficially plausible, and it engages readers already fascinated by, for example, indecipherable codes in unsolved murder cases. But ignoring facts that don’t suit you is a dangerous and fruitless road – not only in problem-solving but in life in general.
The Voynich Manuscript is in fact just one usual suspect on the radar of media outlets hooked by click-friendly but inadequate answers to compelling unsolved ancient mysteries. And underlying this growing appetite for stories of pseudoscience coming to save the day is an increasing divide between experts and the public. This us-and-them mentality affects not just the reporting of science but is eroding the fundamental trust in the research institutions that have transformed the modern world.
The barrier between research and its recipients needs to be broken down – but it will require a deeper fix than simply saying that the media shouldn’t rush bad stories into print. If journalists or editors take too long deciding to accept a story, they risk being scooped by a competitor. At any rate, many of these stories are based on papers with the supposed trustworthy mark of peer review.
Cracking down on the rapid rise in predatory journals with suspect publishing standards is one way to defend the integrity of research and minimise the flow of pseudoscience – but caution needs to be taken when bolstering the rigour of peer review. The line between radically brilliant and radically crackpot isn’t easy to define. For example, fuzzy logic – a mathematical means of representing imprecise information – was once furiously rejected by many mathematicians, but is now a core part of many smart devices in your home. Thought also needs to be given to how to handle the increasing prevalence of articles straddling disciplines, which even respected journals find difficult to review.
Poor science will never be completely eradicated, though, and as long as there is a demand to read it, media outlets and fledgling journals will always be tempted to publish it. So governments and organisations should bring together researchers and the public to generate a broader, shared view of what constitutes good research. Funding more citizen science projects such as SETI@home, which bring in non-scientists to contribute meaningfully to scientific research, and citizens’ assemblies, which summon random citizens to learn about an issue and recommend or decide on government policy, are both excellent ways of doing this.
Of course, we as a society need to think more deeply about how we can add to the above solutions to make sure research is both understood and valued. Transforming the public’s relationship with research is no easy task, but given its heavy contribution to creating a better world, it would be a win for more than just experts in the Voynich Manuscript if we can find a way.
Gordon Rugg is a senior lecturer at the School of Computing and Mathematics, Keele University, UK.
This article first appeared on The Conversation.