Statisticians are being asked to manipulate the numbers in scientific research in order to make researchers’ findings appear stronger than they actually are, according to a new study.

The study, which is published in the Annals of Internal Medicine, was based on a survey of 390 American biostatisticians – professionals who analyze data to help medical researchers reach their conclusions.

The biostatisticians were given 18 hypothetical ways in which they could be asked to manipulate data. They were asked how inappropriate they considered each request to be and whether they had encountered it themselves during the past five years.

Requests considered the most egregious – falsifying statistical significance and changing data to bring about a desired outcome – had been made to three per cent and seven per cent of surveyed biostatisticians, respectively.

Reports of inappropriate requests rose sharply from there: 24 per cent of biostatisticians said they had been asked to remove or alter data records, and another 24 per cent said they had been asked to avoid mentioning the absence of key data that could affect a study’s results. The survey did not ask whether the biostatisticians had complied with these requests.

Jonathan Kimmelman, the director of biomedical ethics at McGill University in Montreal, said he was not surprised by the findings.

“In science, your career advancement depends heavily on getting positive results – and statistics are the key to getting positive results,” he told CTV’s Your Morning on Monday.

“If you do an experiment and you get a negative result, it can be very tempting to try to manipulate that data or try to manipulate that statistical analysis so you can get a positive result and get it into a top-tier journal.”
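To see why this temptation works, consider the statistics behind it. The following is a minimal sketch – not drawn from the study or from Kimmelman’s own work – of one common form of manipulation: re-running analyses on the same data (for example, across many subgroups) until something crosses the conventional p < 0.05 threshold. The sample sizes, number of tests and use of a two-sample t-test here are illustrative assumptions.

```python
import random
from scipy import stats

# Illustrative sketch: simulate how repeatedly testing pure noise
# can produce a "significant" result even when no real effect exists.
random.seed(0)

def noise_sample(n=30):
    # A sample drawn from a population with zero true effect.
    return [random.gauss(0, 1) for _ in range(n)]

trials = 1000          # hypothetical "studies"
tests_per_study = 20   # e.g. 20 subgroup analyses on the same null data
hits = 0

for _ in range(trials):
    # Run many independent two-sample t-tests where the true effect is zero.
    p_values = [
        stats.ttest_ind(noise_sample(), noise_sample()).pvalue
        for _ in range(tests_per_study)
    ]
    if min(p_values) < 0.05:
        hits += 1  # at least one spuriously "positive" finding

# With 20 looks at null data, roughly 1 - 0.95**20, or about 64 per cent,
# of "studies" will turn up a significant-looking result by chance alone.
print(f"Studies with at least one false positive: {hits / trials:.0%}")
```

In this toy setup, reporting only the one analysis that happened to come out significant – and omitting the other 19 – is exactly the kind of selective presentation the surveyed biostatisticians described being asked to perform.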

Publication in a renowned journal can help a scientist gain tenure, fame and other benefits, Kimmelman said – creating an obvious incentive for researchers to focus on presenting their reports in the most attractive light possible.

Kimmelman said he would like to see leading journals fight this trend by adopting “results-blind” procedures for deciding which studies to publish.

“If they decided to accept the paper based on the design of the study and based on how well it was executed as opposed to the results, I think that would really dramatically reduce the incentives to doing these kinds of statistical analyses that lead to false conclusions,” he said.