Charlotte Foster
Technology

Artificial intelligence could sway your dating and voting preferences

AI algorithms on our computers and smartphones have quickly become a pervasive part of everyday life, with relatively little attention paid to their scope, their integrity, or how they shape our attitudes and behaviours.

Spanish researchers have now shown experimentally that people’s voting and dating preferences can be manipulated by algorithms, depending on the type of persuasion used.

“Every day, new headlines appear in which Artificial Intelligence (AI) has overtaken human capacity in new and different domains,” write Ujue Agudo and Helena Matute, from the Universidad de Deusto, in the journal PLOS ONE.

“This results in recommendation and persuasion algorithms being widely used nowadays, offering people advice on what to read, what to buy, where to eat, or whom to date,” they add.

“[P]eople often assume that these AI judgements are objective, efficient and reliable; a phenomenon known as machine bias.”

But increasingly, warning bells are sounding about how people could be influenced on vital issues. Agudo and Matute note, for instance, that companies such as Facebook and Google have been accused of manipulating democratic elections.

And while some people may be wary of explicit attempts to sway their judgements, they could be influenced without realising it.

“[I]t is not only a question of whether AI could influence people through explicit recommendation and persuasion, but also of whether AI can influence human decisions through more covert persuasion and manipulation techniques,” the researchers write.

“Indeed, some studies show that AI can make use of human heuristics and biases in order to manipulate people’s decisions in a subtle way.”

A famous experiment on voting behaviour in the US, for instance, showed how Facebook messages swayed the political opinions, information seeking and votes of more than 61 million people in 2010, an effect they say was demonstrated again in the 2012 elections.

Manipulating the order in which political candidates appear in search-engine results, or boosting a candidate’s profile to enhance their familiarity and credibility, are other covert ploys that can funnel votes to selected candidates.

Worryingly, as Agudo and Matute point out, these strategies tend to go unnoticed: people are likely to believe they made up their own minds, and don’t realise they’ve been played.

Yet public research on the impact of these influences lags well behind that of the private sector.

“Companies with potential conflicts of interest are conducting private behavioural experiments and accessing the data of millions of people without their informed consent,” they write, “something unthinkable for the academic research community.”

While some studies have shown that AI can influence people’s moods, friendships, dates, activities and prices paid online, as well as political preferences, research is scarce, the pair says, and has not disentangled explicit and covert influences.

To help address this, they recruited more than 1300 people online for a series of experiments investigating how explicit and covert algorithmic recommendations influence people’s choices of fictitious political candidates and potential romantic partners.

Results showed that explicit, but not covert, recommendation of candidates swayed people’s votes, while covertly manipulating their familiarity with potential partners influenced whom they wanted to date.

Although these results held up under various approaches, the researchers note the possibilities are vast. “The number of variables that might be changed, and the number of biases that an algorithm could exploit is immense,” they write.

“It is important to note, however, that the speed with which human academic scientists can perform new experiments and collect new data is very slow, as compared to the easiness with which many AI companies and their algorithms are already conducting experiments with millions of human beings on a daily basis through the internet.”

Private companies have immense resources and are unfettered in their pursuit of the most effective algorithms, they add. “Therefore, their ability to influence decisions both explicitly and covertly is certainly much higher than shown in the present research.”

The pair draws attention to the European Union’s Ethics Guidelines for Trustworthy AI and DARPA’s explainable AI program as examples of initiatives designed to increase people’s trust in AI. But they argue that such initiatives won’t address the dearth of information on how algorithms can manipulate people’s decisions.

“Therefore, a human-centric approach should not only aim to establish the critical requirements for AI’s trustworthiness,” they write, “but also to minimise the consequences of that trust on human decisions and freedom.

“It is of critical importance to educate people against following the advice of algorithms blindly,” they add, along with fostering public discussion of who should own the masses of data used to create persuasive algorithms.

Image credits: Shutterstock

This article was originally published on cosmosmagazine.com and was written by Natalie Parletta.

Tags:
Technology, artificial intelligence, social media, algorithms