
Why prices are so high – 8 ways retail pricing algorithms gouge consumers

<p><em><a href="https://theconversation.com/profiles/david-tuffley-13731">David Tuffley</a>, <a href="https://theconversation.com/institutions/griffith-university-828">Griffith University</a></em></p> <p>The just-released report of the inquiry into <a href="https://pricegouginginquiry.actu.org.au/">price gouging and unfair pricing</a> conducted by Allan Fels for the Australian Council of Trades Unions does more than identify the likely offenders.</p> <p>It finds the biggest are supermarkets, banks, airlines and electricity companies.</p> <p>It’s not enough to know their tricks. Fels wants to give the Australian Competition and Consumer Commission more power to investigate and more power to prohibit mergers.</p> <p>But it helps to know how they try to trick us, and how technology has enabled them to get better at it. After reading the report, I’ve identified eight key maneuvers.</p> <h2>1. Asymmetric price movements</h2> <p>Otherwise known as <a href="https://www.jstor.org/stable/25593733">Rocket and Feather</a>, this is where businesses push up prices quickly when costs rise, but cut them slowly or late after costs fall.</p> <p>It seems to happen for <a href="https://www.sciencedirect.com/science/article/abs/pii/S0140988323002074">petrol</a> and <a href="https://www.sciencedirect.com/science/article/abs/pii/S105905601730240X">mortgage rates</a>, and the Fels inquiry was presented with evidence suggesting it happens in supermarkets.</p> <p>Brendan O’Keeffe from NSW Farmers told the inquiry wholesale lamb prices had been falling for six months before six Woolworths announced a cut in the prices of lamb it was selling as a “<a href="https://pricegouginginquiry.actu.org.au/wp-content/uploads/2024/02/InquiryIntoPriceGouging_Report_web.pdf">Christmas gift</a>”.</p> <h2>2. Punishment for loyal customers</h2> <p>A <a href="https://theconversation.com/simple-fixes-could-help-save-australian-consumers-from-up-to-3-6-billion-in-loyalty-taxes-119978">loyalty tax</a> is what happens when a business imposes higher charges on customers who have been with it for a long time, on the assumption that they won’t move.</p> <p>The Australian Securities and Investments Commission has alleged a big <a href="https://theconversation.com/how-qantas-might-have-done-all-australians-a-favour-by-making-refunds-so-hard-to-get-213346">insurer</a> does it, setting premiums not only on the basis of risk, but also on the basis of what a computer model tells them about the likelihood of each customer tolerating a price hike. The insurer disputes the claim.</p> <p>It’s often done by offering discounts or new products to new customers and leaving existing customers on old or discontinued products.</p> <p>It happens a lot in the <a href="https://www.finder.com.au/utilities-loyalty-costing-australians-billions-2024">electricity industry</a>. The plans look good at first, and then less good as providers bank on customers not making the effort to shop around.</p> <p>Loyalty taxes appear to be less common among mobile phone providers. Australian laws make it easy to switch <a href="https://www.reviews.org/au/mobile/how-to-switch-mobile-carriers-and-keep-your-number/">and keep your number</a>.</p> <h2>3. 
Loyalty schemes that provide little value</h2> <p>Fels says loyalty schemes can be a “low-cost means of retaining and exploiting consumers by providing them with low-value rewards of dubious benefit”.</p> <p>Their purpose is to lock in (or at least bias) customers to choices already made.</p> <p>Examples include airline frequent flyer points, cafe cards that give you your tenth coffee free, and supermarket points programs.</p> <p>The <a href="https://www.accc.gov.au/consumers/advertising-and-promotions/customer-loyalty-schemes">Australian Competition and Consumer Commission</a> has found many require users to spend a lot of money or time to earn enough points for a reward.</p> <p>Others allow points to expire or rules to change without notice or offer rewards that are not worth the effort to redeem.</p> <p>They also enable businesses to collect data on spending habits, preferences, locations, and personal information that can be used to construct customer profiles that allow them to target advertising, offers and higher prices at some customers and not others.</p> <h2>4. Drip pricing that hides true costs</h2> <p>The Competition and Consumer Commission describes <a href="https://pricegouginginquiry.actu.org.au/wp-content/uploads/2024/02/InquiryIntoPriceGouging_Report_web.pdf">drip pricing</a> as “when a price is advertised at the beginning of an online purchase, but then extra fees and charges (such as booking and service fees) are gradually added during the purchase process”.</p> <p>The extras can add up quickly and make final bills much higher than expected.</p> <p>Airlines are among the best-known users of the strategy. They often offer initially attractive base fares, but then add charges for baggage, seat selection, in-flight meals and other extras.</p> <h2>5. Confusion pricing</h2> <p>Related to drip pricing is <a href="https://www.x-mol.net/paper/article/1402386414932836352">confusion pricing</a> where a provider offers a range of plans, discounts and fees so complex they are overwhelming.</p> <p>Financial products like insurance have convoluted fee structures, as do electricity providers. Supermarkets do it by bombarding shoppers with “specials” and “sales”.</p> <p>When prices change frequently and without notice, it adds to the confusion.</p> <h2>6. Algorithmic pricing</h2> <p><a href="https://pricegouginginquiry.actu.org.au/wp-content/uploads/2024/02/InquiryIntoPriceGouging_Report_web.pdf">Algorithmic pricing</a> is the practice of using algorithms to set prices automatically, taking into account competitor responses, which is something akin to computers talking to each other.</p> <p>When computers get together in this way they can <a href="https://www.x-mol.net/paper/article/1402386414932836352">act as if they are colluding</a> even if the humans involved in running the businesses never talk to each other.</p> <p>The effect can be even stronger when multiple competitors use the same third-party pricing algorithm, effectively allowing a single company to influence prices.</p>
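<p>To make this concrete, here is a toy simulation of the dynamic described above. Everything in it (the shared follow-the-leader rule, the cost floor, the starting prices) is an assumption made for illustration, not how any actual retailer sets prices. It simply shows how two sellers running the same third-party rule can ratchet prices upward together without ever agreeing to do so.</p>

```python
# Toy illustration only: two sellers rely on the same third-party pricing rule.
# Each week the rule nudges a seller's price toward the highest price in the
# market (never below cost), so both prices drift upward together even though
# nobody has agreed to collude.

COST = 5.00   # hypothetical unit cost
STEP = 0.05   # how strongly the rule follows the market leader

def shared_pricing_rule(own_price, rival_price):
    """Move toward the highest observed price, with a small upward drift."""
    target = max(own_price, rival_price)
    return max(COST, own_price + STEP * (target - own_price) + 0.01)

prices = {"seller_a": 8.00, "seller_b": 7.50}   # hypothetical starting prices

for week in range(1, 53):
    prices = {
        "seller_a": shared_pricing_rule(prices["seller_a"], prices["seller_b"]),
        "seller_b": shared_pricing_rule(prices["seller_b"], prices["seller_a"]),
    }
    if week % 13 == 0:
        print(f"week {week:2d}: a=${prices['seller_a']:.2f}  b=${prices['seller_b']:.2f}")
```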
<h2>7. Price discrimination</h2> <p>Price discrimination involves charging different customers different prices for the same product, setting each price in accordance with how much each customer is prepared to pay.</p> <p>Banks do it when they offer better rates to customers likely to leave them, electricity companies do it when they offer better prices for business customers than households, and medical specialists do it when they offer vastly different prices for the same service to consumers with different incomes.</p> <p>It is made easier by digital technology and data collection. While it can make prices lower for some customers, it can make prices much more expensive for customers in a hurry or in urgent need of something.</p> <h2>8. Excuse-flation</h2> <p><a href="https://www.bloomberg.com/news/articles/2023-03-09/how-excuseflation-is-keeping-prices-and-corporate-profits-high">Excuse-flation</a> is where general inflation provides “cover” for businesses to raise prices without justification, blaming nothing other than general inflation.</p> <p>It means that in times of general high inflation businesses can increase their prices even if their costs haven’t increased by as much.</p> <p>On Thursday Reserve Bank Governor <a href="https://www.afr.com/policy/economy/inflation-is-cover-for-pricing-gouging-rba-boss-says-20240215-p5f58d">Michele Bullock</a> seemed to confirm that she thought some firms were doing this, saying that when inflation had been brought back to the Bank’s target, it would be "much more difficult, I think, for firms to use high inflation as cover for this sort of putting up their prices."</p> <h2>A political solution is needed</h2> <p>Ultimately, our own vigilance won’t be enough. We will need political help. The government’s recently announced <a href="https://treasury.gov.au/review/competition-review-2023">competition review</a> might be a step in this direction.</p> <p>The legislative changes should police business practices and prioritise fairness. Only then can we create a marketplace where ethics and competition align, ensuring both business prosperity and consumer wellbeing.</p> <p>This isn’t just about economics, it’s about building a fairer, more sustainable Australia.</p> <p><a href="https://theconversation.com/profiles/david-tuffley-13731"><em>David Tuffley</em></a><em>, Senior Lecturer in Applied Ethics &amp; CyberSecurity, <a href="https://theconversation.com/institutions/griffith-university-828">Griffith University</a></em></p> <p><em>Image credits: Getty Images </em></p> <p><em>This article is republished from <a href="https://theconversation.com">The Conversation</a> under a Creative Commons license. Read the <a href="https://theconversation.com/why-prices-are-so-high-8-ways-retail-pricing-algorithms-gouge-consumers-223310">original article</a>.</em></p>

Money & Banking


Feed me: 4 ways to take control of social media algorithms and get the content you actually want

<p><em><a href="https://theconversation.com/profiles/marc-cheong-998488">Marc Cheong</a>, <a href="https://theconversation.com/institutions/the-university-of-melbourne-722">The University of Melbourne</a></em></p> <p>Whether it’s Facebook’s News Feed or TikTok’s For You page, social media algorithms are constantly making behind-the-scenes decisions to boost certain content – giving rise to the “curated” feeds we’ve all become accustomed to.</p> <p>But does anyone actually know how these algorithms work? And, more importantly, is there a way to “game” them to see more of the content you want?</p> <h2>Optimising for engagement</h2> <p>In broader computing terms, an algorithm is simply a set of rules that specifies a particular computational procedure.</p> <p>In a social media context, algorithms (specifically “recommender algorithms”) determine everything from what you’re likely to read, to whom you’re likely to follow, to whether a specific post appears in front of you.</p> <p>Their main goal is to <a href="https://arxiv.org/abs/2304.14679">sustain your attention</a> for as long as possible, in a process called “optimising for engagement”. The more you engage with content on a platform, the more effectively that platform can commodify your attention and target you with ads: its main revenue source.</p> <p>One of the earliest social media <a href="https://mashable.com/archive/facebook-news-feed-evolution">feed algorithms</a> came from Facebook in the mid-2000s. It can be summarised in one sentence "Sort all of the user’s friend updates – including photos, statuses and more – in reverse chronological order (newer posts first)."</p> <p>Since then, algorithms have become much more powerful and nuanced. They now take myriad factors into consideration to determine how content is promoted. For instance, Twitter’s “For You” recommendation algorithm is based on a neural network that uses <a href="https://blog.twitter.com/engineering/en_us/topics/open-source/2023/twitter-recommendation-algorithm">about 48 million parameters</a>!</p> <h2>A black box</h2> <p>Imagine a hypothetical user named Basil who follows users and pages that primarily discuss <em>space</em>, <em>dog memes</em> and <em>cooking</em>. Social media algorithms might give Basil recommendations for T-shirts featuring puppies dressed as astronauts.</p> <p>Although this might seem simple, algorithms are typically “black boxes” that have their inner workings hidden. It’s in the interests of tech companies to keep the recipe for their “secret sauce”, well, a secret.</p> <p>Trying to “game” an algorithm is like trying to solve a 3D box puzzle without any instructions and without being able to peer inside. 
<figure class="align-center"><em><img src="https://images.theconversation.com/files/525271/original/file-20230510-27-qte7k8.jpeg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip" alt="" /></em><figcaption><em><span class="caption">Manipulating social media algorithms isn’t impossible, but it’s still tricky due to how opaque they are.</span> <span class="attribution"><span class="source">Shutterstock</span></span></em></figcaption></figure> <p>Even when an algorithm’s code is revealed to the public – such as <a href="https://blog.twitter.com/en_us/topics/company/2023/a-new-era-of-transparency-for-twitter">when Twitter released</a> the source code for its recommender algorithm in March – it’s not enough to bend it to one’s will.</p> <p>Between the sheer complexity of the code, constant tweaks by developers, and the presence of arbitrary design choices (such as <a href="https://mashable.com/article/twitter-releases-algorithm-showing-it-tracks-elon-musk-tweets">explicitly tracking</a> Elon Musk’s tweets), any claims of being able to perfectly “game” an algorithm should be taken with a pinch of salt.</p> <p>TikTok’s algorithm, in particular, is notoriously powerful yet opaque. A Wall Street Journal investigation found it uses “subtle cues, such as how long you linger on a video” to predict what you’re <a href="https://www.wsj.com/articles/tiktok-algorithm-video-investigation-11626877477">likely to engage with</a>.</p> <h2>So what <em>can</em> you do?</h2> <p>That said, there are some ways you can try to curate your social media to serve you better.</p> <p>Since algorithms are powered by your data and social media habits, a good first step is to change these habits and data – or at least understand how they may be shaping your online experience.</p> <h1>1. Engage with content you trust and want more of</h1> <p>Regardless of the kind of feed you want to create, it’s important to follow reliable sources. 
Basil, who is fascinated by space, knows they would do well to follow NASA and steer clear of users who believe the Moon is made of cheese.</p> <p>Think critically about the accounts and pages you follow, asking <a href="https://guides.lib.uw.edu/research/faq/reliable">questions such as</a> <em>Who is the author of this content? Do they have authority in this topic? Might they have a bias, or an agenda?</em></p> <p>The higher the quality of the content you engage with, the more likely it is that you’ll be recommended similarly valuable content (rather than fake news or nonsense).</p> <p>Also, you can play to the ethos of “optimising for engagement” by engaging more (and for longer) with the kind of content you want to be recommended. That means liking and sharing it, and actively seeking out similar posts.</p> <h1>2. Be stingy with your information</h1> <p>Secondly, you can be parsimonious in providing your data to platforms. Social media companies know more about you than you think – from your location, to your perceived interests, to your activities outside the app, and even the activities and interests of your social circle!</p> <p>If you limit the information you provide about yourself, you limit the extent to which the algorithm can target you. It helps to keep your different social media accounts unlinked, and to avoid using the “Login with Facebook” or “Login with Google” options when signing up for a new account.</p> <h1>3. Use your settings</h1> <p>Adjusting your <a href="https://www.consumerreports.org/privacy/facebook-privacy-settings-a1775535782/">privacy and personalisation settings</a> will further help you avoid being microtargeted through your feed.</p> <p>The “Off-Facebook Activity” <a href="https://www.kaspersky.com.au/blog/what-is-off-facebook-activity/28925/">setting</a> allows you to break the link between your Facebook account and your activities outside of Facebook. Similar options exist for <a href="https://support.tiktok.com/en/account-and-privacy/account-privacy-settings">TikTok</a> and <a href="https://help.twitter.com/en/resources/how-you-can-control-your-privacy">Twitter</a>.</p> <p>Ad blockers and privacy-enhancing browser add-ons can also help. These tools, such as the open-source <a href="https://ublockorigin.com/">uBlock Origin</a> and <a href="https://privacybadger.org/">Privacy Badger</a>, help prevent cookies and marketing pixels from “following” your browsing habits as you move between social media and other websites.</p> <h1>4. Get (dis)engaged</h1> <p>A final piece of advice is to simply disengage with content you don’t want in your feed. This means:</p> <ul> <li>ignoring any posts you aren’t a fan of, or “hiding” them if possible</li> <li>taking mindful breaks to avoid “<a href="https://theconversation.com/doomscrolling-is-literally-bad-for-your-health-here-are-4-tips-to-help-you-stop-190059">doomscrolling</a>”</li> <li>regularly revising who you follow, and making sure this list coincides with what you want from your feed.</li> </ul> <p>So, hypothetically, could Basil unfollow all users and pages unrelated to <em>space</em>, <em>dog memes</em> and <em>cooking</em> to ultimately starve the recommender algorithm of potential ways to distract them?</p> <p>Well, not exactly. Even if they do this, the algorithm won’t necessarily “forget” all their data: it might still exist in caches or backups. 
Because of how complex and pervasive algorithms are, you can’t guarantee control over them.</p> <p>Nonetheless, you shouldn’t let tech giants’ bottom line dictate how you engage with social media. By being aware of how algorithms work, what they’re capable of and what their purpose is, you can make the shift from being a sitting duck for advertisers to an active curator of your own feeds.</p> <figure class="align-center "><em><img src="https://images.theconversation.com/files/498128/original/file-20221129-22-imtnz0.png?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px" srcset="https://images.theconversation.com/files/498128/original/file-20221129-22-imtnz0.png?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=600&amp;h=115&amp;fit=crop&amp;dpr=1 600w, https://images.theconversation.com/files/498128/original/file-20221129-22-imtnz0.png?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=600&amp;h=115&amp;fit=crop&amp;dpr=2 1200w, https://images.theconversation.com/files/498128/original/file-20221129-22-imtnz0.png?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=600&amp;h=115&amp;fit=crop&amp;dpr=3 1800w, https://images.theconversation.com/files/498128/original/file-20221129-22-imtnz0.png?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;h=144&amp;fit=crop&amp;dpr=1 754w, https://images.theconversation.com/files/498128/original/file-20221129-22-imtnz0.png?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=754&amp;h=144&amp;fit=crop&amp;dpr=2 1508w, https://images.theconversation.com/files/498128/original/file-20221129-22-imtnz0.png?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=754&amp;h=144&amp;fit=crop&amp;dpr=3 2262w" alt="" /></em><figcaption></figcaption></figure> <p><em>The Conversation is commissioning articles by academics across the world who are researching how society is being shaped by our digital interactions with each other. <a href="https://theconversation.com/uk/topics/social-media-and-society-125586">Read more here</a><!-- Below is The Conversation's page counter tag. Please DO NOT REMOVE. --><img style="border: none !important; box-shadow: none !important; margin: 0 !important; max-height: 1px !important; max-width: 1px !important; min-height: 1px !important; min-width: 1px !important; opacity: 0 !important; outline: none !important; padding: 0 !important;" src="https://counter.theconversation.com/content/204374/count.gif?distributor=republish-lightbox-basic" alt="The Conversation" width="1" height="1" /><!-- End of code. If you don't see any code above, please get new code from the Advanced tab after you click the republish button. The page counter does not collect any personal data. More info: https://theconversation.com/republishing-guidelines --></em></p> <p><em><a href="https://theconversation.com/profiles/marc-cheong-998488">Marc Cheong</a>, Senior Lecturer of Information Systems, School of Computing and Information Systems; and (Honorary) Senior Fellow, Melbourne Law School, <a href="https://theconversation.com/institutions/the-university-of-melbourne-722">The University of Melbourne</a></em></p> <p><em>Image credits: Getty Images</em></p> <p><em>This article is republished from <a href="https://theconversation.com">The Conversation</a> under a Creative Commons license. Read the <a href="https://theconversation.com/feed-me-4-ways-to-take-control-of-social-media-algorithms-and-get-the-content-you-actually-want-204374">original article</a>.</em></p>

Technology


Can ideology-detecting algorithms catch online extremism before it takes hold?

<p>Ideology has always been a critical element in understanding how we view the world, form opinions and make political decisions. </p> <p>However, the internet has revolutionised the way opinions and ideologies spread, leading to new forms of online radicalisation. Far-right ideologies, which advocate for ultra-nationalism, racism and opposition to immigration and multiculturalism, have proliferated on social platforms.</p> <p>These ideologies have strong links with violence and terrorism. In recent years, <a href="https://www.asio.gov.au/sites/default/files/2022-02/ASIO_Annual_Report_2020-21.pdf">as much as 40%</a> of the caseload of the Australian Security Intelligence Organisation (ASIO) was related to far-right extremism. This has <a href="https://www.abc.net.au/news/2023-02-13/right-wing-terror-threat-declines-says-asio/101965964">declined</a>, though, with the easing of COVID restrictions.</p> <p>Detecting online radicalisation early could help prevent far-right ideology-motivated (and potentially violent) activity. To this end, we have developed a <a href="https://arxiv.org/abs/2208.04097">completely automatic system</a> that can determine the ideology of social media users based on what they do online.</p> <h2>How it works</h2> <p>Our proposed pipeline is based on detecting the signals of ideology from people’s online behaviour. </p> <p>There is no way to directly observe a person’s ideology. However, researchers can observe “ideological proxies” such as the use of political hashtags, retweeting politicians and following political parties.</p> <p>But using ideological proxies requires a lot of work: you need experts to understand and label the relationships between proxies and ideology. This can be expensive and time-consuming. </p> <p>What’s more, online behaviour and contexts change between countries and social platforms. They also shift rapidly over time. This means even more work to keep your ideological proxies up to date and relevant.</p> <h2>You are what you post</h2> <p>Our pipeline simplifies this process and makes it automatic. It has two main components: a “media proxy”, which determines ideology via links to media, and an “inference architecture”, which helps us determine the ideology of people who don’t post links to media.</p> <p>The media proxy measures the ideological leaning of an account by tracking which media sites it posts links to. Posting links to Fox News would indicate someone is more likely to lean right, for example, while linking to the Guardian indicates a leftward tendency. </p> <p>To categorise the media sites users link to, we took the left-right ratings for a wide range of news sites from two datasets (though many are available). One was <a href="https://reutersinstitute.politics.ox.ac.uk/our-research/digital-news-report-2018">based on a Reuters survey</a> and the other curated by experts at <a href="https://www.allsides.com/media-bias/ratings">Allsides.com</a>. </p> <p>This works well for people who post links to media sites. However, most people don’t do that very often. So what do we do about them?</p> <p>That’s where the inference architecture comes in. In our pipeline, we determine how ideologically similar people are to one another with three measures: the kind of language they use, the hashtags they use, and the other users whose content they reshare.</p> <p>Measuring similarity in hashtags and resharing is relatively straightforward, but such signals are not always available. Language use is the key: it is always present, and a known indicator of people’s latent psychological states.</p>
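<p>As a rough illustration of the “media proxy” described above, the sketch below averages left-right ratings over the outlets a user links to. The outlet scores, the scale and the example user are invented for this snippet; the real pipeline uses the curated ratings from the Reuters Institute and AllSides datasets mentioned earlier.</p>

```python
# Illustrative only: these outlet scores (-1 = left, +1 = right) are invented
# for the example, not the Reuters Institute or AllSides ratings.
OUTLET_LEANING = {
    "theguardian.com": -0.6,
    "foxnews.com": 0.7,
    "reuters.com": 0.0,
    "breitbart.com": 0.9,
}

def media_proxy(shared_urls):
    """Average the leaning of every recognised outlet a user has linked to."""
    scores = [leaning
              for url in shared_urls
              for outlet, leaning in OUTLET_LEANING.items()
              if outlet in url]
    if not scores:
        return None   # no media links: fall back to the inference architecture
    return sum(scores) / len(scores)

user_links = [
    "https://www.foxnews.com/politics/some-story",
    "https://www.breitbart.com/another-story",
]
print(media_proxy(user_links))   # roughly 0.8 on this toy scale: a strong rightward lean
```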
<p>Using machine-learning techniques, we found that people with different ideologies use different kinds of language. </p> <p>Right-leaning individuals tend to use moral language relating to vice (for example, harm, cheating, betrayal, subversion and degradation), as opposed to virtue (care, fairness, loyalty, authority and sanctity), more than left-leaning individuals. Far-right individuals use grievance language (involving violence, hate and paranoia) significantly more than moderates. </p> <p>By detecting these signals of ideology, our pipeline can identify and understand the psychological and social characteristics of extreme individuals and communities.</p> <h2>What’s next?</h2> <p>The ideology detection pipeline could be a crucial tool for understanding the spread of far-right ideologies and preventing violence and terrorism. By detecting signals of ideology from user behaviour online, the pipeline serves as an early warning system for extreme ideology-motivated activity. It can provide law enforcement with methods to flag users for investigation and intervene before radicalisation takes hold.</p> <p><em>Image credits: Getty Images</em></p> <p><em>This article originally appeared on <a href="https://theconversation.com/can-ideology-detecting-algorithms-catch-online-extremism-before-it-takes-hold-200629" target="_blank" rel="noopener">The Conversation</a>. </em></p>

Technology


Can an algorithm assess Trump’s control over discourse?

<div> <div class="copy"> <p>Controversial former US president Donald Trump will always be remembered for his prolific and volatile <a rel="noreferrer noopener" href="https://cosmosmagazine.com/people/society/in-disasters-twitter-influencers-are-out-tweeted/" target="_blank">twitter </a>presence, but it’s difficult to assess, on the basis of a social media site with billions of tweets and users, how much influence these messages have actually had on public opinion.</p> <p>To find out, researchers have conducted a computational analysis of the many phrases found in Trump’s tweets between 2016 and 2021, looking for answers about how powerful the former president’s influence was over public narratives at that time.</p> <p>The study, led by Peter Dodds of the University of Vermont, Burlington, US, is published today in <em>PLOS ONE.</em></p> <p>The researchers developed a novel computational method for analysing tweets in order to build timelines of stories on a given subject. They analysed all tweets related to Trump spanning the five-year study period, applying their algorithms to measure the temporal dynamics – the fluctuating relevance over time – of stories, as represented by words or short phrases, like “Hillary” and “travel ban”.</p> <p>They noted that the turbulence of a story – how quickly it declined in dominance as new stories arose – varied over time and by topic. Trump’s first year in office, 2017, was the most turbulent, with a myriad of dominant stories like “Russia” and “Comey”.</p> <p>Turbulence declined in 2018 onwards, with stories enduring for longer periods, including 2018’s “Mueller” and 2020’s “Covid-19”. Turbulence spiked with 2020’s Black Lives Matter protests, the 2020 election and 2021’s Capitol riot.</p> <p>“In 2020, story turbulence around Trump exploded with the start of the COVID-19 pandemic, the murder of George Floyd, and the presidential election,” the authors write, “but also ground to a halt as these stories dominated for long stretches.”</p> <p>So, what does this all mean? The persistence of some stories over others could suggest higher social relevance, and, crucially, the authors note their technique as a way of measuring the zeitgeist and its attitudes over time in a large-scale, systematic way, with implications for recorded history, journalism, economics and more.</p> <p>The researchers say their analysis was also able to measure how much Trump controlled the narrative of each story, based on how much his tweets were retweeted, with his tweets about “Fake news” and “Minneapolis” retweeted far more than those about “coronavirus” and “Jeffrey Epstein”, for example. 
However, retweets may not be a measure of influence so much as a measure of social relevance; people tend to share posts about issues they care about the most, and may still implicitly agree with Trump’s many other narratives.</p> <p>It’s also worth noting that <a rel="noreferrer noopener" href="https://www.pewresearch.org/internet/2019/04/24/sizing-up-twitter-users/" target="_blank">Twitter is not a microcosm of real life</a>; the site’s most vocal users are often particularly political and engaged either in very left-wing or right-wing narratives, and users are also of a narrower age bracket than the general public.</p> <em>Image credits: Getty Images</em></div> <div id="contributors"> <p><em>This article was originally published on <a rel="noopener" href="https://cosmosmagazine.com/technology/computing/can-an-algorithm-assess-trumps-control-over-discourse/" target="_blank">cosmosmagazine.com</a> and was written by Cosmos. </em></p> </div> </div>

Technology


Crime-fighting algorithm to take up the battle against illegal drugs?

<div> <div class="copy"> <p>he answer to drug forensics might be AI, according to a new <a rel="noreferrer noopener" href="https://dx.doi.org/10.1038/s42256-021-00407-x" target="_blank">report</a> published in <em>Nature Machine Intelligence.</em></p> <p>Researchers from the University of British Columbia (UBC), Canada, have trained a computer to predict <a rel="noreferrer noopener" href="https://cosmosmagazine.com/people/high-times-at-new-years/" target="_blank">designer drugs</a> based on specific common molecules, even before the drugs hit the market.</p> <p>Clandestine chemists are constantly manufacturing new and dangerous psychoactive drugs that law enforcement agencies struggle to keep up with. Many of these designer drugs can lead to irreparable mental damage and/or even death.</p> <p>“The vast majority of these designer drugs have never been tested in humans and are completely unregulated,” says author Dr Michael Skinnider. “They are a major public health concern to emergency departments across the world.”</p> <h2>The algorithm behind drug forensics</h2> <p>The algorithm used by the computer, called deep neural network, generated 8.9 million potential designer drugs that could be identified from a unique molecular make-up if they popped up in society.</p> <p>The researchers then compared this data set to newly emerging designer drugs and found that 90% of the 196 new drugs were in the predicted data set.</p> <p>“The fact that we can predict what designer drugs are likely to emerge on the market before they actually appear is a bit like the 2002 sci-fi movie, Minority Report<em>,</em> where foreknowledge about criminal activities about to take place helped significantly reduce crime in a future world,” explains senior author Dr David Wishart from the University of Alberta, Canada.</p> <p>“Essentially, our software gives law enforcement agencies and public health programs a head start on the clandestine chemists, and lets them know what to be on the lookout for.”</p> <p>With this level of prediction, forensic scanning of drugs can be cut from months to days.</p> <p>The algorithm also learned which molecules were more and less likely to appear.</p> <p>“We wondered whether we could use this probability to determine what an unknown drug is—based solely on its mass—which is easy for a chemist to measure for any pill or powder using mass spectrometry,” says UBC’s Dr Leonard Foster, an internationally recognised expert on mass spectrometry.</p> <p>Using only mass, the algorithm was able to correctly identify the molecular structure of an unknown drug in a single guess around 50% of the time, but the accuracy increased to 86% as more measurements were considered.</p> <p>“It was shocking to us that the model performed this well, because elucidating entire chemical structures from just an accurate mass measurement is generally thought to be an unsolvable problem,” says Skinnider. “And narrowing down a list of billions of structures to a set of 10 candidates could massively accelerate the pace at which new designer drugs can be identified by chemists.”</p> <p>The researchers say this AI could also help identify other new molecules, such as in <a rel="noreferrer noopener" href="https://cosmosmagazine.com/health/new-test-for-performance-enhancing-drug-cheats/" target="_blank">sports doping</a> or novel molecules in the blood and urine.</p> <p>“There is an entire world of chemical ‘dark matter’ just beyond our fingertips right now,” says Skinnider. 
<p>“There is an entire world of chemical ‘dark matter’ just beyond our fingertips right now,” says Skinnider. “I think there is a huge opportunity for the right AI tools to shine a light on this unknown chemical world.”</p> <em>Image credits: Getty Images</em></div> <div id="contributors"> <p><em>This article was originally published on <a rel="noopener" href="https://cosmosmagazine.com/technology/ai/crime-fighting-algorithm-to-take-up-the-battle-against-illegal-drugs/" target="_blank">cosmosmagazine.com</a> and was written by Deborah Devis. </em></p> </div> </div>

Technology


Algorithms predicting parole outcomes

<div> <div class="copy"> <p><span style="font-family: inherit;">The US has the highest incarceration rate in the world, which results in overcrowded prisons and all the additional violence that implies.</span></p> <p>Funnelling felons back onto the street through granting parole is thus a critical safety mechanism and management tool – but assessing which inmates will likely not reoffend when granted liberty is a difficult and troubling task.</p> <p>For some years now, the people responsible for calculating the chances of someone reoffending have been assisted in their decision-making by computational frameworks known as risk-assessment instruments (RAIs).</p> <p>The validity of these algorithms was thrown into question in 2018 after a <a rel="noopener" href="https://advances.sciencemag.org/content/4/1/eaao5580" target="_blank">major study</a> tested their predictive power against that of untrained humans. The machines and the people were given brief information on 400 inmates, including sex, age, current charge and prior convictions, and asked to make a determination.</p> <p><span style="font-family: inherit;">Both cohorts made the correct call in 65% of cases, which was pretty perceptive on the part of the untrained humans, but rather ordinary for the algorithms, given what was at stake.</span></p> <p>Now a new <a rel="noopener" href="https://advances.sciencemag.org/content/6/7/eaaz0652" target="_blank">study</a>, led by Sharad Goel, a computational social scientist at Stanford University, US, has repeated and extended the earlier research, and finds in favour of the software.</p> <p><span style="font-family: inherit;">In the first phase of the research, Goel and colleagues replicated the previous work, and came up with similar results. They then repeated the exercise with several additional variables in play – a situation, they suggest, that much better resembles real-world conditions.</span></p> <p>With the extra information, the algorithms performed much better, correctly predicting recidivism in 90% of cases. The humans got it right only 60% of the time.</p> <p>“Risk assessment has long been a part of decision-making in the criminal justice system,” says co-author Jennifer Skeem.</p> <p>“Although recent debate has raised important questions about algorithm-based tools, our research shows that in contexts resembling real criminal justice settings, risk assessments are often more accurate than human judgment in predicting recidivism.</p> <p>That’s consistent with a long line of research comparing humans to statistical tools.”</p> <p>In their paper, published in the journal Science Advances, the researchers say the more accurate RAI results will be helpful in the management of the over-burdened US penal system.</p> <p><span style="font-family: inherit;">The algorithm will be useful not only in helping to decide which inmates can be safely released into the community but will also assist in allocating prisoners too low or high security facilities.</span></p> <em>Image credit: Shutterstock</em></div> <div id="contributors"> <p><em>This article was originally published on <a rel="noopener" href="https://cosmosmagazine.com/technology/algorithms-getting-better-at-predicting-parole-outcomes/" target="_blank">cosmosmagazine.com</a> and was written by Barry Keily.</em></p> </div> </div>

Technology


Artificial intelligence could sway your dating and voting preferences

<div> <div class="copy"> <p>AI algorithms on our computers and smartphones have quickly become a pervasive part of everyday life, with relatively little attention to their scope, integrity, and how they shape our attitudes and behaviours.</p> <p>Spanish researchers have now shown experimentally that people’s voting and dating preferences can be manipulated depending on the type of persuasion used.</p> <p>“Every day, new headlines appear in which Artificial Intelligence (AI) has overtaken human capacity in new and different domains,” <a rel="noreferrer noopener" href="https://doi.org/10.1371/journal.pone.0249454" target="_blank">write</a> Ujue Agudo and Helena Matute, from the Universidad de Deusto, in the journal <em>PLOS ONE</em>.</p> <p>“This results in recommendation and persuasion algorithms being widely used nowadays, offering people advice on what to read, what to buy, where to eat, or whom to date,” they add.</p> <p>“[P]eople often assume that these AI judgements are objective, efficient and reliable; a phenomenon known as <em>machine bias</em>.”</p> <p>But increasingly, <a rel="noreferrer noopener" href="https://science.sciencemag.org/content/361/6404/751.full" target="_blank">warning bells</a> are sounding about how people could be influenced on vital issues. Agudo and Matute note, for instance, that companies such as Facebook and Google have been <a rel="noreferrer noopener" href="https://www.theguardian.com/technology/2019/feb/18/a-digital-gangster-destroying-democracy-the-damning-verdict-on-facebook" target="_blank">accused </a>of manipulating democratic elections.</p> <p>And while some people may be wary of explicit attempts to sway their judgements, they could be influenced without realising it.</p> <p>“[I]t is not only a question of whether AI could influence people through explicit recommendation and persuasion, but also of whether AI can influence human decisions through more covert persuasion and manipulation techniques,” the researchers write.</p> <p>“Indeed, some studies show that AI can make use of human heuristics and biases in order to manipulate people’s decisions in a subtle way.”</p> <p>A famous <a rel="noreferrer noopener" href="https://www.nature.com/articles/nature11421" target="_blank">experiment</a> on voting behaviour in the US, for instance, showed how Facebook messages swayed political opinions, information seeking and votes of more than 61 million people in 2010, a phenomenon they say was demonstrated again in 2012 elections.</p> <p>In another example, <a rel="noreferrer noopener" href="https://www.pnas.org/content/pnas/112/33/E4512.full.pdf" target="_blank">manipulating the order </a>of political candidates in search engines or boosting someone’s profile to <a rel="noreferrer noopener" href="https://core.ac.uk/display/132807884" target="_blank">enhance their familiarity </a>and credibility are other covert ploys that can funnel votes to selected candidates.  
</p> <p>Worryingly, as Agudo and Matute point out, these strategies tend to go unnoticed, so that people are likely to think they made their own minds up and don’t realise they’ve been played.</p> <p>Yet public research on the impact of these influences is way behind the private sector.</p> <p>“Companies with potential conflicts of interest are conducting private behavioural experiments and accessing the data of millions of people without their informed consent,” they write, “something unthinkable for the academic research community.”</p> <p>While some studies have shown that AI can influence people’s moods, friendships, dates, activities and prices paid online, as well as political preferences, research is scarce, the pair says, and has not disentangled explicit and covert influences.</p> <p>To help address this, they recruited more than 1300 people online for a series of experiments to investigate how explicit and covert AI algorithms influence their choice of fictitious political candidates and potential romantic partners.</p> <p>Results showed that explicit, but not covert, recommendation of candidates swayed people’s votes, while secretly manipulating their familiarity with potential partners influenced who they wanted to date.</p> <p>Although these results held up under various approaches, the researchers note the possibilities are vast. “The number of variables that might be changed, and the number of biases that an algorithm could exploit is immense,” they write.</p> <p>“It is important to note, however, that the speed with which human academic scientists can perform new experiments and collect new data is very slow, as compared to the easiness with which many AI companies and their algorithms are already conducting experiments with millions of human beings on a daily basis through the internet.”</p> <p>Private companies have immense resources and are unfettered in their pursuit of the most effective algorithms, they add. “Therefore, their ability to influence decisions both explicitly and covertly is certainly much higher than shown in the present research.”</p> <p>The pair draws attention to the European Union’s Ethics Guidelines for Trustworthy AI and DARPA’s explainable AI program as examples of initiatives to increase people’s trust of AI. But they assert that won’t address the dearth of information on how algorithms can manipulate people’s decisions.</p> <p>“Therefore, a human-centric approach should not only aim to establish the critical requirements for AI’s trustworthiness,” they write, “but also to minimise the consequences of that trust on human decisions and freedom.</p> <p>“It is of critical importance to educate people against following the advice of algorithms blindly,” they add, as well as public discussion on who should own the masses of data which are used to create persuasive algorithms.</p> <em>Image credits: Shutterstock            <!-- Start of tracking content syndication. 
--> </em></div> <div id="contributors"> <p><em>This article was originally published on <a rel="noopener" href="https://cosmosmagazine.com/technology/artificial-intelligence-could-sway-your-dating-and-voting-preferences/" target="_blank">cosmosmagazine.com</a> and was written by Natalie Parletta.</em></p> </div> </div>

Technology


How Netflix affects what we watch and who we are – and it’s not just the algorithm

<p>Netflix’s dystopian Korean drama Squid Game has become the streaming platform’s <a href="https://www.independent.co.uk/arts-entertainment/tv/news/squid-game-netflix-most-watched-bridgerton-b1937363.html">biggest-ever series launch</a>, with 111 million viewers watching at least two minutes of an episode.</p> <p>Out of the thousands of programmes available on Netflix globally, how did so many people end up watching the same show? The easy answer is <a href="https://www.palgrave.com/gp/book/9781137270047">an algorithm</a> – a computer program that offers us personalised recommendations on a platform based on our data and that of other users.</p> <p>Streaming platforms like Netflix, Spotify and Amazon Prime have undoubtedly <a href="https://books.emeraldinsight.com/page/detail/Streaming-Culture/?k=9781839827730">reshaped the way</a> we consume media, primarily by massively increasing the film, music and TV available to viewers.</p> <p>How do we cope with so many options? Services like Netflix <a href="https://doi.org/10.1111/1467-8675.12568">use algorithms</a> to <a href="https://books.emeraldinsight.com/page/detail/The-Quirks-of-Digital-Culture/?k=9781787699168">guide our attention</a> in certain directions, organising content and keeping us active on the platform. As soon as we open the app the personalisation processes begin.</p> <p>Our cultural landscape is now automated rather than simply being a product of our previous <a href="https://www.routledge.com/Culture-Class-Distinction/Bennett-Savage-Silva-Warde-Gayo-Cal-Wright/p/book/9780415560771">experiences, background and social circles</a>. These algorithms don’t just respond to our tastes, they also <a href="https://www.palgrave.com/gp/book/9781137270047">shape and influence them</a>.</p> <p>But focusing too much on the algorithm misses another important cultural transformation that has happened. To make all this content manageable, streaming platforms have introduced new ways of organising culture for us. The categories used to label culture into genres have always been important, but they took on new forms and power with streaming.</p> <h2>Classifying our tastes</h2> <p>The possibilities of streaming have inspired a new “<a href="https://doi.org/10.1177%2F1749975512473461">classificatory imagination</a>”. I coined this term to describe how viewing the world through <a href="https://mitpress.mit.edu/books/sorting-things-out">genres, labels and categories</a> helps shape our own identities and <a href="https://www.penguinrandomhouse.com/books/55037/the-order-of-things-by-michel-foucault/">sense of place</a> in the world.</p> <p>While 50 years ago, you might have discovered a handful of music genres through friends or by going to the record shop, the advent of streaming has brought classification and genre to our media consumption on a grand scale. Spotify alone has over <a href="https://www.papermag.com/spotify-wrapped-music-genres-escape-room-2649122474.html?rebelltitem=21#rebelltitem21">five thousand music genres</a>. Listeners also come up with their own genre labels when creating playlists. We are constantly fed new labels and categories as we consume music, films and television.</p> <p>Thanks to these categories, our tastes can be more specific and eclectic, and our identities more fluid. These personalised recommendations and algorithms can also shape our tastes. My own personalised end-of-year review from Spotify told me that “chamber psych” – a category I’d never heard of – was my second-favourite genre. 
I found myself searching to find out what it was, and to discover the artists attached to it.</p> <p>These hyper-specific categories are created and stored in metadata – the behind-the-scenes codes that support platforms like Spotify. They are the basis for personalised recommendations, and they help decide what we consume. If we think of Netflix as a vast archive of TV and film, the way it is organised through metadata decides what is discovered from within it.</p> <p>On Netflix, the <a href="https://www.whats-on-netflix.com/news/the-netflix-id-bible-every-category-on-netflix/">thousands of categories</a> range from familiar film genres like horror, documentary and romance, to the hyper-specific “campy foreign movies from the 1970s”.</p> <p>While Squid Game is labelled with the genres “Korean, TV thrillers, drama” to the public, there are thousands of more specific categories in Netflix’s metadata that are shaping our consumption. The personalised homepage uses algorithms to offer you certain genre categories, as well as specific shows. Because most of it is in the metadata, we may not be aware of what categories are being served to us.</p> <p><iframe width="440" height="260" src="https://www.youtube.com/embed/mBNt-cLjXwc?wmode=transparent&amp;start=0" frameborder="0" allowfullscreen=""></iframe></p> <p>Take Squid Game – achieving such a large launch may be partly down to the algorithmic promotion of widely watched content. Its success is an example of how algorithms can reinforce what is already popular. As on social media, once a trend starts to catch on, algorithms can direct even more attention toward it. Netflix categories do this too, telling us what programmes are trending or popular in our local area.</p> <h2>Who is in control?</h2> <p>As everyday media consumers, we are still at the edge of what we understand about the workings and potential of these recommendation algorithms. We should also consider some of the potential consequences of the classificatory imagination.</p> <p>The classification of culture could shut us off from certain categories or voices – this can be limiting or even harmful, as is the case with how misinformation is spread on social media.</p> <p>Our <a href="https://www.routledge.com/Culture-Class-Distinction/Bennett-Savage-Silva-Warde-Gayo-Cal-Wright/p/book/9780415560771">social connections</a> are also profoundly shaped by the culture we consume, so these labels can ultimately affect who we interact with.</p> <p>The positives are obvious – personalised recommendations from Netflix and Spotify help us find exactly what we like in an incomprehensible number of options. The question is: who decides what the labels are, what gets put into these boxes and, therefore, what we end up watching, listening to and reading?</p>
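<p>A cut-down illustration of how metadata categories can drive recommendations: each title carries a set of tags, a taste profile is built from whatever has already been watched, and unwatched titles are ranked by how many tags they share with that profile. The catalogue, tags and scoring below are invented for the example and are not Netflix’s real metadata.</p>

```python
# Invented catalogue: each title carries a set of metadata category tags.
catalogue = {
    "Squid Game": {"korean", "tv thrillers", "drama"},
    "Oldboy": {"korean", "thrillers", "cult"},
    "The Crown": {"drama", "british", "period"},
    "Creature Double Feature": {"campy foreign movies from the 1970s", "horror"},
}

watched = ["Squid Game", "The Crown"]

# Build a taste profile from the tags of everything already watched...
profile = set().union(*(catalogue[title] for title in watched))

# ...then rank unwatched titles by how many tags they share with that profile.
recommendations = sorted(
    (title for title in catalogue if title not in watched),
    key=lambda title: len(catalogue[title] & profile),
    reverse=True,
)
print(recommendations)   # titles overlapping the profile ('korean', 'drama', ...) rank first
```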
<p><span><a href="https://theconversation.com/profiles/david-beer-149528">David Beer</a>, Professor of Sociology, <em><a href="https://theconversation.com/institutions/university-of-york-1344">University of York</a></em></span></p> <p>This article is republished from <a href="https://theconversation.com">The Conversation</a> under a Creative Commons license. Read the <a href="https://theconversation.com/how-netflix-affects-what-we-watch-and-who-we-are-and-its-not-just-the-algorithm-169897">original article</a>.</p> <p><em>Image: Shutterstock</em></p>

TV


Do algorithms erode our ability to think?

<div class="copy"> <p>Have you ever watched a video or movie because YouTube or Netflix recommended it to you?</p> <p>Or added a friend on Facebook from the list of “people you may know”?</p> <p>And how does Twitter decide which tweets to show you at the top of your feed?</p> <p>These platforms are driven by algorithms, which rank and recommend content for us based on our data.</p> <p>As Woodrow Hartzog, a professor of law and computer science at Northeastern University, Boston, <a rel="noreferrer noopener" href="https://www.abc.net.au/news/science/2018-04-30/how-the-internet-tricks-you-out-of-privacy-deceptive-design/9676708" target="_blank">explains</a>: “If you want to know when social media companies are trying to manipulate you into disclosing information or engaging more, the answer is always.”</p> <p>So if we are making decisions based on what’s shown to us by these algorithms, what does that mean for our ability to make decisions freely?</p> <h3>What we see is tailored for us</h3> <p>An algorithm is a digital recipe: a list of rules for achieving an outcome, using a set of ingredients.</p> <p>Usually, for tech companies, that outcome is to make money by convincing us to buy something or keeping us scrolling in order to show us more advertisements.</p> <p>The ingredients used are the data we provide through our actions online – knowingly or otherwise.</p> <p>Every time you like a post, watch a video, or buy something, you provide data that can be used to make predictions about your next move.</p> <p>These algorithms can influence us, even if we’re not aware of it. As the New York Times’ <a rel="noreferrer noopener" href="https://www.nytimes.com/2020/04/22/podcasts/rabbit-hole-prologue.html" target="_blank">Rabbit Hole podcast</a> explores, YouTube’s recommendation algorithms can drive viewers to <a rel="noreferrer noopener" href="https://www.theguardian.com/technology/2018/feb/02/how-youtubes-algorithm-distorts-truth" target="_blank">increasingly extreme content</a>, potentially leading to online radicalisation.</p> <p>Facebook’s News Feed algorithm ranks content to keep us engaged on the platform.</p> <p>It can produce a phenomenon called “<a rel="noreferrer noopener" href="https://www.pnas.org/content/111/24/8788/tab-article-info" target="_blank">emotional contagion</a>”, in which seeing positive posts leads us to write positive posts ourselves, and seeing negative posts means we’re more likely to craft negative posts — though this study was <a rel="noreferrer noopener" href="https://www.pnas.org/content/111/29/10779.1" target="_blank">controversial</a> partially because the effect sizes were small.</p> <p>Also, so-called “<a rel="noreferrer noopener" href="https://www.abc.net.au/news/science/2018-04-30/how-the-internet-tricks-you-out-of-privacy-deceptive-design/9676708" target="_blank">dark patterns</a>” are designed to trick us into sharing more, or <a rel="noreferrer noopener" href="https://econsultancy.com/three-dark-patterns-ux-big-brands-and-why-they-should-be-avoided/" target="_blank">spending more</a> on websites like Amazon.</p> <p>These are tricks of website design such as hiding the unsubscribe button, or showing how many people are buying the product you’re looking at <em>right now</em>.</p> <p>They subconsciously nudge you towards actions the site would like you to take.</p> <h3>You are being profiled</h3> <p>Cambridge Analytica, the company involved in the largest known Facebook data leak to date, claimed to be able to <a rel="noreferrer noopener" 
href="https://www.newyorker.com/news/news-desk/cambridge-analytica-and-the-perils-of-psychographics" target="_blank">profile your psychology</a> based on your “likes”.</p> <p>These profiles could then be used to target you with political advertising.</p> <p>“Cookies” are small pieces of data which track us across websites.</p> <p>They are records of actions you’ve taken online (such as links clicked and pages visited) that are stored in the browser.</p> <p>When they are combined with data from multiple sources including from large-scale hacks, this is known as “<a rel="noreferrer noopener" href="https://www.abc.net.au/news/science/2019-12-03/data-enrichment-industry-privacy-breach-people-data-labs/11751786" target="_blank">data enrichment</a>”.</p> <p>It can link our personal data like email addresses to other information such as our education level.</p> <p>These data are regularly used by tech companies like Amazon, Facebook, and others to build profiles of us and predict our future behaviour.</p> <h3>You are being predicted</h3> <p>So, how much of your behaviour can be predicted by algorithms based on your data?</p> <p>Our research, <a href="https://www.nature.com/articles/s41562-018-0510-5">published in </a><em><a rel="noreferrer noopener" href="https://www.nature.com/articles/s41562-018-0510-5" target="_blank">Nature Human Behaviou</a></em><a href="https://www.nature.com/articles/s41562-018-0510-5">r last year</a>, explored this question by looking at how much information about you is contained in the posts your friends make on social media.</p> <p>Using data from Twitter, we estimated how predictable peoples’ tweets were, using only the data from their friends.</p> <p>We found data from eight or nine friends was enough to be able to predict someone’s tweets just as well as if we had downloaded them directly (well over 50% accuracy, see graph below).</p> <p>Indeed, 95% of the potential predictive accuracy that a machine learning algorithm might achieve is obtainable <em>just</em> from friends’ data.</p> <p>Our results mean that even if you #DeleteFacebook (which trended after the <a rel="noreferrer noopener" href="https://www.sbs.com.au/news/deletefacebook-calls-grow-after-cambridge-analytica-data-scandal" target="_blank">Cambridge Analytica scandal in 2018</a>), you may still be able to be profiled, due to the social ties that remain.</p> <p>And that’s before we consider the things about Facebook that make it so <a rel="noreferrer noopener" href="https://theconversation.com/why-its-so-hard-to-deletefacebook-constant-psychological-boosts-keep-you-hooked-92976" target="_blank">difficult to delete</a> anyway.</p> <p>We also found it’s possible to build profiles of <em>non-users</em> — so-called “<a rel="noreferrer noopener" href="https://www.nature.com/articles/s41562-018-0513-2" target="_blank">shadow profiles</a>” — based on their contacts who are on the platform.</p> <p>Even if you have never used Facebook, if your friends do, there is the possibility a shadow profile could be built of you.</p> <p>On social media platforms like Facebook and Twitter, privacy is no longer tied to the individual, but to the network as a whole.</p> <h3>No more free will? Not quite</h3> <p>But all hope is not lost. 
If you do delete your account, the information contained in your social ties with friends grows stale over time.</p> <p>We found predictability gradually declines to a low level, so your privacy and anonymity will eventually return.</p> <p>While it may seem like algorithms are eroding our ability to think for ourselves, it’s not necessarily the case.</p> <p>The evidence on the effectiveness of psychological profiling to influence voters <a rel="noreferrer noopener" href="https://www.nytimes.com/2017/03/06/us/politics/cambridge-analytica.html" target="_blank">is thin</a>.</p> <p>Most importantly, when it comes to the role of people versus algorithms in things like spreading (mis)information, people are just as important.</p> <p>On Facebook, the extent of your exposure to diverse points of view is more closely related <a rel="noreferrer noopener" href="https://science.sciencemag.org/content/348/6239/1130" target="_blank">to your social groupings</a> than to the way News Feed presents you with content.</p> <p>And on Twitter, while “fake news” may spread faster than facts, it is <a rel="noreferrer noopener" href="https://science.sciencemag.org/content/359/6380/1146" target="_blank">primarily people who spread it</a>, rather than bots.</p> <p>Of course, content creators exploit social media platforms’ algorithms to promote content on <a rel="noreferrer noopener" href="https://theconversation.com/dont-just-blame-youtubes-algorithms-for-radicalisation-humans-also-play-a-part-125494" target="_blank">YouTube</a>, <a rel="noreferrer noopener" href="https://theconversation.com/dont-just-blame-echo-chambers-conspiracy-theorists-actively-seek-out-their-online-communities-127119" target="_blank">Reddit</a> and other platforms, not just the other way round.</p> <p><em>Image credit: Shutterstock</em></p> <p><em>This article was originally published on <a rel="noopener" href="https://cosmosmagazine.com/people/behaviour/are-algorithms-eroding-our-ability-to-think/" target="_blank">cosmosmagazine.com</a> and was written by The Conversation.</em></p> </div>
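<p>To make the friend-based prediction result more concrete, here is a minimal, hypothetical sketch in Python. It is not the study's actual information-theoretic method, and the posts are invented: it simply measures how much of a person's own wording already appears in their friends' posts, a crude stand-in for the predictability the researchers quantified.</p>
<pre><code>
# Toy illustration of "your friends' posts can predict your posts".
# Data and method are invented for illustration only.
from collections import Counter

friend_posts = [
    "loving the new coffee place on james street",
    "coffee first, then the climate rally this afternoon",
    "anyone else watching the cycling this weekend",
    "great turnout at the rally, photos soon",
]

user_post = "heading to the coffee place before the rally"

def word_frequencies(texts):
    """Relative frequency of each word across a list of posts."""
    counts = Counter(word for text in texts for word in text.split())
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

# Build a simple word model from the friends' posts alone.
friends_model = word_frequencies(friend_posts)

# "Predictability" here is just the share of the user's words that the
# friends-only model has already seen.
user_words = user_post.split()
covered = sum(1 for word in user_words if word in friends_model)
print(f"{covered}/{len(user_words)} of the user's words appear in friends' posts")
</code></pre>
<p>The point the sketch aims at is the article's: the signal lives in the network, not only in your own account, so deleting your account removes your posts but not your friends'.</p>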



What you need to know about YouTube's algorithm system

<p>People watch <a href="https://youtube.googleblog.com/2017/02/you-know-whats-cool-billion-hours.html">more than a billion hours</a> of video on YouTube every day. Over the past few years, the video-sharing platform has <a href="https://www.thedailybeast.com/how-youtube-pulled-these-men-down-a-vortex-of-far-right-hate">come under fire</a> for its role in <a href="https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html">spreading</a> and <a href="https://www.theguardian.com/media/2018/sep/18/report-youtubes-alternative-influence-network-breeds-rightwing-radicalisation">amplifying</a> extreme views.</p> <p>YouTube’s video recommendation system, in particular, has been criticised for radicalising young people and steering viewers down <a href="https://policyreview.info/articles/news/implications-venturing-down-rabbit-hole/1406">rabbit holes</a> of disturbing content.</p> <p>The company <a href="https://youtube.googleblog.com/2019/01/continuing-our-work-to-improve.html">claims</a> it is trying to avoid amplifying problematic content. But <a href="https://dl.acm.org/citation.cfm?doid=3298689.3346997">research</a> from YouTube’s parent company, Google, indicates this is far from straightforward, given the commercial pressure to keep users engaged via ever more stimulating content.</p> <p>But how do YouTube’s recommendation algorithms actually work? And how much are they really to blame for the problems of radicalisation?</p> <p><strong>The fetishisation of algorithms</strong></p> <p>Almost everything we see online is heavily curated. Algorithms decide what to show us in Google’s search results, Apple News, Twitter trends, Netflix recommendations, Facebook’s newsfeed, and even pre-sorted or spam-filtered emails. And that’s before you get to advertising.</p> <p>More often than not, these systems decide what to show us based on their idea of what we are like. They also use information such as what our friends are doing and what content is newest, as well as built-in randomness. All this makes it hard to reverse-engineer algorithmic outcomes to see how they came about.</p> <p>Algorithms take all the relevant data they have and process it to achieve a goal – often one that involves influencing users’ behaviour, such as selling us products or keeping us engaged with an app or website.</p> <p>At YouTube, the “up next” feature is the one that receives most attention, but other algorithms are just as important, including search result rankings, <a href="https://youtube.googleblog.com/2008/02/new-experimental-personalized-homepage.html">homepage video recommendations</a>, and trending video lists.</p> <p><strong>How YouTube recommends content</strong></p> <p>The main goal of the YouTube recommendation system is to keep us watching. And the system works: it is responsible for more than <a href="https://www.cnet.com/news/youtube-ces-2018-neal-mohan/">70% of the time users spend</a> watching videos.</p> <p>When a user watches a video on YouTube, the “up next” sidebar shows videos that are related but usually <a href="https://www.pewinternet.org/2018/11/07/many-turn-to-youtube-for-childrens-content-news-how-to-lessons/">longer and more popular</a>. These videos are ranked according to the user’s history and context, and newer videos are <a href="https://storage.googleapis.com/pub-tools-public-publication-data/pdf/45530.pdf">generally favoured</a>.</p> <p>This is where we run into trouble. 
If more watching time is the central objective, the recommendation algorithm will tend to favour videos that are new, engaging and provocative (a toy sketch of this kind of ranking appears at the end of this article).</p> <p>Yet algorithms are just pieces of the vast and complex sociotechnical system that is YouTube, and there is so far little empirical evidence on their <a href="https://arxiv.org/abs/1908.08313">role</a> in processes of radicalisation.</p> <p>In fact, <a href="https://journals.sagepub.com/doi/full/10.1177/1354856517736982">recent research</a> suggests that instead of thinking about algorithms alone, we should look at how they interact with community behaviour to determine what users see.</p> <p><strong>The importance of communities on YouTube</strong></p> <p>YouTube is a quasi-public space containing all kinds of videos: from musical clips, TV shows and films, to vernacular genres such as “how to” tutorials, parodies, and compilations. User communities that create their own videos and use the site as a social network have played an <a href="https://books.google.com.au/books?id=0NsWtPHNl88C&amp;source=gbs_book_similarbooks">important role</a> on YouTube since its beginning.</p> <p>Today, these communities exist alongside <a href="https://journals.sagepub.com/doi/full/10.1177/1329878X17709098">commercial creators</a> who use the platform to build personal brands. Some of these are far-right figures who have found in YouTube a home to <a href="https://datasociety.net/output/alternative-influence/">push their agendas</a>.</p> <p>It is unlikely that algorithms alone are to blame for the radicalisation of a previously “<a href="https://www.wired.com/story/not-youtubes-algorithm-radicalizes-people/">moderate audience</a>” on YouTube. Instead, <a href="https://osf.io/73jys/">research</a> suggests these radicalised audiences existed all along.</p> <p>Content creators are not passive participants in the algorithmic systems. They <a href="https://journals.sagepub.com/doi/10.1177/1461444819854731">understand how the algorithms work</a> and are constantly improving their <a href="https://datasociety.net/output/data-voids/">tactics</a> to get their videos recommended.</p> <p>Right-wing content creators also know YouTube’s policies well. Their videos are often “borderline” content: they can be interpreted in different ways by different viewers.</p> <p>YouTube’s community guidelines restrict blatantly harmful content such as hate speech and violence. But it’s much harder to police content in the grey areas between jokes and bullying, religious doctrine and hate speech, or sarcasm and a call to arms.</p> <p><strong>Moving forward: a cultural shift</strong></p> <p>There is no magical technical solution to political radicalisation. YouTube is working to minimise the spread of borderline problematic content (for example, conspiracy theories) by <a href="https://youtube.googleblog.com/2019/01/continuing-our-work-to-improve.html">reducing their recommendations</a> of videos that can potentially misinform users.</p> <p>However, YouTube is a company and it’s out to make a profit. It will always prioritise its commercial interests. We should be wary of relying on technological fixes by private companies to solve society’s problems. Plus, quick responses to “fix” these issues might also harm politically edgy communities (such as activists) and minority communities (such as sexuality-related or LGBTQ groups).</p> <p>When we try to understand YouTube, we should take into account the different factors involved in algorithmic outcomes. 
This includes systematic, long-term analysis of what algorithms do, but also how they combine with <a href="https://policyreview.info/articles/news/implications-venturing-down-rabbit-hole/1406">YouTube’s prominent subcultures</a>, their <a href="https://arxiv.org/abs/1908.08313">role</a> in political polarisation, and their <a href="https://datasociety.net/pubs/oh/DataAndSociety_MediaManipulationAndDisinformationOnline.pdf">tactics</a> for managing visibility on the platform.</p> <p>Before YouTube can implement adequate measures to minimise the spread of <a href="https://journals.sagepub.com/doi/pdf/10.1177/0894439314555329">harmful content</a>, it must first understand what cultural norms are thriving on its site – and being amplified by its algorithms.</p> <hr /> <p><em>The authors would like to acknowledge that the ideas presented in this article are the result of ongoing collaborative research on YouTube with researchers Jean Burgess, Nicolas Suzor, Bernhard Rieder, and Oscar Coromina.</em></p> <p><em><a href="https://theconversation.com/profiles/ariadna-matamoros-fernandez-577257">Ariadna Matamoros-Fernández</a>, Lecturer in Digital Media at the School of Communication, <a href="http://theconversation.com/institutions/queensland-university-of-technology-847">Queensland University of Technology</a> and <a href="https://theconversation.com/profiles/joanne-gray-873764">Joanne Gray</a>, Lecturer in Creative Industries, <a href="http://theconversation.com/institutions/queensland-university-of-technology-847">Queensland University of Technology</a></em></p> <p><em>This article is republished from <a href="http://theconversation.com">The Conversation</a> under a Creative Commons license. Read the <a href="https://theconversation.com/dont-just-blame-youtubes-algorithms-for-radicalisation-humans-also-play-a-part-125494">original article</a>.</em></p>
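<p>YouTube's production ranking models are proprietary and far more elaborate than anything shown here, but the logic described above (favour videos that are related to your history, popular and fresh) can be sketched in a few lines. The Python below is a hypothetical toy: the fields, weights and candidate videos are all invented, and the <code>score</code> function is not YouTube's. It is included only to illustrate how optimising a simple watch-time-style score systematically pushes certain kinds of videos to the top of an “up next” list.</p>
<pre><code>
# A cartoon of a watch-time-oriented "up next" ranking, not YouTube's
# actual system. Candidates are scored on relatedness to the viewer's
# history, popularity and freshness, then sorted. All values are invented.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Video:
    title: str
    topics: set
    views: int
    uploaded: datetime

now = datetime.now()
watch_history_topics = {"fitness", "nutrition"}

candidates = [
    Video("10 minute stretch routine", {"fitness"}, 2_000_000, now - timedelta(days=3)),
    Video("what I eat in a day", {"nutrition", "vlog"}, 900_000, now - timedelta(days=1)),
    Video("city council meeting replay", {"politics"}, 4_000, now - timedelta(days=30)),
]

def score(video):
    """Higher score = shown higher in the hypothetical 'up next' list."""
    relatedness = len(video.topics.intersection(watch_history_topics))  # overlap with watch history
    popularity = video.views / 1_000_000                                # crude popularity signal
    freshness = 1 / (1 + (now - video.uploaded).days)                   # newer videos score higher
    return 2.0 * relatedness + 1.0 * popularity + 1.5 * freshness

for video in sorted(candidates, key=score, reverse=True):
    print(f"{score(video):.2f}  {video.title}")
</code></pre>
<p>Even in this toy version, the “city council meeting replay” scores far below the other two for this viewer, which hints at how a score built around engagement, rather than accuracy or diversity, shapes what gets seen.</p>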
