
Opinion | Algorithmic Collusions: Fixing Price Without Human Intervention

New Delhi, India

Profit-maximising algorithms are colluding to fix prices without any human input. With access to vast competitor data, these self-learning bots can manipulate markets on autopilot, rendering traditional price-fixing regulations obsolete.

Dark patterns are everywhere; how algorithms manipulate online behaviour is one of the most heavily researched areas for platform companies. The algorithms nudge, push, coerce, shame, and manipulate decisions. Most of these actions happen without human intervention: it is machines talking to machines, algorithms running on these computing devices collaborating with other algorithms.

Algorithms conspire to push consumers into financially material decisions on platforms. Platforms run many algorithms, some outward-facing and several inward-operating, and they all work together to share data insights and manipulate user behaviour. This manipulation is now pervasive and ubiquitous enough to affect citizens at the societal level; hence, it must be governed through well-defined public policy.

The most prominent example of this misuse is travel booking sites. Once you have selected the location, dates, and hotel, these sites display artificial scarcity: the site claims only one room is left, creating a false sense of urgency and pushing the consumer to finalise the choice immediately. Despite a Ministry of Consumer Affairs rule specifically prohibiting artificial scarcity as a dark pattern, travel sites continue to use it, because the rule is difficult to police at any level. The ministry cannot take direct cognizance of this behaviour, and even if it notices, the behaviour can be altered by a small change in the algorithm's code within seconds. Detecting and building evidence against these dark patterns is therefore almost impossible.
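
To see why a one-line code change can erase the evidence, consider a minimal, hypothetical sketch of how such a scarcity message can be generated entirely independently of real inventory. The function name, message text, and behaviour here are illustrative assumptions, not taken from any actual platform:

```python
# Hypothetical sketch of an artificial-scarcity dark pattern.
# The function name and message wording are illustrative assumptions.

def scarcity_banner(rooms_available: int) -> str:
    """Return an urgency message that understates real availability."""
    if rooms_available == 0:
        return "Sold out - try different dates"
    # Regardless of true inventory, claim near-exhaustion to rush the buyer.
    return "Only 1 room left at this price - book now!"

print(scarcity_banner(rooms_available=27))
# Prints: Only 1 room left at this price - book now!
```

The point of the sketch is that the message is decoupled from actual inventory, and removing or softening it is a one-line edit that takes seconds, which is why building evidence against the practice is so hard.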

Artificial scarcity pressures consumers into immediate buying decisions. It prevents them from considering other options, because the perception of high demand creates a sense of urgency. To further force a decision, the booking site might offer a nominal incentive, such as a small, time-limited coupon. This slight, temporary price reduction can lead consumers to make hasty choices. Such rushed decisions invite errors, with consumers overlooking the fine print or the total price, including potential exclusions. For instance, the booking site’s nominal coupon might apply only after excluding a previously complimentary breakfast. Ultimately, the hotel and the website gain by effectively monetising a service that was once free.

Now, every kind of e-commerce site actively uses dark patterns despite the government’s clear instruction to ban them. The guidelines issued by the Ministry of Consumer Affairs legally define dark patterns as “any practices or deceptive design patterns using UI/UX (user interface/user experience) interactions on any platform; designed to mislead or trick users into doing something they originally did not intend or want to do; by subverting or impairing the consumer autonomy, decision making or choice; amounting to a misleading advertisement or unfair trade practice or violation of consumer right.”

CIPP had earlier argued that static rules would not work in the fast-changing digital world. The failure of these static rules is evident: they have been ignored by every single digital platform, barring none. And with AI usage rising among platforms, several other regulations are being outpaced and circumvented by the engineers behind these digital platforms. The approach CIPP advocated has now been adopted in the draft Digital Markets Competition Bill.

The emerging danger is that platforms are now conspiring, and this collusion extends to price fixing. This price fixing leaves no trail of meetings or discussions and is challenging for the Competition Commission of India (CCI) to prove. The law requires documentary proof of meetings where price fixing was discussed. Such documentary proof is difficult for any regulator or prosecutor to access; hence most price fixing among competitors never really sees any action. In the digital world, collusion is an active, dynamic event, yet it is becoming even harder to prove because there is no manual intervention.

The active collusion happens via algorithms exchanging data. The modus operandi is well defined by platform engineers. It begins with an algorithm that scans and scrapes all the price information on a rival platform's site. This information is then mapped against the platform's own price list, and wherever there is a discrepancy, the price is corrected to reflect the better price. This scanning, scraping, and correction is an active exercise with minimal human intervention, especially when the price difference is small. Now, if the algorithms are capable of self-learning and are programmed with the objective of maximising the platform's profits, they might end up manipulating and colluding with each other.
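
A simplified sketch of this scan-map-correct loop might look like the following. Every name, price, and threshold here (scrape_rival_prices, own_prices, TOLERANCE) is a hypothetical assumption for illustration, not any platform's actual code:

```python
# Minimal sketch of automated price matching between rival platforms.
# All names, prices, and thresholds are hypothetical.

def scrape_rival_prices() -> dict[str, float]:
    # Stand-in for a scraper that reads listings off a rival's site.
    return {"deluxe-room": 4800.0, "standard-room": 3100.0}

own_prices = {"deluxe-room": 5200.0, "standard-room": 3000.0}

TOLERANCE = 0.02  # differences under 2% are left alone; no human is consulted

for item, rival_price in scrape_rival_prices().items():
    ours = own_prices.get(item)
    if ours is None:
        continue
    gap = abs(ours - rival_price) / rival_price
    if gap > TOLERANCE:
        # "Correct" the discrepancy by matching the rival's price.
        own_prices[item] = rival_price

print(own_prices)  # {'deluxe-room': 4800.0, 'standard-room': 3100.0}
```

Run by both sides against each other, a rule like this makes rival prices converge step by step, with no meeting, no email, and no paper trail for a regulator to find.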

They might take no input at all from the humans or enterprises running them. Technically, because there is no human input, are companies still liable for the actions of these algorithms, or can they simply walk away? Companies can deny ownership and liability, claiming they did not know what the algorithms were doing, as their inner workings are opaque even to the programmers.

Therefore, it is important to amend the competition laws to cover actions taken by self-learning algorithms and their collusion with other algorithms. Regulators cannot keep doing the same thing and expect a different result: issuing another set of rules prohibiting such collusion does not work. There is a broader conflict at work here; it reveals how responsibly enterprises will use technology and how tech companies want to be perceived by society.

Tech companies and society

In the ‘90s, the first generation of tech companies were seen as innovators and do-gooders. Google’s early motto was ‘Don’t be evil’, and it carried the phrase into its IPO filing in 2004, almost 20 years ago. The righteousness of that motto reflected a time when tech companies could do no wrong. It was also a way of distancing themselves from the monopolistic behaviour of giants like Microsoft that dominated the narrative in the 1990s. The internet companies wanted to distinguish themselves, and some of them did, for some time at least. Then, as social media and e-commerce companies took over the mantle of internet giants, the do-good positioning came to seem almost childish for a commercial world. It is as if professionals took over the sector from idealistic twenty-somethings and brought in purely commercial considerations. Idealism is seen as naivety in the internet world.

Purpose inspired and attracted intelligent programmers who looked beyond the job. When a company wants to attract not just intelligent people but the most intelligent people on the planet, it has to accept that they will come with a high level of awareness about the impact their work has on society. This has changed very fast in the tech industry over the last twenty years. The massive flow of capital and the speed of growth have created a culture of swashbuckling entrepreneurship that breaks all the rules and cares for little.

The mediocrity of mindless money-making machines has replaced the righteousness of brilliant engineers. This shift from a larger purpose to lesser aims attracts a different kind of individual to tech companies these days. Hence, we see technologically brilliant, morally corrupt crypto-czars who hide their operations and are in and out of jail. The public perception of tech company leaders is also changing rapidly. Technocrat is no longer an aspirational title, even in the Indian political world.

This change is most evident among AI companies, whose models are built on data that does not belong to them. It is no surprise that their engineers see nothing wrong with algorithms that collude to manipulate the whole market. The backlash against manipulation is slow to build, but it simmers under the surface until it becomes a groundswell. At that stage, punitive actions, even by the judiciary, are welcomed by the masses. Smart leadership intuitively understands this flow and self-corrects, but public policy cannot wait for self-correction.

K Yatish Rajawat is a public policy researcher and works at the Gurgaon-based think-and-do tank Centre for Innovation in Public Policy (CIPP). Views expressed in the above piece are personal and solely those of the author. They do not necessarily reflect News18’s views.

First published: May 30, 2024, 12:19 IST
Last updated: May 30, 2024, 12:19 IST