On March 9, economist Ariel Setton shared a screenshot on X: the same product on the Argentine e-commerce platform Mercado Libre was listed at two different prices depending on which account viewed it, a difference of 20%. "Since when does Mercado Libre have dynamic pricing?" he asked in his post. Within hours, hundreds of users replicated the test and shared their own results.
Although the e-commerce giant offers a price automation tool, which it describes as automatically adjusting product prices to stay competitive according to the seller's chosen strategy, a spokesperson for the company founded by Marcos Galperin denied having implemented any personalized pricing system. "At Mercado Libre, there are no prices based on individual users," she said, attributing the discrepancy to tests with "control groups," a common practice on digital platforms for measuring the impact of promotions or commercial strategies.
"Since when does @Mercadolibre have dynamic pricing? Same product, same moment, different user, 20% difference. Is that difference set by MELI or the seller? And for you: what price are you offered?"
— Ariel Setton (@arisetton), February 18, 2026
In the analog economy, applying personalized pricing at scale was all but impossible: no merchant could know with precision how much each customer was willing to pay. Prices had to be, by necessity, general, and the variables were reduced to sizing up a customer by sight, an intuitive and informal practice. But in the digital economy, that restriction has vanished. Platforms track every search, every click, every purchase, every second spent on screen. They know what you look at, what you discard, what you revisit, and when you decide to buy. With that information, prices cease to be a public signal and become a private variable, potentially different for each user. What was once a theoretical hypothesis (charging each person according to their willingness to pay) is now technically possible.
With his post, and without intending to, Setton reignited one of the oldest and most urgent debates in economics, now reactivated by advances in AI applied to e-commerce: what are the macroeconomic consequences of people paying the same, or different, prices for the same good? And, in its more unsettling contemporary version: who decides how much something is worth to you?
Platforms track every search, every click, every purchase, every second spent on screen. They know what you look at, what you discard, what you revisit, and when you decide to buy.
There’s an idea that has permeated economic thought since the mid-19th century: one peso is worth more to a poor person than to a rich person. When you have little, each additional peso solves a basic and urgent need (food, shelter, health). When you have a lot, each additional peso satisfies increasingly less urgent needs. Economists call this the "diminishing marginal utility of money," and it is one of the pillars of modern economic theory. Its uncomfortable consequence is that a uniform price is not neutral. Applying the same burden to someone with little and someone with much produces unequal sacrifices. For two centuries, that argument has been the foundation of progressive taxation. Today, for the first time in history, there is technology available to apply it to market prices as well.
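The intuition can be made concrete with a standard textbook assumption, logarithmic utility (one illustrative choice among several, not a claim about any particular economy):

```python
import math

def marginal_utility(wealth: float, extra: float = 1.0) -> float:
    """Extra utility gained from one more peso, under log utility u(w) = ln(w)."""
    return math.log(wealth + extra) - math.log(wealth)

# The same additional peso is worth roughly a thousand times more
# to someone holding 100 pesos than to someone holding 100,000.
poor = marginal_utility(100)       # about 0.00995
rich = marginal_utility(100_000)   # about 0.00001
print(f"ratio: {poor / rich:.0f}x")
```

Under this assumption, a uniform price extracts a far larger utility sacrifice from the poorer buyer, which is exactly the reasoning behind progressive taxation.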
The problem is that no one is using it for that.
Explanatory video published by Mercado Libre in its Learning Center
This phenomenon, known as price discrimination, occurs when platforms build technological mechanisms to predict what each customer is willing to pay and, accordingly, push that customer's price as high as possible.
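In schematic terms, such a mechanism can be sketched as follows. Everything here is hypothetical and illustrative: the signal names, the weights, and the 0.98 markdown are my own assumptions, not Mercado Libre's or any real platform's model (real systems use machine-learned estimates, not hand-coded rules):

```python
def predict_wtp(base_price: float, signals: dict) -> float:
    """Hypothetical willingness-to-pay estimate from behavioral signals.
    The weights below are illustrative, not from any real system."""
    wtp = base_price
    if signals.get("repeat_views", 0) >= 3:   # kept returning to the listing
        wtp *= 1.10
    if signals.get("premium_device"):          # crude proxy for income
        wtp *= 1.08
    if not signals.get("compared_sellers"):    # never shopped around
        wtp *= 1.05
    return wtp

def personalized_price(base_price: float, signals: dict) -> float:
    """Price just under the predicted willingness to pay."""
    return round(0.98 * predict_wtp(base_price, signals), 2)

casual  = personalized_price(100.0, {"compared_sellers": True})
captive = personalized_price(100.0, {"repeat_views": 5, "premium_device": True})
```

Two users viewing the same listing see different prices: in this sketch the comparison shopper pays 98 while the repeat visitor on an expensive device pays over 122, even though nothing about the product changed.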
Amazon didn’t jump into price discrimination overnight. It was a slow, gradual process, and the first time it was noticed was by accident, thanks to someone outside the company (just as, coincidentally, in the case of Mercado Libre). In 2000, a customer shopping for DVDs discovered something strange: when he deleted his browser cookies, the price of a product dropped. The platform was recognizing him as a frequent customer and charging him more. Amazon had been differentiating DVD prices based on users' demographic data, purchase history, and online behavior. When the practice became public, consumers reacted with outrage and immediately raised the issue of fairness. Amazon hastily issued a statement claiming it had merely been running an experiment with random discounts (again, coincidentally, the same explanation Mercado Libre would offer), and it refunded those who had paid above the average price.
The denial was rushed and unconvincing. But it worked: the controversy died down, and Amazon moved on.
Twenty-five years later, the system is incomparably more sophisticated. Various market analyses based on price monitoring suggest that on platforms like Amazon, prices are adjusted continuously (in some cases several times a day for the same product) taking into account factors such as demand, stock availability, and consumer behavior. It’s not a human who decides how much to charge you: it’s a system that continuously learns from your purchasing patterns, your location, your browsing history, and your willingness to pay.
But the most recent discussion isn’t limited to personalized pricing; it also concerns what happens when the algorithms of different sellers interact with each other. Some research has raised the possibility that, in certain contexts, automated pricing systems may learn strategies that reduce competition, even without explicit coordination between companies.

According to a lawsuit filed by the U.S. Federal Trade Commission (FTC), Amazon developed an internal project called "Project Nessie," designed to determine whether other online retailers were following its prices, and to raise prices on its own products when competitors were likely to match them. According to the FTC, after competitors matched or raised their prices, Amazon kept selling the product at the inflated price, yielding an excess profit of one billion dollars. "Amazon used Project Nessie to extract over a billion dollars directly from the pockets of Americans," stated the FTC.
According to the complaint, every time a competitor raised their prices, the algorithm did the same; when they lowered them, it followed suit. The artificial intelligence algorithms quickly learned that raising prices prompted competitors to mimic them, boosting profits for all sellers at the expense of consumer surplus. No executive meetings, no explicit agreements, no collusion in the legal sense of the term. The algorithms had learned on their own not to compete.
The artificial intelligence algorithms quickly learned that raising prices prompted competitors to mimic them, boosting profits for all sellers at the expense of consumer surplus. No executive meetings, no explicit agreements, no collusion.
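A toy simulation (my own simplification, not the FTC's description of the actual Project Nessie code) shows how two simple rules, with no communication between them, push prices upward: a leader probes higher whenever its last increase was matched, and a follower simply copies the leader:

```python
def simulate(periods: int = 20, start: float = 100.0, step: float = 2.0):
    """Toy price-leadership loop. The leader raises its price while it is
    being matched (and would retreat if undercut); the follower matches."""
    leader = follower = start
    history = []
    for _ in range(periods):
        if follower >= leader:   # last increase was matched: probe higher
            leader += step
        else:                    # undercut: fall back to the follower's price
            leader = follower
        follower = leader        # the follower's whole strategy: copy the leader
        history.append((leader, follower))
    return history

prices = simulate()
# After 20 periods both sellers sit 40% above the starting price,
# without a single explicit agreement between them.
```

In richer models the follower's rule is itself learned rather than hard-coded, but the outcome is the same: mutual responsiveness sustains supracompetitive prices without any agreement.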
The Amazon case is neither an exception nor an isolated abuse. It’s the model that the rest of global e-commerce (including Mercado Libre) observes, studies, and replicates. Prices are becoming fluid, personalized, and opaque. They are not only determined by supply and demand but also by who you are, where you are, what you buy, how you buy, what you search for online, and how much the algorithm estimates you are willing to pay for a product.
But while the Amazon case might seem distant for an Argentine user, the Uber case does not. The ride-hailing platform has been applying dynamic pricing in Argentina for years and explains it transparently on its own site: when the demand for rides increases significantly (rainy days, concerts, football finals), the dynamic fare is automatically activated to encourage more drivers to head to the area. Prices go up, supply increases, and the market balances out. The argument is clear and economically sound.
What Uber does not explain with the same clarity is what other types of information the algorithm takes into account when calculating the fare, nor what happens on the other side of the equation: that of the driver.
A study presented in 2025 at FAccT, the leading global academic conference on algorithmic fairness, based on the analysis of 1.5 million rides from 258 drivers, found that after the introduction of dynamic pricing, drivers' wages decreased in real terms, Uber's commission increased, and job allocation became less predictable. In some rides, the platform's commission exceeded 50% of the total paid by the passenger.

Dynamic pricing, in the case of Uber, is not just a tool for balancing supply and demand. It is also a mechanism for distributing value among three parties (passenger, driver, and platform) in an opaque manner that systematically favors the latter. The passenger pays more in times of high demand. The driver does not necessarily earn more. The platform is the one that captures the surplus.
Neoclassical economists argue that price discrimination can increase overall welfare because it allows transactions to occur that would not happen with a single price: if the uniform price is $100 and someone can only pay $60, with discrimination the transaction occurs and both gain something. In that sense, there is a genuine efficiency argument.
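The efficiency argument can be checked with arithmetic. Take illustrative numbers (my own, purely for the sake of the example): a good that costs $40 to supply and two buyers willing to pay $100 and $60:

```python
COST = 40.0
BUYERS_WTP = [100.0, 60.0]  # each buyer's willingness to pay

def surplus(prices):
    """Seller, buyer, and total surplus when each buyer faces the given price.
    A buyer purchases only if their price is at or below their willingness to pay."""
    seller = sum(p - COST for wtp, p in zip(BUYERS_WTP, prices) if p <= wtp)
    buyer = sum(wtp - p for wtp, p in zip(BUYERS_WTP, prices) if p <= wtp)
    return seller, buyer, seller + buyer

uniform = surplus([100.0, 100.0])       # one price: the $60 buyer is excluded
discriminated = surplus([100.0, 60.0])  # each buyer charged exactly their WTP
```

Total surplus rises from 60 to 80, so the transaction that discrimination enables is a real efficiency gain. But note where it lands: buyer surplus is zero in both scenarios, and the entire gain is captured by the seller.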
But there are three objections that more recent theory documents well.
First, the efficiency argument works only if the gains are actually shared: for a seller to identify the buyer with fewer resources and charge them less, the algorithm must have that as an explicit goal. In practice, the goal is to maximize revenue, not to maximize access.
Second, in highly concentrated markets (like Mercado Libre in Argentina, the leader in e-commerce in the region), price discrimination does not produce the competitive benefits that theory predicts. The buyer has nowhere else to go.
Third, the distributive impact of algorithmic pricing is not one-dimensional. In theory, price discrimination can expand access by allowing consumers with lower willingness to pay to access goods that would otherwise be out of their reach. However, that outcome depends on a demanding assumption: that lower prices are effectively assigned to those with lower incomes or less ability to pay.
In practice, pricing systems do not seek to maximize access but rather revenue. This implies that price variations tend to correlate not with consumer need but with observable behavior: urgency, frequency, lower price sensitivity, or lower ability to compare. In contexts of high market concentration, where alternatives are limited, these characteristics can translate into higher prices precisely for those with less room to maneuver.
In that sense, rather than guaranteeing a progressive effect, algorithmic pricing introduces a risk: that price personalization does not reduce but rather reproduces (or even amplifies) pre-existing inequalities.

For years, the platforms have made the same argument: dynamic prices benefit everyone. More efficiency, more competition, better prices for consumers. The empirical evidence accumulated over the last decade tells a different story.
A study published in 2022 by Alexander MacKay and Samuel Weinstein in the Washington University Law Review demonstrated something that contradicts basic economic intuition: algorithmic pricing can push prices above the competitive level, and this harm can be set in motion by a single firm whose algorithm outperforms its competitors'. No agreement between companies is necessary for prices to rise: algorithms do not intensify competition; they do the opposite. The conclusion aligns with the FTC lawsuit against Amazon mentioned above.
Algorithmic pricing introduces a risk: that price personalization does not reduce but rather reproduces (or even amplifies) pre-existing inequalities.
Moreover, a systematic form of algorithmic price discrimination occurs when platforms charge frequent customers, with a history of previous purchases on the site, higher prices than new customers for the same products at the same time. The phenomenon, dubbed "big data killing" in Chinese academic literature, was first documented at scale in 2018, when users of the ride-hailing platform Didi discovered that frequent customers were paying more than new ones for the same trip. The complaint went viral, triggering a wave of investigative journalism and, eventually, state regulation.
Finally, according to an article published in the University of Chicago Law Review, when algorithms do not target actual consumer preferences but rather their misconceptions (exploiting cognitive biases), the harm to consumers is even greater and can also reduce overall market efficiency. Algorithms do not just read what you want: they learn where you are vulnerable and when you have less capacity to resist paying more.
Three distinct mechanisms, one common direction: from the consumer to the platform.

In recent years, the concern of state governments in the United States has grown exponentially, and many are taking action. According to a study by Consumer Reports, in the first seven months of 2025, state legislators introduced 51 bills in 24 states aimed at regulating algorithmic pricing, compared to just 10 bills introduced throughout 2024. Most of these measures target rental pricing software, accused of facilitating price manipulation in the real estate market. Others propose to limit surveillance-based pricing tactics, which adjust what consumers pay based on their personal data, location, or browsing behavior.
Pricing algorithms are an "extractive innovation": a technological advance that harms consumers rather than benefiting them, transferring wealth to companies. The result is a decrease in consumer welfare.
Marx envisioned a society where distribution would occur according to each person's needs, without price mediation. Algorithmic pricing seems, superficially, to move in that direction: charging each person according to their ability to pay. But the difference is crucial: for Marx, the criterion is the recipient's need; in the algorithm, it is the seller's extraction capacity. The result is the opposite: those who need it the most do not pay less, but rather those who are least able to resist paying more.

Algorithmic pricing means that a private platform knows in detail the economic situation and consumption habits of each buyer and uses them to set a particular, exclusive price. The gap between the vast amount of information the seller possesses and can process and the buyer's situation (no tools to see every price for the same product, its costs, or its margins) produces a surveillance system in the service of accumulation. The "redistribution" that results is not "emancipatory," as Marx sought, nor a tool for competitiveness, as classical liberals like Adam Smith or John Stuart Mill would have preferred. It is a more sophisticated form of market distortion, one that yields greater extraction and greater concentration of wealth in favor of oligopolies and monopolies.
The Mercado Libre case from March is not just an anecdote about a platform. Pricing algorithms are a widely spread "extractive innovation" among e-commerce platforms. It is a technological advance that harms consumers rather than benefiting them, transferring wealth to companies. The result is a reduction in consumer welfare, through higher prices without any increase in product quality. Based on the cases of Amazon and Uber, recent empirical evidence and classical economic theory lead us to an uncomfortable conclusion: those who need it the most do not pay less.