Full Paper
Consumer shopping and research habits have shifted dramatically since the explosive rise of large language models (LLMs) such as ChatGPT, Gemini, and Claude, which now act as the primary source of information for many consumers. While traditional marketers measure brand awareness using "share of voice" (a brand's share of online conversation) or "share of market" (a brand's share of total market revenue), this consumer trend has given rise to a new metric: "share of model." "Share of model," sometimes called "AI share of voice," aims to measure how aware different AI models are of a brand's products, and how often they recommend them, compared to competitors. Optimizing a brand's "share of model" is also referred to as Artificial Intelligence Optimization (AIO) or Generative Engine Optimization (GEO), and it is already making a significant impact on e-commerce today.

Another emerging technology is agentic AI, in which AI assistants act autonomously with users' permission. Agentic commerce extends this concept even further: AI acts on a user's behalf not just to research but also to complete the purchase. According to UVA Darden research, nearly 60% of consumers have used AI to help them shop, and industry data shows that those using AI for shopping outspend non-AI users by 1.3x each month (UVA Darden Report; IAB). It's clear that this is a valuable market for marketers and business leaders to capitalize on, and it's happening now.

AI recommendations also differ from search in a key way: they are limited. Unlike a search engine with multiple pages of results, AI models often recommend a select few products without further prompting. For brands, this is critical: to be competitive online in the AI era, they must appear favorably in AI models' recommendations. This can prove challenging, however, because information about how AI models are trained and how specific responses are generated lacks transparency.
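To make the metric concrete, here is a minimal, hypothetical sketch of how "share of model" might be estimated: sample an LLM's answers to repeated shopping prompts, then compute each brand's share of total brand mentions. The brand names, sample answers, and the `share_of_model` helper are illustrative assumptions, not an industry-standard tool; real trackers would also weight prompt variants, model versions, and recommendation position.

```python
from collections import Counter

def share_of_model(responses, brands):
    """Estimate each brand's 'share of model' from sampled LLM answers.

    responses: list of recommendation texts returned by an LLM for
               repeated shopping prompts.
    brands:    brand names to track (case-insensitive substring match).
    Returns each brand's fraction of all tracked-brand mentions.
    """
    mentions = Counter()
    for text in responses:
        lower = text.lower()
        for brand in brands:
            if brand.lower() in lower:
                mentions[brand] += 1
    total = sum(mentions.values())
    return {b: (mentions[b] / total if total else 0.0) for b in brands}

# Hypothetical sampled answers to "What running shoes should I buy?"
answers = [
    "I'd recommend the Nike Pegasus or the Brooks Ghost.",
    "The Brooks Ghost is a great all-rounder.",
    "Consider Nike, Asics, or Hoka depending on your gait.",
]
print(share_of_model(answers, ["Nike", "Brooks", "Asics", "Hoka"]))
```

Run repeatedly over many prompts, a falling share for one brand relative to competitors is the signal that Share of Model optimization efforts target.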
Share of Model optimization represents a new form of algorithmic market manipulation that systematically undermines the four pillars of consumer autonomy: informed consent, sovereignty, transparency, and algorithmic fairness, demanding urgent industry adoption of ethical frameworks to protect consumers.
Share of Model optimization is not just a speculative future trend. Whilst information on the subject is limited, the introduction of new marketing technologies and sizable corporate investments indicate a significant shift in the consumer shopping journey. Consumer habits have already shifted dramatically; according to Bain, "ChatGPT usage continues to climb fast. The number of prompts grew by nearly 70% during the first half of 2025," with "shopping queries [doubling] in the six months from January to June" (Bain). LLMs like ChatGPT threaten traditional search engines like Google, whose revenue is primarily driven by consumer advertising. According to McKinsey, "44 percent of users who have tried AI-powered search say that it has become their 'primary and preferred' source for internet searching, compared with 31 percent who prefer using traditional search" (McKinsey Report). In an effort to maintain market share, Google introduced AI Overview, a Gemini-powered summary that answers consumers' questions by pulling from relevant pages while letting users bypass those pages entirely. The popularity of AI-driven browsers such as Perplexity AI's Comet indicates that AI-powered browsing may become the new norm for consumers. Additionally, marketing technology leader HubSpot released its AI "share of voice" tracking tool earlier this year, whilst certain marketing agencies have pivoted towards "share of model" specializations, such as Jellyfish, which is partially responsible for coining "share of model" as an industry term. E-commerce giants are in the race too: Walmart recently announced a partnership with OpenAI to lead the frontier of AI-powered shopping on its website for 270 million consumers. This partnership, amongst other things, "will allow customers and members to complete purchases from Walmart directly within ChatGPT," signaling a huge shift away from the traditional shopping experience (Walmart).
Amazon recently introduced their "Help Me Decide" feature, complementing the rest of their AI shopping experience by making direct and personalized consumer recommendations rather than just summaries. "The tool helps customers pick the right product, quickly," through "[analyzing] your browsing history and preferences," targeted specifically at indecisive shoppers (Amazon News). It's clear that these strategic investments by the two leading e-commerce marketplaces indicate a trend that's already arrived. These developments promise an enhanced shopping experience for consumers, but have created an environment where consumer choice is increasingly mediated by AI systems, raising critical questions about whether companies are using this influence ethically.
With substantial corporate investment driving AI-powered shopping adoption, companies are employing increasingly sophisticated strategies to steer AI-driven consumer behavior in their favor. As already mentioned, companies with autonomy over their own marketplace can build their own AI recommendation assistant, like Amazon's "Help Me Decide." This strategy gives companies end-to-end influence over product recommendations: an expensive but powerful investment that functions simultaneously as a conversion, data-collection, and personalization tool. Additionally, Amazon uses a dynamic-pricing strategy that reportedly changes "prices on millions of items every few minutes" based on consumer demand to maximize revenue (Netguru). The sophistication and future impact of these investments cannot be overstated, as "LLM-powered recommendation engines [go far] beyond traditional models by understanding context, intent, and relationships between products" (Netguru).
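To illustrate the demand-based repricing idea described above (not Amazon's actual, undisclosed algorithm), a toy rule might scale a base price by a clamped demand signal. The `dynamic_price` function, its demand ratio, and its floor/ceiling bounds are assumptions made purely for this sketch.

```python
def dynamic_price(base_price, demand_ratio, floor=0.8, ceiling=1.5):
    """Toy demand-based repricing rule (illustrative only).

    demand_ratio: recent demand relative to a baseline, e.g. 1.2 means
                  demand is 20% above normal. The multiplier is clamped
                  so the price stays within [floor, ceiling] * base_price.
    """
    multiplier = min(max(demand_ratio, floor), ceiling)
    return round(base_price * multiplier, 2)

print(dynamic_price(20.00, 1.30))  # demand surge raises the price
print(dynamic_price(20.00, 0.60))  # demand slump, clamped at the floor
```

Even this trivial rule shows why repricing "every few minutes" matters: the price a consumer sees depends on when they looked, not just on the product, which complicates comparison shopping.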
Companies are also aiming to influence third-party LLMs, where best practices remain highly speculative because the way AI models are trained is opaque and constantly evolving. Mint Studios, a content marketing agency, has found that "LLMs work in a way almost opposite to Google. Whereas Google rewards older, high authority pages, LLMs (at the moment) reward new, recent content." Additionally, Mint Studios advises that brands "add FAQs to [their] existing content" to "give LLMs bite-sized, ready-to-lift snippets that map perfectly to how people phrase prompts." Furthermore, a content strategy that prioritizes "bottom-of-funnel content" can thrive, as it is most important that AI recommends your content when a user is ready to convert (Mint Studios). While these suggestions, like those of traditional Search Engine Optimization (SEO), encourage largely legitimate influence, there are clear issues. For one, AI prioritizing new content may actually increase the prevalence of lower-quality, unvetted content, which could harm consumers. Brands incentivized to constantly push new content to appear in AI recommendations could degrade overall content quality. FAQs can also become a breeding ground for covert marketing, where consumers are unaware they are being marketed to directly, with the AI assistant restating a brand's own FAQ section as if it were objective. Understanding these Share of Model optimization strategies is just the first step in addressing potential consumer harm: on paper, they are not negative, but they signal room for manipulation of information and misuse of brand influence.
The rise of agentic commerce and Share of Model optimization poses significant threats to consumers unless the industry adheres to the four principles that underpin consumer autonomy: informed consent, consumer sovereignty, transparency, and algorithmic fairness. Informed consent is the principle that consumers have the information necessary to make a voluntary decision about their participation, in this case, the use of an AI tool. Consumer sovereignty is the economic principle that consumers' purchasing power gives them influence over how and what goods are produced. Transparency is an underlying principle of consumer-business trust whereby actions, motivations, and processes are not obscured in ways that harm consumer choice. Algorithmic fairness is the principle that algorithms, in this case AI, must treat consumers equally and not discriminate based on race, ethnicity, or other demographic features. Analyzing Share of Model optimization and the current state of agentic e-commerce through this framework reveals multiple negative consequences for consumers that demand change.
Share of Model optimization systematically violates the informed consent principle: while consumers often consent, they do so without sufficient information. One key example is Google's AI Overview, which, regardless of consumer preference, is considered a "core Google Search feature" and "cannot be turned off" (Google). While it could be argued that users consent to AI Overview simply by using Google's search engine, the sheer popularity of that search engine leaves the average consumer agreeing without conscious choice. Consumer autonomy requires both accurate information and the ability to choose how that information is used, both of which are undermined by non-disableable AI features such as Google's AI Overview. Additionally, LLMs have been shown to hallucinate, "a phenomenon wherein a large language model...perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate" (Algolia). While hallucination is often disclosed, consumers have been shown to put far too much trust in AI's output, potentially fueled by the popularity of and excitement surrounding advances in AI performance. The gap between these principles and reality is huge: "90% of shoppers believe retailers should be required to openly disclose how they use customer data in applying AI usage," and "80% of shoppers want retailers to seek explicit consent to use their data for AI" (Talkdesk).
Agentic or AI-assisted commerce has crept into consumers' lives as a mediator and occasionally even an autonomous decision maker, violating consumer sovereignty through choice manipulation. The idealized social contract between AI and the user in a shopping journey suggests that AI acts with a fiduciary duty, in the consumer's best interest. However, once Share of Model has been optimized, AI favors company interests over the end consumer's interests, unbeknownst to the consumer. In the case of an in-house AI assistant, like Amazon's Help Me Decide, a conflict of interest arises in which the AI cannot truly act with fiduciary duty without harming Amazon's bottom line. According to Talkdesk, "79% [of consumers] refrain from purchasing product recommendations because they are not tailored to their interests but appear to be the top-selling products that retailers are pushing for sales purposes," a trend likely to carry over into AI-powered recommendations (Talkdesk). AI assistants' limited recommendations can also harm consumers by artificially narrowing the marketplace, ultimately restricting consumer sovereignty. Nor are AI recommendations made in a vacuum: AI agents and LLMs with access to "contextual signals," for example an upcoming baby shower, could "capture [consumer] intent before a consumer even visits a product page or compares different options," skewing the consumer research journey in favor of businesses (McKinsey). Lastly, consumer sovereignty is further eroded by massive resource barriers that prevent small businesses from competing in AI recommendation results. The technical and financial gap is enormous; for example, eBay's in-house training of Meta's Llama model, which it calls e-Llama, was a massive undertaking: "training the 70 billion model on 1 trillion tokens took around one month, or about 340k GPU-hours," an investment many businesses cannot replicate (eBay).
For general Share of Model optimization, organizations with strategic partnerships and large budgets are likely to prosper, whilst smaller businesses suffer from limited visibility, inevitably hurting consumers through reduced marketplace diversity.
Transparency, or the lack thereof, is a widespread issue in the AI industry, and it matters even more for AI recommendations and agentic shopping. Whilst AI hallucination is largely responsible for the lack of trust between a consumer and their AI assistant, the issue of trust goes much further, to the company itself. Consumers are concerned and unsure about how their data is being used: "only 28% of shoppers are confident retailers have the proper security measures to protect data used by AI technology" (Talkdesk). This ties directly into informed consent; users are not sure whether their data is protected or how it is used after their 'consent' is given. With product recommendations, there is a concern of "creating a 'black box' experience — where it's unclear why a product was recommended" (UVA Darden). This is especially concerning for companies with full marketplace autonomy, like Amazon, where it is unclear how the "Help Me Decide" feature selects products; Amazon's AI could, due to training bias, recommend an Amazon Basics product over a competitor's, risking anti-competitive practice. Taking this further, a McKinsey report on agentic shopping found that "when an AI agent shops [on one's] behalf, trust becomes abstract, filtered through layers of data, automation, and institutional frameworks" (McKinsey). For a positive, or 'enhanced,' consumer experience to occur, trust must not be left to the abstract but made concrete through transparent business practice and documentation.
Algorithmic bias is one of the most concerning violations of consumer autonomy and consumer rights. Unfortunately, because of how AI models are trained, biases and stereotypes can be amplified: "AI models are trained on data sets...with historical or representational bias [which can] lead to unfair representation or discrimination" in responses (Harvard DCE). Consumers are already noticing these issues in their e-commerce journeys, with "64% hav[ing] received an AI-powered product recommendation that did not match their preferences, interests, or previous shopping behaviors," an issue that arises even more often for "Hispanic (72%) and Black (69%) respondents" (Talkdesk). This discrimination has even influenced avoidant consumer behavior, with 60% of consumers avoiding AI recommendations due to bias concerns, rising to 75% for Hispanic and 67% for Asian consumers (Talkdesk). These statistics reveal that algorithmic bias is not just a theoretical concern but a present reality undermining consumer trust and market access for many.
So far, the lack of consumer protection boils down to a few key arguments, often made in favor of innovation. First, many believe that AI is a net positive for the consumer shopping experience, and that ethical guidelines could stifle the positive impact of advanced personalization. Second, some would argue that Share of Model optimization is not yet established enough, leaving AI recommendations organic and unbiased in nature. Third, some would argue that consumer autonomy is not undermined because consumers make the final decision on whether to use or purchase a recommended product, so further protections aren't needed. While all of these arguments have merit, they fail to recognize the sustainable, positive impact created by establishing consumer protections early. Regarding the first argument, personalization achieved through manipulation and violations of informed consent does not benefit consumers long-term and likely has a net negative effect. The second claim, that AI recommendations are organic and merit-based, ignores the extensive industry focus on Share of Model optimization strategies already being employed by companies and agencies early to the space. While Share of Model optimization is not yet widespread, protecting consumers before harm occurs is far more effective than beginning prevention after damage is done. Moreover, even the 'organic' nature of recommendations naturally favors certain demographics and companies that AI models' training content portrays favorably.
Finally, while the argument that consumer autonomy is preserved because consumers make the final purchase decision, or in the case of AI agents, authorize it, is strong, it fundamentally overstates how feasible avoiding AI has become. In principle, a consumer could boycott Google for a search engine that doesn't employ AI overviews; however, as AI is integrated into virtually every popular product, such avoidance is not feasible for the average consumer. Genuine consumer choice is eroded well before the final purchase decision, as AI bypasses the traditional consumer funnel and entrenches user data in future training. Overall, while these arguments are worth addressing, a targeted approach to protecting consumers by upholding all four pillars of consumer autonomy is more than important; it is necessary.
Addressing these issues in practice is tough, but not impossible. To restore informed consent, brands must move past superficial disclosures and towards customizable opt-out mechanisms that let consumers choose whether they interact with AI assistants, how those assistants use their data, and whether contextual user data may inform recommendations. To restore consumer sovereignty, equitable regulatory limits on brand power must be implemented to ensure a fair opportunity for brand visibility. To restore transparency, brands could estimate and disclose the percentage of users negatively affected by hallucination, incentivizing them to improve model accuracy, not just revenue. Additionally, for brands with on-platform AI assistants, such as Amazon, third-party algorithmic auditing could address concerns about anti-competitive practices. To combat algorithmic bias, brands should be required to audit their algorithms for bias and disclose their findings in annual reports alongside the steps being taken to address them. AI assistants must be treated as an extension of the company and cannot be allowed to repeatedly violate consumers' rights without intervention. Of course, much of the industry will resist such change because it is expensive; however, companies that take the first steps to protect consumers will lead the consumer-first future.
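One concrete form the bias audit proposed above could take is a recommendation-rate parity check across demographic groups, borrowing the "four-fifths" heuristic used in employment-discrimination screening. This sketch, including the hypothetical `audit_recommendation_rates` helper and its log format, is purely illustrative; a real audit would need consented demographic data, larger samples, and statistical significance testing.

```python
from collections import defaultdict

def audit_recommendation_rates(events, threshold=0.8):
    """Hypothetical fairness audit over recommendation logs.

    events: list of (group, matched) pairs, where `matched` is True if
            the consumer rated the AI recommendation as relevant.
    Flags any group whose match rate falls below `threshold` times the
    best-performing group's rate (the 'four-fifths' heuristic).
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [matched, total]
    for group, matched in events:
        counts[group][0] += int(matched)
        counts[group][1] += 1
    rates = {g: m / t for g, (m, t) in counts.items()}
    best = max(rates.values())
    flagged = {g for g, r in rates.items() if r < threshold * best}
    return rates, flagged

# Made-up logs: group B receives relevant recommendations far less often.
events = (
    [("A", True)] * 8 + [("A", False)] * 2
    + [("B", True)] * 5 + [("B", False)] * 5
)
rates, flagged = audit_recommendation_rates(events)
print(rates, flagged)
```

Publishing such per-group rates in the annual reports suggested above would turn "algorithmic fairness" from an abstract commitment into a verifiable, comparable disclosure.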
Share of Model optimization and agentic e-commerce signal a critical inflection point in the future of e-commerce, where technology companies and regulatory agencies must act to preserve consumer autonomy before anti-competitive and unethical business practices take hold. Monetary incentives, such as increased revenue and decreased competition, are far too strong to leave this space undiscussed and unregulated. Without intervention, AI manipulation may become the new industry norm, entrenching anti-consumer practices in the most innovative and explosive industry to date. The window for proactive action is already closing; many consumers have already integrated AI into their decision making, behavior, and daily routines, a trend forecasted to continue strongly into the future. Companies that adapt quickly to implement consumer-first frameworks will not only meet evolving consumer expectations but will also help establish a healthy AI-first marketplace that benefits all, not just the few. The choice is clear: act now to preserve consumer autonomy, or risk fostering a digital economy where manipulation is the norm. The question is not whether AI will transform commerce, but whether businesses allow it to transform consumers into mere data points or instead preserve their fundamental right to autonomy.
Citations
"17 Proven LLM Use Cases in E-Commerce That Boost Sales in 2025." Netguru, www.netguru.com/blog/llm-use-cases-in-e-commerce. Accessed 3 Nov. 2025.
Amazon Staff. "Amazon's New AI-Powered Shopping Feature 'Help Me Decide' Makes It Easy to Quickly Pick the Right Product." About Amazon, 23 Oct. 2025, www.aboutamazon.com/news/retail/amazon-things-to-buy-help-me-decide-gen-ai.
"Ethical Considerations of AI in Retail." Talkdesk, www.talkdesk.com/blog/ethical-considerations-ai-retail/. Accessed 3 Nov. 2025.
"Find Information in Faster & Easier Ways with AI Overviews in Google Search." Google Search Help, support.google.com/websearch/answer/14901683?hl=en. Accessed 3 Nov. 2025.
Harkness, Lisa, et al. "How Generative AI Can Boost Consumer Marketing." McKinsey & Company, Dec. 2023, www.mckinsey.com/capabilities/growth-marketing-and-sales/our-insights/how-generative-ai-can-boost-consumer-marketing.
"How Customers Are Using AI Search." Bain, 8 Aug. 2025, www.bain.com/insights/how-customers-are-using-ai-search/.
"How Large-Language Models Are Changing Ecommerce." Algolia, 12 Jun. 2024, www.algolia.com/cms/render/live/en/sites/www/home/blog/ai/blogentry/llms-changing-ecommerce.html.
"HubSpot AI Share of Voice Tool." HubSpot, www.hubspot.com/aeo-grader/share-of-voice. Accessed 3 Nov. 2025.
"Is It Possible to Influence Visibility on LLMs? If so, How?" Mint Studios, www.mintcopywritingstudios.com/blog/influence-visibility-llms. Accessed 3 Nov. 2025.
Mackey, Caroline. "Nearly 60% Use AI to Shop — Here's What That Means for Brands and Buyers." Darden Report Online, 17 Jun. 2025, news.darden.virginia.edu/2025/06/17/nearly-60-use-ai-to-shop-heres-what-that-means-for-brands-and-buyers/.
Parsons, L. "AI Will Shape the Future of Marketing." Harvard DCE, 14 Apr. 2025, professional.dce.harvard.edu/blog/ai-will-shape-the-future-of-marketing/.
"Scaling Large Language Models for E-Commerce: The Development of a Llama-Based Customized LLM." eBay Inc., 17 Jan. 2025, innovation.ebayinc.com/stories/scaling-large-language-models-for-e-commerce-the-development-of-a-llama-based-customized-llm-for-e-commerce/.
Schumacher, Katharina, and Roger Roberts. "The Agentic Commerce Opportunity: How AI Agents Are Ushering in a New Era for Consumers and Merchants." McKinsey & Company, 17 Oct. 2025, www.mckinsey.com/capabilities/quantumblack/our-insights/the-agentic-commerce-opportunity-how-ai-agents-are-ushering-in-a-new-era-for-consumers-and-merchants.
"Walmart Partners with OpenAI to Create AI-First Shopping Experiences." Walmart Corporate, 14 Oct. 2025, corporate.walmart.com/news/2025/10/14/walmart-partners-with-openai-to-create-ai-first-shopping-experiences.
"When AI Guides the Shopping Journey: Opportunities for Marketers in the Age of AI Driven Commerce." IAB, 10 Nov. 2025, www.iab.com/insights/when-ai-guides-the-shopping-journey/.