Overview of Emergent Abilities in AI
- World Scholars Review

Authors: Janhavi Jere and Vaidehi Jere
Mentor: Eszter Varga-Umbrich. Eszter is currently a doctoral candidate in the Department of Engineering at the University of Cambridge.
Abstract
As artificial intelligence (AI) systems scale in size and complexity, they sometimes begin to display unexpected, unprogrammed capabilities that arise abruptly. These are known as emergent abilities: behaviors that were neither explicitly coded nor anticipated during training. Alongside these capabilities come new risks, particularly because such behavior is unpredictable and difficult to control. Despite these concerns, emergent abilities hold significant promise across disciplines, enabling complex problem-solving, enhancing creativity, and accelerating scientific discovery. In this paper, we define the concept of emergence, outline theories about the conditions under which it occurs, and examine key examples of emergent abilities in AI. We also explore the associated risks and consider counterarguments that question whether these abilities are truly emergent. As AI continues to evolve and permeate society, understanding these phenomena is essential for ensuring its responsible and beneficial use, and research in this area grows ever more vital as AI systems become more powerful and influential in all aspects of life.
Introduction
Over the past decades, AI has achieved remarkable milestones across a wide range of fields, from accelerating research to personalizing online shopping experiences, and it is increasingly integrated into daily life. It is also present in fields such as psychology, where AI can interpret patients' emotions based on their symptoms (Chen et al. 2024), and law, where language models provide accessible legal services to users (Nay et al. 2023). As the amount of available data has skyrocketed, the analytical capabilities of AI have driven improvements in medical treatment, qualitative research, and efficient decision-making (Berens et al. 2023; Yamansavascilar, Ozgovde, and Ersoy 2025; Duede et al. 2024).
The idea of emergence predates AI. In his 1972 essay "More Is Different," Nobel Prize-winning physicist Philip Anderson described emergence as the process by which quantitative changes in a system result in qualitative changes in behavior (Anderson 1972).
Emergence is a phenomenon widely observed across natural and social systems. For instance, a single metronome ticking on its own seems simple, but dozens of them can synchronize through shared vibrations. Similarly, a lone bird follows simple rules, but flocks form complex, coordinated flight patterns (Artime and De Domenico 2022). Likewise, a single slow car does not cause a traffic jam—but the collective behavior of hundreds of vehicles can give rise to complex dynamics like congestion or coordinated flow. These examples illustrate how complex behaviors emerge from relatively simple rules as systems increase in size or complexity.
In AI, emergence is typically observed as a nonlinear improvement in model performance on tasks such as reasoning, translation, or code generation. These abilities appear only after a particular threshold, such as model scale, data quantity, or training complexity, is surpassed (Jason Wei et al. 2022). Importantly, they often appear abruptly and are absent in smaller versions of the same model. Specific techniques, such as fine-tuning, chain-of-thought prompting, or instruction tuning, can also trigger such behaviors: by supplying more specific prompts or intermediate reasoning steps to guide the model, users can elicit emergent abilities from LLMs (Jason Wei et al. 2023).
As AI continues to evolve, so too does the range of capabilities it exhibits. In its earlier stages, AI systems primarily relied on rule-based logic and narrow, task-specific programming. These systems were limited to the abilities explicitly defined by human developers, often referred to as "hard-coded" behaviors. For example, a chess-playing AI might follow a fixed evaluation function crafted by experts, with no capacity to learn or innovate beyond its preset instructions.
The rise of machine learning (ML) fundamentally changed this paradigm. Instead of being manually programmed with all possible behaviors, ML models learn patterns and decision strategies directly from data. This shift allowed AI systems to adapt to a broader range of tasks without task-specific coding. More recently, the rise of deep learning—particularly in the form of large-scale neural networks—has led to even more sophisticated capabilities.
In particular, large language models (LLMs) and multi-agent systems have demonstrated behaviors that were never explicitly taught during training. These are known as emergent abilities—unexpected behaviors that arise only after a model is trained on massive datasets and reaches a certain level of complexity (Jason Wei et al. 2022). Unlike traditional performance improvements that scale linearly, emergent behaviors often appear suddenly and unpredictably, suggesting a qualitative shift in the model’s internal representations or problem-solving strategies.
A striking illustration of emergent behavior can be found in multi-agent reinforcement learning environments. In a study conducted by OpenAI, AI agents were placed in a simulated environment to play a game of hide-and-seek. Initially, the agents used basic strategies such as running and hiding behind walls. However, over time, without being explicitly instructed on how to collaborate or manipulate the environment, the agents learned to stack blocks to create shelters, lock each other out of spaces, and counter their opponents' strategies (Baker et al. 2020). These behaviors emerged from simple reward functions and extensive training, highlighting the system's ability to develop creative problem-solving tactics, as seen in Figure 1.

Another well-known example of emergent behavior is AlphaGo, the AI model developed by DeepMind to play the board game Go. AlphaGo learned to play through a combination of reinforcement learning and self-play. During its match against world champion Lee Sedol, AlphaGo made moves that initially seemed like errors to experienced players but were later recognized as highly innovative and strategically profound. These unexpected strategies, which arose from the model's training process rather than explicit programming, played a key role in its historic victory (Lyre 2019). As a model grows and receives more training data, it can adapt to its environment, giving rise to new abilities.
Emergent behavior is a relatively new and not yet fully understood phenomenon in AI. As models scale in size and complexity, they not only become more powerful but also more unpredictable (Tang and Kejriwal 2024; Jason Wei et al. 2022). By studying these emergent abilities, researchers can better understand and predict the phenomenon, and identify the patterns and conditions under which emergent capabilities appear. While these new capabilities offer promise, they also introduce significant challenges: emergent behaviors can lead to bias, hallucinated outputs, or unethical recommendations (Berti, Giorgi, and Kasneci 2025; Betley et al. 2025). As these abilities are better understood, they have the potential to enhance sectors beyond computer science, such as business, biology, medicine, and law.
This review focuses on emergent abilities in AI, primarily on their appearance in LLMs. To examine how these abilities arise, we draw on the five conditions proposed by David C. Krakauer et al.: scaling, criticality, compression, novel bases, and generalization (Krakauer, Krakauer, and Mitchell 2025). Each condition is discussed in detail alongside real-world case studies that illustrate its relevance. In addition, we address some counterarguments to the concept of emergence, including critiques of whether such behaviors are genuinely novel and of how they are measured. Ultimately, this review seeks to clarify what causes emergent behaviors in AI systems, how they manifest, and why they are significant for both research and society.
Conceptual Foundations of Emergence
Understanding how and why emergent abilities arise in AI systems requires both a conceptual framework and empirical evidence. A recent proposal by Krakauer et al. (2025) outlines five conditions that characterize the emergence of novel capabilities in AI: scaling, criticality, compression, novel bases, and generalization (Krakauer, Krakauer, and Mitchell 2025). This section explores each of these conditions in depth, pairing their theoretical definitions with real-world examples from LLMs and other deep learning architectures. This integrated approach allows us to examine not only how emergent behaviors are defined, but also where and when they have been observed in practice.
Scaling
Scaling describes how certain abilities only surface when a model reaches a sufficient size in parameters and training data. Research by Wei et al. (2022) found that LLMs like GPT-3 exhibit qualitatively new behaviors only after reaching a specific scale threshold (Jason Wei et al. 2022). Tasks such as symbolic reasoning, complex arithmetic, and question answering became achievable only after crossing these thresholds. These capabilities are notably absent in smaller models, even when trained on the same data and given similar few-shot prompts, as seen in Figure 2, indicating that scale is a critical enabler of emergence. The study suggests that emergent abilities are unlikely to manifest in smaller models because they lack the capacity needed to internalize the patterns and abstractions that give rise to emergence. Some studies challenge the conclusion that emergent abilities are restricted to larger models, demonstrating that smaller models trained on simplified datasets, such as those with restricted vocabulary and simpler language structures, can also exhibit emergent capabilities (Muckatira et al. 2024; Eldan and Li 2023). This suggests that although scale facilitates emergence, data suited to the model's size also plays a critical role in enabling emergent abilities.
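To make this evaluation setup concrete, the sketch below shows how a few-shot arithmetic prompt of the kind used in these scaling studies might be constructed and scored with exact match. The `query_model` function is a hypothetical stand-in for a real LLM call and is stubbed here so the snippet runs end to end.

```python
# Minimal sketch of a few-shot arithmetic evaluation, assuming access to
# some language model behind a hypothetical `query_model` call.

def build_few_shot_prompt(exemplars, question):
    """Concatenate solved exemplars with the new question."""
    blocks = [f"Q: {q}\nA: {a}" for q, a in exemplars]
    blocks.append(f"Q: {question}\nA:")
    return "\n\n".join(blocks)

def query_model(prompt: str) -> str:
    # Placeholder: a real evaluation would send `prompt` to a language model
    # and return its completion; a fixed string is returned for illustration.
    return "1178"

exemplars = [
    ("What is 123 + 456?", "579"),
    ("What is 802 + 219?", "1021"),
]
prompt = build_few_shot_prompt(exemplars, "What is 731 + 447?")
prediction = query_model(prompt).strip()

# Exact-match scoring: the discrete metric under which emergence curves
# such as those in Figure 2 are typically reported.
print("correct" if prediction == "1178" else "incorrect")
```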

Criticality
Criticality describes the point at which a model experiences a sudden and dramatic improvement in performance (Krakauer, Krakauer, and Mitchell 2025). In physics, criticality marks the point at which a system undergoes a fundamental change (e.g., water boiling into steam). In AI, it describes a sudden onset of new capabilities that do not improve gradually but instead appear to "switch on." In one of the previously described case studies, researchers tested six language models using few-shot prompting, a method in which multiple examples are provided to guide the model toward the correct answer. As shown in Figure 2, the study found that, at a certain scale, each model's accuracy suddenly increased. This abrupt improvement marked the emergence of previously unseen capabilities (Jason Wei et al. 2022), indicating that the model had reached a critical point.
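As a rough illustration of how such a critical point might be located in practice, the sketch below scans a set of hypothetical accuracy scores (illustrative placeholders, not measured results) across model sizes and reports the scale interval with the largest jump.

```python
# Toy sketch of locating a "critical point" on an accuracy-vs-scale curve.
# The accuracies are illustrative placeholders; in practice each value would
# come from evaluating a checkpoint of the given size on the target task.

model_params = [1e8, 1e9, 1e10, 1e11, 1e12]    # parameter counts per checkpoint
accuracy     = [0.02, 0.03, 0.05, 0.41, 0.62]  # hypothetical exact-match scores

# The critical region is where the gain between consecutive scales is largest.
jumps = [accuracy[i + 1] - accuracy[i] for i in range(len(accuracy) - 1)]
i = max(range(len(jumps)), key=lambda k: jumps[k])

print(f"Largest jump ({jumps[i]:.2f}) occurs between "
      f"{model_params[i]:.0e} and {model_params[i + 1]:.0e} parameters.")
```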
Compression
Compression involves the model distilling complex input data into simpler internal representations. This allows it to detect patterns and structures that were not explicitly labeled, facilitating the emergence of high-level behavior. This drive toward efficient representation opens a path to emergent behavior (Krakauer, Krakauer, and Mitchell 2025). By compressing information efficiently, a model can uncover relationships that even human researchers may overlook.
One notable example involves a model trained to play the game of Othello without being given its rules. Instead, it was exposed only to sequences of legal moves. By compressing this information into simpler internal representations, the model developed an internal representation of the board state and learned to produce legal moves (Li et al. 2024). This study demonstrates how efficient compression can lead to the emergence of general principles and abstract reasoning, even in the absence of explicit instruction.
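A common way to test for such compressed internal representations is a linear probe: a small classifier trained to read a property, such as the state of one board square, out of the model's hidden activations. The sketch below shows only the probing procedure, with random placeholder arrays standing in for the real activations and board labels used in the Othello study.

```python
# Sketch of linear probing for an internal "world model", assuming hidden
# activations can be extracted for each move of a game. Random arrays stand
# in for real activations and board-state labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hidden_dim, n_positions = 512, 2000

activations = rng.normal(size=(n_positions, hidden_dim))  # placeholder activations
square_state = rng.integers(0, 3, size=n_positions)       # 0=empty, 1=black, 2=white

X_tr, X_te, y_tr, y_te = train_test_split(activations, square_state, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# With real activations, accuracy far above chance would indicate that the
# model has implicitly compressed the board state into its hidden layer.
print(f"probe accuracy: {probe.score(X_te, y_te):.2f} (chance is about 0.33)")
```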
Novel Bases
As large models learn from vast datasets, they often develop new internal structures that were not predefined by their architecture or training goals (Jiang 2023). These are known as novel bases: latent dimensions or representational schemes that organize concepts, patterns, or reasoning in efficient ways. Unlike memorized patterns or traditional symbolic systems, novel bases arise as a product of scale, data diversity, and optimization pressure (Krakauer, Krakauer, and Mitchell 2025).
For example, in some language models, neurons have been found to respond not just to specific words or phrases, but to abstract concepts such as sentiment, formality, or sarcasm—despite never being explicitly labeled as such during training. These internal units represent a kind of emergent structure: the model invents its own conceptual building blocks to make sense of language in a high-dimensional space. These novel bases allow the system to reason, generalize, or interpret in ways that go beyond surface-level pattern recognition. In one study analyzing the internal activations of LLMs, researchers found that certain directions in the model’s embedding space corresponded to complex concepts like gender bias or geographic knowledge (Nanda, Burns, and Turner 2023). These axes were not explicitly defined in the training data, but emerged naturally as the model learned to represent and manipulate meaning more efficiently—demonstrating the formation of novel bases.
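A common way to surface one of these latent directions is to contrast activations collected from two sets of prompts that differ only in the concept of interest and take the difference of their means, yielding a so-called steering vector. The sketch below shows only that arithmetic on random placeholder matrices; wiring the resulting vector into a real model's forward pass is model-specific and not shown.

```python
# Sketch of deriving a concept direction (a "steering vector") from activations.
# The matrices are random placeholders; in practice they would be hidden states
# collected from prompts with and without the target concept.
import numpy as np

rng = np.random.default_rng(1)
hidden_dim = 768

acts_with_concept = rng.normal(size=(100, hidden_dim))     # e.g. formal sentences
acts_without_concept = rng.normal(size=(100, hidden_dim))  # e.g. informal sentences

# Candidate "novel basis" direction: the difference of mean activations.
direction = acts_with_concept.mean(axis=0) - acts_without_concept.mean(axis=0)
direction /= np.linalg.norm(direction)

# At inference time, adding a scaled copy of this direction to a layer's
# activations nudges the output toward (or away from) the concept.
steered = acts_without_concept + 2.0 * direction
print(steered.shape)  # (100, 768)
```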
Generalization
Generalization is the model’s ability to apply its knowledge to tasks or domains it has not explicitly encountered during training. This type of cross-domain generalization is one of the core goals of ML: to build models that are neither overly specialized to their training data nor entirely unstructured, but instead capable of adapting flexibly to novel scenarios.
Emergent generalization is often observed through prompting techniques, which can raise the performance of general-purpose pretrained LLMs to the level of fine-tuned models. In one study, GPT-3 demonstrated basic mathematical reasoning in zero-shot settings even though it was never explicitly trained for it, implying that the model had internalized structural regularities from its training corpus and could recombine them in novel contexts (Brown et al. 2020). Another case study demonstrated this idea using zero-shot prompting, a technique in which the model is given a task description without any worked examples of the expected answer. The authors concluded that pretrained LLMs already possess basic pattern-completion proficiency without task-specific training, although fine-tuning is still needed to solve complex problems (Mirchandani et al. 2023). Below, we explore four commonly cited emergent abilities of AI; a brief sketch of the associated prompt formats follows the list.
In-context learning: The first ability is in-context learning (ICL), in which a model draws on both its preexisting knowledge and the examples or information supplied in the prompt. Based on this context, it dynamically adjusts its responses, learning the task from the prompt itself rather than from additional training (Jerry Wei et al. 2023).
Chain of Thought Reasoning: Chain-of-thought (CoT) reasoning is the ability of a model to work through intermediate, human-like reasoning steps before arriving at an answer. This behavior is usually exhibited by larger models, which can solve complex problems step by step (Rueda et al. 2025).
Instruction Following: Instruction following (IF) is related to ICL: the model performs a target task described by a natural-language instruction. Unlike ICL, however, this ability typically requires the model to be fine-tuned on instruction data (Liu et al. 2023).
Reinforcement Learning from Human Feedback: The last ability is reinforcement learning from human feedback (RLHF), in which the quality of the model's output improves through feedback that encodes human preferences, often elicited with well-crafted prompts. This is related to prompt engineering, a primary technique for observing emergent abilities in AI, in which the accuracy and specificity of the user-provided prompt shape the accuracy and specificity of the model's response (Dong et al. 2025). RLHF also tests the model's capability to reason about its response rather than simply produce a static, pattern-matched reply (Boiko, MacKnight, and Gomes 2023).
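To make the difference between these elicitation techniques concrete at the prompt level, the templates below contrast a zero-shot prompt with a chain-of-thought prompt (few-shot in-context learning was sketched in the Scaling section). The wording is illustrative only and is not drawn from the cited studies.

```python
# Illustrative prompt templates for two elicitation techniques.
question = "A farmer has 17 sheep and buys 5 more. How many sheep are there now?"

# Zero-shot: the question alone, with no examples or reasoning scaffold.
zero_shot = f"Q: {question}\nA:"

# Chain of thought: a worked exemplar whose answer spells out intermediate
# steps, encouraging the model to reason step by step before answering.
chain_of_thought = (
    "Q: A library has 3 shelves with 12 books each. How many books in total?\n"
    "A: Each shelf holds 12 books and there are 3 shelves, so 3 * 12 = 36. The answer is 36.\n\n"
    f"Q: {question}\nA:"
)

print(zero_shot)
print("---")
print(chain_of_thought)
```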
Counterclaims to Emergence
As interest in emergent abilities has grown, so too has the volume of research investigating, and challenging, the phenomenon. Most studies support the theory that as model size increases and prompting techniques improve, models demonstrate sudden increases in accuracy and previously unseen abilities (Jason Wei et al. 2022; Snell et al. 2024). However, some researchers challenge the view that these abilities appear spontaneously, arguing instead that what is perceived as emergence is actually the result of a continuous, quantifiable accumulation of learned behaviors and incremental improvements in model performance.
Conceptual Critique
One major critique is that these so-called emergent abilities are not truly "emergent" in a spontaneous or novel sense. Instead, they are a combination of the context of the prompt, accumulated learned behaviors, and knowledge already present in the training data (Lu et al. 2024). This position suggests that models may be relying on preexisting information to demonstrate apparently new abilities rather than acquiring them spontaneously. It calls into question whether emergence can produce genuine innovation, and whether these behaviors are truly demonstrations of creativity and advanced reasoning or merely an accumulation of past data and learned behaviors.
Limitations in Quantification
Another counter-theory holds that emergent abilities reflect measurable, continuous improvements rather than sudden jumps in accuracy and ability. One paper suggests that emergent abilities are often a mirage: when discrete metrics such as exact-match accuracy are used, smooth improvements in a model's underlying capabilities can appear as abrupt jumps, whereas linear or continuous metrics reveal steady, observable improvement (Schaeffer, Miranda, and Koyejo 2023). This implies that emergent abilities in LLMs do not suddenly appear but can be measured as constant improvements over time. Another paper proposes that although LLMs have displayed dramatic leaps in performance, these changes can be traced as gradual advances when analyzed using a concept called "infinite resolution," in which the measurement scale is refined to a theoretically infinite degree, revealing continuous improvements in ability even in smaller models (Hu et al. 2024). In this view, apparent sudden jumps are artifacts of insufficient measurement resolution.
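A toy calculation makes the metric argument concrete. Suppose per-token accuracy improves smoothly with scale (the numbers below are illustrative assumptions, not results from the cited papers); the same improvement then looks gradual under a continuous per-token metric but abrupt under a discrete exact-match metric that requires every token of the answer to be correct.

```python
# Toy illustration of the "mirage" argument: a smooth underlying improvement
# looks abrupt under a discrete metric but gradual under a continuous one.
# All numbers are illustrative assumptions.

answer_length = 6  # tokens in the target answer

# Assumed per-token accuracy, improving smoothly as model scale grows.
per_token_accuracy = [0.30, 0.45, 0.60, 0.75, 0.90, 0.97]

for p in per_token_accuracy:
    exact_match = p ** answer_length  # discrete: every token must be correct
    print(f"per-token (continuous) = {p:.2f}   exact-match (discrete) = {exact_match:.3f}")

# The continuous column climbs steadily, while the discrete column stays near
# zero and then shoots upward, mimicking an apparent "emergent" jump.
```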

Conclusion
Emergent abilities in AI represent one of the most intriguing and consequential developments in modern technology. These capabilities arise as models scale in size and are trained on large, diverse datasets, often resulting in new behaviors that were never explicitly programmed. Abilities such as advanced reasoning, language translation, and multi-step problem-solving tend to surface once specific thresholds are crossed.
Understanding and predicting these abilities is essential—not only for harnessing their potential to transform industries, but also for addressing their associated risks, including the spread of harmful content, biased outputs, and opaque decision-making. The five conditions of emergence—scaling, criticality, compression, novel bases, and generalization—offer a framework for interpreting how and when these behaviors appear.
While many researchers embrace the concept of emergence, others argue that these capabilities may be explainable by more traditional means, such as incremental learning, dataset design, or prompting strategies. This ongoing debate underscores the importance of rigorous research and critical inquiry.
Ultimately, developing a clearer understanding of emergent abilities—alongside their causes, limitations, and ethical implications—is essential. As AI continues to advance and integrate into nearly every aspect of society, responsible development, informed governance, and continued investigation into emergence will be critical for shaping a future where AI serves human goals safely and effectively.
References
Anderson, Philip. 1972. “More Is Different.” https://www.tkm.kit.edu/downloads/TKM1_2011_more_is_different_PWA.pdf.
Artime, Oriol, and Manlio De Domenico. 2022. “From the Origin of Life to Pandemics: Emergent Phenomena in Complex Systems.” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 380 (2227). https://doi.org/10.1098/rsta.2020.0410.
Baker, Bowen, Ingmar Kanitscheider, Todor Markov, Yi Wu, Glenn Powell, Bob McGrew, and Igor Mordatch. 2020. “Emergent Tool Use from Multi-Agent Autocurricula.” https://arxiv.org/abs/1909.07528.
Berens, Philipp, Kyle Cranmer, Neil D. Lawrence, Ulrike von Luxburg, and Jessica Montgomery. 2023. “AI for Science: An Emerging Agenda.” https://arxiv.org/abs/2303.04217.
Berti, Leonardo, Flavio Giorgi, and Gjergji Kasneci. 2025. “Emergent Abilities in Large Language Models: A Survey.” https://arxiv.org/abs/2503.05788.
Betley, Jan, Daniel Tan, Niels Warncke, Anna Sztyber-Betley, Xuchan Bao, Martín Soto, Nathan Labenz, and Owain Evans. 2025. “Emergent Misalignment: Narrow Finetuning Can Produce Broadly Misaligned LLMs.” https://arxiv.org/abs/2502.17424.
Boiko, Daniil A., Robert MacKnight, and Gabe Gomes. 2023. “Emergent Autonomous Scientific Research Capabilities of Large Language Models.” https://arxiv.org/abs/2304.05332.
Brown, Tom B., Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, et al. 2020. “Language Models Are Few-Shot Learners.” https://arxiv.org/abs/2005.14165.
Chen, Dian, Ying Liu, Yiting Guo, and Yulin Zhang. 2024. “The Revolution of Generative Artificial Intelligence in Psychology: The Interweaving of Behavior, Consciousness, and Ethics.” Acta Psychologica 251: 104593. https://doi.org/10.1016/j.actpsy.2024.104593.
Dong, Zhichen, Zhanhui Zhou, Zhixuan Liu, Chao Yang, and Chaochao Lu. 2025. “Emergent Response Planning in LLMs.” https://arxiv.org/abs/2502.06258.
Duede, Eamon, William Dolan, André Bauer, Ian Foster, and Karim Lakhani. 2024. “Oil & Water? Diffusion of AI Within and Across Scientific Fields.” https://arxiv.org/abs/2405.15828.
Eldan, Ronen, and Yuanzhi Li. 2023. “TinyStories: How Small Can Language Models Be and Still Speak Coherent English?” https://arxiv.org/abs/2305.07759.
Hu, Shengding, Xin Liu, Xu Han, Xinrong Zhang, Chaoqun He, Weilin Zhao, Yankai Lin, et al. 2024. “Predicting Emergent Abilities with Infinite Resolution Evaluation.” https://arxiv.org/abs/2310.03262.
Jiang, Hui. 2023. “A Latent Space Theory for Emergent Abilities in Large Language Models.” https://arxiv.org/abs/2304.09960.
Krakauer, David C., John W. Krakauer, and Melanie Mitchell. 2025. “Large Language Models and Emergence: A Complex Systems Perspective.” https://arxiv.org/abs/2506.11135.
Li, Kenneth, Aspen K. Hopkins, David Bau, Fernanda Viégas, Hanspeter Pfister, and Martin Wattenberg. 2024. “Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task.” https://arxiv.org/abs/2210.13382.
Liu, Peiyu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, and Ji-Rong Wen. 2023. “Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study.” https://arxiv.org/abs/2307.08072.
Lu, Sheng, Irina Bigoulaeva, Rachneet Sachdeva, Harish Tayyar Madabushi, and Iryna Gurevych. 2024. “Are Emergent Abilities in Large Language Models Just in-Context Learning?” https://arxiv.org/abs/2309.01809.
Lyre, Holger. 2019. “Does AlphaGo Actually Play Go? Concerning the State Space of Artificial Intelligence.” https://arxiv.org/abs/1912.10005.
Mirchandani, Suvir, Fei Xia, Pete Florence, Brian Ichter, Danny Driess, Montserrat Gonzalez Arenas, Kanishka Rao, Dorsa Sadigh, and Andy Zeng. 2023. “Large Language Models as General Pattern Machines.” https://arxiv.org/abs/2307.04721.
Muckatira, Sherin, Vijeta Deshpande, Vladislav Lialin, and Anna Rumshisky. 2024. “Emergent Abilities in Reduced-Scale Generative Language Models.” https://arxiv.org/abs/2404.02204.
Nanda, Neel, Collin Burns, and Alex Turner. 2023. “Investigating Bias Representations in LLMs via Activation Steering.” https://www.lesswrong.com/posts/cLfsabkCPtieJ5LoK/investigating-bias-representations-in-llms-via-activation.
Nay, John J., David Karamardian, Sarah B. Lawsky, Wenting Tao, Meghana Bhat, Raghav Jain, Aaron Travis Lee, Jonathan H. Choi, and Jungo Kasai. 2023. “Large Language Models as Tax Attorneys: A Case Study in Legal Capabilities Emergence.” https://arxiv.org/abs/2306.07075.
Rueda, Alice, Mohammed S. Hassan, Argyrios Perivolaris, Bazen G. Teferra, Reza Samavi, Sirisha Rambhatla, Yuqi Wu, et al. 2025. “Understanding LLM Scientific Reasoning Through Promptings and Model’s Explanation on the Answers.” https://arxiv.org/abs/2505.01482.
Schaeffer, Rylan, Brando Miranda, and Sanmi Koyejo. 2023. “Are Emergent Abilities of Large Language Models a Mirage?” https://arxiv.org/abs/2304.15004.
Snell, Charlie, Eric Wallace, Dan Klein, and Sergey Levine. 2024. “Predicting Emergent Capabilities by Finetuning.” https://arxiv.org/abs/2411.16035.
Tang, Zhisheng, and Mayank Kejriwal. 2024. “Humanlike Cognitive Patterns as Emergent Phenomena in Large Language Models.” https://arxiv.org/abs/2412.15501.
Wei, Jason, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, et al. 2022. “Emergent Abilities of Large Language Models.” https://arxiv.org/abs/2206.07682.
Wei, Jason, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models.” https://arxiv.org/abs/2201.11903.
Wei, Jerry, Jason Wei, Yi Tay, Dustin Tran, Albert Webson, Yifeng Lu, Xinyun Chen, et al. 2023. “Larger Language Models Do in-Context Learning Differently.” https://arxiv.org/abs/2303.03846.
Yamansavascilar, Baris, Atay Ozgovde, and Cem Ersoy. 2025. “LLMs Are Everywhere: Ubiquitous Utilization of AI Models Through Air Computing.” https://arxiv.org/abs/2503.00767.