
South Korea leads global surge in low-quality AI content consumption
- South Korea has emerged as the top consumer of AI-generated low-quality content.
- The country experienced the largest increase in generative AI usage globally in 2025, with significant adoption rates.
- Experts and government officials are concerned about the implications of AI slop on content ecosystems and are implementing regulations.
Story
In South Korea, consumption of low-quality AI-generated content, termed 'AI slop,' has risen sharply. The trend became especially pronounced in 2025 as large numbers of South Koreans adopted generative AI technologies. By the end of the first half of 2025, about 25.9 percent of the population reported using these AI tools, a figure that climbed to 30.7 percent by year's end. South Korea's international ranking for AI adoption rose accordingly, from 25th to 18th in just six months, making the country the largest mover in AI adoption during that period.

Experts attribute the surge partly to South Korea's cultural inclination toward swift adaptation to technological innovations, a pattern shaped by the necessity for rapid change since the 1997 financial crisis. This contrasts starkly with Japan's historically slower approach, and it has fostered a societal environment in which prompt adaptation is viewed as crucial to staying relevant. Lim Joon-ho of the Electronics and Telecommunications Research Institute emphasizes that societal norms in South Korea tend to create collective momentum toward adopting new trends. Accessibility also plays a significant role: high literacy rates and widespread 5G smartphone use make it easy to engage with platforms like YouTube.

The growing availability of low-quality content, however, raises concerns among experts about its impact on overall content quality and its potential misuse in scams or other malicious activities, particularly since many AI-generated works are produced with minimal human oversight.
In response, the government has introduced stricter regulations requiring AI service providers to clearly label AI-generated content, aiming to mitigate the risks associated with AI slop while encouraging the consumption of higher-quality material.