
Kristina Fort on AI Power, Regulation, and the Global Race for Control

Oct 1, 2025

Kristina Fort examined the limits of assessing AI capabilities, the growing geopolitical competition around AI, and the challenges of regulating rapidly evolving technologies.

In a recent interview for Denník N, Kristina Fort reflected on how artificial intelligence has moved from a predominantly technical domain into a political and strategic issue. Early discussions around AI, she noted, were dominated by technical experts, while perspectives from the social sciences and humanities were missing, which limited the ability to address broader societal and governance questions.


Fort pointed to the United Kingdom as an environment where AI became politically visible earlier than elsewhere, with political leaders recognising sooner how consequential the technology would become.


According to her, 2024 marked a clear turning point. The debate shifted away from building institutions centred on AI safety toward competition over technological leadership. Artificial intelligence is increasingly seen as so central to the future that control over it has become strategically important.


“Artificial intelligence will be so fundamental for the future that it matters who controls it and who manages to develop the most powerful models first.”

She explained that earlier efforts at cooperation are giving way to a fragmented landscape in which states act independently, aiming either to become the leading power or to remain close to the eventual leader. This trend is visible in the United Kingdom as well as in other countries.


Who holds power over AI matters primarily because of security concerns, Fort stressed. Although the industry’s growing importance has prompted discussion in the United States about possibly nationalising AI companies, no such step has been taken. Instead, she pointed to a clear trend of AI companies moving closer to the political establishment. In China, where industry and politics are already closely intertwined, that connection is even stronger in the AI sector.


Planning regulation under these conditions remains difficult. The biggest challenge for governments is the speed at which AI is developing. Most public administrations lack the capacity to anticipate possible AI-driven scenarios or to assess their broader impact. Fort highlighted the United Kingdom as a partial exception, referencing its AI Security Institute, which focuses on evaluating AI models, producing forecasts, and strengthening societal resilience.


At the European level, she described the EU AI Act as an attempt to build flexibility into regulation. While the legislation remains relatively unspecific in many areas, it is complemented by self-regulatory codes of practice that clarify expectations without imposing direct sanctions. The Act rests on classification and proportionality: systems that are not considered high-risk face significantly lighter regulatory burdens, while larger actors are subject to stricter requirements.


However, Fort pointed out that the AI Act does not address legal liability for AI models, an issue that was meant to be covered by a separate measure, the proposed AI Liability Directive, which was later withdrawn. As a result, early court cases involving AI developers are now emerging in both the EU and the United States.


Responsibility is also shifting increasingly to EU member states. Monitoring and enforcement will take place at the national level, requiring each country to designate competent authorities: Poland is establishing a new specialised agency, while in the Czech Republic the task is expected to fall to an existing office. Member states must also implement the Act in national law. At the same time, some countries, including Poland, are lobbying to soften the regulation in order to support the development of the European AI industry.


Fort concluded that the most effective response at the national level is to build state capacity: staffing enforcement bodies adequately, developing the ability to anticipate future scenarios, preparing crisis plans, and establishing expert advisory bodies capable of guiding governments as AI’s impact continues to grow.
