
In 2024, the European Union introduced a comprehensive regulatory framework for artificial intelligence (AI). The objective was to create a structured environment for the use of AI systems. This new regulation, known as the “AI Act”, could have significant implications for the language industry, particularly for translation companies.

The translation industry and AI: A familiar relationship

The translation industry was among the first to embrace AI and machine learning technologies, beginning in the early 2010s with the partial adoption of machine translation. Thus, the transition to neural machine translation engines and the widespread use of large language models (LLMs) in 2023 didn’t represent a drastic leap. Instead, these technologies were adopted gradually, allowing the language industry to become accustomed to their limitations, learn their risks, and leverage their potential. According to a 2024 survey by Slator, two-thirds of language service providers regularly use some form of AI-based technology. This includes not only translation but also other aspects of their services. For instance, Villam Language Services, a Budapest-based translation company, integrates LLM capabilities into its project management processes, such as supporting decisions that can’t be made by fixed rules. Another goal is to reduce response times. The industry largely agrees that the new technology should be used wisely and that users must be protected from the risks associated with its misuse.

The AI Act: A groundbreaking regulation
The AI Act is the world’s first comprehensive regulation that imposes differentiated obligations on AI system developers, users, and distributors based on various levels of risk. The translation industry is primarily affected in terms of its use of AI.

What is the AI Act, and how does it affect the language industry?

The AI Act regulates AI systems and general-purpose AI (GPAI) models. It distinguishes four risk categories: unacceptable, high, limited, and minimal risk. Each category comes with different obligations for developers, users, and distributors. The goal is clear: to ensure the safety and transparency of AI systems while fostering innovation and strengthening (regional) competitiveness.

A risk-based approach

The AI Act focuses on assessing and managing the risks of AI systems. It doesn’t focus on the technology itself but rather how it is used. The legislation differentiates between the following risk categories:

  • Unacceptable risk: AI systems that could cause serious social or personal harm, such as those using manipulative techniques, fall into this category. The AI Act simply bans these use cases. Examples include social scoring systems and AI systems that manipulate individuals.
  • High risk: AI systems used in biometric identification, education, or workforce management are subject to special regulations. If a company uses AI to evaluate employee performance, it could be classified as a high-risk system and must meet strict requirements. For a translation agency, this would likely apply more to its role as a customer rather than as a provider.
  • Limited risk: These systems must meet transparency requirements. For example, if a translation company uses AI for machine translation, it must ensure that users are clearly informed that they are interacting with an AI system.
  • Minimal risk: AI systems that don’t fall into the above categories are subject to minimal regulation (e.g. computer games or email spam filters).
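As an illustration of the limited-risk transparency duty described above, a translation workflow might attach an explicit AI disclosure to machine-translated output before delivery. The sketch below is purely hypothetical: the function name, workflow, and disclosure wording are assumptions for illustration, not anything prescribed by the AI Act.

```python
# Hypothetical sketch: attaching an AI-use disclosure to machine-translated
# output so the recipient is clearly informed that AI was involved.
# The wording and workflow are illustrative assumptions, not AI Act text.

AI_DISCLOSURE = (
    "Notice: this translation was produced with the assistance of an "
    "AI-based machine translation system."
)

def deliver_translation(translated_text: str, used_ai: bool) -> str:
    """Return the deliverable text, prepending a disclosure when AI was used."""
    if used_ai:
        return f"{AI_DISCLOSURE}\n\n{translated_text}"
    return translated_text
```

In practice, the disclosure could equally be delivered in the quote, the delivery note, or the service agreement; the point is only that the client is informed before relying on the output.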

The AI Act’s impact on translation companies

For translation companies, the AI Act could bring significant changes. The new regulation may require agencies to conduct more thorough reviews of their AI-based tools, especially in high-risk scenarios. Documentation, risk management, technical compliance, and data processing practices could all face stricter regulations.

For example, a translation company will have a duty to inform clients if it uses AI for translation tasks. However, it may also be required to produce detailed technical documentation if it plans to automate project management or HR processes. Moreover, the company must ensure that its AI system complies with legal requirements and operates under human supervision to prevent erroneous decisions.

The territorial scope of the AI Act

The AI Act applies not only to companies operating within the EU but also to those that make their AI systems available in the EU, regardless of their location. This is particularly important for translation agencies operating globally, which must take the AI Act’s provisions into consideration if they offer their services in the EU. For example, a translation agency based in Hungary cannot claim exemption just because it uses software developed in the United States.

When does the new regulation come into force?

The AI Act came into force on 2 August 2024, and its various provisions will take effect at different times:

  • 2 February 2025: The first provisions of the AI Act come into force, including the prohibition of certain AI systems.
  • 2 August 2025: The obligations related to general-purpose AI (GPAI) models will take effect.
  • 2 August 2026: The remaining obligations, including those concerning high-risk AI systems, will come into force.
  • 2 August 2027: The regulation will be fully enforced, covering AI systems that require third-party conformity assessment under EU regulations and AI systems that serve as safety components of such products.

New supervisory bodies and fines

The AI Act establishes new European supervisory bodies responsible for ensuring compliance with the regulation. These bodies can impose substantial fines on organisations that fail to comply. For translation companies, this makes it crucial to stay up to date on the compliance of their own AI systems.

The AI Act and the GDPR: A combined approach

The General Data Protection Regulation (GDPR) primarily focuses on data protection and the processing of individuals’ personal data. Its main objective is to ensure the protection of personal data and to regulate how companies collect, store, and process these data.

The AI Act, however, introduces a broader regulatory framework that emphasises the safety, transparency, and reliability of AI systems. While the GDPR mainly pertains to data processing and protection, the AI Act focuses on the overall functioning, risk management, technical documentation, and human oversight of AI systems.

For translation companies, this means that in addition to protecting the data of their clients and employees, they must also consider the full operation and impact of their AI systems. When applied together, these two regulations ensure that AI systems meet the highest levels of safety and data protection standards while enhancing operational efficiency.