Navigating the New EU AI Regulation

The European Commission recently proposed a new regulation on artificial intelligence (AI), aiming to create a legal framework that promotes trust, innovation and human rights in the development and use of AI systems. The AI Act is expected to have a significant impact on the AI industry and its stakeholders, as well as on the consumers and users of AI systems.

What are the implications and the main challenges for businesses that need to comply with this new regulation? We asked Charles G. Manns, AI task force lead and cybersecurity and compliance manager for Konica Minolta’s video solution services and DX – Solutions Development Centre (SDC). He shared his insights on the AI Act, how it will affect Konica Minolta’s AI projects and products, and what steps the company is taking to ensure compliance and ethical standards.

Could you give us an overview of the AI Act? What are its main objectives, and what changes now?

The European AI Act is a comprehensive and ambitious regulation that aims to create a uniform, harmonized and safe approach to AI in the EU. Its main objectives are to mitigate the risks associated with AI, establish legal certainty, foster trust and confidence in AI systems among the public and the private sector, protect the rights and safety of users and consumers, and facilitate investment in AI to stimulate innovation and competitiveness.

The AI Act represents a major change in the way AI is regulated. It imposes new obligations and responsibilities on the developers, providers and users of AI systems, such as conducting risk assessments, ensuring data quality, providing transparent information and documentation, registering high-risk AI systems in a public database, and reporting any incidents or malfunctions. The AI Act also establishes a system of conformity assessments and certifications to verify that AI systems comply with the rules, and it sets out sanctions and penalties for non-compliance.

What were the main motivations behind the adoption of this Act?

The AI Act was mainly necessary to address the potential harm that AI systems can pose to people and society, such as privacy violations, discrimination, manipulation, or physical harm. By adopting this new regulation, the EU also aims to address the potential lack of transparency associated with AI technologies, ensuring that both users and developers understand how AI systems work. Last but not least, the Act was also motivated by the need to maintain the EU standards – set and achieved with previous regulations such as the GDPR – in terms of privacy, data protection and transparency in high-risk AI systems, in order to address the ethical and legal challenges posed by AI technologies.

What changes for businesses in terms of compliance?

There are certainly new compliance requirements for businesses, and the main focus is on maintaining transparency about AI system capabilities, so that everything can be accounted for. In general, the key requirements for all businesses are:

  • risk assessments
  • data governance
  • transparency
  • human oversight measures.

When we talk about human oversight measures, we mean that humans must be able to monitor the AI solution and intervene when needed. This is mainly focused on high-risk AI applications, which face much stricter requirements in this respect.
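In practice, human oversight for a high-risk system often takes the form of a human-in-the-loop gate: the model’s output is only acted on automatically when confidence is high, and is otherwise escalated to a person. The threshold and structure below are a minimal illustrative sketch, not a design prescribed by the AI Act or by Konica Minolta.

```python
from dataclasses import dataclass

# Illustrative threshold; a real deployment would calibrate this per use case.
CONFIDENCE_THRESHOLD = 0.90

@dataclass
class Decision:
    label: str
    confidence: float
    needs_human_review: bool

def route_prediction(label: str, confidence: float) -> Decision:
    """Act on high-confidence outputs automatically; escalate the rest to a person."""
    return Decision(label, confidence, needs_human_review=confidence < CONFIDENCE_THRESHOLD)
```

Routing decisions this way also produces a natural audit trail: every escalated case records which output a human reviewed, which supports the documentation duties mentioned above.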

What are the positive effects of the AI Act on businesses that rely on AI technologies?

From a business perspective, the AI Act can definitely improve consumer trust because it gives the consumers something tangible that businesses must comply with. Also, having standardized regulations helps make it clearer for us when we develop AI systems and helps us understand how to use internal AI systems as well. So that is positive both for the consumer and the processes.

As always, there are a few negative aspects to consider. For high-risk AI solutions, the AI Act will potentially increase costs, as it brings significant administrative burdens, such as risk assessments and documentation, and for smaller businesses this might be an issue. The same holds for innovative startups that want to get their AI solutions out fast, as the new requirements will definitely slow them down.

Where does Konica Minolta stand in terms of the ethical deployment of AI?

When it comes to ethics, at Konica Minolta we follow a structured procedure for AI development. First, together with the Responsible AI Office (RAO), we have created and apply an in-house assessment of the risks associated with new AI-driven products and services. The checklist is guided by various established guidelines, and we use it in each division during the product planning phase so that we can identify all potential risks and mitigate them.

Once the risk assessment phase is complete, the results are reviewed by the AI Ethics Committee. The committee addresses any problems that emerge, such as inappropriate use of AI, and it also regularly reviews and updates the checklist to keep the AI development guidelines and the ethical guidelines aligned with the evolving standards and practices that the EU AI Act brings in.

We also invest in compliance and ethical training. We train the developers and the other key stakeholders on the ethics and compliance standards of the EU AI Act. Moreover, we have the AI task force, the GDPR team and the internal compliance team. This is how we stay dedicated to AI regulation.

Do all of Konica Minolta’s products align with the new EU AI Act?

As we said, Konica Minolta is strategically aligned in technology development to fully embrace the new standard set by the EU AI Act. We want to – and we do – comply with these regulations, but we also want to enhance our reputation and trustworthiness and lead the market for responsible AI deployment. That’s why we have accelerated our technological developments, specifically on enhancing AI transparency and reliability, and we’re doing this across our product portfolio and the services that we offer. This means we use more technical tools to make sure that we meet these privacy requirements – such as data anonymization, encryption and so on. We already had high standards; now the EU has provided us with a wide-perspective framework for AI governance, ethics and responsible compliance.

So not only do we meet the new regulatory requirements set by the EU, but we also offer this enhanced value to customers by ensuring safety and efficacy, using the positive aspects of the AI Act as a selling point.

Could you give us an example?

To provide a tangible example, we can mention our FORXAI Mirror. This tool is primarily designed to assist workers and businesses by verifying that the correct Personal Protective Equipment (PPE) is worn. The system features a camera integrated with advanced machine learning algorithms to monitor PPE compliance and provide real-time feedback via a user-friendly interface. It also offers optional integration with access control systems to enhance safety protocols. And it strictly follows the process I have previously described: to develop it, we adhere to protecting people’s personal data, we deploy our encryption tools, and we thoroughly test and document our AI models.
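To make the idea concrete, a PPE-compliance check of this kind ultimately reduces to comparing the set of items a detector reports for a person against the set required for a zone. The sketch below uses assumed names and an assumed zone policy for illustration; it is not FORXAI Mirror’s actual code.

```python
# Illustrative zone policy; a real system would load this from site configuration.
REQUIRED_PPE = {"helmet", "hi_vis_vest", "safety_goggles"}

def check_ppe(detected_items: set[str]) -> tuple[bool, set[str]]:
    """Return (compliant, missing_items) for one person detected in the frame."""
    missing = REQUIRED_PPE - detected_items
    return (not missing, missing)
```

The missing-items set is what a real-time interface would surface as feedback, and the boolean is what an access-control integration would act on.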

This is how we make sure that we comply with the EU AI Act, encompassing everything we’ve discussed and focusing on the main points: privacy, transparency, risk assessment and explainability.

To sum up, what do you think are the key takeaways for businesses from the AI Act?

I think there are four key points all businesses should put in place to comply with the new European AI Act:

  • Evaluate the classification of your AI systems under the AI Act, identify potential high-risk areas and establish a continuous monitoring and assessment framework to ensure ongoing compliance.
  • Enhance the transparency and documentation about your data sourcing, data management, AI training processes and methodologies, and communicate the capabilities and limitations of your AI systems to the consumers and users.
  • Invest in compliance and ethical AI training for your developers and key stakeholders and set up an internal compliance team dedicated to AI.
  • Invest in AI technology and processes that facilitate the EU AI Act requirements, such as data anonymization, bias mitigation, compliance detection, and encryption tools.
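As an illustration of the kind of tooling the last point refers to, here is a minimal Python sketch of field-level pseudonymization, a common building block of data anonymization pipelines: direct identifiers are replaced with keyed hashes before records are used for AI training. The field names and key handling are illustrative assumptions, not any vendor’s actual implementation.

```python
import hashlib
import hmac

# Hypothetical secret; in practice this would come from a key-management service.
SECRET_KEY = b"replace-with-a-managed-secret"

# Fields treated as direct identifiers (an illustrative choice).
PII_FIELDS = {"name", "email", "badge_id"}

def pseudonymize(record: dict) -> dict:
    """Replace PII fields with keyed HMAC-SHA256 tokens.

    The same input always maps to the same token, so records can still be
    linked across datasets, but the original value cannot be recovered
    without the secret key.
    """
    out = {}
    for field, value in record.items():
        if field in PII_FIELDS:
            digest = hmac.new(SECRET_KEY, str(value).encode(), hashlib.sha256).hexdigest()
            out[field] = digest[:16]  # shortened token for readability
        else:
            out[field] = value
    return out
```

Note that pseudonymized data can still count as personal data under the GDPR, so a technique like this reduces risk but does not by itself remove regulatory obligations.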

Konica Minolta: a customer-centric approach to AI services

Since its inception in 1873, Konica Minolta has been at the forefront of technological innovation, offering cutting-edge solutions that integrate AI, automation, and hybrid cloud technologies. The company’s customer-centric, digital-first approach enables it to create smarter workflows and more dynamic decision-making for its customers. The company’s vision for AI is to enhance the digital workspace by providing solutions that improve the efficiency, productivity and quality of work, while ensuring responsible and ethical use of AI.

In line with the EU AI Act, Konica Minolta is diligently monitoring its AI services to ensure they adhere to the highest standards of safety and ethics. This includes keeping a close watch on the types of applications deployed and on the nature of the data utilized by the solutions, in order to determine the associated risks. Konica Minolta’s proactive approach ensures that its services not only comply with current regulations but also honour the trust customers place in the company.

At Konica Minolta, we strongly believe that this commitment to responsible AI use is essential for fostering innovation while safeguarding individual rights and societal values. Contact us if you have any questions on the Konica Minolta approach to AI.