
Look to GDPR to Predict the Future of AI in Europe



The promise of the worldwide artificial intelligence market is staggering, and Europe, with its 450 million consumers, is an attractive destination for American tech companies looking to tap into that opportunity. Europe adopted GDPR to protect consumers in online technology, and those rules apply to AI technology as well. For US companies, building GDPR compliance into AI now is the surest way to future-proof that technology.

GDPR is the key

The EU’s General Data Protection Regulation (GDPR), which came into force in May 2018, paved the way for a new approach to privacy, digital and otherwise, but the EU is not the only government moving to protect how consumers’ personal data is used in its region. Some US states have followed suit: California passed the California Privacy Rights Act (CPRA) and recently announced that it will study the development, use, and risks of AI in the state. Now the EU’s AI Act, first proposed by the European Commission in April 2021 and expected to be finalized at the end of 2023, will be the world’s first comprehensive AI law, and, according to the Brookings Institution, it could set a worldwide standard.

As any firm doing business in Europe knows, GDPR applies a broad definition of personal data: any information relating to an identifiable, living individual, wherever it is stored. Such personal data is subject to a significant number of protections that fully apply to AI products, present and future, and companies that ignore GDPR’s current requirements and the imminent AI Act face financial penalties and costly technology revisions. In recent months, regulators have fined companies large and small for GDPR infractions as data privacy becomes embedded in European law.

According to Doug McMahon, a partner at international law firm McCann FitzGerald who specializes in IT, IP, and the implementation of GDPR, companies should be looking ahead now. “If I’m a company that breaches the GDPR when creating a large language model and I’m told I can no longer process any EU citizens’ personal data to train my model, this is potentially worse than a fine because I have to retrain my model.” His advice: think about GDPR now for any AI product.

Optimizing regulation, IP, and taxes

McMahon has advice for U.S. AI companies that want to succeed in the European market. A company can serve Europe while remaining based in the US, but “from a data protection perspective, having a base in the EU would be ideal because the company’s European customers will have questions about your GDPR compliance. Established in Europe and directly subject to GDPR will help you sell into Europe.”

The next step requires some research: the EU has 27 member states and 27 regulators, and not all regulators are alike, he says. Nor does any U.S. company want to deal with a separate regulator in every nation where it does business, which would be the case without an EU office. While the choice of regulator is unlikely to be the main factor in deciding where to locate a European base, companies will want to pick an EU location “with regulators that are used to regulating highly complex data protection companies that process lots of personal data, such as in the social media space, that have a legal infrastructure with advisors who are very familiar with complex processing of personal data and a court system well versed in the realm of data protection,” says McMahon.

According to Brian McElligott, a partner and head of the AI practice at international law firm Mason Hayes Curran, choosing a European location that offers a “knowledge development” or “patent box” regime can benefit U.S. AI firms. Available in nations such as Ireland, “the Knowledge Development Box covers copyrighted software, which is exactly the legal manifestation of AI technology,” he says. For an American company located in a country like Ireland, “if your technology is protected by a patent or copyrighted software, you can look to reduce the taxation on profits from licensed revenues from your technology covered by those patents/copyrighted software down to an effective tax rate of 6.25%.”

Most important actions

Even if a U.S. AI company chooses not to open an EU office, it must take fundamental steps to stay on the right side of privacy requirements. Notes Jevan Neilan, head of the San Francisco office at Mason Hayes Curran, “The difficulty for these businesses is having a lawful data set or a data set that can be used lawfully. It’s a challenging prospect for business, particularly when you’re a startup.

“From the ground up, you should be building in privacy,” he advises. “There might be imperfect compliance at the development stages, but ultimately, the application of the large language model needs to be compliant at the end point of the process.” The guiding principle should be “trustworthy AI,” he says.

In fact, observers expect that the AI Act’s likely transparency requirements for AI systems that interact with humans, such as chatbots and emotion-detection systems, will lead to disclosures on most websites and apps worldwide. Says McMahon: “The first piece of advice is to look at your training dataset and make sure you have a proper data protection notice available on your website to give to users and make sure that there’s an opt-out mechanism if you’re the creator of the AI data set.”

Keep individual privacy in mind

The AI market is so promising that it’s attracting companies of all sizes. According to McMahon, “Most of the companies will be using a license from, say, OpenAI to use their API. They’ll be implementing that, and then they’ll be providing services to users. In that case, they need to define their end user and if they’re offering a service to individuals or a service to a business. If the former, they need to think about what data are they collecting about them and how they will meet their transparency obligations, and in either case, they need to have a GDPR compliance program in place.”

But the due diligence doesn’t end there for smaller companies leveraging third-party large language models, he adds. “The provider of the underlying architecture must be able to say they’ve created their models in compliance with EU GDPR and that they have processes in place that evidence they’ve thought about that,” insists McMahon.

The expanding regulatory environment might challenge U.S. firms wanting to enter the large European AI market, but in the end, these rules will be helpful, according to McElligott. “Those who are looking to Europe with their AI models should look at GDPR and the AI Act and conduct a threshold analysis to determine whether their AI products might be classed as high risk,” he advises. The increasing regulation “might create a temporary slowdown of investment or in the progression of the tech in Europe versus the U.S., but ultimately, greater consumer confidence in the EU’s trustworthy AI approach could boost the market,” he says.


Gary Dempsey

Gary Dempsey is VP of technology, consumer and business services for IDA Ireland. In this role, he assists companies with their evaluation of Ireland as a location for international operations across a range of sectors, including media & communications, e-commerce, internet, consumer products, B2C, B2B, professional business services, and industrial products & services.


