The European Union has published its code of practice for general-purpose artificial intelligence, pressing ahead with its landmark regulation despite opposition from the US government and major tech companies. The finalized code sets out rules that will soon apply to advanced AI models such as OpenAI’s GPT-4 and Google’s Gemini, including measures to protect copyright and provisions for independent risk assessments of the most sophisticated systems.
Despite pressure from US tech firms and European businesses, the EU remains committed to implementing its AI rules, widely considered the toughest in the world. It has resisted calls for a two-year delay from leading European companies such as Airbus and BNP Paribas, which argue that unclear regulations threaten the EU’s competitiveness in the global AI race.
Although critics contend the rules have been diluted under pressure from the US and tech giants, the EU maintains that the code is essential to ensuring the safety and transparency of advanced AI models in Europe. Tech companies must now decide whether to sign up to the code, which still requires formal approval from the European Commission and member states.
Under the code, companies will be required to implement technical measures to prevent their AI models from reproducing copyrighted content, as well as to conduct the risk assessments outlined in the AI Act. Companies offering advanced AI models must monitor them after release and allow external evaluators to assess their capabilities, with some flexibility in how potential risks are identified.
Meanwhile, the European Commission and several European countries are working to simplify the AI Act’s complex timeline, including discussions about delaying the forthcoming rules for high-risk AI systems, such as those involving biometrics and facial recognition, which are due to take effect in the future.