
Artificial Intelligence (AI)

          On April 28, 2022, Covington convened experts across our practice groups for the Covington Robotics Forum, which explored recent developments and forecasts relevant to industries affected by robotics.  One segment of the Robotics Forum covered risks of automation and AI, highlights of which are captured here.  A full recording of the Robotics Forum is available here until May 31, 2022.

As AI and robotics technologies mature, their use cases are expected to grow into increasingly complex areas and to pose new risks. Because lawsuits have been settled before courts could decide liability questions, no settled case law yet exists to identify where liability rests among robotics engineers, AI designers, and manufacturers.  Scholars and researchers have proposed addressing these issues through products liability and discrimination doctrines, including the creation of new legal remedies specific to AI technology and particular use cases, such as self-driving cars.  Proposed approaches to liability through existing doctrines have included:

Continue Reading Robotics Spotlight: Risks of Automation and AI

At the Covington Robotics Forum on April 28, 2022, Winslow Taub, Partner in Covington’s Technology Transactions Practice Group, and Jennifer Plitsch, Chair of Covington’s Government Contracts Practice Group, discussed the robotics issues presented in private transactions.

This quarterly update summarizes key federal legislative and regulatory developments in the first quarter of 2022 related to artificial intelligence (“AI”), the Internet of Things (“IoT”), connected and automated vehicles (“CAVs”), and data privacy, and highlights a few particularly notable developments in the States.  In the first quarter of 2022, Congress and the Administration focused

A recent AAA study revealed that, although the pandemic has resulted in fewer cars on the road, traffic deaths have surged.  Speeding, alcohol impairment, and reckless driving have caused the highest levels of crashes seen in decades, and the National Safety Council estimates a 9% increase in roadway fatalities from 2020.  Autonomous vehicles (AVs) have the

In 2021, European lawmakers and agencies issued a number of proposals to regulate artificial intelligence (“AI”), the Internet of Things (“IoT”), connected and automated vehicles (“CAV”), and data privacy, as well as reports and funding programs to pursue developments in these emerging areas.  From the adoption of more stringent cybersecurity standards for IoT devices to the deployment of standards-based autonomous vehicles, European lawmakers and agencies have also promulgated new rules and guidance to promote consumer awareness and safety.  While our team tracks developments across EMEA, this roundup focuses on a summary of the key developments in Europe in 2021 and what is likely to happen in 2022.

Part I: Internet of Things

With digital policy being a core priority for the current European Commission, the EU has pursued a range of initiatives in the area of IoT.  These developments tend to be interspersed throughout a range of policy and legislative decisions, which are highlighted below.

Connecting Europe Facility and IoT Funding

In July 2021, the European Parliament and Council of the EU adopted a regulation establishing the Connecting Europe Facility (€33.7 billion for 2021-2027) to accelerate investment in trans-European networks while respecting technological neutrality.  In particular, the regulation noted that the viability of “Internet of Things” services will require uninterrupted cross-border coverage with 5G systems, to enable users and objects to remain connected while on the move.  Given that 5G deployment in Europe is still sparse, road corridors and train connections are expected to be key areas for the first phase of new applications in the area of connected mobility and therefore constitute vital cross-border projects for funding under the Connecting Europe Facility.  The Parliament had also called earlier for “stable and adequate funding” for investments in AI and IoT, as well as for building transport and ICT infrastructure for intelligent transport systems (ITS), to ensure the success of the EU’s data economy.

In May 2021, the Council adopted a decision establishing a specific research funding programme (€83.4 billion for 2021-2027) under Horizon Europe.  In specifying the EU’s priorities, the decision identified the importance of IoT in health care, cybersecurity, key digital technologies including quantum technologies, next generation Internet, space, and satellite communications.
Continue Reading EMEA IoT & CAV Legislative and Regulatory Roundup 2021 and Forecast 2022

As 2021 comes to a close, we will be sharing the key legislative and regulatory updates for artificial intelligence (“AI”), the Internet of Things (“IoT”), connected and automated vehicles (“CAVs”), and privacy this month.  Lawmakers introduced a range of proposals to regulate AI, IoT, CAVs, and privacy as well as appropriate funds to study developments

If there is a silver lining to most crises, the accelerated move toward digitized commerce, globally and in Africa, may be one positive outcome of the COVID-enforced lockdown. Against that backdrop, it is welcome news that the South African Minister of Communications and Digital Technologies (“Minister”) published the Draft National Data and Cloud Policy (in Government Gazette

On 27 October 2021, the U.S. Food and Drug Administration (“FDA”), Health Canada, and the United Kingdom’s Medicines and Healthcare products Regulatory Agency (“MHRA”) (together the “Regulators”) jointly published 10 guiding principles to inform the development of Good Machine Learning Practice (“GMLP”) for medical devices that use artificial intelligence and machine learning (“AI/ML”).


On 22 September 2021, the UK Government published its 10-year strategy on artificial intelligence (“AI”; the “UK AI Strategy”).

The UK AI Strategy has three main pillars: (1) investing in and planning for the long-term requirements of the UK’s AI ecosystem; (2) supporting the transition to an AI-enabled economy across all sectors and regions

In April 2021, the European Commission released its proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the “Regulation”), which would establish rules on the development, placing on the market, and use of artificial intelligence systems (“AI systems”) across the EU. The proposal, comprising 85 articles and nine annexes, is part of a wider package of Commission initiatives aimed at positioning the EU as a world leader in trustworthy and ethical AI and technological innovation.

The Commission’s objectives with the Regulation are twofold: to promote the development of AI technologies and harness their potential benefits, while also protecting individuals against potential threats to their health, safety, and fundamental rights posed by AI systems. To that end, the Commission proposal focuses primarily on AI systems identified as “high-risk,” but also prohibits three AI practices and imposes transparency obligations on providers of certain non-high-risk AI systems. Notably, it would impose significant administrative costs on providers of high-risk AI systems, estimated at around 10 percent of a system’s underlying value, reflecting compliance, oversight, and verification costs. This blog highlights several key aspects of the proposal.

Definition of AI systems (Article 3)

The Regulation defines AI systems as software using one or more “techniques and approaches” and which “generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with.” These techniques and approaches, set out in Annex I of the Regulation, include machine learning approaches; logic- and knowledge-based approaches; and “statistical approaches, Bayesian estimation, [and] search and optimisation methods.” Given the breadth of these terms, a wide range of technologies could fall within the scope of the Regulation’s definition of AI.

Territorial scope (Article 2)

The Regulation would apply not only to AI systems placed on the market, put into service, or used in the EU, but also to systems, wherever marketed or used, “where the output produced by the system is used in the Union.” The latter requirement could raise compliance challenges for suppliers of AI systems, who might not always know, or be able to control, where their customers will use the outputs generated by their systems.

Prohibited AI practices (Article 5)

The Regulation prohibits certain AI practices that are deemed to pose an unacceptable level of risk and contravene EU values. These practices include the provision or use of AI systems that either deploy subliminal techniques (beyond a person’s consciousness) to materially distort a person’s behaviour, or exploit the vulnerabilities of specific groups (such as children or persons with disabilities), in both cases where physical or psychological harm is likely to occur. The Regulation also prohibits public authorities from using AI systems for “social scoring”, where this leads to detrimental or unfavourable treatment in social contexts unrelated to the contexts in which the data was generated, or is otherwise unjustified or disproportionate. Finally, the Regulation bans law enforcement from using “real-time” remote biometric identification systems in publicly accessible spaces, subject to certain limited exceptions (such as searching for crime victims, preventing a threat to life or safety, or criminal law enforcement for significant offenses).

Classification of high-risk AI systems (Article 6)

The Regulation classifies certain AI systems as inherently high-risk. These systems, enumerated exhaustively in Annexes II and III of the Regulation, include AI systems that are, or are safety components of, products already subject to EU harmonised safety regimes (e.g., machinery, toys, elevators, and medical devices); products covered by other EU legislation (e.g., motor vehicles, civil aviation, and marine equipment); and AI systems that are used in certain specific contexts or for specific purposes (e.g., biometric identification or educational and vocational training).


Continue Reading European Commission Proposes New Artificial Intelligence Regulation
