Jennifer Johnson is a partner specializing in communications, media and technology matters who serves as co-chair of Covington’s global and multi-disciplinary Internet of Things (IoT) group. She represents and advises content distributors, broadcast companies, trade associations, and other media and technology entities on a wide range of issues. Jennifer has more than two decades of experience advising clients in the communications, media and technology sectors, and has served as a co-chair for these practices for more than 15 years. On IoT issues, she collaborates with Covington's global, multi-disciplinary team to assist companies in navigating the complex statutory and regulatory constructs surrounding this evolving area, including legal issues with respect to connected and autonomous vehicles, internet-connected devices, smart ecosystems, and other IoT products and services.

Jennifer assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission (FCC) and Congress and through transactions and other business arrangements. She regularly advises clients on FCC regulatory matters and advocates frequently before the FCC. Jennifer has extensive experience negotiating content acquisition and distribution agreements for media and technology companies, including program distribution agreements with cable, satellite, and telco companies, network affiliation and other program rights agreements for television companies, and agreements providing for the aggregation and distribution of content on over-the-top app-based platforms. She also assists investment clients in structuring, evaluating, and pursuing potential investments in media and technology companies.

On April 25, 2023, four federal agencies — the Department of Justice (“DOJ”), Federal Trade Commission (“FTC”), Consumer Financial Protection Bureau (“CFPB”), and Equal Employment Opportunity Commission (“EEOC”) — released a joint statement on the agencies’ efforts to address discrimination and bias in automated systems. 

The statement applies to “automated systems,” which are broadly defined to mean “software and algorithmic processes” beyond AI.  Although the statement notes the significant benefits that can flow from the use of automated systems, it also cautions against unlawful discrimination that may result from that use.

The statement starts by summarizing the existing legal authorities that apply to automated systems and each agency’s guidance and statements related to AI.  Helpfully, the statement serves to aggregate links to key AI-related guidance documents from each agency, providing a one-stop-shop for important AI-related publications for all four entities.  For example, the statement summarizes the EEOC’s remit in enforcing federal laws that make it unlawful to discriminate against an applicant or employee and the EEOC’s enforcement activities related to AI, and includes a link to a technical assistance document.  Similarly, the report outlines the FTC’s reports and guidance on AI, and includes multiple links to FTC AI-related documents.

After providing an overview of each agency’s position and links to key documents, the statement then summarizes the following sources of potential discrimination and bias, which could indicate the regulatory and enforcement priorities of these agencies:

  • Data and Datasets:  The statement notes that outcomes generated by automated systems can be skewed by unrepresentative or imbalanced data sets.  The statement says that flawed data sets, along with correlation between data and protected classes, can lead to discriminatory outcomes.
  • Model Opacity and Access:  The statement observes that some automated systems are “black boxes,” meaning that their internal workings are not always transparent to people, which makes them difficult to oversee.
  • Design and Use:  The statement also notes that flawed assumptions about users may play a role in unfair or biased outcomes.

We will continue to monitor these and related developments across our blogs.

Continue Reading DOJ, FTC, CFPB, and EEOC Statement on Discrimination and AI

This quarterly update summarizes key legislative and regulatory developments in the fourth quarter of 2022 related to Artificial Intelligence (“AI”), the Internet of Things (“IoT”), connected and autonomous vehicles (“CAVs”), and data privacy and cybersecurity.

Artificial Intelligence

In the last quarter of 2022, the annual National Defense Authorization Act (“NDAA”), which contained AI-related provisions, was enacted into law.  The NDAA creates a pilot program to demonstrate use cases for AI in government. Specifically, the Director of the Office of Management and Budget (“Director of OMB”) must identify four new use cases for the application of AI-enabled systems to support modernization initiatives that require “linking multiple siloed internal and external data sources.” The pilot program is also meant to enable agencies to demonstrate the circumstances under which AI can be used to modernize agency operations and “leverage commercially available artificial intelligence technologies that (i) operate in secure cloud environments that can deploy rapidly without the need to replace operating systems; and (ii) do not require extensive staff or training to build.” Finally, the pilot program prioritizes use cases where AI can drive “agency productivity in predictive supply chain and logistics,” such as predictive food demand and optimized supply, predictive medical supplies and equipment demand, and predictive logistics for disaster recovery, preparedness, and response.

At the state level, in late 2022, there were also efforts to advance requirements for AI used to make certain types of decisions under comprehensive privacy frameworks.  The Colorado Privacy Act draft rules were updated to clarify the circumstances that require controllers to provide an opt-out right for the use of automated decision-making and requirements for assessments of profiling decisions.  In California, although the California Consumer Privacy Act draft regulations do not yet cover automated decision-making, the California Privacy Protection Agency rules subcommittee provided a sample list of related questions concerning this during its December 16, 2022 board meeting.

Continue Reading U.S. AI, IoT, CAV, and Privacy Legislative Update – Fourth Quarter 2022

Many employers and employment agencies have turned to artificial intelligence (“AI”) tools to assist them in making better and faster employment decisions, including in the hiring and promotion processes.  The use of AI for these purposes has been scrutinized and will now be regulated in New York City.  The New York City Department of Consumer

This quarterly update summarizes key federal legislative and regulatory developments in the second quarter of 2022 related to artificial intelligence (“AI”), the Internet of Things, connected and automated vehicles (“CAVs”), and data privacy, and highlights a few particularly notable developments in U.S. state legislatures.  To summarize, in the second quarter of 2022, Congress and the Administration focused on addressing algorithmic bias and other AI-related risks and introduced a bipartisan federal privacy bill.

Artificial Intelligence

Federal lawmakers introduced legislation in the second quarter of 2022 aimed at addressing risks in the development and use of AI systems, in particular risks related to algorithmic bias and discrimination.  Senator Michael Bennet (D-CO) introduced the Digital Platform Commission Act of 2022 (S. 4201), which would empower a new federal agency, the Federal Digital Platform Commission, to develop regulations for online platforms that facilitate interactions between consumers, as well as between consumers and entities offering goods and services.  Regulations contemplated by the bill include requirements that algorithms used by online platforms “are fair, transparent, and without harmful, abusive, anticompetitive, or deceptive bias.”  Although this bill does not appear to have the support to be passed in this Congress, it is emblematic of the concerns in Congress that might later lead to legislation.

Additionally, the bipartisan American Data Privacy and Protection Act (H.R. 8152), introduced by a group of lawmakers led by Representative Frank Pallone (D-NJ-6), would require “large data holders” (defined as covered entities and service providers with over $250 million in gross annual revenue that collect, process, or transfer the covered data of over five million individuals or the sensitive covered data of over 200,000 individuals) to conduct “algorithm impact assessments” on algorithms that “may cause potential harm to an individual.”  These assessments would be required to provide, among other information, details about the design of the algorithm and the steps the entity is taking to mitigate harms to individuals.  Separately, developers of algorithms would be required to conduct “algorithm design evaluations” that evaluate the design, structure, and inputs of the algorithm.  The American Data Privacy and Protection Act is discussed in further detail in the Data Privacy section below.

Continue Reading U.S. AI, IoT, CAV, and Data Privacy Legislative and Regulatory Update – Second Quarter 2022

On April 28, 2022, Covington convened experts across our practice groups for the Covington Robotics Forum, which explored recent developments and forecasts relevant to industries affected by robotics.  One segment of the Robotics Forum covered risks of automation and AI, highlights of which are captured here.  A full recording of the Robotics Forum is available here until May 31, 2022.

As AI and robotics technologies mature, their use cases are expected to expand into increasingly complex areas and to pose new risks.  Because lawsuits have settled before courts could decide liability questions, no case law yet exists to identify where liability rests among robotics engineers, AI designers, and manufacturers.  Scholars and researchers have proposed addressing these issues through products liability and discrimination doctrines, including the creation of new legal remedies specific to AI technology and particular use cases, such as self-driving cars.  Proposed approaches for assigning liability through existing doctrines have included:

Continue Reading Robotics Spotlight: Risks of Automation and AI

A recent AAA study revealed that, although the pandemic has resulted in fewer cars on the road, traffic deaths have surged.  Speeding, alcohol impairment, and reckless driving have caused the highest levels of crashes seen in decades, and the National Safety Council estimates a 9% increase in roadway fatalities from 2020.  Autonomous vehicles (AVs) have the

As 2021 comes to a close, we will be sharing the key legislative and regulatory updates for artificial intelligence (“AI”), the Internet of Things (“IoT”), connected and automated vehicles (“CAVs”), and privacy this month.  Lawmakers introduced a range of proposals to regulate AI, IoT, CAVs, and privacy as well as appropriate funds to study developments

Last week, the office of Acting FCC Chairwoman Jessica Rosenworcel released a draft Notice of Inquiry (NOI) regarding spectrum availability and requirements to support the growth of Internet of Things (IoT).  The FCC will consider this NOI, which is intended to collect information and does not propose rules, in its next Open Commission Meeting scheduled