As early as this week, the Federal Senate of the Brazilian National Congress may vote on a potentially historic tax reform, revamping a tax system that has been in place since the 1960s and has grown in complexity, inefficiency, and compliance cost over the years.

The reform is a draft constitutional amendment (PEC) that requires a favorable vote by at least three-fifths of the members of each chamber of Congress in two rounds of voting (308 in the House of Deputies and 49 in the Senate).

The House approved the amendment on July 7, 2023, with 382 and 370 votes in the first and second rounds, respectively.  The Senate must now vote on the amendment.

Pressure Politics in the Senate

The reform is largely focused on consumption taxes, creating a full-fledged value-added tax (VAT) for Brazil, although it also includes changes in property taxes.  Its outline, political economy, and approval process were discussed in this blog post.

The Senate rapporteur’s report includes key changes to the House-approved draft text.

The Senate is under pressure to establish a tax ceiling for the VAT.  President Luiz Inácio Lula da Silva’s administration is pursuing a strategy to increase government revenue in order to achieve the country’s ambitious new fiscal framework goals.  Private sector groups are concerned the administration might push for a VAT rate higher than the existing tax level, increasing the burden on companies.  They are also concerned about the scope of the proposed Selective Tax on goods and services with negative health and environmental externalities.  The opposition in Congress is echoing these fears.

Continue Reading Key Vote on Tax Reform Expected in Brazil’s Senate

On October 10, 2023, California Governor Gavin Newsom signed S.B. 362, the Delete Act (the “Act”), into law.  The new law represents a substantive overhaul of California’s existing data broker statute, which requires data brokers to register with the California Attorney General annually.  The passage of the Act follows

Continue Reading California Amends Data Broker Law

A seemingly technical development could have potentially significant consequences for cloud service providers established outside the EU. The proposed EU Cybersecurity Certification Scheme for Cloud Services (EUCS)—which has been developed by the EU cybersecurity agency ENISA over the past two years and is expected to be adopted by the European Commission as an implementing act in Q1 2024—would, if adopted in its current form, establish certain requirements that could:

  1. exclude non-EU cloud providers from providing certain (“high” level) services to European companies, and
  2. preclude EU cloud customers from accessing the services of these non-EU providers.

Data Localization and EU Headquarters

The EUCS arises from the EU’s Cybersecurity Act, which called for the creation of an EU-wide security certification scheme for cloud providers, to be developed by ENISA and adopted by the Commission through secondary law (as noted in an earlier blog). After public consultations in 2021, ENISA set up an ad hoc working group tasked with preparing a draft.

France, Italy, and Spain submitted a proposal to the working group advocating the addition of new criteria that companies would have to meet to qualify to offer services at the highest (“high”) level of security. The proposed criteria included localization of cloud services and data within the EU – meaning in essence that providers would need to be headquartered in, and have their cloud services provided from, the EU. Ireland, Sweden, and the Netherlands argued that such requirements do not belong in a cybersecurity certification scheme, as requiring cloud providers to be based in Europe reflected political rather than cybersecurity concerns, and therefore proposed that the issue should be discussed by the Council of the EU.

Continue Reading Implications of the EU Cybersecurity Scheme for Cloud Services

Yesterday, the White House issued a Fact Sheet summarizing its Executive Order on a comprehensive strategy to support the development of safe and secure artificial intelligence (“AI”).  The Executive Order follows a number of actions by the Biden Administration on AI, including its Blueprint for an AI Bill of Rights and voluntary commitments from certain developers of AI systems.  According to the Administration, the Executive Order establishes new AI safety and security standards, protects privacy, advances equity and civil rights, protects workers, consumers, and patients, promotes innovation and competition, and advances American leadership.  This blog post summarizes these key components.

  • Safety & Security Standards:  The Executive Order sets out several required actions for developers of AI systems.  Notably, the White House, “in accordance with the Defense Production Act,” will require companies developing any foundation model “that poses a serious risk to national security, national economic security, or national public health and safety” to notify the federal government when training the model and provide results of all red-team safety tests to the government.

Relatedly, the Executive Order directs certain federal agencies to undertake the following actions and initiatives:

  • National Institute of Standards and Technology:  establish standards for red-teaming required before the public release of an AI system.
  • Department of Homeland Security:  apply the NIST standards to the use of AI in critical infrastructure sectors and establish an AI Safety and Security Board.
  • Departments of Energy and Homeland Security:  address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks; the Order also calls for the creation of standards for biological synthesis screening.
  • Department of Commerce:  develop guidance for content authentication and watermarking to label AI-generated content received by the government; the Order also suggests that federal agencies would be required to use these tools.
  • National Security Council & White House Chief of Staff:  develop a National Security Memorandum ensuring that the United States military and intelligence community use AI safely, ethically, and effectively.

Continue Reading Biden Administration Announces Artificial Intelligence Executive Order

On 26 October 2023, the UK’s Online Safety Bill received Royal Assent, becoming the Online Safety Act (“OSA”).  The OSA imposes various obligations on tech companies to prevent the uploading of, and rapidly remove, illegal user content—such as terrorist content, revenge pornography, and child sexual exploitation material—from their services, and

Continue Reading UK Online Safety Bill Receives Royal Assent

On October 17, 2023, the U.S. Government Accountability Office (“GAO”) published a report on mergers and acquisitions (“M&A”) in the defense industrial base. The report details the current M&A review process of the Department of Defense (“DOD”) and provides recommendations to proactively assess M&A competition risks.

Currently, DOD’s Industrial Base

Continue Reading GAO Recommends Increased Guidance for DOD Mergers & Acquisitions Review

The field of artificial intelligence (“AI”) is at a tipping point. Governments and industries are under increasing pressure to forecast and guide the evolution of a technology that promises to transform our economies and societies. In this series, our lawyers and advisors provide an overview of the policy approaches and regulatory frameworks for AI in jurisdictions around the world. Given the rapid pace of technological and policy developments in this area, the articles in this series should be viewed as snapshots in time, reflecting the current policy environment and priorities in each jurisdiction.

The following article examines the state of play in AI policy and regulation in the United States. The previous article in this series covered the European Union.

Future of AI Policy in the U.S.

U.S. policymakers are focused on AI platforms as they explode into the mainstream.  AI has emerged as an active policy space across Congress and the Biden Administration, as officials scramble to educate themselves on the technology while crafting legislation, rules, and other measures to balance U.S. innovation leadership with national security priorities.

Over the past year, AI issues have drawn bipartisan interest and support.  House and Senate committees have held nearly three dozen hearings on AI this year alone, and more than 30 AI-focused bills have been introduced so far this Congress.  Two bipartisan groups of Senators have announced separate frameworks for comprehensive AI legislation.  Several AI bills—largely focused on the federal government’s internal use of AI—have also been voted on and passed through committees. 

Meanwhile, the Biden Administration has announced plans to issue a comprehensive executive order this fall to address a range of AI risks under existing law.  The Administration has also taken steps to promote the responsible development and deployment of AI systems, including securing voluntary commitments regarding AI safety and transparency from 15 technology companies. 

Despite strong bipartisan interest in AI regulation, commitment from leaders of major technology companies engaged in AI R&D, and broad support from the general public, passing comprehensive AI legislation remains a challenge.  No consensus has emerged around either substance or process, with different groups of Members, particularly in the Senate, developing their own versions of AI legislation through different procedures.  In the House, a bipartisan bill would punt the issue of comprehensive regulation to the executive branch, creating a blue-ribbon commission to study the issue and make recommendations.

I. Major Policy & Regulatory Initiatives

Three versions of a comprehensive AI regulatory regime have emerged in Congress – two in the Senate and one in the House.  We preview these proposals below.

A. SAFE Innovation: Values-Based Framework and New Legislative Process

In June, Senate Majority Leader Chuck Schumer (D-NY) unveiled a new bipartisan proposal—with Senators Martin Heinrich (D-NM), Todd Young (R-IN), and Mike Rounds (R-SD)—to develop legislation to promote and regulate artificial intelligence.  Leader Schumer proposed a plan to boost U.S. global competitiveness in AI development, while ensuring appropriate protections for consumers and workers.

Continue Reading Spotlight Series on Global AI Policy — Part II: U.S. Legislative and Regulatory Developments

On October 13, 2023, the Food and Drug Administration (FDA) announced that the guidance entitled “Prior Notice of Imported Food Questions and Answers (Edition 4): Guidance for Industry,” originally released as draft guidance on September 13, 2022, has been made final. FDA received no comments on the draft

Continue Reading FDA Announces Availability of Final Guidance for Industry: Prior Notice of Imported Food Questions and Answers (Edition 4)

October 17, 2023, Covington Alert

What You Need to Know

  • On October 4, 2023, Deputy Attorney General Lisa Monaco provided new and expanded policy guidance on corporate criminal enforcement, announcing a new Mergers and Acquisitions Safe Harbor Policy (“Safe Harbor Policy”).
  • The Safe Harbor Policy provides acquiring companies an opportunity to avoid criminal charges if they voluntarily self-disclose misconduct at acquired companies within six months of a merger or acquisition (“M&A”), fully cooperate in any DOJ investigation, engage in timely and appropriate remediation within one year of the transaction closing date, and pay restitution or disgorgement, as appropriate.
  • The Safe Harbor Policy—which we expect will be formalized in writing and incorporated into the Justice Manual—appears to draw heavily on policies and guidance from the Criminal Division dating back to 2008, but that will now be formalized, clarified, and applied across the Department, with different parts of the Department “tailor[ing] its application . . . to fit their specific enforcement regime.”
  • As with all of the Department’s recent policy announcements concerning the benefits of voluntary disclosure, significant questions remain. We discuss some of those below, and we will be watching to see how DOJ applies the Safe Harbor Policy in practice. At a minimum, however, companies should ensure that their pre- and post-closing diligence and integration processes are designed to quickly identify legacy or ongoing misconduct at acquired companies so that they may have an opportunity to consider the expected benefits and burdens associated with a voluntary disclosure under the Safe Harbor Policy.
  • In addition to announcing the Safe Harbor Policy, Deputy Attorney General Monaco noted a “dramatic” expansion in national security enforcement, new enforcement tools that the Department is deploying, continued focus on incentivizing companies to seek compensation clawbacks from individual wrongdoers, and even more policy changes to come. Deputy Attorney General Monaco’s announcement follows recent shifts in enforcement remedies sought by the Department, such as divestiture in certain criminal antitrust cases—an unprecedented remedial measure.

Continue Reading DOJ Provides Further Voluntary Disclosure Incentives, This Time Linked to M&A Transactions, and Signals Other Areas of Focus

Only one claim survived dismissal in a recent putative class action lawsuit alleging that a pathology laboratory failed to safeguard patient data in a cyberattack.  See Order Granting Motion to Dismiss in Part, Thai v. Molecular Pathology Laboratory Network, Inc., No. 3:22-CV-315-KAC-DCP (E.D. Tenn. Sep. 29, 2023), ECF 38.

Continue Reading All but One Claim in Pathology Lab Data Breach Class Action Tossed on Motion to Dismiss