On the wire

US employers adapt to evolving AI and labour standards

16th February 2026

As US federal and state governments diverge on AI and labour policy, companies face mounting compliance challenges in 2026, with new laws reshaping hiring, layoffs, diversity programmes and time-off obligations.

Last year reset expectations for US employers, as a patchwork of state rules and contested federal actions reshaped how companies manage technology, workforce reductions, discrimination risks and time-off obligations. According to reporting by the Associated Press, a proposed ten-year federal moratorium on state and local AI regulation faltered in Washington, and the Senate later voted 99-1 to strip the provision from the budget bill, underscoring the unsettled regulatory terrain that employers must navigate through 2026.

Artificial intelligence moved from theoretical risk to everyday operational challenge. California’s new Transparency in Frontier Artificial Intelligence Act establishes disclosure, incident-reporting and whistleblower requirements for frontier systems, and other states are adopting strict obligations for employers that use AI in hiring and personnel decisions. Colorado’s Artificial Intelligence Act, which takes effect on 30 June 2026, will require employers deploying high-risk AI systems to conduct annual impact assessments, give candidates and employees notice and appeal rights, and report identified algorithmic discrimination to the state attorney general within 90 days, signalling substantial compliance work ahead.

Layoffs became a central labour-relations issue as companies reorganised. New state statutes create notice windows and definitions that diverge from the federal WARN framework: Washington’s “mini-WARN” act, in effect since 27 July 2025, covers employers with 50 or more full-time employees in the state and, unlike the federal law, does not confine a mass layoff to a single site of employment. Employers must therefore revisit reduction-in-force playbooks, separation agreements and disparate-impact analyses before executing workforce changes, and those with multistate headcounts should treat mass-notice and reporting rules as a primary operational risk.

Workplace diversity programmes came under intensified legal and political scrutiny. Shifts in federal agency posture and executive actions rolling back prior regulatory guidance have left employers uncertain how to align inclusion initiatives with evolving civil-rights enforcement priorities. Businesses seeking to avoid investigations are increasingly turning to outside counsel to reassess policies, training and decision-making processes against current enforcement signals.

Agreements that limit post-employment mobility also remain in flux. Even as federal attempts to impose a nationwide ban on non-competes have stalled amid heavy industry lobbying, states continue to tighten their restrictive-covenant regimes. That divergence means employers must audit existing contracts and develop state-tailored strategies to protect trade secrets and client relationships without running afoul of newly enacted local rules.

Time-off obligations continue to proliferate and to vary widely by jurisdiction, complicating workforce management for on-site and remote employees alike. Recent state-level enhancements to notice and leave requirements, alongside novel protected reasons for absence, make a single national policy impractical for many employers; targeted, jurisdictionally aware leave matrices and centralised compliance monitoring will be essential.

For 2026, the consistent theme is fragmentation: federal posture on AI and labour policy may shift, but states are actively filling governance gaps now. Employers should prioritise granular mapping of legal obligations where their workers are located, update contracts and HR processes to reflect state-specific rules, and invest in compliance workflows for AI tools and mass-employment actions to reduce regulatory, litigation and reputational risk.

Source Reference Map

Inspired by headline at: [1]

Sources by paragraph:
– Paragraph 1: [2], [3]
– Paragraph 2: [4], [7]
– Paragraph 3: [5]
– Paragraph 4: [6]
– Paragraph 5: [3], [6]
– Paragraph 6: [5]
– Paragraph 7: [7], [2]

Source: Noah Wire Services

Verification / Sources

  • https://www.jdsupra.com/legalnews/employment-law-in-the-us-the-top-5-2040693/ – Please view link – unable to access data
  • https://apnews.com/article/39d1c8a0758ffe0242283bb82f66d51a – In May 2025, House Republicans proposed a 10-year ban on state and local governments regulating artificial intelligence (AI) within a broader legislative package. This measure aimed to establish a consistent federal framework for AI governance, favouring a light-touch approach sought by the tech industry. Supporters argued that AI transcends state borders and federal oversight is necessary to avoid regulatory fragmentation. However, the provision faced significant hurdles in the Senate, where members from both parties questioned its alignment with budgetary rules and expressed concerns about federal overreach. Critics, including state lawmakers and attorneys general, decried the bill as undermining state authority, especially in light of recent state-level laws targeting AI-generated deepfakes and biases. While tech leaders like OpenAI’s Sam Altman and Microsoft’s Brad Smith advocated for a unified federal approach, others warned of dangers in delaying regulation. Despite bipartisan interest in regulating AI, little federal progress has been made, apart from a forthcoming law on AI-generated revenge porn. The debate highlights the tension between innovation, consumer protection, and regulatory jurisdiction in the rapidly evolving AI sector.
  • https://apnews.com/article/20beeeb6967057be5fe64678f72f6ab0 – In July 2025, the U.S. Senate overwhelmingly voted 99-1 to remove a controversial 10-year moratorium on state-level regulation of artificial intelligence from President Trump’s comprehensive tax and spending proposal, ‘One Big Beautiful Bill.’ This marked a major defeat for Big Tech companies, including OpenAI and Google, which had lobbied for the moratorium to prevent fragmented state regulations that could stifle AI innovation. However, critics across the political spectrum argued that the provision would block vital legislation on AI safety, especially since Congress has not passed new tech regulations in decades. Senator Marsha Blackburn (R-TN), a persistent critic of Big Tech, led the push against the moratorium, stating it would allow companies to exploit vulnerable groups without oversight. Democrats also praised the decision, stressing the importance of protecting communities and developing national AI safety laws. AI safety experts echoed these concerns, warning against unchecked corporate power. Ultimately, only Senator Thom Tillis supported keeping the moratorium. The Senate now aims to finalize the budget bill by July 4 for the President’s approval.
  • https://www.itpro.com/business/policy-and-legislation/california-ai-safety-law-signed-what-it-means – California has officially enacted the Transparency in Frontier Artificial Intelligence Act (TFAIA), the first AI safety legislation in the U.S., aiming to enhance oversight of advanced AI development. The law requires companies to publicly disclose how they mitigate serious risks, report critical safety incidents involving physical harm, and comply with international standards. It also establishes whistleblower protections and introduces civil penalties for noncompliance. Key features include the formation of CalCompute, a public computing consortium to foster ethical and sustainable AI, and authority for the state to annually update the law based on technological advances. Written by Senator Scott Wiener and signed by Governor Gavin Newsom, the TFAIA replaces a previously vetoed broader bill. It includes scaled-down penalties—$1 million for a first offense—compared to stronger proposed measures in states like New York. The law puts California in opposition to federal deregulation efforts under President Trump, who has promoted unrestricted AI development. Newsom emphasized the state’s leadership role in ensuring both innovation and public safety, especially in the absence of federal legislation.
  • https://www.hklaw.com/en/insights/publications/2025/07/washingtons-mini-warn-act-goes-into-effect-on-july-27-2025 – Washington State’s ‘mini’ version of the federal Worker Adjustment and Retraining Notification (WARN) Act imposes significant obligations on employers with 50 or more full-time employees in the state who are planning shutdowns or mass layoffs. Employers should review their workforce management and notification procedures to ensure compliance with these new requirements. Under the new statute, mass layoffs and business closings will trigger the notice requirements: A ‘mass layoff’ is defined as a reduction in force that is not the result of a business closing and results in the loss of 50 or more employees (excluding part-time employees) in a 30-day period. A ‘business closing’ is defined as the permanent or temporary shutdown of a single site of employment or one or more facilities or operating units that will result in the loss of 50 or more employees (excluding part-time employees). Unlike the federal WARN Act, a mass layoff under Washington’s new statute is not limited to employees at a single site of employment.
  • https://www.gtlaw.com/en/insights/alerts/2025/05/gt-advisory_use-of-ai-in-recruitment-and-hiring–considerations-for-eu-and-us-companies.pdf?hash=B30F8E129115BD1DEF0E016C55605FAB&rev=84a213fac965473e84c1e4f2fcb9a6a6&sc_lang=en – In contrast to the EU, the United States does not currently have uniform AI regulations on a federal level. Though the Biden administration had tasked government agencies such as the Department of Labor and the Equal Employment Opportunity Commission with monitoring the use of AI tools and issuing guidance to enhance compliance with anti-discrimination and privacy laws, in January 2025, President Trump expressed his support for deregulation, issuing an executive order entitled ‘Removing Barriers to American Leadership in Artificial Intelligence Issues.’ Federal agencies have since removed all previously issued guidance on AI use. In response to the executive order advocating for AI deregulation, regulations governing the use of AI have been introduced and passed on the state level. However, legislation passed does not always become legally binding.
  • https://www.consultils.com/post/us-ai-hiring-laws-compliance-guide-2026 – The Colorado Artificial Intelligence Act (CAIA)—one of the most comprehensive state AI laws—will take effect on June 30, 2026. Its core feature is a dual obligation structure for ‘Developers’ and ‘Deployers.’ Employers that use AI in any employment decisions (e.g., hiring, termination, promotion) are categorized as ‘Deployers.’ Any AI system affecting such decisions is considered a ‘High-Risk System.’ Key employer obligations under CAIA include: Risk Management: Employers must implement policies and procedures for managing AI risk and conduct annual impact assessments for each high-risk system to ensure no algorithmic discrimination occurs. Notice and Appeal Rights: Job candidates and employees must be informed if AI is used in employment decisions. Where technically feasible, human review must be offered, along with a clear appeals process. Timely Reporting: If algorithmic discrimination is discovered, the employer must report it to the State Attorney General within 90 days.