Artificial intelligence has quietly become part of how most businesses hire. Applicant tracking systems that rank resumes. Video interview tools that analyze tone and word choice. Job board algorithms that surface candidates based on predictive scoring. Many small business owners are using these tools without realizing they have crossed into regulated territory.
In 2026, that blind spot carries real risk. A growing number of states have passed laws specifically governing the use of AI in employment decisions — and the compliance obligations apply to businesses of all sizes.
Here is what you need to know.
Which states have AI employment laws in effect
**Colorado** was the first state to pass comprehensive AI regulation. The Colorado AI Act, which took effect in 2026, requires employers using high-risk AI systems in employment decisions to conduct annual impact assessments, notify employees when AI played a substantial role in a consequential decision, and publish a plain-language disclosure on their website explaining how the system works.
**California** amended the California Consumer Privacy Act effective January 1, 2026, to give employees new rights around automated decision-making in employment. Workers can now request human review of any AI-driven employment decision and opt out of certain automated profiling activities.
**Illinois** enacted the Artificial Intelligence Video Interview Act, which requires employers to notify applicants before AI is used to analyze video interviews, explain how the AI works, and obtain written consent. Illinois also prohibits using AI to discriminate based on protected characteristics.
**New York City** has required bias audits for automated employment decision tools since 2023. That law continues to expand in scope, and enforcement activity has increased.
**Texas and Utah** have enacted their own AI transparency laws, with requirements that overlap with, but differ from, the state laws above.
At least 22 additional states have pending AI employment legislation as of early 2026. This is one of the fastest-moving areas of employment law right now.
What counts as a high-risk AI system
The definition varies by state, but the common thread is any system that makes or substantially influences a consequential employment decision. That includes decisions about hiring, firing, compensation, promotion, performance evaluation, and scheduling.
Practically speaking, if you are using any of the following, you may be covered:
- Applicant tracking systems that rank or score resumes automatically
- Video interview platforms that analyze candidate behavior or language
- Assessments or personality tests that generate algorithmic scores
- Scheduling software that uses predictive modeling to assign or limit hours
- Performance management tools that generate automated ratings
The threshold is not whether AI makes the final call — it is whether AI substantially influenced a decision that affects an employee's livelihood.
What the compliance obligations actually require
Requirements vary by state, but across the active laws, employers are generally expected to:
**Disclose** when AI is used in an employment decision and explain in plain language what the system evaluates and how it works.
**Audit** AI systems for bias and disparate impact on protected groups, typically on an annual basis (see the illustrative calculation after this list).
**Notify** employees and applicants before AI is used to evaluate them, and in some states obtain explicit consent.
**Provide human review** when an employee challenges an AI-driven decision.
**Retain records** of how AI systems were used in employment decisions for a defined period.
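To make the audit obligation concrete: bias audits under laws like New York City's generally report impact ratios, meaning each group's selection rate divided by the selection rate of the most-selected group. Below is a minimal sketch of that calculation, assuming hypothetical candidate counts and group labels. The 0.80 threshold is the traditional four-fifths rule of thumb from federal guidelines, not a figure set by any of these statutes, and a real audit must follow the methodology and auditor-independence requirements of whichever law applies to you.

```python
# Minimal sketch of the impact-ratio math a bias audit typically reports.
# All counts and group labels are hypothetical; a real audit uses your
# tool's actual selection data and is performed by an independent auditor.

# For each group: (candidates the tool screened in, total candidates)
selections = {
    "group_a": (48, 120),
    "group_b": (30, 100),
    "group_c": (12, 60),
}

# Selection rate = share of each group the tool advanced
rates = {group: passed / total for group, (passed, total) in selections.items()}

# Impact ratio = each group's rate divided by the highest group's rate
highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    # The four-fifths rule of thumb flags ratios below 0.80 for closer review
    flag = "review" if ratio < 0.80 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```

In this hypothetical, group_a sets the benchmark and the other two groups fall below the 0.80 line, which is the kind of result that would prompt a closer look at how the tool scores candidates.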
Failing to meet these requirements can result in enforcement actions, civil penalties, and private lawsuits. In Colorado, penalties can reach $20,000 per violation.
Why this is harder than it sounds for small businesses
Most small business owners did not choose an AI-powered hiring tool — they chose an applicant tracking system that happened to include AI features, often as a default. Many of these tools do not clearly communicate when AI is active or what it is evaluating.
This creates a compliance gap: the employer is legally responsible for disclosures and audits they may not even know are required, for systems they did not fully understand when they purchased them.
The first step is a vendor audit. Review every tool in your hiring and HR tech stack and ask each vendor directly: does this product use AI or automated scoring in any employment-related decisions? If yes, what compliance documentation do you provide?
How a PEO helps
A Professional Employer Organization stays current on the employment law landscape as part of its core function. Quality PEOs are actively monitoring AI employment regulations and updating their internal processes, vendor agreements, and client guidance accordingly.
If you are using HR technology through a PEO, the PEO typically takes on responsibility for ensuring that technology is compliant in the states where you operate — including maintaining audit documentation, posting required disclosures, and updating practices when new requirements take effect.
For small businesses that lack the internal legal and HR resources to track fast-moving regulatory changes, this is one of the most practical reasons to work with a PEO.
Already working with a PEO that has not addressed AI compliance? PEO Alternatives compares the top providers so you can find one that takes compliance seriously.