Companies Race to Govern AI Under New Rules
Businesses step up AI oversight as regulation tightens
Companies are moving fast to put guardrails on artificial intelligence. New rules are taking shape in Europe and guidance is spreading in the United States and elsewhere. Executives say they want the benefits of AI, but they also want to avoid legal risk and reputational damage. The result is a wave of internal policies, audits, and training programs across sectors from finance and healthcare to retail and logistics.
The European Union’s AI Act set the tone in 2024. It is the first broad law to classify AI by risk and to impose duties on developers and users. As European Commission President Ursula von der Leyen said, “The AI Act is a global first.” In the U.S., there is no single federal law. But the White House issued a sweeping executive order on AI in October 2023. Agencies such as the Federal Trade Commission (FTC), the Consumer Financial Protection Bureau, and the Equal Employment Opportunity Commission have warned they will enforce existing laws against unfair, deceptive, or discriminatory AI. FTC Chair Lina Khan put it bluntly: “There is no AI exemption to the laws on the books.”
What the new rules require
The EU law takes effect in stages. Bans on certain practices, such as social scoring and untargeted scraping of facial images, apply six months after the law enters into force. Most obligations for “high-risk” systems, like AI used in hiring or credit scoring, follow about two years after entry into force. Providers of general-purpose AI models face new transparency and safety duties as well. Penalties can be steep: the law allows fines of up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations.
The U.S. approach relies on sector rules and voluntary frameworks. The National Institute of Standards and Technology (NIST) published the AI Risk Management Framework to guide developers and users. Banking regulators have reminded lenders that model risk management rules apply to AI. The Securities and Exchange Commission has proposed rules on predictive data analytics for brokers and advisers. In the U.K., regulators are applying a common set of AI principles without a single new statute.
Experts say the patchwork will continue. But many provisions point in the same direction: document models, manage risks, and keep a human in charge of important decisions.
Inside the corporate response
Large firms are creating formal AI governance programs. Chief data officers and compliance teams are drafting rules for data, testing, and monitoring. Procurement teams are adding AI clauses to vendor contracts. Smaller firms are adopting templates and checklists. Boards are asking for dashboards and risk maps.
Common elements include:
- Model inventory: A registry of all AI systems in use, their purpose, owners, data sources, and risk levels.
- Risk classification: A simple method to rate systems as low, medium, or high risk, with controls that match the level.
- Documentation: Model cards, data sheets, and decision logs to explain how systems work and how they were tested.
- Human oversight: Clear rules on when a person must review or approve an AI-driven decision.
- Red-teaming and testing: Adversarial tests for safety, bias, accuracy, and security before and after deployment.
- Transparency: Notices to customers and employees when AI is used, and channels to appeal or correct outcomes.
- Third-party management: Contract terms on data rights, security, benchmarks, and audit rights for AI vendors.
Several companies are also setting up internal “AI review boards.” These groups can green-light projects, block them, or require changes. They often include legal, security, data science, compliance, and business leaders. Training programs are spreading, too, with short modules on prompt use, data privacy, and the limits of AI output.
Why the pressure is rising
Adoption is accelerating. Generative AI tools are moving from pilots to production. Productivity gains are the lure. A 2023 McKinsey report estimated that generative AI could add $2.6 trillion to $4.4 trillion to the global economy each year. But errors, bias, and data leaks can erase those gains. Lawsuits are mounting over copyrighted training data and false statements produced by chatbots. Regulators are asking who is accountable when AI harms someone.
Sam Altman, chief executive of OpenAI, told U.S. lawmakers in 2023: “Regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.” That view has spread across the industry. Many firms now seek to prove not only that AI works, but that it is safe, fair, and explainable.
The cost of compliance—and the tools market
Compliance is not free. Banks and insurers already pay for model risk staff and validation tools. They are now expanding those programs to cover AI. Manufacturers and retailers are building new capability from scratch. Budget lines include data labeling, red-teaming, legal reviews, and continuous monitoring.
A growing set of vendors promise to help. Some products scan code and prompts for data leaks. Others automate documentation. Cloud providers offer “guardrails” and evaluation suites. Consultants sell AI governance playbooks for different sectors. Buyers say the market is fragmented. Integrating tools with existing systems remains a challenge.
What small businesses need to know
Small and midsize firms face a tougher balance. They want AI to cut costs and compete. But they have fewer resources. Experts say a simple, risk-based approach is enough for most cases:
- Start with a short policy on acceptable AI use.
- Create a basic inventory of AI tools and vendors.
- Prioritize high-impact use cases for extra checks.
- Use human review for hiring, lending, or safety-related decisions.
- Be transparent with customers and employees.
- Review contracts for data rights and indemnities.
Industry groups and standards bodies offer free resources. The NIST framework is voluntary and practical. ISO is developing technical standards on AI management and auditing. Trade associations provide sector-specific guidance.
Risks, rights, and the road ahead
Supporters of strict rules say guardrails build trust and protect rights. Critics fear heavy compliance will slow innovation and favor incumbents. The EU has promised tailored support and lighter duties for startups and small firms. In the U.S., the debate continues in Congress, while states explore their own rules.
Global firms worry about overlapping regimes. They must map requirements across the EU, U.K., U.S., Canada, and Asia. Many are adopting a “highest common denominator” approach. They implement controls that can satisfy multiple regulators at once.
Despite the uncertainty, some themes are clear. Documentation, testing, and oversight are here to stay. Clear accountability is essential. Firms that manage AI risks well may gain an edge with customers and regulators.
The stakes are high. AI can speed up service, cut waste, and unlock new products. It can also entrench bias, spread errors, and expose data. The way businesses govern AI now will shape outcomes for years.
Bottom line
AI is moving from experiment to infrastructure. Lawmakers are catching up. Companies are building controls in response. The goal is not to stop AI, but to use it responsibly. As the FTC’s Khan warned, markets will not overlook harm because a tool is new or clever. And as the EU’s von der Leyen noted, the world is watching the first comprehensive attempt to set the rules. Businesses that act early on governance will be better prepared for audits, for customers’ questions, and for the next wave of regulation.