Roundup of the Top AI Stories for Business this Week

As AI adoption accelerates, reality is challenging initial expectations. California’s rejection of AI safety regulations and studies showing mixed results from AI coding tools suggest the technology isn’t a simple fix. With 77% of workers reporting that AI has actually decreased their productivity, experts argue the solution lies in hybrid approaches combining AI with human expertise. Indeed’s research confirms that no skills will be fully automated, pointing to a future where AI enhances rather than replaces human capabilities. Success depends on thoughtful integration, not wholesale automation.

1. Why Hybrid AI Is The Next Big Thing In Tech
The emerging field of Hybrid AI represents a significant advancement in artificial intelligence: it combines different AI techniques and models to achieve better results than any single approach alone. While generative AI and Large Language Models have shown impressive capabilities, they also have inherent limitations. Hybrid approaches are proving particularly valuable in critical applications like healthcare, where they combine traditional machine learning’s precision with generative AI’s user-friendly interfaces. This combination enables both accurate analysis and clear communication, making it especially effective in high-stakes scenarios where reliability and accessibility are equally important.
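The healthcare pattern described above can be sketched in a few lines: a precise, deterministic model produces the analysis, and a separate layer turns it into plain language. This is an illustrative sketch only; in a real system the second step would be a generative model, and the thresholds and scoring function here are invented stand-ins, not clinical values.

```python
def risk_score(blood_pressure: int, glucose: int) -> float:
    """Stand-in for a traditional ML model: deterministic and auditable.
    Thresholds are invented for illustration, not clinical guidance."""
    score = 0.0
    if blood_pressure > 140:  # hypothetical threshold
        score += 0.5
    if glucose > 125:  # hypothetical threshold
        score += 0.5
    return score


def explain(score: float) -> str:
    """Stand-in for the generative layer: converts the precise output
    into user-friendly language (an LLM would do this in practice)."""
    if score >= 0.5:
        return f"Elevated risk detected (score {score:.1f}); please consult a clinician."
    return f"No elevated risk detected (score {score:.1f})."


# The precise model decides; the generative layer only communicates.
message = explain(risk_score(blood_pressure=150, glucose=110))
print(message)  # → Elevated risk detected (score 0.5); please consult a clinician.
```

The key design point is the separation of concerns: the component that must be reliable never generates free text, and the component that generates text never makes the decision.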

2. There are ‘literally zero’ skills where AI could replace a human
According to a comprehensive analysis by Indeed, there are no skills that are “very likely” to be completely replaced by AI, with less than 3% of analysed skills even being “likely” to be replaced. The study reveals that AI-related jobs constitute only about 0.1% of U.S. jobs, suggesting that fears of widespread job displacement may be overblown. While AI can support human work in fields like HR, it cannot fully replace human capabilities. The research also points to an impending labour shortage expected by early 2026, emphasizing that the focus should be on leveraging AI as a tool to enhance human work rather than viewing it as a replacement for workers.

3. AI Is Supposed To Make Work Better. Is It Doing The Opposite?
A striking disconnect has emerged between executive expectations and workplace reality regarding AI implementation. While 96% of C-suite leaders expect AI to increase productivity, 77% of employees report that AI has actually made them less productive. The main challenges include inadequate training and support, increased workload from learning and implementing AI systems, and a significant gap between leadership expectations and worker experience. To address these issues, experts recommend bringing in outside expertise, reconsidering how productivity is measured, and shifting toward skill-based hiring practices. The findings suggest that successful AI integration requires a more thoughtful and comprehensive approach than many organizations are currently taking.

4. Devs gaining little (if anything) from AI coding assistants
Recent research has cast doubt on the effectiveness of AI coding assistants, with a study finding no significant productivity gains from tools like GitHub Copilot. The research actually found these tools introduced 41% more bugs into the code. However, experiences vary significantly across organizations, with some reporting no benefits while others claim substantial productivity increases of two to three times. The mixed results suggest that AI coding tools serve better as supplements to human programming than as revolutionary replacements. Key challenges include maintaining code consistency and the increased need for code review, indicating that the technology, while promising, may not yet live up to its ambitious productivity claims.

5. California governor blocks landmark AI safety bill
Governor Gavin Newsom has vetoed what would have been a ground-breaking AI safety bill in California. The legislation would have introduced the first significant AI regulations in the US, requiring safety testing for advanced AI models and mandatory “kill switches.” Newsom justified his decision by arguing the bill was too broad and could hamper innovation, potentially driving AI developers out of the state. Major tech companies, including OpenAI, Google, and Meta, had strongly opposed the bill. While this decision leaves AI development largely unregulated at the state level, Newsom did sign 17 other AI-related bills, including measures to combat misinformation and deepfakes.