H&M is creating digital clones of 30 real models for an upcoming campaign, enabling AI-generated imagery that can be styled and posed without traditional photoshoots. With the digital twins, campaigns can be shot without stylists, make-up artists, or even photographers, while the models retain ownership of, and compensation rights for, their likenesses.
Old Navy is testing radar-powered AI across its 1,200 U.S. stores. This system will provide real-time inventory tracking, enabling employees to quickly locate products and restock shelves, enhancing the overall shopping experience. It will also help to inform inventory decisions and reduce stockouts and overstocking at individual stores.
Alibaba and BMW are developing an Intelligent Personal Assistant that will feature two AI agents—Travel Companion and Car Genius—capable of providing personalized services like restaurant recommendations and real-time navigation. Drivers will interact through natural commands rather than touchscreens, with support for gesture recognition, eye tracking, and body-position awareness.
Amazon is testing Interests AI and Health AI, two groundbreaking tools that use conversational AI to curate product recommendations and provide medical guidance. Interests AI uses an LLM to translate everyday phrases into queries that traditional search engines can turn into product recommendations.
The initiatives signal Amazon's aggressive push to integrate generative AI across its e-commerce and healthcare platforms.
Sauce, a leading first-party delivery platform, has launched an AI-powered customer retention system that automatically generates personalized promotions and order suggestions. The module promises restaurants a 30-50% increase in orders by using advanced AI to analyze customer preferences and craft targeted re-engagement strategies.
Cornerstone is transforming corporate training with personalized learning simulations, generating a 30-minute training simulation in about 10 minutes. The system features intelligent virtual mentors with real-time multilingual capabilities, enabling companies to rapidly develop immersive training experiences that adapt to individual learning styles.
Oracle’s new tool lets enterprises build autonomous agents trained on internal workflows—think customer support, finance ops, HR tasks—using pre-built templates paired with natural language prompts.
Epoch AI published the GATE (Growth and AI Transition Endogenous) model, designed to analyze the economic impacts of AI-driven automation. It predicts trillion‐dollar infrastructure investments, 30% annual growth, and full automation within decades. If you’re not convinced, it lets you tweak the scenarios in an interactive sandbox environment.
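For intuition on what a sustained 30% annual growth rate implies, here is a toy compound-growth calculation—plain arithmetic for illustration only, not the GATE model itself:

```python
import math

# Toy illustration of compound growth at 30% per year.
# This is simple arithmetic for intuition, not Epoch AI's GATE model.

def grow(initial: float, rate: float, years: int) -> float:
    """Return the value of `initial` after `years` of compound growth at `rate`."""
    return initial * (1 + rate) ** years

# At 30% annually, output doubles roughly every 2.6 years...
doubling_years = math.log(2) / math.log(1.30)
print(round(doubling_years, 2))          # ~2.64

# ...and grows nearly 14x over a decade.
print(round(grow(1.0, 0.30, 10), 2))     # ~13.79
```

Small wonder the model's headline numbers look dramatic: exponential rates compound fast.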
If you’re wondering why your timeline was filled with Ghibli-style avatars all of a sudden, it’s thanks to OpenAI’s step-change improvement in image generation. Users can describe an image in ChatGPT, and GPT-4o will generate it within a minute.
Anthropic’s chatbot can now search the web in real time, having previously been self-contained. Unlike traditional search engines that give you a list of links, Claude delivers results in a conversational format. For knowledge workers, this could mean hours saved reviewing information manually.
Elon Musk’s AI startup xAI has acquired the social media platform X in an all-stock transaction. The company has already begun to integrate xAI's Grok chatbot into the social network, using X's vast user data for training.
The chatter among AI researchers last week was around a Moore's Law for AI agents: the length of tasks that AI can complete is doubling about every 7 months. The measured tasks are mostly software tasks, evaluated at a >50% success rate—so there are still real performance limitations.
Nonetheless, some suggest the long-term trendline probably underestimates current and future progress, given recent gains from test-time compute. Hold onto your hats!
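The headline trend is simple exponential extrapolation. A minimal sketch, assuming a 7-month doubling time (the starting horizon below is a hypothetical placeholder, not a figure from the underlying study):

```python
# Extrapolate the "Moore's Law for AI agents" trend: the task length
# agents can complete (at >50% success) doubles roughly every 7 months.
# The 60-minute starting horizon is illustrative, not measured data.

DOUBLING_MONTHS = 7

def horizon_after(months: float, start_minutes: float) -> float:
    """Task horizon in minutes after `months`, given a starting horizon."""
    return start_minutes * 2 ** (months / DOUBLING_MONTHS)

# Four doublings (28 months) turns a 1-hour horizon into a 16-hour one.
print(horizon_after(28, 60) / 60)  # 16.0
```

The same arithmetic explains the hype: a few more doublings and "task length" starts to look like a full workweek.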
AI models struggle with basic time-reading tasks, getting analog clock and calendar questions wrong up to 80% of the time. Researchers from the University of Edinburgh found that while AI excels at complex reasoning, reading the hands of a clock remains a significant challenge, highlighting critical gaps in visual perception and numerical reasoning.
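For a sense of the arithmetic such questions require, here is a minimal sketch of analog-clock geometry—the hand angles a model must effectively infer from pixels:

```python
def clock_angles(hour: int, minute: int) -> tuple[float, float]:
    """Return (hour_hand, minute_hand) angles in degrees, measured from 12 o'clock."""
    minute_angle = minute * 6.0                      # 360 deg / 60 min = 6 deg per minute
    hour_angle = (hour % 12) * 30.0 + minute * 0.5   # 30 deg per hour, plus minute drift
    return hour_angle, minute_angle

# At 3:30 the hour hand sits halfway between 3 and 4 (105 deg); the minute hand points at 6.
print(clock_angles(3, 30))  # (105.0, 180.0)
```

The math is trivial for code; the Edinburgh result suggests the failure is in mapping the visual scene to these quantities, not in the arithmetic itself.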
Invisible, trusted to train 80% of leading foundation models, helped You.com rate 20,000 AI answers to see which were actually on point. That led to a 70% jump in relevant news results and vastly improved user experience. Need help with your AI?
Researchers at the University of Edinburgh have built a robot that can navigate a real kitchen and make a cup of coffee—no pre-programmed steps, just natural commands and real-world improvisation. It responds to voice prompts, adapts when objects move, and even finds the mug on its own. A small but meaningful step toward AI systems that can operate in unpredictable, human environments. Your next intern? Maybe not yet—but it’s closer than you think.