Galadrim applied AI-powered video analysis to objectively measure cosmetic application gestures and enrich Chanel’s understanding of the customer experience.

Context

Chanel’s Neuroscience team wanted a scientific, repeatable way to study how customers use skincare and make-up products during video-based tests. The core goal was to transform the inherently subjective notion of “ease of use” into quantifiable data about how a person applies a product.

A working hypothesis guided the study: fewer application movements generally correlate with greater ease of use.

Objective

  • Capture the way a product is applied (who, where, how).

  • Turn raw video into structured metrics, notably counts and categories of movements, that can be compared across products, protocols, and participants (a sketch of the target record follows this list).
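
As an illustration of what such a structured record could look like, here is a minimal schema sketch in Python. All field names are assumptions made for illustration, not Chanel’s actual data model.

    from dataclasses import dataclass

    @dataclass
    class GestureMetrics:
        """One gesture observation extracted from a test video (illustrative schema)."""
        participant_id: str   # who applied the product
        product_id: str       # which product was tested
        zone: str             # where it was applied, e.g. "cheek" or "forehead"
        gesture_type: str     # how, e.g. "half-circle massage" or "tapping"
        count: int            # number of repetitions observed
        duration_s: float     # total time spent on this gesture, in seconds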

Solution: AI Gesture Recognition for Cosmetic Application

We implemented video recognition algorithms in Python using MediaPipe to analyse application sequences.

  • Body & hand landmarking: The system detects and tracks the participant’s head, hands, and fingers during application (a minimal extraction sketch follows this list).

  • Motion capture: It records trajectories and movement patterns throughout the routine.

  • Semantic classification: Using AI, movements are counted and categorised into meaningful gesture types (e.g., half-circle massage, tapping), enabling like-for-like comparisons.
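
Putting the first two steps together, below is a minimal sketch of landmark extraction using MediaPipe’s Holistic solution, which tracks the face, pose, and both hands in a single pass. The video path, the choice of the right index fingertip, and the confidence thresholds are illustrative assumptions, not the study’s actual configuration.

    import cv2
    import mediapipe as mp

    mp_holistic = mp.solutions.holistic

    def extract_fingertip_trajectory(video_path: str) -> list[tuple[float, float]]:
        """Return the (x, y) path of the right index fingertip, in normalised
        image coordinates, across every frame of a test video."""
        trajectory = []
        cap = cv2.VideoCapture(video_path)
        with mp_holistic.Holistic(min_detection_confidence=0.5,
                                  min_tracking_confidence=0.5) as holistic:
            while cap.isOpened():
                ok, frame = cap.read()
                if not ok:
                    break
                # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
                results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
                if results.right_hand_landmarks:
                    # Landmark 8 is the index fingertip in MediaPipe's hand model.
                    tip = results.right_hand_landmarks.landmark[8]
                    trajectory.append((tip.x, tip.y))
        cap.release()
        return trajectory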

This pipeline turns each test video into objective measurements that can be aggregated or reviewed at the level of a single participant, a specific product, or an entire study.
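
The classification step in the study is AI-based; as a simplified stand-in, the heuristic below shows how strokes could be separated into gesture types from the extracted fingertip trajectories, distinguishing short, near-stationary contacts (tapping) from longer strokes that curl back on themselves (massage). The thresholds are arbitrary assumptions for illustration.

    import math

    def classify_stroke(points: list[tuple[float, float]]) -> str:
        """Heuristic stand-in for the learned classifier: label one stroke
        (a contiguous fingertip path segment) with a gesture type."""
        if len(points) < 2:
            return "tapping"                      # near-instant contact
        # Total distance travelled along the stroke...
        path_length = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
        # ...versus the straight-line distance from start to end.
        net_displacement = math.dist(points[0], points[-1])
        if path_length < 0.02:                    # barely moves: a tap
            return "tapping"
        if net_displacement < 0.5 * path_length:  # curls back on itself
            return "half-circle massage"
        return "linear stroke"

Counting the resulting labels over all strokes in a video, for example with collections.Counter, then yields the per-gesture movement counts described above.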

What This Enables

  • Objective ease-of-use indicators: Movement counts and types provide a concrete basis for comparing application experiences (a small aggregation sketch follows this list).

  • Consistent evaluation across tests: A standardised, automated approach reduces variability inherent in manual scoring.

  • Actionable insight for product teams: Teams can spot patterns (e.g., excessive tapping or complex sequences) and optimise formulas, textures, or applicators accordingly.
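
Concretely, comparing products then reduces to aggregating those counts. Here is a minimal sketch with pandas, using entirely hypothetical numbers:

    import pandas as pd

    # Hypothetical per-test results: one row per (participant, product) video.
    results = pd.DataFrame([
        {"participant": "P01", "product": "serum_A", "tapping": 12, "massage": 4},
        {"participant": "P02", "product": "serum_A", "tapping": 9,  "massage": 5},
        {"participant": "P01", "product": "serum_B", "tapping": 21, "massage": 3},
        {"participant": "P02", "product": "serum_B", "tapping": 18, "massage": 2},
    ])

    # Under the study's working hypothesis (fewer movements, greater ease of use),
    # a lower mean movement count points to an easier application experience.
    results["total_movements"] = results["tapping"] + results["massage"]
    print(results.groupby("product")["total_movements"].mean())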

Technologies Used

  • Python

  • MediaPipe