The “black box” nature of many AI models causes hesitation in trusting their predictions, especially in sensitive fields like medicine and finance, where interpretability and accountability are essential. Overcoming this challenge has become a central focus in AI development. Time series classification (TSC) is a critical task with numerous real-world applications, such as detecting anomalies in ECG data for healthcare and identifying fault patterns in electronic signals for industrial manufacturing.
That’s why we are so excited to introduce our paper “MIX: A Multi-view Time-Frequency Interactive Explanation Framework for Time Series Classification”, just accepted at NeurIPS 2025! 🎉 It’s a novel approach designed to finally open up the black box and empower humans to truly understand and interpret complex time series models.
Current explanation methods focus on only one perspective of the data 🧩
Most current explanation methods suffer from tunnel vision. They focus almost exclusively on a single perspective: the timeline. By analysing which time steps or segments are important, they often completely ignore crucial patterns hidden in the data’s frequency domain.
While a recent method, SpectralX, made progress by incorporating frequency information, it is still limited to a single, fixed configuration. Ultimately, relying on just one view of the data provides an incomplete and less reliable explanation of how a model truly makes its decisions.
An Overview of MIX: Multi-view, Interaction & Traversal 🚀
Our MIX Framework introduces a totally new concept to the AI research community: multi-view explanations. It’s built on three powerful, interactive ideas.
🖼️ Multi-view Explanations: Seeing the Full Picture
We have introduced a new problem in XAI: how do you explain a model’s decision from many different perspectives at once? In time series, we create these “views” using a powerful signal processing tool, the Haar discrete wavelet transform (DWT), which acts like a set of different camera lenses. Some lenses capture the broad, long-term trends, while others zoom in on high-frequency details. MIX intelligently gathers the most important features from all views, giving you explanations at various levels of granularity.
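To make the “camera lenses” intuition concrete, here is a minimal NumPy sketch of the Haar DWT: each level splits the signal into a low-frequency approximation (the broad trend) and a high-frequency detail band, halving the resolution as it goes. This is only an illustration of the decomposition itself, not the paper’s full view-construction pipeline.

```python
import numpy as np

def haar_dwt_views(signal, levels=3):
    """Decompose a 1-D signal into multi-resolution 'views' via the Haar DWT.

    At each level, adjacent samples are combined into an approximation
    (low-frequency trend) and a detail band (high-frequency content).
    Returns a list of (approximation, detail) pairs, finest level first.
    """
    views = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        if len(approx) % 2:                 # pad to an even length
            approx = np.append(approx, approx[-1])
        pairs = approx.reshape(-1, 2)
        detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)  # high-frequency view
        approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)  # low-frequency view
        views.append((approx.copy(), detail))
    return views

# Example: a noisy sinusoid decomposed into 3 views of decreasing resolution
t = np.linspace(0, 1, 64)
x = np.sin(2 * np.pi * 2 * t) + 0.1 * np.random.randn(64)
for level, (a, d) in enumerate(haar_dwt_views(x), start=1):
    print(f"level {level}: approx length {len(a)}, detail length {len(d)}")
```

In practice, libraries such as PyWavelets provide the same decomposition (`pywt.wavedec(x, 'haar')`); the hand-rolled version above just shows what each “lens” computes.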
🤝 View-Interaction: Making the Views Work Together
This is where MIX truly shines. We’ve created the first-ever interactive mechanism where the views don’t just exist in isolation, but they talk to each other! The strongest, most confident explanation from one view helps to refine and improve the explanations in all the others. This connection ensures that every individual explanation becomes more faithful and robust.
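The interaction idea can be sketched with a toy example: suppose each view produces an importance score per position on a shared timeline, and the most confident view acts as an anchor that nudges the other views toward agreement. The anchor selection rule and the blending weight `alpha` below are simplifying assumptions for illustration, not the mechanism from the paper.

```python
import numpy as np

def refine_views(importance, alpha=0.3):
    """Toy view-interaction: the most confident view refines the others.

    `importance` maps view name -> importance scores, all aligned on the
    same timeline. The view with the highest peak score is taken as the
    anchor; every other view is blended toward it by a factor `alpha`.
    A hypothetical, simplified stand-in for MIX's interaction mechanism.
    """
    anchor_name = max(importance, key=lambda k: importance[k].max())
    anchor = importance[anchor_name]
    refined = {}
    for name, scores in importance.items():
        if name == anchor_name:
            refined[name] = scores
        else:
            refined[name] = (1 - alpha) * scores + alpha * anchor
    return anchor_name, refined

# Hypothetical importance scores from a time view and two frequency views
views = {
    "time":           np.array([0.1, 0.9, 0.2, 0.1]),
    "low-frequency":  np.array([0.3, 0.4, 0.3, 0.2]),
    "high-frequency": np.array([0.2, 0.5, 0.6, 0.1]),
}
anchor, refined = refine_views(views)
print(f"anchor view: {anchor}")  # the time view has the highest peak (0.9)
```

The point of the sketch is the information flow: the strongest explanation is not kept in isolation but used to sharpen the weaker views, which is what makes each individual explanation more robust.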
🗺️ View-Traversal: Finding the Most Important Features
Finally, MIX goes on a treasure hunt. Instead of just giving you a ranked list of features from a single perspective (like traditional methods), it travels across all the views in a smart, greedy search. It identifies the absolute most critical features overall, no matter which view they came from. The result is a comprehensive “greatest hits” list that gives users a complete, holistic understanding of the model’s decision.
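The “treasure hunt” can be illustrated as a greedy selection over the pooled candidates from every view: rank all features by importance, regardless of origin, and keep the top k. The feature names and scores below are made up for illustration, and this flat top-k pass is a simplification of the paper’s traversal.

```python
import heapq

def traverse_views(view_scores, k=3):
    """Toy view-traversal: greedily pick the k highest-scoring features
    across all views, no matter which view they came from.

    `view_scores` maps view name -> list of (feature_id, score) pairs.
    Returns (view, feature_id, score) triples, best first. A simplified
    stand-in for MIX's greedy search across views.
    """
    candidates = [
        (score, view, feat)
        for view, feats in view_scores.items()
        for feat, score in feats
    ]
    top = heapq.nlargest(k, candidates)          # greedy: best k overall
    return [(view, feat, score) for score, view, feat in top]

# Hypothetical ranked features from three views
scores = {
    "time":           [("t[10:20]", 0.92), ("t[40:45]", 0.35)],
    "low-frequency":  [("band A", 0.61)],
    "high-frequency": [("band C", 0.78), ("band D", 0.12)],
}
for view, feat, score in traverse_views(scores, k=3):
    print(f"{view}: {feat} ({score:.2f})")
```

The resulting list mixes time-domain segments and frequency bands in one ranking, which is exactly the kind of “greatest hits” summary described above.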
How Does This New Perspective Perform? 📊
Our research shows significant improvements in both faithfulness (how true the explanation is to the model) and robustness (how stable it is under perturbation). MIX proved its effectiveness across 11 diverse datasets and 3 deep learning architectures, outperforming 4 state-of-the-art methods. The results are consistently clear. ✅
Why This is a Game-Changer 🚀
This is not just a fascinating research paper; it’s a practical tool with a huge real-world impact. For every Data Scientist and AI Engineer out there, MIX provides a golden opportunity to finally understand time series classifiers from multiple angles. ✨