Deep learning-based human activity recognition (HAR) systems, despite achieving high accuracy, remain vulnerable to adversarial attacks that pose severe threats to safety-critical deployments. This paper presents a comprehensive framework for sensor-specific targeted adversarial attacks on multimodal HAR systems. We propose a hybrid optimization strategy that combines momentum-based Projected Gradient Descent with adaptive Carlini-Wagner optimization, incorporating dynamic early stopping and intelligent fallback mechanisms. Our approach constrains perturbations to individual sensor modalities, enabling systematic vulnerability assessment across heterogeneous sensor configurations. In an extensive evaluation on the MHealth dataset spanning 96 sensor-target combinations and more than 38,000 adversarial examples, the hybrid strategy achieves a 96.46% targeted attack success rate, a 45% improvement over baseline C&W and an 8% improvement over enhanced PGD, while retaining a 49× computational efficiency advantage. Analysis reveals that accelerometers exhibit the highest vulnerability (99.83%), followed by gyroscopes (96.67-99.00%) and magnetometers (91.00-95.50%). High-motion activities prove universally vulnerable (100%), whereas sedentary activities show sensor-dependent robustness (66-100%). Statistical validation confirms a strong correlation between model confidence and vulnerability (r = 0.71, p < 0.01). Limited cross-sensor transferability (28-42%) points to promising defense directions based on sensor redundancy and ensemble methods. Our findings underscore the urgent need for adversarially robust HAR design in safety-critical applications.
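For concreteness, the sketch below illustrates the kind of sensor-constrained, momentum-based targeted PGD step summarized above. It is a minimal illustration under assumed input shapes ([batch, channels, time] sensor windows) and hyperparameter names, not the authors' released implementation; the adaptive C&W stage and fallback logic are omitted.

```python
# Minimal sketch (not the authors' released code): a targeted, momentum-based PGD
# attack whose perturbation is confined to a single sensor modality via a channel mask.
# Shapes, hyperparameters, and the early-stopping rule are illustrative assumptions.
import torch
import torch.nn.functional as F


def sensor_targeted_mpgd(model, x, target, sensor_channels,
                         eps=0.1, alpha=0.01, steps=100, mu=0.9):
    """x: [B, C, T] sensor windows; target: [B] desired class indices;
    sensor_channels: channel indices of the single modality allowed to change."""
    mask = torch.zeros_like(x)
    mask[:, sensor_channels, :] = 1.0          # perturb only the chosen sensor

    x_adv = x.clone().detach()
    momentum = torch.zeros_like(x)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)   # loss toward the target class
        grad, = torch.autograd.grad(loss, x_adv)

        # Momentum accumulation over L1-normalized gradients (MI-FGSM style)
        grad = grad / (grad.abs().mean(dim=(1, 2), keepdim=True) + 1e-12)
        momentum = mu * momentum + grad

        # Targeted step: descend the loss, restricted to the masked channels
        x_adv = x_adv.detach() - alpha * momentum.sign() * mask
        x_adv = x + torch.clamp(x_adv - x, -eps, eps) * mask   # project to eps-ball

        # Dynamic early stopping once every example reaches the target class
        with torch.no_grad():
            if (model(x_adv).argmax(dim=1) == target).all():
                break

    return x_adv.detach()
```

In the full hybrid strategy described above, examples that fail to reach the target class within the step budget would then be handed to the adaptive C&W optimizer as a fallback.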