Generative models have achieved remarkable success in producing realistic images and short video clips, but existing approaches struggle to maintain persistent world coherence over long durations and across multiple modalities. We propose Multimodal Supervisory Graphs (MSG), a novel framework for world modeling that unifies geometry (3D structure), identity (consistent entities), physics (dynamic behavior), and interaction (user/agent inputs) within a single abstract representation. MSG represents the environment as a dynamic latent graph, factorized along these four aspects and trained with cross-modal supervision from visual (RGB-D), pose, and audio streams. This unified world abstraction enables generative AI systems to maintain consistent scene layouts, preserve object identities over time, obey physical laws, and incorporate interactive user prompts, all within one model. In our experiments, MSG demonstrates superior long-term coherence and cross-modal consistency compared to state-of-the-art generative video baselines, effectively bridging the gap between powerful short-term video generation and persistent, interactive world modeling. Our framework outperforms prior methods on metrics of identity consistency, physical plausibility, and multi-view geometry alignment, enabling new applications in extended reality and autonomous agent simulation.
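To make the factorization concrete, the following is a minimal illustrative sketch of how a dynamic latent graph with per-entity geometry, identity, physics, and interaction latents might be organized. The abstract does not specify an implementation; all class names, field names, and dimensions below are assumptions made for illustration only.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple
import numpy as np

@dataclass
class EntityNode:
    """One node of the dynamic latent graph (illustrative only).

    Each node carries a separate latent for the four factored aspects
    named in the abstract; the dimensions are arbitrary placeholders.
    """
    geometry: np.ndarray     # 3D structure latent (e.g., shape/layout code)
    identity: np.ndarray     # persistent entity embedding, kept stable over time
    physics: np.ndarray      # dynamic state latent (e.g., motion summary)
    interaction: np.ndarray  # conditioning derived from user/agent inputs

@dataclass
class LatentWorldGraph:
    """Dynamic latent graph: nodes are entities, edges are relations."""
    nodes: Dict[str, EntityNode] = field(default_factory=dict)
    edges: List[Tuple[str, str]] = field(default_factory=list)

    def step(self) -> None:
        """Stand-in transition: a learned dynamics model would update the
        physics latents each timestep while leaving identity latents fixed."""
        for node in self.nodes.values():
            node.physics = node.physics  # no-op placeholder for learned dynamics

# Usage: two entities connected by a relation edge, advanced one step.
g = LatentWorldGraph()
g.nodes["ball"] = EntityNode(*(np.zeros(d) for d in (64, 32, 16, 8)))
g.nodes["table"] = EntityNode(*(np.zeros(d) for d in (64, 32, 16, 8)))
g.edges.append(("ball", "table"))
g.step()
```

In this sketch, separating the identity latent from the physics latent is what would let an entity's appearance persist while its dynamic state evolves; the actual MSG training objective and graph update rules are described in the body of the paper, not here.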