Welcome to the Multisensory Environments to study Longitudinal Development (MELD) Consortium!

The mission of the MELD consortium is to detail multisensory development from birth until young adulthood using a series of tasks designed within naturalistic, immersive environments. The knowledge gained will be critical in furthering our understanding of sensory development and the higher-order cognitive abilities this development scaffolds.

The MELD consortium consists of four international sites: Vanderbilt University, Yale University, Italian Institute of Technology, and University Hospital Center and University of Lausanne.

We live in a multisensory world, continually bombarded with stimuli from multiple senses. As such, one of the major jobs of the human brain is to make sense of this sensory mélange, integrating information that belongs together and segregating information that does not. We know that having information from multiple senses can dramatically improve performance in a host of domains, including detecting, discriminating, and localizing objects and events. Yet these benefits are far less well characterized in the context of the developing brain.

This gap leaves a tremendous void in our understanding of human development. The work to be carried out by the consortium seeks to fill this void. It would not only hold enormous basic significance as the first true longitudinal characterization of multisensory development, but would also carry great weight in the applied, clinical, and educational arenas.

The consortium is supported through a generous, unrestricted gift provided by Reality Labs Research, a division of Meta.

This website will be updated as the consortium's projects progress, and MELD publications will be added as they become available.

Data gathered by the consortium in pursuit of its mission will be made publicly available as the science progresses. Please stay tuned to this website, where we will provide a link to the annotated data, which will collectively be referred to as the MIND MELD dataset.