- Il Memming Park (email@example.com), The University of Texas at Austin
- Evan Archer (firstname.lastname@example.org), The University of Texas at Austin
- Jonathan Pillow (email@example.com), The University of Texas at Austin
Computational neuroscience will soon be inundated with data of unprecedented size and quality. New techniques and projects such as the BRAIN Initiative are spurring an exponential increase in the number of simultaneously recorded neurons. But are we prepared to analyze such large data? In the era of “big data”, it is tempting to think that ever-larger datasets will by themselves overcome scientific challenges. On the contrary, large and high-dimensional data bring new hurdles to analysis. Increasing population sizes leave neural models with less data per parameter; meanwhile, many of the most popular tools in neuroscience scale poorly. This calls for a new kind of spike train analysis tool that is *scalable* to many (possibly hundreds to thousands of) neurons. To prepare for the coming flood of data, we invite experts on scalable neural modeling and analysis tools to shed light on the future of neural models.
- Which classes of models are scalable? How can we extend current models to be more scalable?
- Which models exploit the structure of population activity, such as low dimensionality and sparseness?
- What optimization techniques are efficient for large spike train datasets?
- How can we use the Bayesian formalism to reduce the data required to fit models and estimate statistics?
- What can we learn from large-scale high-dimensional data?
- What kind of statistics will be powerful enough to verify/falsify population coding theories?
- Which important questions do we expect scalable models to answer?
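As a toy illustration of the low-dimensional structure of population activity mentioned above, the sketch below (a hypothetical example, not any specific model discussed at the workshop) simulates spike counts from a population driven by a few shared latent factors and examines the variance spectrum with PCA. All parameter choices here are assumptions for illustration only.

```python
import numpy as np

# Hypothetical simulation: N neurons whose firing rates are driven by
# d shared latent factors, so population activity lies near a
# d-dimensional subspace despite the N-dimensional ambient space.
rng = np.random.default_rng(0)
N, T, d = 100, 2000, 3            # neurons, time bins, latent dimensions

latents = rng.standard_normal((T, d))           # shared latent factors
loading = rng.standard_normal((d, N))           # per-neuron mixing weights
rates = np.exp(0.5 * latents @ loading - 1.0)   # positive firing rates
counts = rng.poisson(rates)                     # Poisson spike counts

# PCA via SVD of the mean-centered count matrix
X = counts - counts.mean(axis=0)
_, s, _ = np.linalg.svd(X, full_matrices=False)
var_explained = s**2 / np.sum(s**2)

# Variance should concentrate in the leading components, reflecting
# the low-dimensional latent structure plus Poisson noise.
print(np.round(var_explained[:5], 3))
```

Fitting such latent-variable models at scale, rather than merely diagnosing dimensionality post hoc, is exactly the kind of challenge the questions above address.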