JupyterCon New York 2017 was a powerful gathering of the data science and AI community that has formed around Project Jupyter over the past 15 years. Its purpose: to share how the world's most data-driven organizations use Project Jupyter to analyze their data, share their insights, and create dynamic, reproducible data science. A sampling of the featured speakers at this inaugural conference included Fernando Perez (Lawrence Berkeley National Laboratory); Lorena Barba (George Washington University); Demba Ba (Harvard University); Safia Abdalla (nteract); Brett Cannon (Microsoft); Jeremy Freeman (Chan Zuckerberg Initiative); Rachel Thomas (fast.ai); and Nadia Eghbal (GitHub). If you're looking to understand why Jupyter has become the new front end for data science and AI, this video compilation of JupyterCon's live presentations will provide the insights you need.
- A front row view at each of JupyterCon's 55 sessions, 15 keynote addresses, and eight tutorials, including complete access to all of the conference's SOLD OUT talks, such as "Jupyter Widgets: Interactive controls for Jupyter" and "Deploying interactive Jupyter dashboards for visualizing hundreds of millions of data points in 30 lines of Python."
- Illuminating talks by 101 of the world's top Jupyter experts working at Harvard University, Continuum Analytics, UC Berkeley, DataScience.com, Bloomberg LP, University of Pittsburgh, IBM, CUNY, Domino Data Lab, Cal Poly San Luis Obispo, Two Sigma, University of British Columbia, Civis Analytics, Columbia University, R-Brain, Lawrence Berkeley National Laboratory, Microsoft, and more.
- A thought-provoking set of keynote addresses, including Fernando Perez (the progenitor of Jupyter) on his predictions for Project Jupyter's future; Wes McKinney (Two Sigma Investments) on his vision for seamless computation and data sharing across languages; William Merchan (DataScience.com) outlining the three movements driving enterprise adoption of Jupyter; and Peter Wang (Continuum Analytics) describing the coevolution of Jupyter and Anaconda, two major players in the new open data science ecosystem.
- Four beginner-level Jupyter tutorials, including primers on using JupyterLab and creating JupyterLab extensions; using Jupyter widgets to build user interfaces; using Jupyter with visualization and analysis tools such as pandas, seaborn, Matplotlib, and scikit-learn; and an explanation of how Jupyter technology empowers research, engineering, and data science teams.
- Four intermediate-level tutorials, including a walkthrough of how UC Berkeley deployed JupyterHub campus-wide for its students and researchers; a workflow for building interactive dashboards that visualize billions of data points within a Jupyter notebook using just a few dozen lines of code; a detailed study of interactive natural language processing with spaCy, Jupyter notebooks, and tools such as TensorFlow, NetworkX, and LIME; and a demonstration of high-level polyglot data analysis combining Jupyter notebooks with SQL, Python, and R.
- 21 sessions on breakthrough applications of Jupyter in research, education, and industry, including "Leveraging Jupyter to build an Excel-Python bridge," a talk about a Jupyter app that democratizes data science by letting those who understand Excel, but know nothing about Python, easily access machine learning models and advanced interactive visualizations; "A billion stars in the Jupyter Notebook," a talk about astronomers using the vaex and ipyvolume libraries to visualize and explore large, high-dimensional datasets within a Jupyter notebook; "Enhancing data journalism with Jupyter," a presentation that describes how Jupyter notebooks enable data journalism powered by input from the general public; and a talk on the Anaconda Project, an open source library that delivers lightweight, efficient encapsulation and portability of data science projects.
- Nine "reproducible research" sessions that examine the problems of sharing research results in an open and reproducible manner; five sessions about large-scale JupyterHub deployments; five sessions on Jupyter extensions and customizations; four sessions on building the Jupyter community; four programmatic sessions; three core architecture sessions; three sessions on kernels that use the Jupyter architecture and clients for different programming languages; and three sessions on Jupyter subprojects and documentation.
- More than 65 hours of video presentations in total...and all of it on Safari.