Once you get used to developing in a notebook environment, it can be painful to go back to a traditional IDE, where you execute your entire script and get a single output. That model is great for application development, but it is less than ideal for data analysis.
There are many things to love about using notebooks for data science. A few features that data scientists consistently appreciate:
- Single-line or code-block execution. Process one block of code, see the output, tweak it, then repeat.
- Inline plotting. You won't need to create an image file and open it up after the script runs. Instead, execute your code and see the results immediately.
- Markdown. Instead of just commenting your code, use Markdown to clarify each step you are taking in your workflow. Between this and inline plotting, you can easily transform code into blog posts.
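The block-by-block workflow above can be sketched with a minimal example. Each commented "cell" below would be its own notebook cell, executed and inspected on its own before moving to the next; the sample data and names are hypothetical, and only the Python standard library is assumed.

```python
import statistics

# Cell 1: load some data (fabricated here for illustration)
temperatures = [21.3, 19.8, 22.1, 20.5, 23.0, 18.9]

# Cell 2: compute a summary, inspect the printed output,
# then tweak and re-run just this cell if the numbers look off
mean_temp = statistics.mean(temperatures)
spread = statistics.stdev(temperatures)
print(f"mean={mean_temp:.2f}, stdev={spread:.2f}")
```

In an IDE you would re-run the whole script after every change; in a notebook you only re-execute the cell you are iterating on.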
Going beyond these features, there is something else to love about notebooks that really helps developers and data scientists who need to use different languages for different projects: kernel support for multiple languages.
Originally, notebooks were limited to scripts written in Python, and data scientists had to set up their local environment to work with other languages. That can become a hassle when all you care about is digging into your data set. With IBM Data Science Experience, this is no longer the case: you can create notebooks using Python, R, or Scala.
Below is a screenshot from Data Science Experience that shows how easy it is to toggle between languages for your notebook. This comes in handy when collaborating with members of your team who prefer different languages, and it makes it easy to keep a consistent file format and structure for your data science code.
If you use any of these three languages and have not used a notebook before, sign up for Data Science Experience to see how easy it is!