This is a suggestion based on my experience with Azure Studio. I enjoy the experiment flow diagram in Azure, where I can load a dataset, process it, and visualize a subset in a notebook. I can also include a set of custom code snippets for import into the notebook.
At the moment, I have not found a simple and neat way to do this in IBM DSX. Loading a CSV in a notebook takes many lines of pasted code (loading just a few CSVs fills my entire screen) before I can start writing anything useful.
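For comparison, here is a minimal sketch of what I would like the loading step to look like (the file name and columns are hypothetical, and the snippet creates its own sample file so it is self-contained; in DSX the file would already be attached to the project, and today it instead comes with a long block of generated connection/credential code):

```python
import pandas as pd

# Create a tiny example CSV so the snippet is self-contained.
# In DSX this file would already be attached to the project.
with open("sales.csv", "w") as f:
    f.write("region,amount\nAPAC,100\nEMEA,250\n")

# The one-liner I would like: no generated credential boilerplate.
df = pd.read_csv("sales.csv")
print(df.shape)  # (2, 2)
```

A single call like this per file would leave the rest of the screen free for actual analysis.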
Adding a code snippet requires magic commands (%%writefile myFunctions.py), but it is not easy to track what is stored in the Spark instance.
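As a sketch of that current workflow (the module name `myFunctions` comes from the example above; the function name is hypothetical), the magic-command route amounts to writing a module file into the notebook's working directory and importing it, with no built-in way to see which snippets have already been written:

```python
import importlib
import pathlib

# Plain-Python equivalent of a `%%writefile myFunctions.py` cell.
pathlib.Path("myFunctions.py").write_text(
    "def clean_text(s):\n"
    "    return s.strip().lower()\n"
)

import myFunctions
importlib.reload(myFunctions)  # pick up changes if the cell is rerun

print(myFunctions.clean_text("  Hello "))  # hello
```

Tracking these files means manually listing the working directory, which is what makes snippet management awkward compared to a managed snippet library.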
Having this feature would let me manage my files and code on one single platform.
Why is it useful?
Who would benefit from this IDEA?
How should it work?