IBM Cloud Databases - Structured Ideas

Add the ability to add/remove JARs for an entire cluster for Python notebooks.

When working with Spark from Python, it is common to install PySpark libraries that require additional JARs; the JARs are used by the JVM side of a system that exposes the Python API the user actually interacts with. Currently, the only ways to do this in DSX are to (1) open a Scala notebook and add the JAR via %AddJar, or (2) copy the JARs directly into a folder path that is generally an internal implementation detail we are exploiting. It would be far better to be able to install JARs at the project level through an officially supported file path in a terminal, and/or through a GUI.
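For context, outside of DSX the standard way to attach extra JARs in PySpark is the `spark.jars` configuration, which takes a comma-separated list of paths. A minimal sketch of that convention is below; the jar path shown is hypothetical, and the SparkSession construction is commented out since it assumes `pyspark` is installed:

```python
# Hypothetical jar paths; substitute the JARs your library actually needs.
extra_jars = [
    "/home/user/jars/my-library.jar",  # hypothetical path, for illustration only
]

def jars_conf(paths):
    """Join jar paths into the comma-separated value that spark.jars expects."""
    return ",".join(paths)

# With pyspark available, the session would then be built like:
# from pyspark.sql import SparkSession
# spark = (SparkSession.builder
#          .config("spark.jars", jars_conf(extra_jars))
#          .getOrCreate())
```

A project-level equivalent of this (a blessed directory or a GUI that feeds into the cluster's `spark.jars`) is essentially what this idea asks for.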

  • Mike Dusenberry
  • Dec 14 2018
  • Needs review