Hacker News

Yes exactly. At my work when a new scientist joins us we just create an account and she can get started on her research within minutes. Each user gets a contained environment in which we mount a disk of shared data.
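For readers curious what that setup can look like, here is a minimal JupyterHub config sketch. The spawner choice (DockerSpawner) and the paths are my assumptions, not necessarily what the parent uses:

```python
# jupyterhub_config.py -- minimal sketch; spawner and paths are assumptions
c = get_config()  # provided by JupyterHub at startup

# Give each user an isolated container environment
c.JupyterHub.spawner_class = "dockerspawner.DockerSpawner"

# Mount a shared read-only data disk into every user's container
# (/srv/shared-data is a hypothetical host path)
c.DockerSpawner.volumes = {
    "/srv/shared-data": {"bind": "/home/jovyan/shared", "mode": "ro"},
}
```

With something like this, creating an account really is the only per-user step: the spawner handles the container and the mount on first login.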


I just did this in my University lab as well. Most people aren't savvy with Linux, so having normal accounts with Jupyter port forwarding is out of the question. JupyterHub is just about the lowest friction I can possibly make it for introducing the Python data science stack to non data scientists.


And just to be explicit for readers, Jupyter and JupyterHub also support other data science stacks, R in particular.


IMO, Jupyter Notebook is to Python roughly what RStudio is to R. While PyCharm and VS Code are also preferred by some Python-based data scientists, JupyterHub offers almost everything a typical IDE does, along with the traditional notebook environment that a lot of beginners start with these days. Thus there's much less friction when getting started.



I would be really hesitant to compare Jupyter Notebook to an IDE. One example is the debugger: the only visual debugger I have come across for Jupyter is PixieDebugger, which is miles behind the debugger of an IDE like PyCharm. There is a huge list of features Jupyter needs before you can compare it to an IDE.


It is an interactive environment (not much use for a debugger).


FWIW I use the %debug magic command in Jupyter and it has been a great experience. I'm pretty ignorant of the enterprise debugging tools so take that with a grain of salt.
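For anyone who hasn't tried it: after a cell raises, running `%debug` in the next cell drops you into a pdb post-mortem session on the last traceback. The same frame locals it exposes can be recovered in plain Python too; a small sketch (the function and variable names here are made up):

```python
import sys

def scale(x):
    # a trivial function that fails partway through
    y = x * 2
    return y / 0  # raises ZeroDivisionError

try:
    scale(21)
except ZeroDivisionError:
    tb = sys.exc_info()[2]
    # walk to the innermost frame, where the exception was raised
    while tb.tb_next is not None:
        tb = tb.tb_next
    # locals of scale() at the moment of the crash
    crash_locals = dict(tb.tb_frame.f_locals)

print(crash_locals["x"], crash_locals["y"])  # 21 42
```

`%debug` (and `pdb.pm()`) wrap exactly this kind of traceback-frame inspection in an interactive prompt.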


Debuggers are only really useful if you're trying to figure out why some object in your server doesn't do what you want it to.

I'd wager that almost no data scientists write object-oriented code; it's probably mostly done one calculation at a time, executed in the notebook's REPL. So the value you get from IDE debuggers is tiny, as you're already doing everything one step at a time.


You still write functions and may want to inspect variable state in the middle of function execution.
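Right, and short of a full debugger there's a lightweight trick for that: snapshot the caller's locals from inside the function using the `inspect` module. A sketch (the helper and function names are mine):

```python
import inspect

snapshots = []

def snapshot_locals():
    # copy the caller's local variables at this exact point in execution
    frame = inspect.currentframe().f_back
    snapshots.append(dict(frame.f_locals))

def summarize(data):
    total = sum(data)
    snapshot_locals()   # mid-function: 'mean' doesn't exist yet
    mean = total / len(data)
    snapshot_locals()
    return mean

summarize([1, 2, 3])
print(snapshots[0]["total"])    # 6
print("mean" in snapshots[0])   # False
print(snapshots[1]["mean"])     # 2.0
```

It's no substitute for stepping through code, but in a notebook it answers "what was this variable halfway through?" without any extra tooling.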


Correct. RStudio has this feature, where variable values can be inspected in a sidebar. This would be a really useful feature for Jupyter, especially when running a Python kernel.


There is a JupyterLab extension for that: https://github.com/lckr/jupyterlab-variableInspector


Does it work with variables that are local to a function? I don't mean inspecting global variables after having executed a cell, but local variables in the middle of a function execution.


This was my use case for it.

I ended up getting really frustrated with setting it up. Followed several different tutorials, had it blow up in a different way each time.

Going to have to revisit this to see if the documentation has gotten better.



