• Original address: Jupyter Notebook for Beginners: A Tutorial
  • Original post by Dataquest
  • The Nuggets translation Project
  • Permanent link to this article: github.com/xitu/gold-m…
  • Translator: SergeyChang
  • Proofreader: sunhaokk, lu Chen

Jupyter Notebook tutorial for beginners

Jupyter Notebook is an incredibly powerful tool for interactively developing and presenting data science projects. It integrates code and its output into a single document that combines visualizations, narrative text, mathematical equations, and other rich media. Its intuitive workflow encourages iterative and rapid development, making notebooks increasingly popular in contemporary data science, analytics, and scientific research. Best of all, as part of an open-source project, they are completely free.

The Jupyter project is the successor to the earlier IPython Notebook, which was first released as a prototype in 2010. Although many different programming languages can be used in Jupyter Notebooks, this article focuses on Python, as it is by far the most common language used with them.

To get the most out of this tutorial, you should be familiar with programming, especially Python and pandas. That said, if you have experience with another language, the Python in this article shouldn’t be too cryptic, and pandas should be easy enough to follow. Jupyter Notebooks can also act as a flexible platform for getting to grips with pandas and even Python, as will become apparent in this article.

We will:

  • Cover the basics of installing Jupyter and creating your first notebook.
  • Delve deeper and learn all the important terminology.
  • Explore how easily notebooks can be shared and published online. Indeed, this article is a Jupyter Notebook! Everything here was written in the Jupyter Notebook environment, and you are viewing it in a read-only form.

Jupyter Notebook data analysis example

We’ll walk through a sample analysis that answers a real question, so you can see how the flow of a notebook makes the task intuitive to work through ourselves, and easy for others to understand when we share it with them.

Suppose you’re a data analyst tasked with figuring out how the profits of the largest companies in the US have changed historically. You find a data set of Fortune 500 companies spanning more than 50 years since the list was first published in 1955, compiled from Fortune’s public archives. We have created a CSV file of the available data (you can get it here).

As we will demonstrate, Jupyter Notebooks are perfectly suited for this investigation. First, let’s install Jupyter.

Installation

The easiest way for beginners to get started with Jupyter Notebooks is to install Anaconda. Anaconda is the most widely used Python distribution for data science and comes preloaded with all the most popular libraries and tools. In addition to Jupyter, Anaconda bundles many Python libraries, including NumPy, pandas, and Matplotlib; the full list contains over 1,000 packages. This lets you run your own fully stocked data science workshop without having to manage countless installations or worry about dependencies and OS-specific installation issues.

To install Anaconda:

  1. Download the latest version of Anaconda that supports Python 3 (not Python 2.7).
  2. Follow the instructions on the download page or in the executable to install Anaconda.

If you are a more advanced user who already has Python installed and would prefer to manage your packages manually, you can use pip:

pip3 install jupyter

Create your first Notebook

In this section, we’ll see how to run and save notebooks, familiarize ourselves with their structure, and understand the interface. We’ll also pick up the core terminology that will steer you toward a practical understanding of how to use Jupyter Notebooks on your own, and set the stage for the next section, which walks through an example data analysis and brings everything we learn here to life.

Run Jupyter

On Windows, you can run Jupyter via the shortcut Anaconda adds to your Start menu, which will open a new tab in your default web browser that should look something like the screenshot below.

This is the Notebook Dashboard, specifically designed for managing your Jupyter Notebooks. Think of it as a launchpad for exploring, editing, and creating your notebooks.

Be aware that the dashboard will only give you access to the files and subfolders contained within Jupyter’s startup directory; however, the startup directory can be changed. You can also launch the dashboard on any system via the command prompt (or terminal on Unix systems) by entering the command jupyter notebook; in this case, the current working directory will be the startup directory.

Astute readers may have noticed that the dashboard’s URL is something like http://localhost:8888/tree. Localhost is not a website; it indicates that the content is being served from your local machine (your own computer). Jupyter’s notebooks and dashboard are web apps, and Jupyter starts up a local Python server to serve these apps to your web browser, making it essentially platform-independent and opening the door to easier sharing on the web.

The dashboard interface is mostly self-explanatory — although we’ll cover it briefly later. What are we waiting for? Browse to the folder where you want to create your first Notebook, click the New drop-down button in the upper right corner, and select Python 3 (or whichever version you prefer).

Here we are! Your first Jupyter Notebook will open in a new tab; each notebook uses its own tab because you can have multiple notebooks open at once. If you switch back to the dashboard, you will see the new file Untitled.ipynb, and you should see some green text telling you that your notebook is running.

What is an ipynb file?

It is useful to understand what this file really is. Each .ipynb file is a text file that describes the contents of your notebook in a format called JSON. Each cell and its contents, including image attachments that have been converted to strings of text, are listed therein along with some metadata. You can even edit this yourself, if you know what you are doing, by selecting “Edit > Edit Notebook Metadata” from the menu bar in the notebook.

You can also view the contents of your notebook files by selecting “Edit” from the controls on the dashboard, but the key word here is can; there is no reason to do so other than curiosity, unless you really know what you are doing.
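
For instance, here is a minimal sketch of peeking at that JSON from Python, assuming the Untitled.ipynb we just created sits in the current working directory:

import json

# A notebook file is just JSON describing cells, outputs, and metadata
with open('Untitled.ipynb', encoding='utf-8') as f:
    notebook = json.load(f)

print(list(notebook.keys()))   # top-level keys: cells, metadata, nbformat, nbformat_minor
print([cell['cell_type'] for cell in notebook['cells']])   # e.g. ['code', 'markdown', ...]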

The notebook interface

Now that you have an open notebook in front of you, its interface will hopefully not look entirely alien; after all, Jupyter is essentially just an advanced word processor. Why not take a look around? Check out the menus to get a feel for them, and in particular take a moment to scroll down the list of commands in the command palette, which is the small button with the keyboard icon (or Ctrl + Shift + P).

There are two fairly prominent terms that you should notice, and which may be new to you: cells and kernels. They are key both to understanding Jupyter and to what makes it more than just a word processor. Fortunately, these concepts are not difficult to understand.

  • The kernel is a “computing engine” that executes the code contained in a Notebook document.
  • A cell is a container for text to be displayed in the Notebook or code to be executed by the Notebook kernel.

Cells

We’ll return to kernels a little later, but first let’s get to grips with cells. Cells form the body of a notebook. In the screenshot of the new notebook in the previous section, the box with the green outline is an empty cell. There are two main cell types that we will cover:

  • A code cell contains code to be executed in the kernel and displays its output below.
  • A Markdown cell contains text formatted using Markdown and displays its output in place when it is run.

The first cell in a new notebook is always a code cell. Let’s test it out with a classic hello world example. Type print('Hello World!') into the cell and click the Run button in the toolbar above, or press Ctrl + Enter. The result should look like this:

print('Hello World!')

Hello World!

When you run the cell, its output is displayed below it, and the label to its left changes from In [ ] to In [1]. The output of a code cell also forms part of the document, which is why you can see it in this article. You can always tell the difference between code and Markdown cells because code cells have that label on the left and Markdown cells do not. The “In” part of the label is simply short for “Input,” while the label number indicates the order in which the cell was executed on the kernel; in this case, the cell was executed first. Run the cell again and the label will change to In [2], because now the cell was the second to be run on the kernel. This will become more useful when we take a closer look at kernels later.

From the menu bar, click Insert and select Insert Cell Below to create a new code cell, then try out the following code to see what happens. Do you notice anything different?

import time
time.sleep(3)

This cell doesn’t produce any output, but it does take three seconds to execute. Notice how Jupyter signifies that the cell is currently running by changing its label to In [*].

In general, the output of a cell comes from any text data specifically printed during the cell’s execution, as well as the value of the last line in the cell, be it a lone variable, a function call, or something else. For example:

def say_hello(recipient):
    return 'Hello, {}!'.format(recipient)

say_hello('Tim')

'Hello, Tim!'

You’ll find yourself using this almost constantly in your own projects, and we’ll see more of it later on.

Keyboard shortcuts

You may have observed that a cell’s border turns blue when you run it, whereas it is green while you are editing it. There is always one active cell, highlighted with a border whose color denotes its current mode: green for edit mode and blue for command mode.

So far we have seen how to run a cell with Ctrl + Enter, but there are plenty more shortcuts. Keyboard shortcuts are a very popular aspect of the Jupyter environment because they facilitate a speedy cell-based workflow. Many of these are actions you can carry out on the active cell when it is in command mode.

Below, you’ll find a list of Jupyter’s keyboard shortcuts. You don’t need to learn them all immediately, but this list should give you a good idea of what’s possible.

  • Toggle between command and edit mode with Esc and Enter, respectively.
  • Once in command mode:
    • Scroll up and down your cells with the Up and Down keys.
    • Press A or B to insert a new cell above or below the active cell.
    • M will transform the active cell into a Markdown cell.
    • Y will set the active cell to a code cell.
    • D + D (press D twice) will delete the active cell.
    • Z will undo cell deletion.
    • Hold Shift and press Up or Down to select multiple cells at once.
      • With multiple cells selected, Shift + M will merge your selection.
  • Ctrl + Shift + -, in edit mode, will split the active cell at the cursor.
  • You can also click in the margin to the left of your cells and use Shift + Click to select them.

Go ahead and try these out in your own notebook. Once you’ve had a go, create a new Markdown cell and we’ll learn how to format the text in our notebooks.

Markdown

Markdown is a lightweight, easy-to-learn markup language for formatting plain text. Its syntax has a one-to-one correspondence with HTML tags, so some prior knowledge here would be helpful but is definitely not a prerequisite. Remember that this article was written in a Jupyter Notebook, so all of the narrative text and images you have seen so far were achieved in Markdown. Let’s cover the basics with a quick example.

# This is a level 1 heading

## This is a level 2 heading

This is some plain text that forms a paragraph. Add emphasis via **bold** and __bold__, or *italic* and _italic_. Paragraphs must be separated by a blank line.

* Sometimes we want to include lists.
* Which can be indented.

1. Lists can also be numbered.
2. For ordered lists.

[It is possible to include hyperlinks](https://www.example.com)

Inline code uses single backticks: `foo()`, and code blocks use three backticks (```) or can be indented by 4 spaces:

    foo()

And finally, adding images is easy: ![Alt text](https://www.example.com/image.jpg)

When attaching images, you have three options:

  • Use a URL to an image on the web.
  • Use a local URL to an image that you will keep alongside your notebook, such as in the same Git repository.
  • Add an attachment via “Edit > Insert Image”; this will convert the image into a string and store it inside your notebook’s .ipynb file. Note that this will make your .ipynb file much larger!

There is plenty more to Markdown, especially around hyperlinking, and it’s also possible to simply include plain HTML. Once you find yourself pushing the limits of the basics above, you can refer to the official guide from Markdown’s creator, John Gruber.

The kernel

Behind every notebook runs a kernel. When you run a code cell, that code is executed within the kernel, and any output is returned to the cell to be displayed. The kernel’s state persists over time and between cells; it pertains to the document as a whole, not to individual cells.

For example, if you import libraries or declare variables in one cell, they will be available in another. In this way, you can think of a notebook document as being somewhat comparable to a script file, except that it is multimedia. Let’s try this out to get a feel for it. First, we’ll import a Python package and define a function.

import numpy as np

def square(x):
    return x * x

Once we have executed the cell above, we can reference np and square in any other cell.

x = np.random.randint(1, 10)
y = square(x)

print('%d squared is %d' % (x, y))

1 squared is 1

This will work regardless of the order of the cells in your notebook; as long as a cell has been run, the variables it declared are available to every other cell. You can try it out for yourself; let’s print out our variables again.

print('Is %d squared %d?' % (x, y))

Is 1 squared 1?

No surprises there. Now let’s try changing y.

y = 10

What do you think will happen if we run the cell containing our print statement again? We’d get the output Is 1 squared 10?!

Most of the time, the flow in your notebook will be top to bottom, but it’s common to go back and make changes. In that case, the order of execution stated to the left of each cell, such as In [6], will let you know whether any of your cells have stale output. And if you ever wish to reset things, there are several very useful options in the Kernel menu:

  • Restart: restarts the kernel, thereby clearing all the variables that were defined.
  • Restart & Clear Output: same as above, but also wipes the output displayed below your code cells.
  • Restart & Run All: same as above, but also runs all your cells in order from first to last.

If your kernel is ever stuck on a computation and you wish to stop it, you can choose the Interrupt option.

Select a kernel

You may have noticed that Jupyter gives you the option to change the kernel, and in fact there are many different options to choose from. Back when you created a new notebook from the dashboard by selecting a Python version, you were actually choosing which kernel to use.

Not only are there kernels for different versions of Python, but there are also kernels for over 100 other languages, including Java, C, and even Fortran. Of particular interest to data scientists are the kernels for R and Julia, as well as imatlab and the Calysto MATLAB kernel for MATLAB. The SoS kernel provides multi-language support within a single notebook. Each kernel has its own installation instructions, but will likely require you to run some commands on your computer.
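
If you’re curious which kernels are already installed on your machine, one way to check from Python is via the jupyter_client package that ships with Jupyter; this is just a small sketch, and the equivalent command-line tool is jupyter kernelspec list:

from jupyter_client.kernelspec import KernelSpecManager

# List installed kernels: name -> location of the kernelspec on disk
for name, path in KernelSpecManager().find_kernel_specs().items():
    print(name, path)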

The example analysis

Now that we’ve looked at what a Jupyter Notebook is, it’s time to see how they’re used in practice, which should give you a clearer understanding of why they are so popular. It’s finally time to get started with the Fortune 500 data set mentioned earlier. Remember, our goal is to find out how the profits of the largest companies in the US have changed historically.

It’s worth noting that everyone develops their own preferences and style, but the general principles still apply. You can follow along in your own notebook if you wish, which gives you the scope to play around.

Name your notebook

Before you start writing your project, you’ll probably want to give it a meaningful name. Somewhat confusingly, you cannot name or rename your notebooks from the notebook app itself; you must use either the dashboard or your file browser to rename the .ipynb file. We’ll head back to the dashboard to rename the file you created earlier, which will have the default notebook file name of Untitled.ipynb.

You cannot rename a notebook while it is running, so you’ll first have to shut it down. The easiest way to do this is to select “File > Close and Halt” from the notebook menu. However, you can also shut down the kernel either by going to “Kernel > Shutdown” from within the notebook app or by selecting the notebook in the dashboard and clicking “Shutdown” (see the image below).

You can then select your notebook and click “Rename” in the dashboard controls.

Note that closing the notebook tab in your browser will not “close” your notebook in the way closing a document closes it in a traditional application. The notebook’s kernel will continue to run in the background and needs to be shut down before it is truly “closed,” though this is pretty handy if you accidentally close your tab or browser! If the kernel is shut down, you can close the tab without worrying about whether it is still running.

Once you’ve named your notebook, open it back up and we’ll get going.

Setup

It’s common to start off with a code cell specifically for imports and setup, so that if you choose to add or change anything, you can simply edit and re-run the cell without causing any side effects.

%matplotlib inline

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

sns.set(style="darkgrid")

We import pandas to work with our data, Matplotlib to plot charts, and Seaborn to make our charts prettier. It’s also common to import NumPy, but in this case, although we use it via pandas, we don’t need to do so explicitly. The first line isn’t a Python command; it uses something called a line magic to instruct Jupyter to capture Matplotlib plots and render them in the cell output. This is one of a range of advanced features that are beyond the scope of this article.

Let’s load the data.

df = pd.read_csv('fortune500.csv')

It makes sense to do this in its own cell, too, in case we need to reload the data at any point.

Save and checkpoint

Now that we’ve got started, it’s best practice to save regularly. Pressing Ctrl + S will save your notebook by calling the “Save and Checkpoint” command, but what is this checkpoint?

Every time you create a new notebook, a checkpoint file is created along with your notebook file; it is located within a hidden subdirectory of your save location called .ipynb_checkpoints and is also a .ipynb file. By default, Jupyter will autosave your notebook every 120 seconds to this checkpoint file without altering your primary notebook file. When you “Save and Checkpoint,” both the notebook and checkpoint files are updated. Hence, the checkpoint lets you recover unsaved work in the event of an unexpected issue. You can revert to a checkpoint from the menu via “File > Revert to Checkpoint.”
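
If you’re curious where those checkpoints end up, here is a small sketch for listing them, assuming you run it from the folder that contains your notebook:

from pathlib import Path

# Checkpoint copies live in a hidden .ipynb_checkpoints folder next to your notebook
for checkpoint in Path('.ipynb_checkpoints').glob('*.ipynb'):
    print(checkpoint)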

Investigate our data set

We are making good progress! Our notebook is safely saved, and we have loaded our data set df into the most-used pandas data structure, called a DataFrame, which basically looks like a table. So what does ours look like?

df.head()
Year Rank Company Revenue (in millions) Profit (in millions)
0 1955 1 General Motors 9823.5 806
1 1955 2 Exxon Mobil 5661.4 584.8
2 1955 3 U.S. Steel 3250.4 195.4
3 1955 4 General Electric 2959.1 212.6
4 1955 5 Esmark 2510.8 19.1
df.tail()
Year Rank Company Revenue (in millions) Profit (in millions)
25495 2005 496 Wm. Wrigley Jr. 3648.6 493
25496 2005 497 Peabody Energy 3631.6 175.4
25497 2005 498 Wendy’s International 3630.4 57.8
25498 2005 499 Kindred Healthcare 3616.6 70.6
25499 2005 500 Cincinnati Financial 3614.0 584

It looks good. We have the columns we need, each row corresponding to a year’s financial data for a company.

Let’s rename these columns so that we can reference them later.

df.columns = ['year', 'rank', 'company', 'revenue', 'profit']

Next, let’s explore our data set. Is it complete? Did pandas read it as expected? Are any values missing?

len(df)
25500

Well, it looks good — 500 rows a year from 1955 to 2005.

Let’s check that our data set is imported as we expect. A simple check is to see if the data types (or dtypes) have been interpreted correctly.

df.dtypes
year         int64
rank         int64
company     object
revenue    float64
profit      object
dtype: object

Looks like there’s something wrong with the profit column: we would expect it to be float64 like the revenue column. This suggests that it probably contains some non-numeric values, so let’s take a look.

non_numberic_profits = df.profit.str.contains('[^0-9.-]')
df.loc[non_numberic_profits].head()
year rank company revenue profit
228 1955 229 Norton 135.0 N.A.
290 1955 291 Schlitz Brewing 100.0 N.A.
294 1955 295 Pacific Vegetable Oil 97.9 N.A.
296 1955 297 Liebmann Breweries 96.0 N.A.
352 1955 353 Minneapolis-Moline 77.4 N.A.

Just as we suspected! Some of the values are strings, which have been used to indicate missing data. Are there any other stray values?

set(df.profit[non_numberic_profits])
{'N.A.'}

That makes it easy to interpret, but what should we do? Well, that depends on how many values are missing.

len(df.profit[non_numberic_profits])
369

It’s only a small fraction of our data set, though not completely inconsequential, as it is still around 1.5%. If the rows containing N.A. are roughly uniformly distributed over the years, the easiest solution would just be to remove them. So let’s have a look at the distribution.

bin_sizes, _, _ = plt.hist(df.year[non_numberic_profits], bins=range(1955, 2006))

At a glance, we can see that the largest number of invalid values in a single year is fewer than 25, and since there are 500 data points per year, removing these values would account for less than 4% of the data even for the worst years. Indeed, other than a surge around the 90s, most years have fewer than half the missing values of the peak. For our purposes, let’s say this is acceptable and go ahead and remove these rows.
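
(If you’d prefer to confirm those counts numerically rather than eyeball the chart, here is a quick hypothetical check on the bin_sizes array captured above; it isn’t part of the original walkthrough.)

# Double-check the reading of the histogram using the per-year counts returned by plt.hist
print('Most missing values in a single year: %d' % bin_sizes.max())
print('Years with fewer than half that many: %d of %d' % ((bin_sizes < bin_sizes.max() / 2).sum(), len(bin_sizes)))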

df = df.loc[~non_numberic_profits]
df.profit = df.profit.apply(pd.to_numeric)

Let’s see if it works.

len(df)
25131
df.dtypes
year         int64
rank         int64
company     object
revenue    float64
profit     float64
dtype: object

Good job! We have finished setting up the data set.

If we were going to turn this notebook into a report, we could get rid of the investigatory cells we created (which are included here as a demonstration of the flow of working with notebooks) and merge the relevant cells (see the advanced features section below) into a single data set setup cell. This would mean that if we ever messed up our data set elsewhere, we could just rerun the setup cell to restore it.

Plotting with Matplotlib

Next, we can get to addressing the question at hand by plotting the average profit by year. We might as well plot the revenue as well, so first we’ll define some variables and a method to reduce our code.

group_by_year = df.loc[:, ['year', 'revenue', 'profit']].groupby('year')
avgs = group_by_year.mean()
x = avgs.index
y1 = avgs.profit

def plot(x, y, ax, title, y_label):
    ax.set_title(title)
    ax.set_ylabel(y_label)
    ax.plot(x, y)
    ax.margins(x=0, y=0)

Now let’s start drawing.

fig, ax = plt.subplots()
plot(x, y1, ax, 'Increase in mean Fortune 500 company profits from 1955 to 2005', 'Profit (millions)')

Wow, that looks like an exponential, but it has some huge dips. They must correspond to the early 1990s recession and the dot-com bubble. It’s pretty interesting to see that in the data. But why did profits recover to even higher levels after each recession?

Maybe the revenues can tell us more.

y2 = avgs.revenue
fig, ax = plt.subplots()
plot(x, y2, ax, 'Increase in mean Fortune 500 company revenues from 1955 to 2005', 'Revenue (millions)')

That adds another side to the story. Revenues were not hit nearly as hard; that is some great accounting work from the finance departments.

With a little help from Stack Overflow, we can superimpose these plots with +/- their standard deviations.

def plot_with_std(x, y, stds, ax, title, y_label):
    ax.fill_between(x, y - stds, y + stds, alpha=0.2)
    plot(x, y, ax, title, y_label)

fig, (ax1, ax2) = plt.subplots(ncols=2)
title = 'Increase in mean and std Fortune 500 company %s from 1955 to 2005'
stds1 = group_by_year.std().profit.as_matrix()
stds2 = group_by_year.std().revenue.as_matrix()
plot_with_std(x, y1.as_matrix(), stds1, ax1, title % 'profits', 'Profit (millions)')
plot_with_std(x, y2.as_matrix(), stds2, ax2, title % 'revenues', 'Revenue (millions)')
fig.set_size_inches(14, 4)
fig.tight_layout()

That’s staggering; the standard deviations are huge. Some Fortune 500 companies make billions while others lose billions, and the risk has increased along with rising profits over the years. Perhaps some companies perform better than others; is the profit of the top 10% more or less volatile than that of the bottom 10%?
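
As a hypothetical starting point for that last question (not part of the original analysis), one could compare how spread out profits are among the highest- and lowest-ranked companies in each year’s list:

# Rough comparison of profit spread between the top 50 and bottom 50 ranks
top = df[df['rank'] <= 50]
bottom = df[df['rank'] > 450]
print('Top 50 profit std:    %.1f' % top.profit.std())
print('Bottom 50 profit std: %.1f' % bottom.profit.std())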

There are plenty of questions we could look into next, and it’s easy to see how the flow of working in a notebook matches one’s own thought process, so now it’s time to draw this example to a close. This flow helped us to easily investigate our data set in one place without switching between applications, and our work is immediately shareable and reproducible. If we wished to create a more concise report for a particular audience, we could quickly refactor our work by merging cells and removing intermediary code.

Share your notebook

When people talk about sharing their notebooks, they generally mean one of two paradigms. In most cases, individuals share the end result of their work, much like this article itself, which means sharing non-interactive, pre-rendered versions of their notebooks; however, it is also possible to collaborate on notebooks with the aid of version control systems such as Git.

That said, there are also a growing number of companies on the web offering the ability to run interactive Jupyter Notebooks in the cloud.

Before you share

A shared notebook will appear exactly in the state it was in when you exported or saved it, including the output of all code cells. Therefore, to make sure your notebook is share-ready, there are a few steps you should take before sharing:

  1. Go to “Cell > All Output > Clear”
  2. Click “Kernel > Restart & Run All”
  3. Wait for your code cells to finish executing and check that they ran as expected.

This will ensure that your notebook does not contain intermediate output, does not contain stale state, and executes in order when shared.

Export your notebook

Jupyter has built-in support for exporting to HTML and PDF, as well as several other formats, which you can find from the menu under “File > Download As.” If you want to share your notebook with a small private group, this functionality may well be all you need. Indeed, many researchers at academic institutions have some public or internal web space, and because you can export a notebook to an HTML file, Jupyter Notebooks can be an especially convenient way for them to share their results with their peers.
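
If you prefer to do this from code rather than the menu, the same conversion can be driven from Python with the nbconvert library that powers the export feature. Here is a minimal sketch; the notebook filename is just a placeholder:

import nbformat
from nbconvert import HTMLExporter

# Convert a saved notebook into a standalone HTML document
nb = nbformat.read('notebook.ipynb', as_version=4)
body, resources = HTMLExporter().from_notebook_node(nb)

with open('notebook.html', 'w', encoding='utf-8') as f:
    f.write(body)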

However, if sharing exported files doesn’t cut it for you, there are also some immensely popular ways of sharing .ipynb files more directly on the web.

GitHub

With more than 1.8 million public notebooks on GitHub as of early 2018, it is easily the most popular independent platform for sharing Jupyter projects with the world. GitHub has integrated support for rendering .ipynb files directly on its website, both in repositories and in gists. If you aren’t already aware, GitHub is a code hosting platform for version control and collaboration for repositories created with Git. You’ll need an account to use their services, but standard accounts are free.

Once you have a GitHub account, the easiest way to share a notebook on GitHub doesn’t even require Git at all. Since 2008, GitHub has provided its Gist service for hosting and sharing code snippets, each with its own repository. To share a notebook using a gist:

  1. Sign in and browse to gist.github.com.
  2. Open your .ipynb file in a text editor, select all, and copy the JSON inside.
  3. Paste the notebook’s JSON into the gist.
  4. Give your gist a filename, remembering to give it the .ipynb extension, or it will not work properly.
  5. Click either “Create secret gist” or “Create public gist.”

The result should look something like this:

If you create a public Gist, you can now share its URL with anyone, and others will be able to fork and clone your work.

Creating your own Git repository and sharing it on GitHub is beyond the scope of this tutorial, but GitHub has plenty of guidelines for you to follow.

One extra tip for those using Git is to add an exception to your .gitignore for the hidden .ipynb_checkpoints directories Jupyter creates, so as not to commit checkpoint files unnecessarily to your repo.
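
For example, an entry as simple as the following line in your .gitignore will do:

.ipynb_checkpoints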

Nbviewer

NBViewer has rendered hundreds of thousands of notebooks every week since 2015, making it the most popular notebook renderer on the web. If you’ve already got your Jupyter Notebook online somewhere, whether on GitHub or elsewhere, NBViewer will render your notebook and provide a shareable URL along with it. Provided as a free service as part of Project Jupyter, it is available at nbviewer.jupyter.org.

Initially developed before GitHub’s Jupyter Notebook integration, NBViewer allows anyone to enter a URL, Gist ID, or GitHub username/repo/filename and it will render the notebook as a web page. A gist’s ID is the unique number at the end of its URL; for example, the string of characters after the final forward slash in https://gist.github.com/username/50896401c23e0bf417e89e1de. If you enter a GitHub username or username/repo, you will see a minimal file browser that lets you explore the user’s repositories and their contents.

The URL NBViewer displays is based on the URL of the notebook it is rendering, and it doesn’t change, so you can share it with anyone and it will work as long as the original file remains online; NBViewer doesn’t cache files for very long.

Conclusion

Starting from the basics, we’ve come to grips with the natural workflow of Jupyter Notebooks, delved into IPython’s more advanced features, and finally learned how to share our work with friends, colleagues, and the world. And we accomplished it all from a notebook itself!

It should be clear how notebooks promote a productive working experience by reducing context switching and mimicking the natural development of thought during a project. The power of Jupyter Notebooks should also be evident, and we’ve covered plenty of leads to get you started exploring the more advanced features in your own projects.

If you’d like further inspiration for your own notebooks, Jupyter has put together a gallery of interesting Jupyter Notebooks that you may find helpful, and the Nbviewer home page links to some truly high-quality examples. Also check out our list of Jupyter Notebook tips.

Want to learn more about Jupyter Notebooks? We have a guided program that you might be interested in.


The Nuggets Translation Project is a community that translates high-quality internet technical articles, with the translated articles shared on Juejin (Nuggets). Its content covers Android, iOS, front-end, back-end, blockchain, products, design, artificial intelligence, and other fields. For more high-quality translations, please follow the Nuggets Translation Project and its official Weibo and Zhihu column.