hexsha stringlengths 40 40 | size int64 6 14.9M | ext stringclasses 1 value | lang stringclasses 1 value | max_stars_repo_path stringlengths 6 260 | max_stars_repo_name stringlengths 6 119 | max_stars_repo_head_hexsha stringlengths 40 41 | max_stars_repo_licenses list | max_stars_count int64 1 191k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24 24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24 24 ⌀ | max_issues_repo_path stringlengths 6 260 | max_issues_repo_name stringlengths 6 119 | max_issues_repo_head_hexsha stringlengths 40 41 | max_issues_repo_licenses list | max_issues_count int64 1 67k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24 24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24 24 ⌀ | max_forks_repo_path stringlengths 6 260 | max_forks_repo_name stringlengths 6 119 | max_forks_repo_head_hexsha stringlengths 40 41 | max_forks_repo_licenses list | max_forks_count int64 1 105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24 24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24 24 ⌀ | avg_line_length float64 2 1.04M | max_line_length int64 2 11.2M | alphanum_fraction float64 0 1 | cells list | cell_types list | cell_type_groups list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
e7f94fdf88d919b502cc39bdcb9cd3053c9e170a | 279,537 | ipynb | Jupyter Notebook | examples/simulation_demo.ipynb | SelfExplainML/GAMMLI | 74b2e0ab9a7576ff676eef0cc4c55484fbfe6e72 | [
"MIT"
] | 3 | 2020-12-16T11:37:54.000Z | 2021-04-15T12:38:29.000Z | examples/simulation_demo.ipynb | gyf9712/GAMMLI | 94ada0be0866d607a714c546070e8cc78616895b | [
"MIT"
] | 1 | 2021-08-02T09:43:50.000Z | 2022-03-09T09:59:09.000Z | examples/simulation_demo.ipynb | gyf9712/GAMMLI | 94ada0be0866d607a714c546070e8cc78616895b | [
"MIT"
] | 3 | 2021-02-27T07:05:07.000Z | 2022-02-25T00:57:45.000Z | 240.358555 | 113,836 | 0.875541 | [
[
[
"## Regression",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nfrom sklearn.model_selection import train_test_split\nfrom collections import OrderedDict\nimport time\nfrom sklearn.metrics import mean_squared_error,roc_auc_score,mean_absolute_error,log... | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e7f950e17342caa7b34482cf65e4394b0257f660 | 535,463 | ipynb | Jupyter Notebook | assignment3/RNN_Captioning.ipynb | Purewhite2019/CS231n-2020-Assignment | c90a2e5e8fc0c38ce293cf627778a50ccf280351 | [
"MIT"
] | null | null | null | assignment3/RNN_Captioning.ipynb | Purewhite2019/CS231n-2020-Assignment | c90a2e5e8fc0c38ce293cf627778a50ccf280351 | [
"MIT"
] | null | null | null | assignment3/RNN_Captioning.ipynb | Purewhite2019/CS231n-2020-Assignment | c90a2e5e8fc0c38ce293cf627778a50ccf280351 | [
"MIT"
] | null | null | null | 631.442217 | 185,216 | 0.944809 | [
[
[
"# Image Captioning with RNNs\nIn this exercise you will implement vanilla recurrent neural networks and use them to train a model that can generate novel captions for images.",
"_____no_output_____"
],
[
"## Install h5py\nThe COCO dataset we will be using is stored in HDF5 fo... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"mar... | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"m... |
e7f952d1cd40cf789e8b4fa4c7b048a3bceece6a | 12,326 | ipynb | Jupyter Notebook | docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb | YinLiu-91/acdecom | f3ed7900f25177e3f1f3dd7368c5441185a15421 | [
"MIT"
] | 3 | 2020-09-19T11:08:18.000Z | 2021-01-20T03:52:22.000Z | docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb | YinLiu-91/acdecom | f3ed7900f25177e3f1f3dd7368c5441185a15421 | [
"MIT"
] | null | null | null | docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb | YinLiu-91/acdecom | f3ed7900f25177e3f1f3dd7368c5441185a15421 | [
"MIT"
] | 5 | 2020-07-27T10:33:24.000Z | 2021-04-16T11:29:35.000Z | 40.81457 | 916 | 0.579101 | [
[
[
"%matplotlib inline",
"_____no_output_____"
]
],
[
[
"\nThe noise scattering at a compressor inlet and outlet\n==================================================\n\n In this example we extract the scattering of noise at a compressor inlet and outlet. In addition to measuring th... | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"cod... | [
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
... |
e7f96571bafb13bc3fb6691bed32646479923626 | 52,897 | ipynb | Jupyter Notebook | 20210714/analysis_for_data.ipynb | Brook1711/openda1 | 1d67912083ecf60b04daa6d9cf377339d179b1aa | [
"Apache-2.0"
] | null | null | null | 20210714/analysis_for_data.ipynb | Brook1711/openda1 | 1d67912083ecf60b04daa6d9cf377339d179b1aa | [
"Apache-2.0"
] | null | null | null | 20210714/analysis_for_data.ipynb | Brook1711/openda1 | 1d67912083ecf60b04daa6d9cf377339d179b1aa | [
"Apache-2.0"
] | 1 | 2021-07-18T16:01:56.000Z | 2021-07-18T16:01:56.000Z | 35.549059 | 148 | 0.381534 | [
[
[
"import pandas as pd\nimport json \nimport numpy as np\nimport ast\nfrom datetime import datetime\nimport plotly.graph_objs as go\nfrom plotly.offline import plot\nimport plotly.offline as offline\nfrom pandas.core.indexes import interval\nimport re",
"_____no_output_____"
],
[
"df... | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7f97e3d992f4a0aaa0fc81f120e65893f9ed1f6 | 310,584 | ipynb | Jupyter Notebook | Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb | mohityogesh44/ds-seed | e124f0078faf97568951e19e4302451ad0c7cf6c | [
"Apache-2.0"
] | null | null | null | Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb | mohityogesh44/ds-seed | e124f0078faf97568951e19e4302451ad0c7cf6c | [
"Apache-2.0"
] | null | null | null | Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb | mohityogesh44/ds-seed | e124f0078faf97568951e19e4302451ad0c7cf6c | [
"Apache-2.0"
] | null | null | null | 250.673123 | 241,808 | 0.891115 | [
[
[
"# PCA with MaxAbsScaler",
"_____no_output_____"
],
[
"This code template is for simple Principal Component Analysis(PCA) along feature scaling via MaxAbsScaler in python for dimensionality reduction technique. It is used to decompose a multivariate dataset into a set of successive... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"mar... | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]... |
e7f986aa7cd31850039eae3c0e03324e80197696 | 120,750 | ipynb | Jupyter Notebook | 03_first_predict_for_pl.ipynb | Grzechu11/covid-19_timeseries-PL | 241c758247af126c6eccfeb3c4a49584ffe6e614 | [
"MIT"
] | null | null | null | 03_first_predict_for_pl.ipynb | Grzechu11/covid-19_timeseries-PL | 241c758247af126c6eccfeb3c4a49584ffe6e614 | [
"MIT"
] | null | null | null | 03_first_predict_for_pl.ipynb | Grzechu11/covid-19_timeseries-PL | 241c758247af126c6eccfeb3c4a49584ffe6e614 | [
"MIT"
] | null | null | null | 345.988539 | 30,643 | 0.738692 | [
[
[
"import os\nimport pandas as pd\nimport numpy as np\nimport datetime\n\nfrom IPython import get_ipython\n\nfrom fbprophet import Prophet\n\nfrom sklearn.metrics import mean_absolute_error as mae\n\nimport matplotlib.pyplot as plt\nget_ipython().run_line_magic('matplotlib', 'inline')\n\nimport seaborn ... | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7f98ebf86a71ade2e8bba9ff796ef4fcf3b9530 | 117,853 | ipynb | Jupyter Notebook | BO_trials/multi_objective_bo.ipynb | michelleliu1027/Bayesian_PV | 5636980ae64712e7bf3c017ea4986fba7b858674 | [
"MIT"
] | 1 | 2021-09-08T07:51:19.000Z | 2021-09-08T07:51:19.000Z | BO_trials/multi_objective_bo.ipynb | michelleliu1027/Bayesian_PV | 5636980ae64712e7bf3c017ea4986fba7b858674 | [
"MIT"
] | 2 | 2021-09-29T19:11:20.000Z | 2021-09-29T23:23:22.000Z | BO_trials/multi_objective_bo.ipynb | michelleliu1027/Bayesian_PV | 5636980ae64712e7bf3c017ea4986fba7b858674 | [
"MIT"
] | 2 | 2021-09-19T05:24:32.000Z | 2021-12-06T03:39:10.000Z | 218.651206 | 62,366 | 0.897533 | [
[
[
"## Parallel, Multi-Objective BO in BoTorch with qEHVI and qParEGO\n\nIn this tutorial, we illustrate how to implement a simple multi-objective (MO) Bayesian Optimization (BO) closed loop in BoTorch.\n\nWe use the parallel ParEGO ($q$ParEGO) [1] and parallel Expected Hypervolume Improvement ($q$EHVI) ... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"... |
e7f99a1d8ad714a66ce47e93db89d0d29af182a1 | 25,777 | ipynb | Jupyter Notebook | aws/aws.ipynb | datascienceandml/data-science-ipython-notebooks | e1259144abc174c7bc4aacda0bab3578c92d7b24 | [
"Apache-2.0"
] | 5 | 2017-07-01T05:50:48.000Z | 2021-11-16T11:16:08.000Z | aws/aws.ipynb | ChristosChristofidis/data-science-ipython-notebooks | 2731cdc456f6e5bd8314f9987cb90aa681be1252 | [
"Apache-2.0"
] | null | null | null | aws/aws.ipynb | ChristosChristofidis/data-science-ipython-notebooks | 2731cdc456f6e5bd8314f9987cb90aa681be1252 | [
"Apache-2.0"
] | 10 | 2016-01-04T17:49:04.000Z | 2020-12-18T19:21:32.000Z | 27.306144 | 427 | 0.555728 | [
[
[
"<small><i>This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/data-science-ipython-notebooks).</i></small>",
"_____no_output_____"
],
[
"# Amazon Web Services (AWS)\n\n* SSH to EC2\n* S3cmd\n* ... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"mar... | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"... |
e7f9aaf8ca36a2b77b767e0bf9cececb6027edb6 | 362,814 | ipynb | Jupyter Notebook | day2_visualization.ipynb | lat-lukasz/dw_matrix_car | 3b32e3ed34907420cf1efcb22b5f0cef041ea7ea | [
"MIT"
] | null | null | null | day2_visualization.ipynb | lat-lukasz/dw_matrix_car | 3b32e3ed34907420cf1efcb22b5f0cef041ea7ea | [
"MIT"
] | null | null | null | day2_visualization.ipynb | lat-lukasz/dw_matrix_car | 3b32e3ed34907420cf1efcb22b5f0cef041ea7ea | [
"MIT"
] | null | null | null | 362,814 | 362,814 | 0.91712 | [
[
[
"!pip install --upgrade tables",
"Collecting tables\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/ed/c3/8fd9e3bb21872f9d69eb93b3014c86479864cca94e625fd03713ccacec80/tables-3.6.1-cp36-cp36m-manylinux1_x86_64.whl (4.3MB)\n\u001b[K |████████████████████████████████| 4.3MB 2.... | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
... |
e7f9c51de7b5a9697a5abad51534248f7dd6c256 | 1,102 | ipynb | Jupyter Notebook | 01_log_conversion.ipynb | L0D3/P191919 | 27504930834167dd33d92347c0fe65fefd0e4b7e | [
"Apache-2.0"
] | null | null | null | 01_log_conversion.ipynb | L0D3/P191919 | 27504930834167dd33d92347c0fe65fefd0e4b7e | [
"Apache-2.0"
] | null | null | null | 01_log_conversion.ipynb | L0D3/P191919 | 27504930834167dd33d92347c0fe65fefd0e4b7e | [
"Apache-2.0"
] | null | null | null | 17.492063 | 78 | 0.508167 | [
[
[
"#default_exp conversion",
"_____no_output_____"
]
],
[
[
"# Log Conversion\n\n> Converts the event logs into csv format to make it easier to load them",
"_____no_output_____"
]
],
[
[
"%load_ext autoreload\n%autoreload 2\n%matplotlib inline",
"The... | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7f9d933e76ab3226591a6c3c08916179a8965d3 | 2,381 | ipynb | Jupyter Notebook | HerniaAnnotation/HerniaCode/Notebooks/RemoveNoneClass.ipynb | molinamarcvdb/aigt | 75cadc66e3b8eb3736085a9e38ef9fb70521c94d | [
"BSD-3-Clause"
] | 26 | 2019-10-10T18:51:38.000Z | 2022-02-27T12:17:58.000Z | HerniaAnnotation/HerniaCode/Notebooks/RemoveNoneClass.ipynb | molinamarcvdb/aigt | 75cadc66e3b8eb3736085a9e38ef9fb70521c94d | [
"BSD-3-Clause"
] | 13 | 2019-11-05T01:40:00.000Z | 2022-02-08T15:29:36.000Z | HerniaAnnotation/HerniaCode/Notebooks/RemoveNoneClass.ipynb | molinamarcvdb/aigt | 75cadc66e3b8eb3736085a9e38ef9fb70521c94d | [
"BSD-3-Clause"
] | 22 | 2019-10-07T16:09:12.000Z | 2022-03-17T09:19:54.000Z | 21.645455 | 105 | 0.48971 | [
[
[
"import numpy as np\n\nclasses = ['None', 'Extob', 'Fat', 'Sack', 'Skin', 'Spchd']\n\nxFile = r\"C:\\Users\\PerkLab\\Desktop\\HerniaAnnotationData-2019-10-30\\x_train_fifth_128.npy\"\nyFile = r\"C:\\Users\\PerkLab\\Desktop\\HerniaAnnotationData-2019-10-30\\y_train_fifth_128.npy\"\n\nxOut = xFile[:-4]+... | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
e7f9e1475af4368b30e008441d6f23109fd9bb88 | 459,921 | ipynb | Jupyter Notebook | docs/source/tutorial.ipynb | milicolazo/Pyedra | 44002e2bfca852e44337df150d8ff3c231470a43 | [
"MIT"
] | 16 | 2020-10-01T19:39:03.000Z | 2022-02-17T03:43:29.000Z | docs/source/tutorial.ipynb | milicolazo/Pyedra | 44002e2bfca852e44337df150d8ff3c231470a43 | [
"MIT"
] | 47 | 2020-10-12T15:41:41.000Z | 2021-03-07T14:34:00.000Z | docs/source/tutorial.ipynb | milicolazo/Pyedra | 44002e2bfca852e44337df150d8ff3c231470a43 | [
"MIT"
] | 7 | 2020-10-15T15:11:22.000Z | 2021-08-27T23:42:15.000Z | 253.4 | 66,500 | 0.906456 | [
[
[
"# Pyedra's Tutorial\n\nThis tutorial is intended to serve as a guide for using Pyedra to analyze asteroid phase curve data.\n\n## Imports\n\nThe first thing we will do is import the necessary libraries. In general you will need the following:\n- `pyedra` (*pyedra*) is the library that we present in t... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"mar... | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]... |
e7f9ea122d12e059f97e56c35f83c652d20ed34e | 2,222 | ipynb | Jupyter Notebook | Notebooks/02_analyses/Fig2_Shannon_Entropy.ipynb | BioProteanLabs/SFt_pipeline | f383ee7e76e962825a0f8ed8dc34d49ec12133ce | [
"MIT"
] | null | null | null | Notebooks/02_analyses/Fig2_Shannon_Entropy.ipynb | BioProteanLabs/SFt_pipeline | f383ee7e76e962825a0f8ed8dc34d49ec12133ce | [
"MIT"
] | null | null | null | Notebooks/02_analyses/Fig2_Shannon_Entropy.ipynb | BioProteanLabs/SFt_pipeline | f383ee7e76e962825a0f8ed8dc34d49ec12133ce | [
"MIT"
] | null | null | null | 25.25 | 125 | 0.531503 | [
[
[
"# Shannon's Entropy of ABA features",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom skimage.measure import shannon_entropy\nfrom morphontogeny.functions.IO import reconstruct_ABA",
"_____no_output_____"
... | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e7f9f3dc94739d4dee9b72ef6bcbc324cbc181f4 | 25,328 | ipynb | Jupyter Notebook | 05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb | eotp/python-FU-class | f0a7518b3e3204a77e8855bef91afeaabb0d52ac | [
"MIT"
] | 1 | 2020-01-17T14:51:40.000Z | 2020-01-17T14:51:40.000Z | 05-2022-06-02/notebooks/05-00_The_pandas_library.ipynb | eotp/python-FU-WiSe1920 | 4f225430ef8a70faca8c86c77cc888524c8e0546 | [
"MIT"
] | null | null | null | 05-2022-06-02/notebooks/05-00_The_pandas_library.ipynb | eotp/python-FU-WiSe1920 | 4f225430ef8a70faca8c86c77cc888524c8e0546 | [
"MIT"
] | 1 | 2020-12-04T15:37:28.000Z | 2020-12-04T15:37:28.000Z | 22.120524 | 849 | 0.528032 | [
[
[
"# The pandas library",
"_____no_output_____"
],
[
"The [pandas library](https://pandas.pydata.org/) was created by [Wes McKinney](http://wesmckinney.com/) in 2010. pandas provides **data structures** and **functions** \nfor manipulating, processing, cleaning and crunching data. In... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"mar... | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"c... |
e7f9f603a770c9e210207a6e150782b5b5a198d9 | 70,311 | ipynb | Jupyter Notebook | 2018-10-30/jokes.ipynb | denistsoi/notebooks | 0f1ebe29870e4580ef6206e93e5717d645dcb5bd | [
"MIT"
] | null | null | null | 2018-10-30/jokes.ipynb | denistsoi/notebooks | 0f1ebe29870e4580ef6206e93e5717d645dcb5bd | [
"MIT"
] | null | null | null | 2018-10-30/jokes.ipynb | denistsoi/notebooks | 0f1ebe29870e4580ef6206e93e5717d645dcb5bd | [
"MIT"
] | null | null | null | 288.159836 | 2,979 | 0.695382 | [
[
[
"import requests\nr = requests.get('https://reddit.com/r/dadjokes.json')\n\n# file = open(\"jokes.json\", mode=\"w\")\n# file.write(str(r.json()));\n# file.close()",
"_____no_output_____"
],
[
"import json\nwith open(\"jokes.json\", \"r\") as json_file:\n json_data = json.load(j... | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
e7fa085833296370f082e31907b5660c97c6812b | 149,145 | ipynb | Jupyter Notebook | Spring2021/int_point.ipynb | Jhomanik/MIPT-Opt | c1629b93b7608081f2237278afd92ee426760a84 | [
"MIT"
] | 132 | 2016-09-05T09:24:55.000Z | 2022-03-28T14:10:05.000Z | Spring2021/int_point.ipynb | Jhomanik/MIPT-Opt | c1629b93b7608081f2237278afd92ee426760a84 | [
"MIT"
] | 32 | 2016-10-30T12:24:18.000Z | 2018-08-30T14:02:39.000Z | Spring2021/int_point.ipynb | Jhomanik/MIPT-Opt | c1629b93b7608081f2237278afd92ee426760a84 | [
"MIT"
] | 54 | 2017-03-09T14:20:26.000Z | 2021-12-26T08:32:51.000Z | 69.240947 | 34,336 | 0.782815 | [
[
[
"# Методы внутренней точки",
"_____no_output_____"
],
[
"## На прошлом семинаре\n\n- Задачи оптимизации с ограничениями на простые множества\n- Метод проекции градиента как частный случай проксимального градиентного метода\n- Метод условного градента (Франка-Вольфа) и его сходимост... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markd... |
e7fa116a84cb15479b1418ae869304b0b788d067 | 8,020 | ipynb | Jupyter Notebook | content/python/ml_algorithms/.ipynb_checkpoints/kNN-algorithm-checkpoint.ipynb | Palaniappan12345/mlnotes | 359c3b27629544604f7825ab45cd9438dc777753 | [
"MIT"
] | null | null | null | content/python/ml_algorithms/.ipynb_checkpoints/kNN-algorithm-checkpoint.ipynb | Palaniappan12345/mlnotes | 359c3b27629544604f7825ab45cd9438dc777753 | [
"MIT"
] | null | null | null | content/python/ml_algorithms/.ipynb_checkpoints/kNN-algorithm-checkpoint.ipynb | Palaniappan12345/mlnotes | 359c3b27629544604f7825ab45cd9438dc777753 | [
"MIT"
] | 1 | 2021-06-19T06:05:14.000Z | 2021-06-19T06:05:14.000Z | 33.140496 | 106 | 0.454239 | [
[
[
"---\ntitle: \"kNN-algorithm\"\nauthor: \"Palaniappan S\"\ndate: 2020-09-05\ndescription: \"-\"\ntype: technical_note\ndraft: false\n---",
"_____no_output_____"
]
],
[
[
"# importing required libraries\nimport pandas as pd\nfrom sklearn.neighbors import KNeighborsClassifier\nfr... | [
"raw",
"code"
] | [
[
"raw"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7fa11a3e962debcb4171974a8da4024800712ec | 16,315 | ipynb | Jupyter Notebook | examples/tutorials/02_Learning_MNIST_Digit_Classifiers.ipynb | martonlanga/deepchem | af2db874484603ade489fa513eac193b38ce6d56 | [
"MIT"
] | 1 | 2020-09-14T02:34:40.000Z | 2020-09-14T02:34:40.000Z | examples/tutorials/02_Learning_MNIST_Digit_Classifiers.ipynb | martonlanga/deepchem | af2db874484603ade489fa513eac193b38ce6d56 | [
"MIT"
] | 1 | 2020-07-13T18:59:49.000Z | 2020-07-13T18:59:49.000Z | examples/tutorials/02_Learning_MNIST_Digit_Classifiers.ipynb | martonlanga/deepchem | af2db874484603ade489fa513eac193b38ce6d56 | [
"MIT"
] | null | null | null | 44.944904 | 593 | 0.588845 | [
[
[
"# Tutorial Part 2: Learning MNIST Digit Classifiers\n\nIn the previous tutorial, we learned some basics of how to load data into DeepChem and how to use the basic DeepChem objects to load and manipulate this data. In this tutorial, you'll put the parts together and learn how to train a basic image cl... | [
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
e7fa18eb4f321df6e6a1ec9fb6f29f25d5195f37 | 1,469 | ipynb | Jupyter Notebook | src/tests/bsplines/PPUM Basis.ipynb | certik/hfsolver | b4c50c1979fb7e468b1852b144ba756f5a51788d | [
"BSD-2-Clause"
] | 20 | 2015-03-24T13:06:39.000Z | 2022-03-29T00:14:02.000Z | src/tests/bsplines/PPUM Basis.ipynb | certik/hfsolver | b4c50c1979fb7e468b1852b144ba756f5a51788d | [
"BSD-2-Clause"
] | 6 | 2015-03-25T04:59:43.000Z | 2017-06-06T23:00:09.000Z | src/tests/bsplines/PPUM Basis.ipynb | certik/hfsolver | b4c50c1979fb7e468b1852b144ba756f5a51788d | [
"BSD-2-Clause"
] | 5 | 2016-01-20T13:38:22.000Z | 2020-11-24T15:35:43.000Z | 19.851351 | 46 | 0.446562 | [
[
[
"%pylab inline",
"_____no_output_____"
],
[
"D = loadtxt(\"basis.txt\")\nNb = (size(D,0)-1)/2\nNq = size(D,1)\nB = empty((Nb,Nq), dtype=\"double\")\nBp = empty((Nb,Nq), dtype=\"double\")\nx = D[0,:]\nfor i in range(Nb):\n B[i,:] = D[2*i+1,:]\n Bp[i,:] = D[2*i+2,:]\n ... | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
e7fa24fdd336e0ace1c6510ce504a2b0d4ac1c19 | 100,120 | ipynb | Jupyter Notebook | community/aqua/chemistry/h2_particle_hole.ipynb | Chibikuri/qiskit-tutorials | 15c121b95249de17e311c869fbc455210b2fcf5e | [
"Apache-2.0"
] | 2 | 2017-11-09T16:33:14.000Z | 2018-02-26T00:42:17.000Z | community/aqua/chemistry/h2_particle_hole.ipynb | Chibikuri/qiskit-tutorials | 15c121b95249de17e311c869fbc455210b2fcf5e | [
"Apache-2.0"
] | 1 | 2019-04-12T07:43:25.000Z | 2020-02-07T13:32:18.000Z | community/aqua/chemistry/h2_particle_hole.ipynb | Chibikuri/qiskit-tutorials | 15c121b95249de17e311c869fbc455210b2fcf5e | [
"Apache-2.0"
] | 2 | 2019-03-24T21:00:25.000Z | 2019-03-24T21:57:10.000Z | 395.731225 | 38,276 | 0.931123 | [
[
[
"## _*H2 energy plot comparing full to particle hole transformations*_\n\nThis notebook demonstrates using Qiskit Chemistry to plot graphs of the ground state energy of the Hydrogen (H2) molecule over a range of inter-atomic distances using VQE and UCCSD with full and particle hole transformations. It... | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e7fa27635f2253993edfc0d56f7f67a920d99460 | 632,090 | ipynb | Jupyter Notebook | Copy_of_Copy_of_Portfolio_Model_Base.ipynb | amritgos/MTP | 9a826cf4d8db9fd852a8fac4be2e62b8f304ba7c | [
"MIT"
] | null | null | null | Copy_of_Copy_of_Portfolio_Model_Base.ipynb | amritgos/MTP | 9a826cf4d8db9fd852a8fac4be2e62b8f304ba7c | [
"MIT"
] | null | null | null | Copy_of_Copy_of_Portfolio_Model_Base.ipynb | amritgos/MTP | 9a826cf4d8db9fd852a8fac4be2e62b8f304ba7c | [
"MIT"
] | null | null | null | 162.199128 | 433,178 | 0.767511 | [
[
[
"!pip install git+https://github.com/amritgos/FinRL.git",
"Collecting git+https://github.com/amritgos/FinRL.git\n Cloning https://github.com/amritgos/FinRL.git to /tmp/pip-req-build-blj6h990\n Running command git clone -q https://github.com/amritgos/FinRL.git /tmp/pip-req-build-blj6h990\nColle... | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
... |
e7fa2973fa3b0b47587f5e00b2ae084875eff264 | 7,410 | ipynb | Jupyter Notebook | _downloads/b1ca754f39005f3188ba9b4423f688b0/plot_otda_d2.ipynb | FlopsKa/pythonot.github.io | 3d5baca1e48e09bb076036d99a835e34af9fce80 | [
"MIT"
] | 5 | 2020-06-12T10:53:15.000Z | 2021-11-06T13:21:56.000Z | _downloads/b1ca754f39005f3188ba9b4423f688b0/plot_otda_d2.ipynb | FlopsKa/pythonot.github.io | 3d5baca1e48e09bb076036d99a835e34af9fce80 | [
"MIT"
] | 1 | 2020-08-28T08:15:56.000Z | 2020-08-28T08:15:56.000Z | _downloads/b1ca754f39005f3188ba9b4423f688b0/plot_otda_d2.ipynb | FlopsKa/pythonot.github.io | 3d5baca1e48e09bb076036d99a835e34af9fce80 | [
"MIT"
] | 1 | 2020-08-28T08:08:09.000Z | 2020-08-28T08:08:09.000Z | 51.458333 | 1,529 | 0.578677 | [
[
[
"%matplotlib inline",
"_____no_output_____"
]
],
[
[
"\n# OT for domain adaptation on empirical distributions\n\n\nThis example introduces a domain adaptation in a 2D setting. It explicits\nthe problem of domain adaptation and introduces some optimal transport\napproaches to so... | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7fa355472daf463067b094765f2009b0f9b461b | 16,529 | ipynb | Jupyter Notebook | neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb | JanuaryThomas/amazon-neptune-samples | a33f481707014025b41857966144b4a59d6c553b | [
"MIT-0"
] | 298 | 2018-04-16T17:34:01.000Z | 2022-03-27T06:53:21.000Z | neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb | JanuaryThomas/amazon-neptune-samples | a33f481707014025b41857966144b4a59d6c553b | [
"MIT-0"
] | 24 | 2018-06-07T12:48:56.000Z | 2022-03-29T14:28:06.000Z | neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb | JanuaryThomas/amazon-neptune-samples | a33f481707014025b41857966144b4a59d6c553b | [
"MIT-0"
] | 132 | 2018-05-31T02:58:04.000Z | 2022-03-29T21:02:05.000Z | 38.619159 | 824 | 0.621695 | [
[
[
"# Air Routes\n\nThe examples in this notebook demonstrate using the GremlinPython library to connect to and work with a Neptune instance. Using a Jupyter notebook in this way provides a nice way to interact with your Neptune graph database in a familiar and instantly productive environment.",
"... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"mar... | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"... |
e7fa3d2562e08035d5ccd6fd81027779889cabc7 | 883,741 | ipynb | Jupyter Notebook | Project4_NN/test_model_unseen_data_reverse.ipynb | Rendsnack/Thesis-SMSL | 17ae162401df8e8666ad2252be26148a9d18a47a | [
"MIT"
] | null | null | null | Project4_NN/test_model_unseen_data_reverse.ipynb | Rendsnack/Thesis-SMSL | 17ae162401df8e8666ad2252be26148a9d18a47a | [
"MIT"
] | null | null | null | Project4_NN/test_model_unseen_data_reverse.ipynb | Rendsnack/Thesis-SMSL | 17ae162401df8e8666ad2252be26148a9d18a47a | [
"MIT"
] | null | null | null | 417.055687 | 50,404 | 0.934294 | [
[
[
"import librosa\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport os\nimport csv\nimport natsort\nfrom openpyxl import load_workbook\nimport random\nfrom random import randrange\nfrom sklearn.metrics import confusion_matrix, cohen_kappa_score\nimport... | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"cod... | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
... |
e7fa53fdf4a249b1e228891e77af9209ce49e139 | 4,086 | ipynb | Jupyter Notebook | day3 assign.ipynb | nandhinics/letsupgrade-assignday3 | ecb1c5de34ff98463d6d177861ccf8db232f7d33 | [
"Apache-2.0"
] | null | null | null | day3 assign.ipynb | nandhinics/letsupgrade-assignday3 | ecb1c5de34ff98463d6d177861ccf8db232f7d33 | [
"Apache-2.0"
] | null | null | null | day3 assign.ipynb | nandhinics/letsupgrade-assignday3 | ecb1c5de34ff98463d6d177861ccf8db232f7d33 | [
"Apache-2.0"
] | null | null | null | 18.916667 | 66 | 0.403084 | [
[
[
"x=input(\"Enter the altitude\")\nx= int(x)\nif (x<=1000):\n print (\"Safe to land\")\nelif (x>1000) and (x<=5000):\n print (\"Bring down to 1000\")\nelse:\n print (\"Turn around\")\n",
"Enter the altitude1000\nSafe to land\n"
],
[
"x=input(\"Enter the altitude\")\nx= int(... | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
e7fa5657947689afdb0a749cbf2e4b57ec793feb | 20,899 | ipynb | Jupyter Notebook | GitHub_MD_rendering/Operators.ipynb | kyaiooiayk/Python-Programming | b70dde24901cd24b38e2ead7c9a1b2d1808fc4b0 | [
"OLDAP-2.3"
] | null | null | null | GitHub_MD_rendering/Operators.ipynb | kyaiooiayk/Python-Programming | b70dde24901cd24b38e2ead7c9a1b2d1808fc4b0 | [
"OLDAP-2.3"
] | null | null | null | GitHub_MD_rendering/Operators.ipynb | kyaiooiayk/Python-Programming | b70dde24901cd24b38e2ead7c9a1b2d1808fc4b0 | [
"OLDAP-2.3"
] | null | null | null | 25.517705 | 269 | 0.498684 | [
[
[
"# Introduction",
"_____no_output_____"
],
[
"\n**What?** Operators\n\n",
"_____no_output_____"
],
[
"# Basic Python Semantics: Operators",
"_____no_output_____"
],
[
"In the previous section, we began to look at the semantics of Python varia... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"mar... | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"... |
e7fa604752370c08e6880024098cc70cb3b28a4a | 40,766 | ipynb | Jupyter Notebook | Network Evaluation Examples/Network Evaluation Example-BioPlex.ipynb | jdtibochab/network_bisb | 7adcab15c2e8ed79123153f8de38d159d103f999 | [
"MIT"
] | null | null | null | Network Evaluation Examples/Network Evaluation Example-BioPlex.ipynb | jdtibochab/network_bisb | 7adcab15c2e8ed79123153f8de38d159d103f999 | [
"MIT"
] | null | null | null | Network Evaluation Examples/Network Evaluation Example-BioPlex.ipynb | jdtibochab/network_bisb | 7adcab15c2e8ed79123153f8de38d159d103f999 | [
"MIT"
] | null | null | null | 56.30663 | 2,156 | 0.49632 | [
[
[
"import sys\nsys.path.append('/home/juan/Network_Evaluation_Tools')\n\nfrom network_evaluation_tools import data_import_tools as dit\nfrom network_evaluation_tools import network_evaluation_functions as nef\nfrom network_evaluation_tools import network_propagation as prop\nimport pandas as pd\nimport ... | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7fa6aed425a22a72dca673294c390ae24b6ba26 | 62,335 | ipynb | Jupyter Notebook | README.ipynb | kaiserho/gym-anytrading | 18d9227ac42cdb422813512dcffa56a450bc83bf | [
"MIT"
] | 1,059 | 2019-09-22T00:05:12.000Z | 2022-03-31T17:18:17.000Z | README.ipynb | kaiserho/gym-anytrading | 18d9227ac42cdb422813512dcffa56a450bc83bf | [
"MIT"
] | 63 | 2020-01-29T21:15:25.000Z | 2022-03-28T22:14:55.000Z | README.ipynb | kaiserho/gym-anytrading | 18d9227ac42cdb422813512dcffa56a450bc83bf | [
"MIT"
] | 311 | 2019-10-09T11:48:39.000Z | 2022-03-31T23:10:19.000Z | 140.394144 | 22,638 | 0.860319 | [
[
[
"# gym-anytrading\r\n\r\n`AnyTrading` is a collection of [OpenAI Gym](https://github.com/openai/gym) environments for reinforcement learning-based trading algorithms.\r\n\r\nTrading algorithms are mostly implemented in two markets: [FOREX](https://en.wikipedia.org/wiki/Foreign_exchange_market) and [St... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"... |
e7fa742ab4b349d7304dbcc221042a46c63488b3 | 1,452 | ipynb | Jupyter Notebook | pyspark/LearningPySpark_Code/BonusChapter01/HelloWorldFromPySpark.ipynb | zephyrGit/Pyspark | 6a230f5facc3e840d798a5263b362ce9676d55d7 | [
"Apache-2.0"
] | 1 | 2017-05-04T03:01:55.000Z | 2017-05-04T03:01:55.000Z | BonusChapter01/HelloWorldFromPySpark.ipynb | LittleGaintSS/Learning-PySpark | 6ea70df4efdf06642037162fa0624491cb5fa42c | [
"MIT"
] | null | null | null | BonusChapter01/HelloWorldFromPySpark.ipynb | LittleGaintSS/Learning-PySpark | 6ea70df4efdf06642037162fa0624491cb5fa42c | [
"MIT"
] | 2 | 2020-10-04T15:39:13.000Z | 2021-02-03T17:29:33.000Z | 16.314607 | 56 | 0.483471 | [
[
[
"sc",
"_____no_output_____"
],
[
"sqlContext",
"_____no_output_____"
],
[
"print(sc.version)",
"2.0.0-preview\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
e7fa744c4e5a053e43c25a7aa053cfc5ed1e3141 | 58,531 | ipynb | Jupyter Notebook | notebooks/.ipynb_checkpoints/Regression-checkpoint.ipynb | tsitsimis/gau-pro | 9662a5f65baeb93af45bcfc62de29c7f3d691d3e | [
"MIT"
] | null | null | null | notebooks/.ipynb_checkpoints/Regression-checkpoint.ipynb | tsitsimis/gau-pro | 9662a5f65baeb93af45bcfc62de29c7f3d691d3e | [
"MIT"
] | null | null | null | notebooks/.ipynb_checkpoints/Regression-checkpoint.ipynb | tsitsimis/gau-pro | 9662a5f65baeb93af45bcfc62de29c7f3d691d3e | [
"MIT"
] | null | null | null | 354.733333 | 54,880 | 0.940459 | [
[
[
"%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"import sys\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nsys.path.append(\"../\")\nimport gaupro as gp\nimport gaupro.kernels as kernels",
"_____no_output_____"
]
],
[
[
"## Gener... | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e7fa75aeebde62c91e59d8eccd427522d8891f2b | 4,743 | ipynb | Jupyter Notebook | notebooks/video_pipeline.ipynb | Imarcos/scikit-learn-mooc | 69a7a7e891c5a4a9bce8983d7c92326674fda071 | [
"CC-BY-4.0"
] | 1 | 2022-01-25T19:20:21.000Z | 2022-01-25T19:20:21.000Z | notebooks/video_pipeline.ipynb | Imarcos/scikit-learn-mooc | 69a7a7e891c5a4a9bce8983d7c92326674fda071 | [
"CC-BY-4.0"
] | null | null | null | notebooks/video_pipeline.ipynb | Imarcos/scikit-learn-mooc | 69a7a7e891c5a4a9bce8983d7c92326674fda071 | [
"CC-BY-4.0"
] | null | null | null | 22.802885 | 94 | 0.560826 | [
[
[
"# How to define a scikit-learn pipeline and visualize it",
"_____no_output_____"
],
[
"The goal of keeping this notebook is to:\n\n- make it available for users that want to reproduce it locally\n- archive the script in the event we want to rerecord this video with an\n update in... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"c... |
e7fa7f6da732b53aa34893c5b98b77bcd2fd21b2 | 64,736 | ipynb | Jupyter Notebook | ml_foundations/05_Pipeline/05_02/End/05_02.ipynb | joejoeyjoseph/playground | fa739d51635823b866fafd1e712760074cfc175c | [
"MIT"
] | null | null | null | ml_foundations/05_Pipeline/05_02/End/05_02.ipynb | joejoeyjoseph/playground | fa739d51635823b866fafd1e712760074cfc175c | [
"MIT"
] | null | null | null | ml_foundations/05_Pipeline/05_02/End/05_02.ipynb | joejoeyjoseph/playground | fa739d51635823b866fafd1e712760074cfc175c | [
"MIT"
] | null | null | null | 108.61745 | 23,376 | 0.801069 | [
[
[
"## Pipeline: Clean Continuous Features\n\nUsing the Titanic dataset from [this](https://www.kaggle.com/c/titanic/overview) Kaggle competition.\n\nThis dataset contains information about 891 people who were on board the ship when departed on April 15th, 1912. As noted in the description on Kaggle's we... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7fa9243bccb0a66a6bf1b1fb79447bb00b21322 | 441,201 | ipynb | Jupyter Notebook | **WeatherPy**/WeatherPy.ipynb | drupps/python-api-challenge | 11256a1f407fb269b8cd607e2227bf5772b8542b | [
"ADSL"
] | null | null | null | **WeatherPy**/WeatherPy.ipynb | drupps/python-api-challenge | 11256a1f407fb269b8cd607e2227bf5772b8542b | [
"ADSL"
] | null | null | null | **WeatherPy**/WeatherPy.ipynb | drupps/python-api-challenge | 11256a1f407fb269b8cd607e2227bf5772b8542b | [
"ADSL"
] | null | null | null | 164.198362 | 43,620 | 0.872076 | [
[
[
"# WeatherPy\n----\n\n#### Note\n* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.",
"_____no_output_____"
]
],
[
[
"# Dependencies and Setup\nimport matplotlib.pyplot as plt\ni... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"mar... | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"m... |
e7fa92bc2076bde339fe670730bee77cd4e1118d | 360,495 | ipynb | Jupyter Notebook | Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb | sahilm89/PreciseBalance | 9df65b5956b40f18b84168b69d7ce1138b47b9d4 | [
"MIT"
] | null | null | null | Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb | sahilm89/PreciseBalance | 9df65b5956b40f18b84168b69d7ce1138b47b9d4 | [
"MIT"
] | null | null | null | Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb | sahilm89/PreciseBalance | 9df65b5956b40f18b84168b69d7ce1138b47b9d4 | [
"MIT"
] | null | null | null | 136.60288 | 36,148 | 0.850741 | [
[
[
"# Figure 6",
"_____no_output_____"
]
],
[
[
"from sympy import symbols, exp, solve, logcombine, simplify, Piecewise, lambdify, N, init_printing, Eq\nimport numpy\nimport scipy.stats as ss\nfrom sympy.physics.units import seconds, siemens, volts, farads, amperes, milli, micro, ... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"mar... | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"m... |
e7fa990b19934b8cbf80f255f882dda5bae3251f | 103,386 | ipynb | Jupyter Notebook | OPC_Sensor/Models With Decompositions/Models with SparsePCA/CNN/CNN_tanh_binary.ipynb | utkarshkanswal/Machine-Learning-application-on-Air-quality-dataset | 12d0aca165fe0faf503ca38bd6a391452b480565 | [
"MIT"
] | 5 | 2021-10-18T07:36:05.000Z | 2022-02-09T06:46:58.000Z | OPC_Sensor/Models With Decompositions/Models with SparsePCA/CNN/CNN_tanh_binary.ipynb | utkarshkanswal/Machine-Learning-application-on-Air-quality-dataset | 12d0aca165fe0faf503ca38bd6a391452b480565 | [
"MIT"
] | null | null | null | OPC_Sensor/Models With Decompositions/Models with SparsePCA/CNN/CNN_tanh_binary.ipynb | utkarshkanswal/Machine-Learning-application-on-Air-quality-dataset | 12d0aca165fe0faf503ca38bd6a391452b480565 | [
"MIT"
] | null | null | null | 65.600254 | 21,216 | 0.664249 | [
[
[
"import tensorflow as tf\ntf.config.experimental.list_physical_devices()",
"_____no_output_____"
],
[
"tf.test.is_built_with_cuda()",
"_____no_output_____"
]
],
[
[
"# Importing Libraries",
"_____no_output_____"
]
],
[
[
"import numpy... | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"... |
e7faa4c1166961994e97698a29a0373a6755c926 | 633,369 | ipynb | Jupyter Notebook | notebook/safa.ipynb | alineu/elastic_beams_in_shear_flow | 297a6080f61fdca2ce1fa0f953f9cf871811efcc | [
"MIT"
] | null | null | null | notebook/safa.ipynb | alineu/elastic_beams_in_shear_flow | 297a6080f61fdca2ce1fa0f953f9cf871811efcc | [
"MIT"
] | null | null | null | notebook/safa.ipynb | alineu/elastic_beams_in_shear_flow | 297a6080f61fdca2ce1fa0f953f9cf871811efcc | [
"MIT"
] | null | null | null | 1,671.158311 | 124,474 | 0.943202 | [
[
[
"import numpy as np\nimport pandas as pd\nimport warnings\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom matplotlib import rc, rcParams\nwarnings.filterwarnings('ignore')\npd.set_option('display.float_format', lambda x: '%.5f' % x)\nmatplotlib.rcParams['font.family'] = 'serif'\nrc('font',**... | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e7faabc977e6223bcaaafd9c91a26db880d59408 | 110,050 | ipynb | Jupyter Notebook | [python_datavisualization]Ex2_Plotly.ipynb | KeeLee-BNU/Python-_wanglab- | 5f869ec6f114f91a825ef14087fbfc768e180183 | [
"Apache-2.0"
] | null | null | null | [python_datavisualization]Ex2_Plotly.ipynb | KeeLee-BNU/Python-_wanglab- | 5f869ec6f114f91a825ef14087fbfc768e180183 | [
"Apache-2.0"
] | null | null | null | [python_datavisualization]Ex2_Plotly.ipynb | KeeLee-BNU/Python-_wanglab- | 5f869ec6f114f91a825ef14087fbfc768e180183 | [
"Apache-2.0"
] | null | null | null | 31.88007 | 10,988 | 0.353539 | [
[
[
"import os\nprint(os.getcwd())\nos.chdir(r'C:\\Users\\王浣清\\desktop')",
"C:\\Users\\王浣清\\Desktop\n"
],
[
"import pandas as pd\ndf = pd.read_csv('py_vislz_data.csv',sep=',',header=0)\ndf",
"_____no_output_____"
],
[
"import plotly.offline as of\nof.offline.init_no... | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7fab13d956f5c2ec2f7491a27daab6a38ba773f | 65,226 | ipynb | Jupyter Notebook | notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb | shenzhimo2/vertex-ai-samples | 06fcfbff4800e4aa9a69266dd9b1d3e51a618b47 | [
"Apache-2.0"
] | 2 | 2021-10-02T02:17:20.000Z | 2021-11-17T10:35:01.000Z | notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb | shenzhimo2/vertex-ai-samples | 06fcfbff4800e4aa9a69266dd9b1d3e51a618b47 | [
"Apache-2.0"
] | 4 | 2021-08-18T18:58:26.000Z | 2022-02-10T07:03:36.000Z | notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb | shenzhimo2/vertex-ai-samples | 06fcfbff4800e4aa9a69266dd9b1d3e51a618b47 | [
"Apache-2.0"
] | null | null | null | 36.809255 | 370 | 0.610508 | [
[
[
"# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable ... | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"cod... | [
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"... |
e7fac4b67f9b50bd90d726cfb0a9ca81a0ec5d39 | 36,047 | ipynb | Jupyter Notebook | sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb | P15241328/amazon-sagemaker-examples | 00cba545be0822474f070321a62d22865187e09b | [
"Apache-2.0"
] | 5 | 2019-01-19T23:53:35.000Z | 2022-01-29T14:04:31.000Z | sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb | P15241328/amazon-sagemaker-examples | 00cba545be0822474f070321a62d22865187e09b | [
"Apache-2.0"
] | 4 | 2020-09-26T01:30:01.000Z | 2022-02-10T02:20:35.000Z | sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb | P15241328/amazon-sagemaker-examples | 00cba545be0822474f070321a62d22865187e09b | [
"Apache-2.0"
] | 7 | 2020-03-04T22:23:51.000Z | 2021-07-13T14:05:46.000Z | 31.290799 | 503 | 0.558715 | [
[
[
"# Tensor analysis using Amazon SageMaker Debugger\n\nLooking at the distributions of activation inputs/outputs, gradients and weights per layer can give useful insights. For instance, it helps to understand whether the model runs into problems like neuron saturation, whether there are layers in your ... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"mar... | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]... |
e7fad06b9068bf385b4515650ba22feffa990492 | 22,320 | ipynb | Jupyter Notebook | site/ja/tutorials/text/word_embeddings.ipynb | mulka/docs | c285b476d3ca3ff9e031abe9c922fb5a69da9424 | [
"Apache-2.0"
] | null | null | null | site/ja/tutorials/text/word_embeddings.ipynb | mulka/docs | c285b476d3ca3ff9e031abe9c922fb5a69da9424 | [
"Apache-2.0"
] | null | null | null | site/ja/tutorials/text/word_embeddings.ipynb | mulka/docs | c285b476d3ca3ff9e031abe9c922fb5a69da9424 | [
"Apache-2.0"
] | null | null | null | 32.536443 | 429 | 0.546192 | [
[
[
"##### Copyright 2019 The TensorFlow Authors.",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https:/... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"mar... | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
... |
e7faef8b6d1c1427272340e09456edaf255c29fe | 1,154 | ipynb | Jupyter Notebook | a.ipynb | justinms/mgd | 232b8637ee49bfe5765742de41c864794cb7ba07 | [
"MIT"
] | null | null | null | a.ipynb | justinms/mgd | 232b8637ee49bfe5765742de41c864794cb7ba07 | [
"MIT"
] | null | null | null | a.ipynb | justinms/mgd | 232b8637ee49bfe5765742de41c864794cb7ba07 | [
"MIT"
] | null | null | null | 18.918033 | 77 | 0.532062 | [
[
[
"from __future__ import print_function\nfrom ipywidgets import interact, interactive, fixed, interact_manual\nimport ipywidgets as widgets\n\ndef f(x):\n return x\n\ninteract(f, x=10);",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
e7faf0401a80adc9a61f04fce0ae736d61b5b9ab | 101,770 | ipynb | Jupyter Notebook | pca/pca.ipynb | myusernameisuseless/python_for_data_analysis_mailru_mipt | bca30632f1b5c4608de6e1a68ffeb9e84cfc6135 | [
"Apache-2.0"
] | null | null | null | pca/pca.ipynb | myusernameisuseless/python_for_data_analysis_mailru_mipt | bca30632f1b5c4608de6e1a68ffeb9e84cfc6135 | [
"Apache-2.0"
] | null | null | null | pca/pca.ipynb | myusernameisuseless/python_for_data_analysis_mailru_mipt | bca30632f1b5c4608de6e1a68ffeb9e84cfc6135 | [
"Apache-2.0"
] | null | null | null | 102.384306 | 58,556 | 0.788543 | [
[
[
"# Методы обучения без учителя\n## Метод главных компонент",
"_____no_output_____"
],
[
"<font color = 'red'> Внимание! </font> Решение данной задачи предполагает, что у вас установлены библиотека numpy версии 1.16.4 и выше и библиотека scikit-learn версии 0.21.2 и выше. В следующ... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
... |
e7faf20f3777b2c96c391a95f0dc12b5339f4569 | 33,062 | ipynb | Jupyter Notebook | my_notebooks/eval10_experiment5.ipynb | MichelML/ml-aging | b54470c00450da7d5b50e7be4a1f162f1c4b8531 | [
"Apache-2.0"
] | 7 | 2019-07-08T06:24:53.000Z | 2022-03-22T13:41:00.000Z | my_notebooks/eval10_experiment5.ipynb | MichelML/ml-aging | b54470c00450da7d5b50e7be4a1f162f1c4b8531 | [
"Apache-2.0"
] | null | null | null | my_notebooks/eval10_experiment5.ipynb | MichelML/ml-aging | b54470c00450da7d5b50e7be4a1f162f1c4b8531 | [
"Apache-2.0"
] | 2 | 2019-08-19T13:43:49.000Z | 2019-08-25T02:01:48.000Z | 57.599303 | 1,705 | 0.633235 | [
[
[
"## Load libraries",
"_____no_output_____"
]
],
[
[
"!pip install -q -r requirements.txt",
"\u001b[31mmenpo 0.8.1 has requirement matplotlib<2.0,>=1.4, but you'll have matplotlib 3.0.2 which is incompatible.\u001b[0m\n\u001b[31mmenpo 0.8.1 has requirement pillow<5.0,>=3.0... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7fafaba15275aebab1b6617d1952ecedff094b0 | 265,282 | ipynb | Jupyter Notebook | Lab01/lmbaeza-lecture1.ipynb | lmbaeza/numerical-methods-2021 | 9e3d1ec7039067cf2a33a10328b307e7a27479c7 | [
"MIT"
] | null | null | null | Lab01/lmbaeza-lecture1.ipynb | lmbaeza/numerical-methods-2021 | 9e3d1ec7039067cf2a33a10328b307e7a27479c7 | [
"MIT"
] | null | null | null | Lab01/lmbaeza-lecture1.ipynb | lmbaeza/numerical-methods-2021 | 9e3d1ec7039067cf2a33a10328b307e7a27479c7 | [
"MIT"
] | null | null | null | 193.213401 | 54,086 | 0.883449 | [
[
[
"#Introduction to the Research Environment\n\nThe research environment is powered by IPython notebooks, which allow one to perform a great deal of data analysis and statistical validation. We'll demonstrate a few simple techniques here.",
"_____no_output_____"
],
[
"##Code Cells vs... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"mar... | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]... |
e7fafeff9fe7919ad9abdcc991d92ced34444a8c | 9,215 | ipynb | Jupyter Notebook | assignment_2/sudoku.ipynb | sandilya761/CSD311-Assignments | aaa59e8eb6db446611c9c637c6c0ebd8dd7c5573 | [
"MIT"
] | null | null | null | assignment_2/sudoku.ipynb | sandilya761/CSD311-Assignments | aaa59e8eb6db446611c9c637c6c0ebd8dd7c5573 | [
"MIT"
] | null | null | null | assignment_2/sudoku.ipynb | sandilya761/CSD311-Assignments | aaa59e8eb6db446611c9c637c6c0ebd8dd7c5573 | [
"MIT"
] | null | null | null | 30.716667 | 150 | 0.476831 | [
[
[
"def cartesian_product(x,y):\n \n return [a+b for a in x for b in y] \n# takes two iterable values and return the cartesian product in a list\n",
"_____no_output_____"
],
[
"# displays the game board\n\ndef display_game_board(values):\n \n print('')\n \n rows = 'A... | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7fb03f645ae1e97ff59d1fbaa1c9ac9809352f5 | 661 | ipynb | Jupyter Notebook | notebooks/notyet/basics.ipynb | liuxfiu/qmodels | f04d28923c623495d7d1cc3962fb8cac61dc2685 | [
"MIT"
] | null | null | null | notebooks/notyet/basics.ipynb | liuxfiu/qmodels | f04d28923c623495d7d1cc3962fb8cac61dc2685 | [
"MIT"
] | null | null | null | notebooks/notyet/basics.ipynb | liuxfiu/qmodels | f04d28923c623495d7d1cc3962fb8cac61dc2685 | [
"MIT"
] | null | null | null | 16.525 | 34 | 0.515885 | [
[
[
"## Basic Queuing Models",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown"
]
] |
e7fb06af789f63db8696556322537565f0531b2c | 9,450 | ipynb | Jupyter Notebook | Resample_Audio.ipynb | materialvision/melgan-neurips | 928ebe4571617af6fc8929ae3af8c07d148413ab | [
"MIT"
] | 2 | 2020-12-14T12:31:50.000Z | 2021-06-24T02:46:46.000Z | Resample_Audio.ipynb | materialvision/melgan-neurips | 928ebe4571617af6fc8929ae3af8c07d148413ab | [
"MIT"
] | null | null | null | Resample_Audio.ipynb | materialvision/melgan-neurips | 928ebe4571617af6fc8929ae3af8c07d148413ab | [
"MIT"
] | 4 | 2020-09-20T01:49:03.000Z | 2021-11-18T17:58:16.000Z | 38.888889 | 1,295 | 0.533439 | [
[
[
"<a href=\"https://colab.research.google.com/github/buganart/melgan-neurips/blob/master/Resample_Audio.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"#@markdown Befor... | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7fb0ea4786edefd82552976617378cc7661d804 | 77,754 | ipynb | Jupyter Notebook | nn.ipynb | GavinPHR/Parser | 3e7dd453d756526105edf8cacd35c72481940c0e | [
"MIT"
] | 1 | 2021-08-04T12:24:46.000Z | 2021-08-04T12:24:46.000Z | nn.ipynb | GavinPHR/Parser | 3e7dd453d756526105edf8cacd35c72481940c0e | [
"MIT"
] | null | null | null | nn.ipynb | GavinPHR/Parser | 3e7dd453d756526105edf8cacd35c72481940c0e | [
"MIT"
] | null | null | null | 567.547445 | 51,810 | 0.936762 | [
[
[
"import config\nfrom preprocessing import mappings, transforms, treebank_reader\nfrom training import parameter, rules, rules_and_count\n\nif __name__ == '__main__':\n # mp.set_start_method('fork')\n tb = treebank_reader.TreebankReader(config.train_file)\n config.train = tb.read()\n tb = t... | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
e7fb224bbc97d4b21ebff985ef056a3209f33037 | 459,120 | ipynb | Jupyter Notebook | reproduce_plots/rj4a_plots.ipynb | nasa/1d-pinn-reconstruction | 9798c1696447fa9f1c7098cb12e49ed60d736f67 | [
"NASA-1.3",
"BSD-3-Clause"
] | 2 | 2021-11-18T10:36:59.000Z | 2022-01-19T16:35:41.000Z | reproduce_plots/rj4a_plots.ipynb | nasa/1d-pinn-reconstruction | 9798c1696447fa9f1c7098cb12e49ed60d736f67 | [
"NASA-1.3",
"BSD-3-Clause"
] | null | null | null | reproduce_plots/rj4a_plots.ipynb | nasa/1d-pinn-reconstruction | 9798c1696447fa9f1c7098cb12e49ed60d736f67 | [
"NASA-1.3",
"BSD-3-Clause"
] | null | null | null | 762.657807 | 76,380 | 0.951157 | [
[
[
"# basic libraries\nimport numpy as np\nimport tensorflow as tf\nimport tensorflow.keras.backend as K\nimport mhd_numerical_diff2 as mhdmod\nimport traj_utilities as tju\nimport matplotlib.pyplot as plt\nimport h5py as h5",
"_____no_output_____"
],
[
"#trained model weights\nmodel_... | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7fb2418fd7db874e82f4a3262b6cc2898bb1999 | 625,448 | ipynb | Jupyter Notebook | talktorials/5_compound_clustering/T5_compound_clustering.ipynb | caramirezs/TeachOpenCADD | 8fcf0b388822bbffce5bad3b7c818fb100d942d7 | [
"CC-BY-4.0"
] | null | null | null | talktorials/5_compound_clustering/T5_compound_clustering.ipynb | caramirezs/TeachOpenCADD | 8fcf0b388822bbffce5bad3b7c818fb100d942d7 | [
"CC-BY-4.0"
] | null | null | null | talktorials/5_compound_clustering/T5_compound_clustering.ipynb | caramirezs/TeachOpenCADD | 8fcf0b388822bbffce5bad3b7c818fb100d942d7 | [
"CC-BY-4.0"
] | null | null | null | 529.591871 | 119,128 | 0.943017 | [
[
[
"# Talktorial 5\n\n# Compound clustering\n\n#### Developed in the CADD seminars 2017 and 2018, AG Volkamer, Charité/FU Berlin \n\nCalvinna Caswara and Gizem Spriewald",
"_____no_output_____"
],
[
"## Aim of this talktorial\n\nSimilar compounds might bind to the same targets and sho... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"mar... | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
... |
e7fb432eef1dad67cdda009b4932b8452dcc0bc3 | 20,743 | ipynb | Jupyter Notebook | site/en/r2/tutorials/load_data/text.ipynb | crypdra/docs | 41ab06fd14b3a3dff933bb80b19ce46c7c5781cf | [
"Apache-2.0"
] | 2 | 2019-10-25T18:51:16.000Z | 2019-10-25T18:51:18.000Z | site/en/r2/tutorials/load_data/text.ipynb | crypdra/docs | 41ab06fd14b3a3dff933bb80b19ce46c7c5781cf | [
"Apache-2.0"
] | null | null | null | site/en/r2/tutorials/load_data/text.ipynb | crypdra/docs | 41ab06fd14b3a3dff933bb80b19ce46c7c5781cf | [
"Apache-2.0"
] | null | null | null | 31.052395 | 368 | 0.529191 | [
[
[
"##### Copyright 2018 The TensorFlow Authors.\n\n",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# htt... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"mar... | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"... |
e7fb445694ef571cf78cc94ff45cc66bb50630f1 | 3,631 | ipynb | Jupyter Notebook | fMRI_Data_Analysis.ipynb | npinak/fMRI | 30ce9f1111e37967e5958e1ad0bbb30fcca8bf12 | [
"MIT"
] | null | null | null | fMRI_Data_Analysis.ipynb | npinak/fMRI | 30ce9f1111e37967e5958e1ad0bbb30fcca8bf12 | [
"MIT"
] | null | null | null | fMRI_Data_Analysis.ipynb | npinak/fMRI | 30ce9f1111e37967e5958e1ad0bbb30fcca8bf12 | [
"MIT"
] | null | null | null | 36.31 | 385 | 0.576976 | [
[
[
"**fMRI Preprocessing**",
"_____no_output_____"
],
[
"- Converted DICOM images to NIfTI files.\n- Used SPM and R to preprocess fMRI data by correcting for slice timing, realigning to compensate for head motion, co-registering the fMRI data to a MP-RAGE image using a rigid body tran... | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e7fb47e67456c92b1171b67b077c325dc249fa9e | 32,701 | ipynb | Jupyter Notebook | Files/.ipynb_checkpoints/fmovies_tidy-checkpoint.ipynb | nibukdk/web-scrapping-fmovie.to | af274c3fee252cf75f1422020f546da25b5275a2 | [
"MIT"
] | 1 | 2021-05-14T20:01:21.000Z | 2021-05-14T20:01:21.000Z | Files/.ipynb_checkpoints/fmovies_tidy-checkpoint.ipynb | islamux/web-scrapping-fmovie.to | af274c3fee252cf75f1422020f546da25b5275a2 | [
"MIT"
] | null | null | null | Files/.ipynb_checkpoints/fmovies_tidy-checkpoint.ipynb | islamux/web-scrapping-fmovie.to | af274c3fee252cf75f1422020f546da25b5275a2 | [
"MIT"
] | 2 | 2021-05-14T19:57:56.000Z | 2021-05-24T01:33:29.000Z | 31.810311 | 250 | 0.455277 | [
[
[
"# Introduction.",
"_____no_output_____"
],
[
"Project is the continuation of web crawling of website fmovies's [most-watched](https://fmovies.to/most-watched) section analysis for the website. \nThis is the second part. In part one we crawled websites and extracted informations. I... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]... |
e7fb4aabecef329d09941949656da49a83828b7e | 19,675 | ipynb | Jupyter Notebook | configuration.ipynb | mesameki/MachineLearningNotebooks | 4fe8c1702d5d2934beee599e977fd7581c441780 | [
"MIT"
] | 2 | 2020-07-12T02:37:49.000Z | 2021-09-09T09:55:32.000Z | configuration.ipynb | mesameki/MachineLearningNotebooks | 4fe8c1702d5d2934beee599e977fd7581c441780 | [
"MIT"
] | null | null | null | configuration.ipynb | mesameki/MachineLearningNotebooks | 4fe8c1702d5d2934beee599e977fd7581c441780 | [
"MIT"
] | 3 | 2020-07-14T21:33:01.000Z | 2021-05-20T17:27:48.000Z | 51.370757 | 638 | 0.614841 | [
[
[
"Copyright (c) Microsoft Corporation. All rights reserved.\n\nLicensed under the MIT License.",
"_____no_output_____"
],
[
"",
"_____no_output_____"
... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"... |
e7fb578499c4129ee1c2a7c95a78c8836f3d22b2 | 72,369 | ipynb | Jupyter Notebook | lessons/01.base-types.ipynb | aodarc/LIST-010 | 4579a047ca1ae0266f368349ea4536c6eb367f97 | [
"MIT"
] | null | null | null | lessons/01.base-types.ipynb | aodarc/LIST-010 | 4579a047ca1ae0266f368349ea4536c6eb367f97 | [
"MIT"
] | 4 | 2018-12-19T13:41:12.000Z | 2019-01-14T15:11:11.000Z | lessons/01.base-types.ipynb | aodarc/LIST-010 | 4579a047ca1ae0266f368349ea4536c6eb367f97 | [
"MIT"
] | null | null | null | 19.221514 | 424 | 0.436858 | [
[
[
"a = 'Hello'",
"_____no_output_____"
],
[
"d = {'vasilii': 'red'}",
"_____no_output_____"
],
[
"d['vasilii']",
"_____no_output_____"
],
[
"7.42",
"_____no_output_____"
],
[
"import math",
"_____no_output_____"
],... | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
... |
e7fb74a8a154b29dfa0c3785fd22919b1fad3103 | 49,318 | ipynb | Jupyter Notebook | Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb | professorjar/curso-de-jogos- | e20bd2ec1af76d72efd8a3485fe6ffd6eb674ea2 | [
"MIT"
] | null | null | null | Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb | professorjar/curso-de-jogos- | e20bd2ec1af76d72efd8a3485fe6ffd6eb674ea2 | [
"MIT"
] | null | null | null | Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb | professorjar/curso-de-jogos- | e20bd2ec1af76d72efd8a3485fe6ffd6eb674ea2 | [
"MIT"
] | null | null | null | 89.669091 | 12,784 | 0.853319 | [
[
[
"## Importing the images into this script",
"_____no_output_____"
]
],
[
[
"import os\nimport numpy as np\n\ndirectory = 'C:/Users/joaovitor/Desktop/Meu_Canal/DINO/'\njump_img = os.listdir(os.path.join(directory, 'jump'))\nnojump_img = os.listdir(os.path.join(directory, 'no_jum... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"mar... | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"m... |
e7fbb50167c81ad7a08e34d87acef609e3a4dec1 | 50,723 | ipynb | Jupyter Notebook | climate_starter.ipynb | tanmayrp/sqlalchemy-challenge | bbe4c7e60581851cb4195775f1a032869642caf2 | [
"ADSL"
] | null | null | null | climate_starter.ipynb | tanmayrp/sqlalchemy-challenge | bbe4c7e60581851cb4195775f1a032869642caf2 | [
"ADSL"
] | null | null | null | climate_starter.ipynb | tanmayrp/sqlalchemy-challenge | bbe4c7e60581851cb4195775f1a032869642caf2 | [
"ADSL"
] | null | null | null | 107.010549 | 23,828 | 0.861996 | [
[
[
"%matplotlib inline\nfrom matplotlib import style\nstyle.use('fivethirtyeight')\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"import numpy as np\nimport pandas as pd\nimport datetime as dt",
"_____no_output_____"
]
],
[
[
"# Reflect Tables i... | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"c... |
e7fbb947b8aab4a1aa8b127ab35413260a98aa3e | 2,144 | ipynb | Jupyter Notebook | joiner.ipynb | datastory/CFS-order-generator | 65dd70d6bef5650030a22a51f5813b4cb5cc89b0 | [
"MIT"
] | null | null | null | joiner.ipynb | datastory/CFS-order-generator | 65dd70d6bef5650030a22a51f5813b4cb5cc89b0 | [
"MIT"
] | null | null | null | joiner.ipynb | datastory/CFS-order-generator | 65dd70d6bef5650030a22a51f5813b4cb5cc89b0 | [
"MIT"
] | null | null | null | 24.089888 | 112 | 0.489272 | [
[
[
"from random import randint\nimport os\nfrom pydub import AudioSegment",
"_____no_output_____"
],
[
"def randomizer():\n tracks = []\n tracks.append('sil_' + str(randint(3, 6)))\n\n count = 1\n while (count < randint(4, 5)):\n count = count + 1\n tracks.ap... | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
e7fbbb7b2a23ed965663499c0dc4ef2ac0bfa2ea | 25,265 | ipynb | Jupyter Notebook | courses/ml/logistic_regression.ipynb | obs145628/ml-notebooks | 08a64962e106ec569039ab204a7ae4c900783b6b | [
"MIT"
] | 1 | 2020-10-29T11:26:00.000Z | 2020-10-29T11:26:00.000Z | courses/ml/logistic_regression.ipynb | obs145628/ml-notebooks | 08a64962e106ec569039ab204a7ae4c900783b6b | [
"MIT"
] | 5 | 2021-03-18T21:33:45.000Z | 2022-03-11T23:34:50.000Z | courses/ml/logistic_regression.ipynb | obs145628/ml-notebooks | 08a64962e106ec569039ab204a7ae4c900783b6b | [
"MIT"
] | 1 | 2019-12-23T21:50:02.000Z | 2019-12-23T21:50:02.000Z | 32.143766 | 233 | 0.536513 | [
[
[
"import sys\nsys.path.append('../../pyutils')\n\nimport numpy as np\nimport scipy.linalg\nimport torch\n\nimport metrics\nimport utils\nfrom sklearn.linear_model import LogisticRegression\n\nnp.random.seed(12)",
"_____no_output_____"
]
],
[
[
"# Binary Logistic Regression\n\nLe... | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markd... |
e7fbbb94f04a170a144822dfe9bf76d50a7b0190 | 17,040 | ipynb | Jupyter Notebook | pria_lifechem/analysis/scaffold/scaffold_Keck_Pria_FP_data.ipynb | chao1224/pria_lifechem | 1fd892505a45695c6197f8d711a8a37589cd7097 | [
"MIT"
] | 5 | 2018-05-14T10:15:13.000Z | 2021-03-15T17:18:10.000Z | pria_lifechem/analysis/scaffold/scaffold_Keck_Pria_FP_data.ipynb | chao1224/pria_lifechem | 1fd892505a45695c6197f8d711a8a37589cd7097 | [
"MIT"
] | 5 | 2018-05-05T21:04:11.000Z | 2019-06-24T22:05:35.000Z | pria_lifechem/analysis/scaffold/scaffold_Keck_Pria_FP_data.ipynb | chao1224/pria_lifechem | 1fd892505a45695c6197f8d711a8a37589cd7097 | [
"MIT"
] | 2 | 2019-10-18T23:42:27.000Z | 2020-07-08T19:46:14.000Z | 31.497227 | 386 | 0.578286 | [
[
[
"import os\nfrom virtual_screening.function import read_merged_data\nfrom rdkit.Chem.Scaffolds import MurckoScaffold\nfrom rdkit import Chem",
"/home/sliu426/.local/lib/python2.7/site-packages/sklearn/cross_validation.py:41: DeprecationWarning: This module was deprecated in version 0.18 in favor... | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
... |
e7fbbca8f7301a286d2c205e85acd1c80d3debc6 | 19,864 | ipynb | Jupyter Notebook | assignments/assignment3/PyTorch_CNN.ipynb | pavel2805/my_dlcoarse_ai | f535b956f3f9c8ee1d85f014ebd9da517734a473 | [
"MIT"
] | null | null | null | assignments/assignment3/PyTorch_CNN.ipynb | pavel2805/my_dlcoarse_ai | f535b956f3f9c8ee1d85f014ebd9da517734a473 | [
"MIT"
] | null | null | null | assignments/assignment3/PyTorch_CNN.ipynb | pavel2805/my_dlcoarse_ai | f535b956f3f9c8ee1d85f014ebd9da517734a473 | [
"MIT"
] | null | null | null | 31.086072 | 236 | 0.557038 | [
[
[
"# Задание 3.2 - сверточные нейронные сети (CNNs) в PyTorch\n\nЭто упражнение мы буде выполнять в Google Colab - https://colab.research.google.com/ \nGoogle Colab позволяет запускать код в notebook в облаке Google, где можно воспользоваться бесплатным GPU! \n\nАвторы курса благодарят компанию Google... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"... |
e7fbc78d59ffeb476d58aac34e3b1faf64dbb39e | 36,196 | ipynb | Jupyter Notebook | misc/baxter/derivation.ipynb | YoshimitsuMatsutaIe/rmp_test | a7c94ff68b518ef51821484795c308c2c8519c4c | [
"MIT"
] | null | null | null | misc/baxter/derivation.ipynb | YoshimitsuMatsutaIe/rmp_test | a7c94ff68b518ef51821484795c308c2c8519c4c | [
"MIT"
] | null | null | null | misc/baxter/derivation.ipynb | YoshimitsuMatsutaIe/rmp_test | a7c94ff68b518ef51821484795c308c2c8519c4c | [
"MIT"
] | null | null | null | 39.088553 | 292 | 0.448834 | [
[
[
"baxterのmap求める",
"_____no_output_____"
]
],
[
[
"import sympy as sy\nfrom sympy import sin, cos, pi, sqrt\nimport math\n#from math import pi\nq = sy.Matrix(sy.MatrixSymbol('q', 7, 1))\nL, h, H, L0, L1, L2, L3, L4, L5, L6, R = sy.symbols('L, h, H, L0, L1, L2, L3, L4, L5, L6, R')... | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7fbe104bad3079635a0970cc55d68a6ad4a3d72 | 674,184 | ipynb | Jupyter Notebook | total-collapsing.ipynb | stefaniaebli/dmt-signal-processing | d8efb7ed3cb6b506b40f011ebee12774e01c1c4f | [
"MIT"
] | null | null | null | total-collapsing.ipynb | stefaniaebli/dmt-signal-processing | d8efb7ed3cb6b506b40f011ebee12774e01c1c4f | [
"MIT"
] | null | null | null | total-collapsing.ipynb | stefaniaebli/dmt-signal-processing | d8efb7ed3cb6b506b40f011ebee12774e01c1c4f | [
"MIT"
] | null | null | null | 1,484.986784 | 131,084 | 0.95868 | [
[
[
"import numpy as np\nimport random\nimport gudhi as gd\nfrom matplotlib import pyplot as plt\nfrom matplotlib import colors as mcolors\nimport sys\nsys.path.append('code')\nimport dmtsignal as dmt\nimport dmtvisual as dmtvis\nimport importlib\nimport warnings\nwarnings.filterwarnings(\"ignore\")\ndmt ... | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e7fbf792b24818cfc6f63d8fbdac096f078813bd | 307,560 | ipynb | Jupyter Notebook | d2l-en/chapter_computer-vision/object-detection-dataset.ipynb | mru4913/Dive-into-Deep-Learning | bcd16ac602f011292bd1d5540ef3833cd3fd7c72 | [
"MIT"
] | null | null | null | d2l-en/chapter_computer-vision/object-detection-dataset.ipynb | mru4913/Dive-into-Deep-Learning | bcd16ac602f011292bd1d5540ef3833cd3fd7c72 | [
"MIT"
] | null | null | null | d2l-en/chapter_computer-vision/object-detection-dataset.ipynb | mru4913/Dive-into-Deep-Learning | bcd16ac602f011292bd1d5540ef3833cd3fd7c72 | [
"MIT"
] | null | null | null | 1,627.301587 | 299,432 | 0.957995 | [
[
[
"# Object Detection Data Set (Pikachu)\n\nThere are no small data sets, like MNIST or Fashion-MNIST, in the object detection field. In order to quickly test models, we are going to assemble a small data set. First, we generate 1000 Pikachu images of different angles and sizes using an open source 3D P... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e7fbffeed93e183ea84713cd6eefeee6d0d935a4 | 6,351 | ipynb | Jupyter Notebook | elastic-pynotebook.ipynb | willingc/bouncy-notes | c27a5eb512a0c73f6c1b9e3a0668a2a9fcf2486c | [
"MIT"
] | null | null | null | elastic-pynotebook.ipynb | willingc/bouncy-notes | c27a5eb512a0c73f6c1b9e3a0668a2a9fcf2486c | [
"MIT"
] | null | null | null | elastic-pynotebook.ipynb | willingc/bouncy-notes | c27a5eb512a0c73f6c1b9e3a0668a2a9fcf2486c | [
"MIT"
] | null | null | null | 22.682143 | 124 | 0.447646 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e7fc053b76efaf2bd886cba73cb9ea93c3e575dc | 171,147 | ipynb | Jupyter Notebook | jupyter_notebooks/feature_engineering_pt1.ipynb | StevenVuong/Udacity-ML-Engineer-Nanodegree-Capstone-Project | ffe2aa36475e658a1d29853caa7cca53d34fe668 | [
"MIT"
] | null | null | null | jupyter_notebooks/feature_engineering_pt1.ipynb | StevenVuong/Udacity-ML-Engineer-Nanodegree-Capstone-Project | ffe2aa36475e658a1d29853caa7cca53d34fe668 | [
"MIT"
] | null | null | null | jupyter_notebooks/feature_engineering_pt1.ipynb | StevenVuong/Udacity-ML-Engineer-Nanodegree-Capstone-Project | ffe2aa36475e658a1d29853caa7cca53d34fe668 | [
"MIT"
] | null | null | null | 44.281242 | 7,774 | 0.438196 | [
[
[
"The Goal of this Notebook is to predict Future Sales given historical data (daily granularity). This is a part of the kaggle competition \"Predict Future Sales\": https://www.kaggle.com/c/competitive-data-science-predict-future-sales/data Where more information about the problem, dataset and other so... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"mar... | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
]... |
e7fc055e83432b991ecf55f33b114de88b5604fe | 33,917 | ipynb | Jupyter Notebook | assignments/assignment1/Linear classifier_solution.ipynb | tbb/dlcourse_ai | d8a14d30f7174b449c9bb79f3b87d4822d4f0f4b | [
"MIT"
] | null | null | null | assignments/assignment1/Linear classifier_solution.ipynb | tbb/dlcourse_ai | d8a14d30f7174b449c9bb79f3b87d4822d4f0f4b | [
"MIT"
] | null | null | null | assignments/assignment1/Linear classifier_solution.ipynb | tbb/dlcourse_ai | d8a14d30f7174b449c9bb79f3b87d4822d4f0f4b | [
"MIT"
] | null | null | null | 46.717631 | 10,620 | 0.71159 | [
[
[
"# Задание 1.2 - Линейный классификатор (Linear classifier)\n\nВ этом задании мы реализуем другую модель машинного обучения - линейный классификатор. Линейный классификатор подбирает для каждого класса веса, на которые нужно умножить значение каждого признака и потом сложить вместе.\nТот класс, у кото... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]... |
e7fc1596117e260d74d035ec4dcb35d4c50827b7 | 53,604 | ipynb | Jupyter Notebook | figures/Check-SED.ipynb | benjaminrose/SNIa-Local-Environments | 92713be96a89da991fe53bffcc596a5c0942fc37 | [
"MIT"
] | 1 | 2020-09-19T22:08:51.000Z | 2020-09-19T22:08:51.000Z | figures/Check-SED.ipynb | benjaminrose/SNIa-Local-Environments | 92713be96a89da991fe53bffcc596a5c0942fc37 | [
"MIT"
] | 9 | 2017-12-11T19:15:33.000Z | 2018-04-18T19:08:34.000Z | figures/Check-SED.ipynb | benjaminrose/SNIa-Local-Environments | 92713be96a89da991fe53bffcc596a5c0942fc37 | [
"MIT"
] | 3 | 2020-08-13T03:45:09.000Z | 2020-08-19T22:31:00.000Z | 169.097792 | 26,822 | 0.888068 | [
[
[
"# Is the SED Correct?\n\nIn the circle test, the SFH is totatlly bonkers. We just can not get the correct SFH back out with MCMC. Is the MCMC getting a good fit?",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt",
"_____no_output_____"... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e7fc1d74a65f9e55845d7b3c68a1e4da2d9381f2 | 65,946 | ipynb | Jupyter Notebook | pt1/scraping_classwork.ipynb | mengyuan616/scraping-lecture | e385f460beb316759dd8fc772e9d7705d4fdf1d9 | [
"MIT"
] | null | null | null | pt1/scraping_classwork.ipynb | mengyuan616/scraping-lecture | e385f460beb316759dd8fc772e9d7705d4fdf1d9 | [
"MIT"
] | null | null | null | pt1/scraping_classwork.ipynb | mengyuan616/scraping-lecture | e385f460beb316759dd8fc772e9d7705d4fdf1d9 | [
"MIT"
] | null | null | null | 35.340836 | 96 | 0.394444 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e7fc27e0968497ef49e3253e59d28d3a7875bd01 | 150,979 | ipynb | Jupyter Notebook | 006_bd_proj_dennis.ipynb | sh5864/bigdata-proj | 979b11d106dd14bfa76d286ed2fc85f77fb93802 | [
"MIT"
] | null | null | null | 006_bd_proj_dennis.ipynb | sh5864/bigdata-proj | 979b11d106dd14bfa76d286ed2fc85f77fb93802 | [
"MIT"
] | null | null | null | 006_bd_proj_dennis.ipynb | sh5864/bigdata-proj | 979b11d106dd14bfa76d286ed2fc85f77fb93802 | [
"MIT"
] | null | null | null | 140.445581 | 27,074 | 0.576014 | [
[
[
"**Summary Of Findings**:\nIt was found that wildfire frequency across the United State has been increasing in the past decade. Although fire and fire damage was generally localized to mostly the west coast in the past, fire frequency has been gradually increasing in states east of it in the contine... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"c... |
e7fc2fcab649236fce830ceb290b98b2c3113f4f | 297,181 | ipynb | Jupyter Notebook | MusicRecommendation/.ipynb_checkpoints/TestHDFTables-checkpoint.ipynb | HiKapok/KaggleCompetitions | 3d8c0e8d8f98334980c97f761262316edcd6d5e9 | [
"MIT"
] | 1 | 2018-06-27T14:14:01.000Z | 2018-06-27T14:14:01.000Z | MusicRecommendation/.ipynb_checkpoints/TestHDFTables-checkpoint.ipynb | HiKapok/KaggleCompetitions | 3d8c0e8d8f98334980c97f761262316edcd6d5e9 | [
"MIT"
] | 1 | 2017-12-30T01:01:52.000Z | 2018-01-05T04:09:32.000Z | MusicRecommendation/.ipynb_checkpoints/TestHDFTables-checkpoint.ipynb | HiKapok/KaggleCompetitions | 3d8c0e8d8f98334980c97f761262316edcd6d5e9 | [
"MIT"
] | 1 | 2018-06-27T14:14:16.000Z | 2018-06-27T14:14:16.000Z | 72.289224 | 385 | 0.177922 | [
[
[
"# The line below sets the environment\n# variable CUDA_VISIBLE_DEVICES\nget_ipython().magic('env CUDA_VISIBLE_DEVICES = 1')\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport multiprocessing as mp # will come in handy due to the size of the data\nimport os.path\nim... | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7fc41f38ec844e541eb633a806d0a45fc482b7a | 1,398 | ipynb | Jupyter Notebook | _downloads/plot_optimize_lidar_data.ipynb | scipy-lectures/scipy-lectures.github.com | 637a0d9cc2c95ed196550371e44a4cc6e150c830 | [
"CC-BY-4.0"
] | 48 | 2015-01-13T22:15:34.000Z | 2022-01-04T20:17:41.000Z | _downloads/plot_optimize_lidar_data.ipynb | scipy-lectures/scipy-lectures.github.com | 637a0d9cc2c95ed196550371e44a4cc6e150c830 | [
"CC-BY-4.0"
] | 1 | 2017-04-25T09:01:00.000Z | 2017-04-25T13:48:56.000Z | _downloads/plot_optimize_lidar_data.ipynb | scipy-lectures/scipy-lectures.github.com | 637a0d9cc2c95ed196550371e44a4cc6e150c830 | [
"CC-BY-4.0"
] | 21 | 2015-03-16T17:52:23.000Z | 2021-02-19T00:02:13.000Z | 25.888889 | 273 | 0.496423 | [
[
[
"%matplotlib inline",
"_____no_output_____"
]
],
[
[
"\nThe lidar system, data (1 of 2 datasets)\n========================================\n\nGenerate a chart of the data recorded by the lidar system\n\n",
"_____no_output_____"
]
],
[
[
"import numpy as ... | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7fc44f0ad389b2ff91bb02b8b7ffe9df1fde3e8 | 3,730 | ipynb | Jupyter Notebook | notebooks/camera-calibration.ipynb | johnnylord/mtmc-testbed | e3d331505181baa076162e1f5835e566e8f70167 | [
"MIT"
] | 1 | 2020-09-25T08:46:19.000Z | 2020-09-25T08:46:19.000Z | notebooks/camera-calibration.ipynb | johnnylord/mtmc-testbed | e3d331505181baa076162e1f5835e566e8f70167 | [
"MIT"
] | null | null | null | notebooks/camera-calibration.ipynb | johnnylord/mtmc-testbed | e3d331505181baa076162e1f5835e566e8f70167 | [
"MIT"
] | 1 | 2020-09-18T01:33:45.000Z | 2020-09-18T01:33:45.000Z | 26.642857 | 336 | 0.596515 | [
[
[
"# Camera Calibration\n\nIn multiple camera tracking with overlapping view, we can utilize information from different camera to facilitate tracking algorithm.\n\n\n\nHowever, to make use of these multi-view information, we need to first calibrate cameras so that they are in the same ... | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e7fc513f85d7ff7fe7b976987bcf935985190fc1 | 30,804 | ipynb | Jupyter Notebook | docs/tutorials/keras_layers.ipynb | sarvex/lattice-1 | 784eca50cbdfedf39f183cc7d298c9fe376b69c0 | [
"Apache-2.0"
] | 508 | 2017-10-10T20:15:18.000Z | 2022-03-29T13:22:50.000Z | docs/tutorials/keras_layers.ipynb | Saiprasad16/lattice | 35f3e9d7da7f90a700d7a903e1818e82965f245c | [
"Apache-2.0"
] | 69 | 2017-10-12T05:08:57.000Z | 2022-02-15T21:43:57.000Z | docs/tutorials/keras_layers.ipynb | Saiprasad16/lattice | 35f3e9d7da7f90a700d7a903e1818e82965f245c | [
"Apache-2.0"
] | 93 | 2017-10-11T20:12:42.000Z | 2022-03-08T14:42:13.000Z | 37.292978 | 332 | 0.527756 | [
[
[
"##### Copyright 2020 The TensorFlow Authors.",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https:/... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"mar... | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"c... |
e7fc5146651d28eae0bc935decef0268f613ea98 | 23,877 | ipynb | Jupyter Notebook | MachineLearning/supervised_machine_learning/Polinamial_and_PlynomialRidge_Regression.ipynb | pavi-ninjaac/Machine_Learing_sratch | 85bc986aedd034e91a8d9c61d860477ab8a6a2e6 | [
"MIT"
] | null | null | null | MachineLearning/supervised_machine_learning/Polinamial_and_PlynomialRidge_Regression.ipynb | pavi-ninjaac/Machine_Learing_sratch | 85bc986aedd034e91a8d9c61d860477ab8a6a2e6 | [
"MIT"
] | null | null | null | MachineLearning/supervised_machine_learning/Polinamial_and_PlynomialRidge_Regression.ipynb | pavi-ninjaac/Machine_Learing_sratch | 85bc986aedd034e91a8d9c61d860477ab8a6a2e6 | [
"MIT"
] | null | null | null | 34.554269 | 153 | 0.47485 | [
[
[
"import numpy as np\nimport pandas as pd\n\nfrom itertools import combinations_with_replacement\n\nfrom sklearn.metrics import r2_score\nfrom sklearn.datasets import make_regression",
"_____no_output_____"
]
],
[
[
"# Common Regression class",
"_____no_output_____"
]
... | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
... |
e7fc557d94965badac4ab40624ccf79b2357e278 | 4,212 | ipynb | Jupyter Notebook | 9_coding quizzes/05_list_HackerRank.ipynb | lucaseo/TIL | a15b7c1d3f9666a682f0b95ab320e8567495559a | [
"MIT"
] | null | null | null | 9_coding quizzes/05_list_HackerRank.ipynb | lucaseo/TIL | a15b7c1d3f9666a682f0b95ab320e8567495559a | [
"MIT"
] | null | null | null | 9_coding quizzes/05_list_HackerRank.ipynb | lucaseo/TIL | a15b7c1d3f9666a682f0b95ab320e8567495559a | [
"MIT"
] | null | null | null | 23.530726 | 232 | 0.451804 | [
[
[
"# Lists \nfrom: [HackerRank](https://www.hackerrank.com/challenges/python-lists/problem) - (easy)\n\nConsider a list (list = []). You can perform the following commands:\n\ninsert `i`, `e`: Insert integer at position. \nprint(): Print the list. \nremove `e`: Delete the first occurrence of integer.... | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
]
] |
e7fc608931132047521db5c3504ce73ed0f06eb4 | 502,108 | ipynb | Jupyter Notebook | frameworks/tensorflow/adanet_objective.ipynb | jiankaiwang/sophia.ml | b8cc450ed2a53417a3ff9431528dbbd7fcfcc6ea | [
"MIT"
] | 7 | 2019-05-03T01:18:56.000Z | 2021-08-21T18:44:17.000Z | frameworks/tensorflow/adanet_objective.ipynb | jiankaiwang/sophia.ml | b8cc450ed2a53417a3ff9431528dbbd7fcfcc6ea | [
"MIT"
] | null | null | null | frameworks/tensorflow/adanet_objective.ipynb | jiankaiwang/sophia.ml | b8cc450ed2a53417a3ff9431528dbbd7fcfcc6ea | [
"MIT"
] | 3 | 2019-01-17T03:53:31.000Z | 2022-01-27T14:33:54.000Z | 65.149604 | 969 | 0.663294 | [
[
[
"!pip install adanet",
"Collecting adanet\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/04/c4/11ac106b2f8946ebe1940ebe26ef4dd212d655c4a2e28bbcc3b5312268e4/adanet-0.3.0-py2.py3-none-any.whl (65kB)\n\u001b[K 100% |################################| 71kB 353kB/s ta 0:00:01\n\u... | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"... |
e7fc62ad1656604644187e2d1edff78d0323ed9e | 5,614 | ipynb | Jupyter Notebook | CNN/Heatmap_demo.ipynb | ucl-exoplanets/DI-Project | a05eeb66b14187bb18618f8cde17dc0f2c435ff8 | [
"CC-BY-4.0"
] | 3 | 2019-12-05T16:44:40.000Z | 2022-03-07T22:35:31.000Z | CNN/Heatmap_demo.ipynb | ucl-exoplanets/DI-Project | a05eeb66b14187bb18618f8cde17dc0f2c435ff8 | [
"CC-BY-4.0"
] | 2 | 2021-05-28T19:11:05.000Z | 2021-05-31T13:22:54.000Z | CNN/Heatmap_demo.ipynb | ucl-exoplanets/DI-Project | a05eeb66b14187bb18618f8cde17dc0f2c435ff8 | [
"CC-BY-4.0"
] | 2 | 2020-07-15T17:31:17.000Z | 2020-10-21T19:24:42.000Z | 22.821138 | 128 | 0.519238 | [
[
[
"import keras.backend as K\nimport numpy as np\nfrom keras.models import Model,load_model\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.axes_grid1 import ImageGrid\n",
"_____no_output_____"
]
],
[
[
"## load test data and loc_map",
"_____no_output_____"
]
],
... | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e7fc77289c0909639a1412adf0774f67380a3fe5 | 14,505 | ipynb | Jupyter Notebook | DecisionTree/MyDecisionTree.ipynb | QYHcrossover/ML-numpy | 863cc651ac38bc421e3b6e99f36a51267f0de0f9 | [
"MIT"
] | 12 | 2020-07-01T02:35:12.000Z | 2022-03-29T13:19:44.000Z | DecisionTree/MyDecisionTree.ipynb | QYHcrossover/ML-numpy | 863cc651ac38bc421e3b6e99f36a51267f0de0f9 | [
"MIT"
] | null | null | null | DecisionTree/MyDecisionTree.ipynb | QYHcrossover/ML-numpy | 863cc651ac38bc421e3b6e99f36a51267f0de0f9 | [
"MIT"
] | 2 | 2021-11-18T08:02:38.000Z | 2021-12-08T02:53:38.000Z | 26.712707 | 114 | 0.366563 | [
[
[
"import pandas as pd\nfrom sklearn.model_selection import train_test_split",
"_____no_output_____"
]
],
[
[
"## 构造数据集",
"_____no_output_____"
]
],
[
[
"def create_data():\n datasets = [['青年', '否', '否', '一般', '否'],\n ['青年', '否', '否', '好',... | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"... |
e7fc77440c1c968eb342fafe83b27e13330c8300 | 39,893 | ipynb | Jupyter Notebook | ActiveDebrisRemoval.ipynb | jbrneto/active-debris-removal | 361b18731cf9a0e55f4dacef8c1f3b3d16b74abd | [
"MIT"
] | null | null | null | ActiveDebrisRemoval.ipynb | jbrneto/active-debris-removal | 361b18731cf9a0e55f4dacef8c1f3b3d16b74abd | [
"MIT"
] | null | null | null | ActiveDebrisRemoval.ipynb | jbrneto/active-debris-removal | 361b18731cf9a0e55f4dacef8c1f3b3d16b74abd | [
"MIT"
] | null | null | null | 44.325556 | 181 | 0.504926 | [
[
[
"!pip install pykep\n!pip install -U TLE-tools\n!pip install astropy",
"_____no_output_____"
],
[
"import random\nimport bisect\nimport numpy\nimport scipy\nimport copy\nfrom datetime import datetime\nfrom datetime import timedelta\n# -- for debris\nimport math\nimport csv\nfrom go... | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
e7fc7bd5d2b2ee68dee5e173ef6068b8d2f39f3c | 90,693 | ipynb | Jupyter Notebook | 2. Sparse SSHIBA.ipynb | alexjorguer/SSHIBA | 785cbffb569745ab58921749bc90420494e4223b | [
"MIT"
] | 2 | 2021-05-20T10:01:54.000Z | 2021-11-17T12:02:13.000Z | 2. Sparse SSHIBA.ipynb | sevisal/SSHIBA | 785cbffb569745ab58921749bc90420494e4223b | [
"MIT"
] | null | null | null | 2. Sparse SSHIBA.ipynb | sevisal/SSHIBA | 785cbffb569745ab58921749bc90420494e4223b | [
"MIT"
] | 1 | 2021-11-17T12:02:54.000Z | 2021-11-17T12:02:54.000Z | 283.415625 | 35,116 | 0.922199 | [
[
[
"# 2. Feature SelectionModel\nAuthor: _Carlos Sevilla Salcedo (Updated: 18/07/2019)_\n\nIn this notebook we are going to present the extension to include a double sparsity in the model. The idea behind this modification is that besides imposing sparsity in the latent features, we could also force to h... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7fc815feec2734a42b857c80a0e362064c15d1f | 61,930 | ipynb | Jupyter Notebook | examples/notebooks/15Matching_sections.ipynb | fprice111/python-dts-calibration | bc972070ab1c9fe43e9ecc85ace30e2877b8cd00 | [
"BSD-3-Clause"
] | 20 | 2019-10-07T15:54:07.000Z | 2022-03-18T07:18:22.000Z | examples/notebooks/15Matching_sections.ipynb | fprice111/python-dts-calibration | bc972070ab1c9fe43e9ecc85ace30e2877b8cd00 | [
"BSD-3-Clause"
] | 90 | 2019-01-25T09:41:37.000Z | 2022-03-21T12:45:30.000Z | examples/notebooks/15Matching_sections.ipynb | fprice111/python-dts-calibration | bc972070ab1c9fe43e9ecc85ace30e2877b8cd00 | [
"BSD-3-Clause"
] | 9 | 2019-10-16T12:37:59.000Z | 2022-02-18T21:24:29.000Z | 210.646259 | 30,844 | 0.907121 | [
[
[
"# 15. Calibration using matching sections",
"_____no_output_____"
],
[
"In notebook 14 we showed how you can take splices or connectors within your calibration into account. To then calibrate the cable we used reference sections on both sides of the splice. If these are not availa... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7fc8838dee2b490c19691366f98441c26bcb604 | 295,396 | ipynb | Jupyter Notebook | nbs/julia_sets.ipynb | adiamaan92/brotground | 25263438b69fa46c2c3fc0667a42bd6524b76d9e | [
"MIT"
] | 3 | 2021-11-24T03:12:35.000Z | 2022-02-07T02:15:45.000Z | nbs/julia_sets.ipynb | adiamaan92/brotground | 25263438b69fa46c2c3fc0667a42bd6524b76d9e | [
"MIT"
] | null | null | null | nbs/julia_sets.ipynb | adiamaan92/brotground | 25263438b69fa46c2c3fc0667a42bd6524b76d9e | [
"MIT"
] | null | null | null | 2,204.447761 | 164,334 | 0.964624 | [
[
[
"!pip install brotground==0.1.3",
"_____no_output_____"
],
[
"from brotground import JuliaBrot\nfrom brotground.resources import quadratic_julia_set\nfrom brotground.renderers import StaticRenderer",
"_____no_output_____"
],
[
"matplot_renderer = StaticRenderer(... | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
e7fc93e8df725b87b3c4317e389965bb87643cfc | 51,658 | ipynb | Jupyter Notebook | Module4/IntroToRegularization.ipynb | AlephEleven/Deep-Learning-Explained | e08138d5dcd98dad30e0c6950553b94720c0d9ad | [
"Unlicense"
] | 1 | 2022-02-26T22:59:36.000Z | 2022-02-26T22:59:36.000Z | Module4/IntroToRegularization.ipynb | AlephEleven/Deep-Learning-Explained | e08138d5dcd98dad30e0c6950553b94720c0d9ad | [
"Unlicense"
] | null | null | null | Module4/IntroToRegularization.ipynb | AlephEleven/Deep-Learning-Explained | e08138d5dcd98dad30e0c6950553b94720c0d9ad | [
"Unlicense"
] | null | null | null | 40.263445 | 704 | 0.623311 | [
[
[
"# Deep Learning Explained\n\n# Module 4 - Lab - Introduction to Regularization for Deep Neural Nets \n\n\n\nThis lesson will introduce you to the principles of regularization required to successfully train deep neural networks. In this lesson you will:\n\n1. Understand the need for regularization... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"mar... | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
... |
e7fca7be0c9548cf83d8b3bb03a0918a52449732 | 167,393 | ipynb | Jupyter Notebook | convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb | mwizasimbeye11/udacity-pytorch-scholar-challenge | 5d76f66b6d3185a01fb37dc17302a13eb6299da4 | [
"MIT"
] | null | null | null | convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb | mwizasimbeye11/udacity-pytorch-scholar-challenge | 5d76f66b6d3185a01fb37dc17302a13eb6299da4 | [
"MIT"
] | null | null | null | convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb | mwizasimbeye11/udacity-pytorch-scholar-challenge | 5d76f66b6d3185a01fb37dc17302a13eb6299da4 | [
"MIT"
] | null | null | null | 317.032197 | 99,196 | 0.919776 | [
[
[
"# Multi-Layer Perceptron, MNIST\n---\nIn this notebook, we will train an MLP to classify images from the [MNIST database](http://yann.lecun.com/exdb/mnist/) hand-written digit database.\n\nThe process will be broken down into the following steps:\n>1. Load and visualize the data\n2. Define a neural n... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"m... |
e7fcab98b23c0c50af333d238aace461d12cd684 | 27,216 | ipynb | Jupyter Notebook | scraping/SinhalaSongBook/SinhalaSongBook/.ipynb_checkpoints/index_id_add-checkpoint.ipynb | harith96/Sinhala-Songs-Search-Engine | 010b6d4cf5ad2a3621b1e71f01614d396e5e13a4 | [
"MIT"
] | null | null | null | scraping/SinhalaSongBook/SinhalaSongBook/.ipynb_checkpoints/index_id_add-checkpoint.ipynb | harith96/Sinhala-Songs-Search-Engine | 010b6d4cf5ad2a3621b1e71f01614d396e5e13a4 | [
"MIT"
] | null | null | null | scraping/SinhalaSongBook/SinhalaSongBook/.ipynb_checkpoints/index_id_add-checkpoint.ipynb | harith96/Sinhala-Songs-Search-Engine | 010b6d4cf5ad2a3621b1e71f01614d396e5e13a4 | [
"MIT"
] | null | null | null | 40.926316 | 84 | 0.340204 | [
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"df = pd.read_json(\"lyrics.json\")",
"_____no_output_____"
],
[
"global count\ncount = 0\n\ndef insert_index_id_columns(row):\n global count\n count = 1\n return pd.Series([count, \"songs\"], index=['so... | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
e7fcadd0a5bc034ee7b2037243e3dcc061d0ed4b | 33,880 | ipynb | Jupyter Notebook | Week05/WS04/Workshop04.ipynb | ds-connectors/Physics-88-Fa21 | 147ea6ea06798fc6e7d7eac9f06076365c291fc9 | [
"BSD-3-Clause"
] | 1 | 2021-08-30T17:52:58.000Z | 2021-08-30T17:52:58.000Z | Week05/WS04/Workshop04.ipynb | ds-connectors/Physics-88-Fa21 | 147ea6ea06798fc6e7d7eac9f06076365c291fc9 | [
"BSD-3-Clause"
] | null | null | null | Week05/WS04/Workshop04.ipynb | ds-connectors/Physics-88-Fa21 | 147ea6ea06798fc6e7d7eac9f06076365c291fc9 | [
"BSD-3-Clause"
] | null | null | null | 36.080937 | 838 | 0.599351 | [
[
[
"## Workshop 4\n### File Input and Output (I/O)\n\n**Submit this notebook to bCourses (ipynb and pdf) to receive a grade for this Workshop.**\n\nPlease complete workshop activities in code cells in this iPython notebook. The activities titled **Practice** are purely for you to explore Python. Some of ... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"mar... | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]... |
e7fcbd661cdeb21c18d5cba1221c360584cfe2e1 | 8,613 | ipynb | Jupyter Notebook | Coin detection.ipynb | Viniths28/ML-and-AI | 5ab1ba34cf32047c0ed8318347e5de78ebabcc3c | [
"Apache-2.0"
] | null | null | null | Coin detection.ipynb | Viniths28/ML-and-AI | 5ab1ba34cf32047c0ed8318347e5de78ebabcc3c | [
"Apache-2.0"
] | 1 | 2020-09-22T17:55:22.000Z | 2020-09-22T17:55:22.000Z | Coin detection.ipynb | Viniths28/ML-and-AI | 5ab1ba34cf32047c0ed8318347e5de78ebabcc3c | [
"Apache-2.0"
] | null | null | null | 22.785714 | 87 | 0.503077 | [
[
[
"from sklearn.neural_network import MLPClassifier\nfrom sklearn.model_selection import train_test_split\n\nimport math\nimport numpy as np\nimport argparse\nimport glob\nimport cv2\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.model_selection import train_test_split\nfrom __future__ ... | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7fcc7fac9b8aecb4866223ef6a33fc34167f43a | 53,984 | ipynb | Jupyter Notebook | paper-plotting/paths_counting.ipynb | twistedcubic/attention-rank-collapse | 38b5df6dc2add25f6d945e48a6baf96862368c20 | [
"Apache-2.0"
] | 118 | 2021-03-08T01:46:30.000Z | 2022-02-10T06:51:20.000Z | paper-plotting/paths_counting.ipynb | twistedcubic/attention-rank-collapse | 38b5df6dc2add25f6d945e48a6baf96862368c20 | [
"Apache-2.0"
] | null | null | null | paper-plotting/paths_counting.ipynb | twistedcubic/attention-rank-collapse | 38b5df6dc2add25f6d945e48a6baf96862368c20 | [
"Apache-2.0"
] | 11 | 2021-03-08T10:21:11.000Z | 2021-12-30T13:03:20.000Z | 278.268041 | 48,164 | 0.913345 | [
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\nimport jax\nimport jax.numpy as jnp\nfrom functools import partial\nimport scipy\nimport itertools\nimport matplotlib\nimport seaborn as sns",
"_____no_output_____"
],
[
"architectures = [\n (\"DistilBert\", 12, 6),\n (\"M... | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
e7fccb4c1413e82a64e0244cde25759be0bf0a36 | 2,793 | ipynb | Jupyter Notebook | courses/dl1/multi_label_mri_modality_classification.ipynb | mingrui/fastai | ef3533d11ef9b64b27ced38e2fc26de8c9ed7132 | [
"Apache-2.0"
] | 1 | 2022-02-20T11:52:34.000Z | 2022-02-20T11:52:34.000Z | courses/dl1/multi_label_mri_modality_classification.ipynb | mingrui/fastai | ef3533d11ef9b64b27ced38e2fc26de8c9ed7132 | [
"Apache-2.0"
] | null | null | null | courses/dl1/multi_label_mri_modality_classification.ipynb | mingrui/fastai | ef3533d11ef9b64b27ced38e2fc26de8c9ed7132 | [
"Apache-2.0"
] | null | null | null | 18.871622 | 71 | 0.517007 | [
[
[
"%reload_ext autoreload\n%autoreload 2\n%matplotlib inline",
"_____no_output_____"
],
[
"import torch",
"_____no_output_____"
],
[
"from fastai.imports import *\nfrom fastai.torch_imports import *\nfrom fastai.transforms import *\nfrom fastai.conv_learner import... | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7fce98a1a180ff75904c9e8d44fdad8f9f357ef | 1,264 | ipynb | Jupyter Notebook | tests/notebooks/ipynb_maxima/maxima_example.ipynb | sthagen/mwouts-jupytext | 3b1eaa21d3d139444bdc278a0b696c363838e085 | [
"MIT"
] | 11 | 2018-06-15T12:12:11.000Z | 2018-08-25T14:01:52.000Z | tests/notebooks/ipynb_maxima/maxima_example.ipynb | sthagen/mwouts-jupytext | 3b1eaa21d3d139444bdc278a0b696c363838e085 | [
"MIT"
] | 33 | 2018-06-17T01:16:10.000Z | 2018-08-30T16:09:02.000Z | tests/notebooks/ipynb_maxima/maxima_example.ipynb | sthagen/mwouts-jupytext | 3b1eaa21d3d139444bdc278a0b696c363838e085 | [
"MIT"
] | 1 | 2018-07-20T06:52:12.000Z | 2018-07-20T06:52:12.000Z | 16.205128 | 32 | 0.47231 | [
[
[
"## maxima misc",
"_____no_output_____"
]
],
[
[
"kill(all)$",
"_____no_output_____"
],
[
"f(x) := 1/(x^2+l^2)^(3/2);",
"_____no_output_____"
],
[
"integrate(f(x), x);",
"_____no_output_____"
],
[
"tex(%)$",
"___... | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e7fce9f6bd8f3ccc53d5cf98e49d9acf197f53ef | 363,171 | ipynb | Jupyter Notebook | _notebooks/2021-11-24-french_healtcare.ipynb | LuisAVasquez/quiescens-lct | 2a187da92aafd8ac1db0054c3bc10b01755b1fa3 | [
"Apache-2.0"
] | null | null | null | _notebooks/2021-11-24-french_healtcare.ipynb | LuisAVasquez/quiescens-lct | 2a187da92aafd8ac1db0054c3bc10b01755b1fa3 | [
"Apache-2.0"
] | null | null | null | _notebooks/2021-11-24-french_healtcare.ipynb | LuisAVasquez/quiescens-lct | 2a187da92aafd8ac1db0054c3bc10b01755b1fa3 | [
"Apache-2.0"
] | null | null | null | 732.199597 | 122,360 | 0.948352 | [
[
[
"# \"Settling in France: The healthcare system\"\n> The public healthcare system here is amazing\n\n- toc: true \n- badges: true\n- comments: true\n- categories:[\"french bureaucracy\", Nancy, going to France]\n- image: images/chart-preview.png",
"_____no_output_____"
],
[
"---\n\n... | [
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markd... |
e7fcff5f30e776f388a7215fa7b94d517c252e16 | 282,884 | ipynb | Jupyter Notebook | 2016/tutorial_final/79/Tutorial.ipynb | zeromtmu/practicaldatascience.github.io | 62950a3a3e7833552b0f2269cc3ee5c34a1d6d7b | [
"MIT"
] | 1 | 2021-07-06T17:36:24.000Z | 2021-07-06T17:36:24.000Z | 2016/tutorial_final/79/Tutorial.ipynb | zeromtmu/practicaldatascience.github.io | 62950a3a3e7833552b0f2269cc3ee5c34a1d6d7b | [
"MIT"
] | null | null | null | 2016/tutorial_final/79/Tutorial.ipynb | zeromtmu/practicaldatascience.github.io | 62950a3a3e7833552b0f2269cc3ee5c34a1d6d7b | [
"MIT"
] | 1 | 2021-07-06T17:36:34.000Z | 2021-07-06T17:36:34.000Z | 198.654494 | 64,632 | 0.85219 | [
[
[
"# Principal Component Analysis (PCA)",
"_____no_output_____"
],
[
"### Introduction",
"_____no_output_____"
],
[
"<p> The purpose of this tutorial is to provide the reader with an intuitive understanding for principal component analysis (PCA). PCA is a multivar... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"mar... | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"... |
e7fd0542a5139d058958c8799909f40461e46d6c | 8,260 | ipynb | Jupyter Notebook | notebooks/B1. Logistic Regression - Overview.ipynb | vitorpq/LearnDataScience | 23b06019d9a0da445d713cd3056d307bd0df477f | [
"BSD-2-Clause"
] | 21 | 2015-02-16T18:14:20.000Z | 2021-04-15T19:27:39.000Z | notebooks/B1. Logistic Regression - Overview.ipynb | isayev/LearnDataScience | 8827b954575b5276017d546562379f55ef3f1ee4 | [
"BSD-2-Clause"
] | null | null | null | notebooks/B1. Logistic Regression - Overview.ipynb | isayev/LearnDataScience | 8827b954575b5276017d546562379f55ef3f1ee4 | [
"BSD-2-Clause"
] | 18 | 2015-01-17T00:42:33.000Z | 2020-12-11T01:10:22.000Z | 34.132231 | 265 | 0.482567 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e7fd08b8923d00894bfe53ad1d5769ee642d3c64 | 51,910 | ipynb | Jupyter Notebook | docs/notebooks/analysis/example_fit_ramsey.ipynb | dpfranke/qtt | f60e812fe8b329e67f7b38d02eef552daf08d7c9 | [
"MIT"
] | null | null | null | docs/notebooks/analysis/example_fit_ramsey.ipynb | dpfranke/qtt | f60e812fe8b329e67f7b38d02eef552daf08d7c9 | [
"MIT"
] | null | null | null | docs/notebooks/analysis/example_fit_ramsey.ipynb | dpfranke/qtt | f60e812fe8b329e67f7b38d02eef552daf08d7c9 | [
"MIT"
] | null | null | null | 300.057803 | 25,496 | 0.931401 | [
[
[
"# Fitting the data from a Ramsey experiment",
"_____no_output_____"
],
[
"In this notebook we analyse data from a Ramsey experiment. Using the method and data from:\n\nWatson, T. F., Philips, S. G. J., Kawakami, E., Ward, D. R., Scarlino, P., Veldhorst, M., … Vandersypen, L. M. K.... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e7fd2620e0454d771cb35c6e7be40f2a916cb9c5 | 10,966 | ipynb | Jupyter Notebook | Lesson05/Activity27/activity27.ipynb | webobite/Data-Visualization-with-Python | 1c0ef63da951c9f77351ba621e71cba2600dce00 | [
"MIT"
] | 74 | 2019-03-22T11:25:01.000Z | 2022-03-16T16:09:02.000Z | Lesson05/Activity27/activity27.ipynb | zwfengineer/Data-Visualization-with-Python | 18f79fcfefe53a2cbd2309c8648bbe16c33150c2 | [
"MIT"
] | 1 | 2019-06-17T02:04:23.000Z | 2019-06-17T03:20:30.000Z | Lesson05/Activity27/activity27.ipynb | zwfengineer/Data-Visualization-with-Python | 18f79fcfefe53a2cbd2309c8648bbe16c33150c2 | [
"MIT"
] | 84 | 2018-11-29T12:59:44.000Z | 2022-03-22T04:04:57.000Z | 26.616505 | 214 | 0.591373 | [
[
[
"## Plotting geospatial data on a map",
"_____no_output_____"
],
[
"In this first activity for geoplotlib, you'll combine methodologies learned in the previous exercise and use theoretical knowledge from previous lessons. \nBesides from wrangling data you need to find the area wi... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"... |
e7fd281607286cbfde86ef0099ef58722519fef2 | 13,152 | ipynb | Jupyter Notebook | BRL/vehicleload/Preprocessing_readedge.ipynb | kidrabit/Data-Visualization-Lab-RND | baa19ee4e9f3422a052794e50791495632290b36 | [
"Apache-2.0"
] | 1 | 2022-01-18T01:53:34.000Z | 2022-01-18T01:53:34.000Z | BRL/vehicleload/Preprocessing_readedge.ipynb | kidrabit/Data-Visualization-Lab-RND | baa19ee4e9f3422a052794e50791495632290b36 | [
"Apache-2.0"
] | null | null | null | BRL/vehicleload/Preprocessing_readedge.ipynb | kidrabit/Data-Visualization-Lab-RND | baa19ee4e9f3422a052794e50791495632290b36 | [
"Apache-2.0"
] | null | null | null | 35.072 | 190 | 0.502737 | [
[
[
"import numpy as np\nimport pandas as pd\nimport pickle\nimport math\nimport pandas as pd\nfrom pandas import HDFStore \nimport argparse\n\n###################################################################################\n#location\nnode_ids_filename = 'data/node_locate.txt'\nwith open(node_ids_fil... | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
e7fd32b8400f33ae867fa988c1610613e1608171 | 6,411 | ipynb | Jupyter Notebook | DecisionTreeClassifier.ipynb | manishgaurav84/ml-from-scratch | 2af963d6e13889b6dcc8486ecaa0374577cee0c8 | [
"MIT"
] | null | null | null | DecisionTreeClassifier.ipynb | manishgaurav84/ml-from-scratch | 2af963d6e13889b6dcc8486ecaa0374577cee0c8 | [
"MIT"
] | null | null | null | DecisionTreeClassifier.ipynb | manishgaurav84/ml-from-scratch | 2af963d6e13889b6dcc8486ecaa0374577cee0c8 | [
"MIT"
] | null | null | null | 33.390625 | 100 | 0.511621 | [
[
[
"import numpy as np\nfrom collections import Counter\n\n\ndef entropy(y):\n hist = np.bincount(y)\n ps = hist / len(y)\n return -np.sum([p * np.log2(p) for p in ps if p > 0])\n\n\nclass Node:\n\n def __init__(self, feature=None, threshold=None, left=None, right=None, *, value=None):\n ... | [
"code"
] | [
[
"code",
"code"
]
] |
e7fd596077ce0ad0dba7a2b55f19b5a57d2d9bcb | 26,968 | ipynb | Jupyter Notebook | SQL Case Study - Country Club/Unit-8.3_SQL-Project.ipynb | shalin4788/Springboard-Do-not-refer- | e7627e6f4b09456e08c6f10baeb66b0a22422b7a | [
"MIT"
] | 2 | 2020-10-23T06:24:18.000Z | 2020-10-23T06:24:25.000Z | SQL Case Study - Country Club/Unit-8.3_SQL-Project.ipynb | shalin4788/Springboard-Do-not-refer- | e7627e6f4b09456e08c6f10baeb66b0a22422b7a | [
"MIT"
] | 5 | 2021-06-08T22:56:21.000Z | 2022-01-13T03:35:07.000Z | SQL Case Study - Country Club/Unit-8.3_SQL-Project.ipynb | shalin4788/Springboard-Do-not-refer- | e7627e6f4b09456e08c6f10baeb66b0a22422b7a | [
"MIT"
] | null | null | null | 29.798895 | 145 | 0.358351 | [
[
[
"# Import packages\nfrom sqlalchemy import create_engine\nimport pandas as pd",
"_____no_output_____"
],
[
"# Create engine: engine\nengine = create_engine('sqlite:///country_club.db')",
"_____no_output_____"
],
[
"# Execute query and store records in DataFrame:... | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7fd5b7276c446eb1f7d781dd134be5eab0f7287 | 265,861 | ipynb | Jupyter Notebook | scikit-learn/scikit-learn-svm.ipynb | AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials | 352dd6d9a785e22fde0ce53a6b0c2e56f4964950 | [
"Apache-2.0"
] | 24,753 | 2015-06-01T10:56:36.000Z | 2022-03-31T19:19:58.000Z | scikit-learn/scikit-learn-svm.ipynb | AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials | 352dd6d9a785e22fde0ce53a6b0c2e56f4964950 | [
"Apache-2.0"
] | 150 | 2017-08-28T14:59:36.000Z | 2022-03-11T23:21:35.000Z | scikit-learn/scikit-learn-svm.ipynb | AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials | 352dd6d9a785e22fde0ce53a6b0c2e56f4964950 | [
"Apache-2.0"
] | 7,653 | 2015-06-06T23:19:20.000Z | 2022-03-31T06:57:39.000Z | 708.962667 | 52,452 | 0.9384 | [
[
[
"# scikit-learn-svm",
"_____no_output_____"
],
[
"Credits: Forked from [PyCon 2015 Scikit-learn Tutorial](https://github.com/jakevdp/sklearn_pycon2015) by Jake VanderPlas\n\n* Support Vector Machine Classifier\n* Support Vector Machine with Kernels Classifier",
"_____no_outpu... | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]... |