| content (string, lengths 73 to 1.12M) | license (string, 3 classes) | path (string, lengths 9 to 197) | repo_name (string, lengths 7 to 106) | chain_length (int64, 1 to 144) |
|---|---|---|---|---|
<jupyter_start><jupyter_text>___
___
Note:
1) First make a copy of the project in your Drive, then start with the solution.
2) Don't run the cell directly; first add a cell above it, then run the cell so that you don't miss the solution.
# Logistic Regression Project
In this project we will be working with a fake adve... | no_license | /Logistic_Regression_Assignment.ipynb | young-ai-expert/Data-Science-Projects | 12 |
<jupyter_start><jupyter_text># Vehicle detection and tracking project*picture by Udacity*<jupyter_code>import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
import os
import matplotlib.image as mpimg
%matplotlib qt
%matplotlib inline
vehs = []
for image in os.listdir(os.getcwd() + "/vehicles/GTI_Rig... | no_license | /CarND-Vehicle-Detection-master-P5.ipynb | azumf/CarND-Vehicle-Detection-P5 | 16 |
<jupyter_start><jupyter_text># Cross-validation grid search (cross_val_score) p324~<jupyter_code>from sklearn.model_selection import cross_val_score
import numpy as np
best_score = 0
for gamma in [0.001, 0.01, 0.1, 1, 10, 100]:
for C in [0.001, 0.01, 0.1, 1, 10, 100]:
        # train an SVC for each combination of parameters
svm = SVC(gamm... | no_license | /머신러닝/l.evalueation(모델평가, 성능 향상).ipynb | chayoonjung/yun-python | 1 |
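The truncated SVC loop above can be reconstructed as a runnable sketch; the iris data, cv=5, and the best-parameter bookkeeping are assumptions filled in from the usual textbook example rather than recovered from the notebook itself:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # stand-in dataset (assumption)

best_score, best_params = 0.0, {}
for gamma in [0.001, 0.01, 0.1, 1, 10, 100]:
    for C in [0.001, 0.01, 0.1, 1, 10, 100]:
        # train an SVC for each combination of parameters
        svm = SVC(gamma=gamma, C=C)
        # score it with 5-fold cross-validation instead of a single split
        score = cross_val_score(svm, X, y, cv=5).mean()
        if score > best_score:
            best_score, best_params = score, {"C": C, "gamma": gamma}

print(best_score, best_params)
```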
<jupyter_start><jupyter_text>Create an array<jupyter_code>data = np.arange(10)
print (data.shape)
data
data = data.reshape(2,5)
print (data.shape)
data
data_1 = np.ones([3,3])
print (type(data_1))
print (data_1)
data_1 = np.zeros([3,3])
print (data_1)
data = np.eye(3,3)
data
data =np.diag((1,2,3))
data
data = np.empty([... | permissive | /Numpy.ipynb | yuzhipeter/Data_Structure_Numpy_Pandas | 8 |
<jupyter_start><jupyter_text># Read in the data<jupyter_code>import pandas as pd
import numpy
import re
data_files = [
"ap_2010.csv",
"class_size.csv",
"demographics.csv",
"graduation.csv",
"hs_directory.csv",
"sat_results.csv"
]
data = {}
for f in data_files:
d = pd.read_csv("schools/{0}... | no_license | /Guided Project: Analyzing NYC High School Data/Schools.ipynb | isabellechiu/Self-Project-Dataquest | 10 |
<jupyter_start><jupyter_text># Describing data with Arviz
### Dr. Lê Ngọc Khả Nhi
This hands-on exercise shows how to use the Arviz package to produce simple descriptive plots.
Arviz (https://arviz-devs.github.io/arviz/index.html) is a graphics library dedicated to Bayesian statistics, offering plots for ... | no_license | /Arviz demo 1.ipynb | kinokoberuji/Statistics-Python-Tutorials | 15 |
<jupyter_start><jupyter_text># Programming and Data Analysis
> Homework 0
Kuo, Yao-Jen from [DATAINPOINT](https://www.datainpoint.com)## Instructions
- We've imported necessary modules at the beginning of each exercise.
- We've put necessary files(if any) in the working directory of each exercise.
- We've defined t... | no_license | /exercises.ipynb | rose020/homework0 | 7 |
<jupyter_start><jupyter_text>#### Plotting of gesture data<jupyter_code>print("Shape of X_train: ", X_train.shape)
print("shape of y_train/labels: ", y_train.shape)
print("Shape of X_test: ", X_test.shape)
print("shape of y_test/labels: ", y_test.shape)
samples = np.random.choice(len(X_train), 8)
def show_images(images... | no_license | /asl-training.ipynb | ayulockin/ASL_Classifier | 3 |
<jupyter_start><jupyter_text># Pandas basics
In this notebook we will **learn** how to work with the two main data types in `pandas`: `DataFrame` and `Series`.## Data structures (`pandas`)### `Series`
In `pandas`, series are the building blocks of dataframes.
Think of a series as a column in a table. A series collec... | permissive | /notebooks/3_PandasBasics.ipynb | Giovanni1085/UvA_CDH_2020 | 24 |
<jupyter_start><jupyter_text><jupyter_code>import numpy as np # useful for many scientific computing in Python
import pandas as pd # primary data structure library
!conda install -c anaconda xlrd --yes
df_can = pd.read_excel('https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DV0101EN/l... | permissive | /Lab Experiments/Experiment_3_200720.ipynb | rohitsmittal7/J045-ML-Sem-V | 3 |
<jupyter_start><jupyter_text>**MATH 3332 **
**Section 52 **
# In Who-is-Normal.xslx there are 7 columns representing variables x1-x7, one of which is sample from the normal distribution. Find which one of the variables is normal?
## Reading in Values
The program starts by reading in the values from the Excel spread... | no_license | /Probability/Normal.ipynb | rlacherksu/notebooks | 3 |
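The notebook's spreadsheet isn't available here, but the task (spotting the one normal column among several) can be sketched with synthetic columns and a Shapiro-Wilk test; the column names and distributions below are assumptions, not the real x1-x7 data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# stand-ins for the spreadsheet columns: one normal sample hidden among others
columns = {
    "x1": rng.exponential(1.0, size=500),
    "x2": rng.normal(0.0, 1.0, size=500),
    "x3": rng.uniform(-1.0, 1.0, size=500),
}

# Shapiro-Wilk: a larger p-value means less evidence against normality
pvalues = {name: stats.shapiro(values).pvalue for name, values in columns.items()}
most_normal = max(pvalues, key=pvalues.get)
print(most_normal)
```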
<jupyter_start><jupyter_text># Stock price relationships<jupyter_code>%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib
import numpy
import sqlite3
import pandas as pd
import os
fmdate="2015-01-01"
todate="2016-12-31"
code="6502"
if os.name == "nt": fname=r'C:\WINDOWS\Fonts\ipaexg.ttf'
else: fname = os.environ["HOME"... | no_license | /theme/notebook/22_toshiba_stock.ipynb | k-utsubo/xc40 | 10 |
<jupyter_start><jupyter_text>## Online Factorization Machine
Online factorization models take a single data point as input, make a prediction, and then train on that data point.### 1. Setup
The `from models` import brings in the package for use. We have also imported a few other packages for plotting.<jupyter_code>import sys
sys.path.append('.... | no_license | /jupyters/online_models_example.ipynb | yejihan-dev/fm-for-online-recommendation | 5 |
<jupyter_start><jupyter_text>### Run this once to obtain a cookie
- Be sure to fill in your own account name and password<jupyter_code>import requests
import time
from selenium import webdriver
def get_pixiv_cookie(pixiv_id,pixiv_pw):
driver = webdriver.Chrome() # Optional argument, if not specified will search pat
driver.get('https://accounts.pixiv.net/login');
t... | no_license | /pixiv.ipynb | Unknown-Chinese-User/pixiv-spider | 7 |
<jupyter_start><jupyter_text>**This notebook is an exercise in the [Intro to Deep Learning](https://www.kaggle.com/learn/intro-to-deep-learning) course. You can reference the tutorial at [this link](https://www.kaggle.com/ryanholbrook/deep-neural-networks).**
---
# Introduction #
In the tutorial, we saw how to build... | no_license | /Intro to Deep Learning/2 - Deep Neural Networks/exercise-deep-neural-networks.ipynb | mtamjidhossain/Kaggle-courses | 6 |
<jupyter_start><jupyter_text># Answering Questions for the Chinook Record StoreThe Chinook record store has just signed a deal with a new record label, and you've been tasked with selecting the first three albums that will be added to the store, from a list of four. All four albums are by artists that don't have any tr... | no_license | /chinook_store_sql.ipynb | EdsTyping/chinook_record_store_sql | 8 |
<jupyter_start><jupyter_text>## Surrogate Models & Helper Functions<jupyter_code>ValueRange = namedtuple('ValueRange', ['min', 'max'])
def determinerange(values):
"""Determine the range of values in each dimension"""
return ValueRange(np.min(values, axis=0), np.max(values, axis=0))
def linearscaletransform(v... | no_license | /Original - Optimality/200D/second/F16_200_original.ipynb | SibghatUllah13/Deep-Latent_Variable_Models-for-dimensionality-reduction-in-surrogate-assisted-optimization | 4 |
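The helper pair above is cut off mid-definition; a hedged completion, assuming `linearscaletransform` rescales each dimension onto [0, 1] (a common convention for this kind of helper, not confirmed by the snippet), could look like:

```python
from collections import namedtuple

import numpy as np

ValueRange = namedtuple('ValueRange', ['min', 'max'])

def determinerange(values):
    """Determine the range of values in each dimension"""
    return ValueRange(np.min(values, axis=0), np.max(values, axis=0))

def linearscaletransform(values, value_range=None):
    """Linearly rescale each dimension onto [0, 1] (assumed behavior)."""
    if value_range is None:
        value_range = determinerange(values)
    return (values - value_range.min) / (value_range.max - value_range.min)

X = np.array([[0.0, 10.0], [5.0, 20.0], [10.0, 30.0]])
scaled = linearscaletransform(X)
print(scaled)
```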
<jupyter_start><jupyter_text>## Download rnn_merged.zip & rnn_embed.zip from https://drive.google.com/drive/folders/1yO_W-m0fF_PludrnScdgyTGsPFoDsA6_?usp=sharing and unzip them into the same folder as this file
## Also download train_jpg.zip & test_jpg.zip from competition website<jupyter_code>import pandas as pd
import tens... | no_license | /RNN Self-Trained WordVec + Image + Merge Features (with Fast Loading)-Copy1.ipynb | tnmichael309/kaggle-avito-demand-challenge | 6 |
<jupyter_start><jupyter_text># Consume deployed webservice via REST
Demonstrates the usage of a deployed model via plain REST.
REST is language-agnostic, so you should be able to query from any REST-capable programming language.## Configuration<jupyter_code>from environs import Env
env = Env()
env.read_env("foundation... | permissive | /mnist_fashion/04_consumption/consume-webservice.ipynb | anderl80/aml-template | 3 |
<jupyter_start><jupyter_text># Problem : Print all ancestors of binary tree.
Algorithm:
1. Check if root or node is None, if yes, return False
2. Append the ancestor list with the root
3. If the root equals node return True to the calling function
4. Check the left and right subtree for node recursively
5. If found r... | no_license | /Trees/Binary Trees/Problems/.ipynb_checkpoints/AncestorsOfBinaryTree-checkpoint.ipynb | sumeet13/Algorithms-and-Data-Structures | 3 |
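The five steps listed above translate almost line-for-line into Python; this sketch (with a hypothetical `Node` class, since the notebook's tree classes aren't shown) follows them:

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def find_ancestors(root, target, path):
    # 1. if the root is missing, the target cannot be below it
    if root is None:
        return False
    # 2. append the current root to the candidate ancestor list
    path.append(root.val)
    # 3. if the root equals the target, report success to the caller
    if root.val == target:
        return True
    # 4. check the left and right subtrees for the target recursively
    if find_ancestors(root.left, target, path) or find_ancestors(root.right, target, path):
        return True
    # 5. not on this path: remove the candidate and backtrack
    path.pop()
    return False

tree = Node(1, Node(2, Node(4), Node(5)), Node(3))
path = []
found = find_ancestors(tree, 5, path)
print(found, path[:-1])  # the ancestors are everything on the path above the target
```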
<jupyter_start><jupyter_text>Table of Contents
1 HISTOGRAM 2 QQPLOT 3 AGGREGATION PLOT (part 4)## HISTOGRAM<jupyter_code>library(MASS)
# Create a histogram of counts with hist()
hist(Cars93$Horsepower, main = "hist() plot")
# Create a normalized histogram with truehist()
truehist(Cars93$Horsep... | no_license | /Data visualization - base R/.ipynb_checkpoints/2.1. One variable-checkpoint.ipynb | yoogun143/Datacamp_R | 3 |
<jupyter_start><jupyter_text>Original code author: François Chollet
github:https://github.com/fchollet/deep-learning-with-python-notebooks
Chinese annotations by: Huang Haiguang
github:https://github.com/fengdu78
All of the code has been tested and passes.
Environment: keras 2.2.1 (the original used 2.0.8; results are identical), tensorflow 1.8, python 3.6,
Host: GPU: one 1080 Ti; RAM: 32 GB (note: the vast majority of the code does not need a GPU)
<jupyter... | permissive | /References/Python深度学习/code/6.1-using-word-embeddings.ipynb | Jianhan-Liu/NLP | 14 |
<jupyter_start><jupyter_text><jupyter_code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import os
import zipfile
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oaut... | no_license | /TMDB_Box_Office_Revenue_Prediction.ipynb | inboxpraveen/TMDB-Box-office-revenue | 9 |
<jupyter_start><jupyter_text># Data Scaling ## StandardScaler (Standardization)
y = (x - mean) / standard_deviation
Where the mean is calculated as:
mean = sum(x) / count(x)
And the standard_deviation is calculated as:
standard_deviation = sqrt( sum( (x - mean)^2 ) / count(x))
<jupyter_code>from sklearn.prep... | no_license | /Data_Scaling.ipynb | abohashem95/SKlearn-codes | 7 |
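The formulas above can be checked directly against scikit-learn; note that, matching the count(x) denominator in the text, StandardScaler uses the population standard deviation (ddof=0). The toy column below is an assumption:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

x = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])

# y = (x - mean) / standard_deviation, computed by hand
mean = x.sum() / len(x)                          # mean = sum(x) / count(x)
std = np.sqrt(((x - mean) ** 2).sum() / len(x))  # sqrt( sum((x - mean)^2) / count(x) )
manual = (x - mean) / std

scaled = StandardScaler().fit_transform(x)
print(np.allclose(manual, scaled))  # True
```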
<jupyter_start><jupyter_text># Lab 4: Functions and VisualizationsWelcome to Lab 4! This week, we'll learn about functions, table methods such as `apply`, and how to generate visualizations!
Recommended Reading:
* [Applying a Function to a Column](https://www.inferentialthinking.com/chapters/08/1/applying-a-function... | no_license | /lab04/lab04.ipynb | Peter-Jantsch/m121-sp21-lab | 34 |
<jupyter_start><jupyter_text># List<jupyter_code># make a list with integer, float, and string types and store it in var
var = [10 ,"Apple" ,"a+b", 5.5, "Ball" ]
# access individual items of list using square brackets [] in variable name
print(var[0])
print(var[1])
print(var[2])
# use negative indexing
print(var[-1])
prin... | no_license | /python_list.ipynb | zhubaiyuan/exercise-book | 1 |
<jupyter_start><jupyter_text># Implement a Queue - Using Two Stacks
Given the Stack class below, implement a Queue class using **two** stacks! Note, this is a "classic" interview problem. Use a Python list data structure as your Stack.<jupyter_code># Uses lists instead of your own Stack class.
stack1 = []
stack2 = []<... | no_license | /Python_algo/Chapter13_QueuesDeques/Stacks, Queues, and Deques Interview Problems/Stacks, Queues, Deques Interview Questions/.ipynb_checkpoints/Implement a Queue -Using Two Stacks -checkpoint.ipynb | qy2205/MyUdemy | 3 |
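A common solution to the classic problem above keeps an "in" stack for enqueues and an "out" stack for dequeues, moving elements only when the out stack runs dry; this sketch uses its own names, not whatever the notebook's hidden solution uses:

```python
class QueueWithTwoStacks:
    def __init__(self):
        self.in_stack, self.out_stack = [], []  # two plain-list stacks

    def enqueue(self, item):
        self.in_stack.append(item)

    def dequeue(self):
        # refill the out stack only when it is empty, reversing the order
        # of the in stack; each element moves at most once (amortized O(1))
        if not self.out_stack:
            while self.in_stack:
                self.out_stack.append(self.in_stack.pop())
        return self.out_stack.pop()

q = QueueWithTwoStacks()
for i in range(3):
    q.enqueue(i)
print(q.dequeue(), q.dequeue(), q.dequeue())  # 0 1 2
```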
<jupyter_start><jupyter_text>#### To improve model performance we need to split all wine types into three categories: good, fine, and bad<jupyter_code>reviews = []
for i in data['quality']:
if i >= 1 and i <= 3:
reviews.append('1')
elif i >= 4 and i <= 7:
reviews.append('2')
elif i >= 8 and i <... | no_license | /notebooks/loaiabdalslam/classification-pca-kernel-99.ipynb | Sayem-Mohammad-Imtiaz/kaggle-notebooks | 3 |
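The if/elif ladder above maps quality scores 1-3, 4-7, and 8+ to the labels '1', '2', '3'; pandas can do the same binning in one call with pd.cut (the bin edges below are read off the visible branches; the last branch is cut off, so 10 as an upper bound is an assumption):

```python
import pandas as pd

quality = pd.Series([2, 3, 5, 6, 7, 8, 9])
# (0, 3] -> '1' (bad), (3, 7] -> '2' (fine), (7, 10] -> '3' (good)
reviews = pd.cut(quality, bins=[0, 3, 7, 10], labels=['1', '2', '3'])
print(reviews.tolist())
```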
<jupyter_start><jupyter_text># Generate content<jupyter_code>wav_out = "/Users/tmiano/Documents/Projects/cvr/me_drilling-sound_171128.wav"
img_out = "/Users/tmiano/Documents/Projects/cvr/me_drilling-sound_171128.jpg"
sr=44100
duration = 4 # seconds
mic_recording = sd.rec(duration * sr, samplerate=sr, channels=2,dtype=... | no_license | /final_project/notebooks/sandbox/mic_audio_spectrogram.ipynb | thommiano/cvr_6501-009 | 2 |
<jupyter_start><jupyter_text># WeatherPy
----
### Analysis
* As expected, the weather becomes significantly warmer as one approaches the equator (0 Deg. Latitude). More interestingly, however, is the fact that the southern hemisphere tends to be warmer this time of year than the northern hemisphere. This may be due to... | no_license | /starter_code/WeatherPy.ipynb | nebajuinpou/Weather_challenge_Python | 8 |
<jupyter_start><jupyter_text>## Dimensionality Reduction### PCA<jupyter_code>from sklearn.decomposition import PCA
import numpy as np
X = np.random.rand(100, 3)
y = 4 + 3 * X.dot(X.T) + np.random.randn(100, 1)
pca = PCA(n_components = 2)
X2D = pca.fit_transform(X)
print(pca.explained_variance_ratio_)
pca = PCA()
pca.... | no_license | /notebooks/dimensionality-reduction.ipynb | Modest-as/machine-learning-exercises | 3 |
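The PCA snippet above is tangled (y is built from X.dot(X.T) but never used); a minimal clean version of the fit_transform / explained_variance_ratio_ pattern, on assumed random data, is:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(42)
X = rng.rand(100, 3)  # 100 points in 3 dimensions (assumption)

# project onto the 2 directions of largest variance
pca = PCA(n_components=2)
X2D = pca.fit_transform(X)
print(X2D.shape, pca.explained_variance_ratio_)
```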
<jupyter_start><jupyter_text># Exercise 02 - OLAP Cubes - Grouping SetsAll the database tables in this demo are based on public database samples and transformations
- `Sakila` is a sample database created by `MySql` [Link](https://dev.mysql.com/doc/sakila/en/sakila-structure.html)
- The postgresql version of it is cal... | no_license | /course_DE/Udacity-Data-Engineering-master/Cloud Data Warehouse/L1 E2 - Grouping Sets.ipynb | yennanliu/analysis | 7 |
<jupyter_start><jupyter_text>EOS 491/526 Assignment #1
Daniel Scanks
V00788200
Question #1<jupyter_code>import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
n1= np.random.uniform(-1,1,200000) # 1 uniformly distributed random variable on [-1,1] with mean and std dev calculated
mean = np.mean(... | no_license | /.ipynb_checkpoints/EOS491Assignment#1-checkpoint.ipynb | scanks/EOS491Assignments | 1 |
<jupyter_start><jupyter_text>
## [mlcourse.ai](https://mlcourse.ai) – Open Machine Learning Course
Authors: [Vitaliy Radchenko](https://www.linkedin.com/in/vitaliyradchenk0/), [Yury Kashnitsky](https://yorko.github.io), and Mikalai Parshutsich. Translated and edited by [Christina Butsko](https://www.linkedin.co... | no_license | /jupyter_notebooks/Lecture_05_Ensembles/topic-5-ensembles-part-3-feature-importance.ipynb | RisnyK/mlcourse_dubai | 7 |
<jupyter_start><jupyter_text># Churn Prediction For Bank CustomersWe have a dataset containing details of a bank's customers; the target is a binary variable reflecting whether the customer left the bank (closed their account) or remains a customer.## Dataset
- **RowNumber:** corres... | no_license | /Classification_Churn_Modelling.ipynb | bunyamin-polat/Churn-Prediction-with-DL-For-Bank-Customer | 25 |
<jupyter_start><jupyter_text>## Note on spells
If a child runs away for longer than a week, their spell will be considered ended, and a new spell would start if they return.<jupyter_code># age of child at start of spell
ages = spells.STARTAGE.value_counts()
_ = plt.bar(ages.index, ages)
# primary placement type of spell... | no_license | /python/spell-vis.ipynb | DataGraphito/FosterCareMobility | 5 |
<jupyter_start><jupyter_text># A Primer on Bayesian Methods for Multilevel ModelingHierarchical or multilevel modeling is a generalization of regression modeling.
*Multilevel models* are regression models in which the constituent model parameters are given **probability models**. This implies that model parameters are... | permissive | /notebooks/Section1_2-Hierarchical_Models.ipynb | Robert-Muil/bayes_course_july2020 | 20 |
<jupyter_start><jupyter_text># Requesting Argo BGC data from Ifremer erddap, expert mode
Using the expert mode, you can have access to all fields retrieved from the erddap, including all QC variables and without any data mode filtering.
***
Script prepared by [Guillaume Maze](http://github.com/gmaze) (Mar. 2020)<jupy... | non_permissive | /examples/Example-Expert-02.ipynb | euroargodev/erddap_usecases | 15 |
<jupyter_start><jupyter_text># Cleaning the Gapminder agricultural land data
As required by the final small exercise of Exercise 4, download data of interest from [Gapminder](https://www.gapminder.org/data/) and draw 2-5 plots using the basic methods learned in [lesson5 Exploring two variables](https://classroom.udacity.com/nanodegrees/nd002-cn-advanced-vip/parts/7af8e761-54e5-4c80-a284-e402c30a791b).
This part covers acquiring and cleaning the data; the analysis part is completed in the R part of this folder.
The full cleaning workflow is: gather the data, assess the data, clea... | no_license | /R basic/lesson4/.ipynb_checkpoints/agriculture_land_data_clean-checkpoint.ipynb | Tanghuaizhi/learn_data_analysis | 1 |
<jupyter_start><jupyter_text># Unsupervised Anomaly Detection Brain-MRI
Jupyter notebook for running all the experiments from our [paper](https://arxiv.org/abs/2004.03271).
Hyperparameters may have to be adjusted!## Preparation
### Imports and installation of the required libraries
<jupyter_code># from google.colab... | non_permissive | /Unsupervised Anomaly Detection Brain-MRI.ipynb | irfixq/AE | 24 |
<jupyter_start><jupyter_text>Installing (updating) the following libraries for your Sagemaker
instance.<jupyter_code>!pip install .. # installing d2l
<jupyter_output><empty_output><jupyter_text># Self-attention and positional encoding
:label:`sec_self-attention-and-positional-encoding`
In deep learning, we often use CNNs or RNNs to encode sequences. Now consider the attention mechanism. Imagine feeding a sequence of tokens into attention pooling,... | no_license | /chapter_attention-mechanisms/self-attention-and-positional-encoding.ipynb | Global19/d2l-zh-pytorch-sagemaker | 7 |
<jupyter_start><jupyter_text>
# BERT model
---
## The KorQuAD dataset
Using KorQuAD (The Korean Question Answering Dataset), we work on
the Machine Reading Comprehension (MRC) task from the field of Natural Language Processing (NLP).
The KorQuAD dataset benchmarks SQuAD, the large-scale dataset built at Stanford University.
The SQuAD dataset is meant to test the 'Machine Reading Comprehension (MRC)' task... | no_license | /bert_qna.ipynb | doutoury/bert_qnd | 34 |
<jupyter_start><jupyter_text># Preprocess Docs<jupyter_code>from preprocesing import process_data_files
process_data_files.prep_anorexia_data(man_dir+"original_chunks/", man_dir+"train_golden_truth.txt",
dest_dir+"prep_chunks/")
man_dir="D:/corpus/DepresionEriskCollections/2017/tes... | no_license | /notebooks/preprocesamiento_depresion.ipynb | v1ktop/data_augmentation_for_author_profiling | 9 |
<jupyter_start><jupyter_text>Let's get coordinates!
<jupyter_code>!wget -q -O 'Geospatial_Coordinates.csv' http://cocl.us/Geospatial_data
print('Data downloaded!')
df_geo_coor = pd.read_csv("./Geospatial_Coordinates.csv")
df_geo_coor.head()
df_toronto = pd.merge(df, df_geo_coor, how='left', left_on = 'PostalCode', righ... | no_license | /Clustering Toronto.ipynb | savana2019x/Coursera_Capstone | 2 |
<jupyter_start><jupyter_text>### Numpy learning from various tutorialsFirst tutorial: http://cs231n.github.io/python-numpy-tutorial/#numpy1. Array Creation<jupyter_code>import numpy as np
a = np.array([1, 2, 3])
print(type(a))
print(a.shape)
b = np.zeros((3,4,5))
print(b)
c = np.ones((3,4))
print(c)
d = np.full((2,2), ... | no_license | /6.86x/tutorials/numpy.ipynb | ssinghaldev/data_science | 3 |
<jupyter_start><jupyter_text># [【SOTA】Mynavi × SIGNATE Student Cup 2019: Rent prediction for rental properties](https://signate.jp/competitions/264)## 1. Loading the data<jupyter_code>import pandas as pd
import numpy as np
import pathlib
import os
# load the training and test data
train_path = pathlib.Path("./DATA/train.csv")
test_path = pathlib.Path("./DATA/test.csv")
... | no_license | /00_Python/SIGNATE/20210818_【SOTA】マイナビ × SIGNATE Student Cup 2019 賃貸物件の家賃予測/.ipynb_checkpoints/RentForcast_210926_01-checkpoint.ipynb | Kosuke-Yanai/KY | 23 |
<jupyter_start><jupyter_text># Learning Together - Data Analysis$e^{i\pi} + 1 = 0$<jupyter_code>import numpy as np
import pandas as pd
from pandas import Series, DataFrame
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/00484/'
dframe_trip = pd.read_csv('tripadvisor_review.csv', sep = ',')
dframe_trip.h... | permissive | /.ipynb_checkpoints/Aggregation-checkpoint.ipynb | buildGather/learnDataAnalysis | 1 |
<jupyter_start><jupyter_text>---
_You are currently looking at **version 1.0** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-text-mining/resources/d9pwm) course resource._
... | no_license | /CURSO4/semana3/Case+Study+-+Sentiment+Analysis.ipynb | sandrarairan/Ciencias-Datos-Aplicada-Python-coursera | 4 |
<jupyter_start><jupyter_text># K Nearest Neighbors Project <jupyter_code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set()<jupyter_output><empty_output><jupyter_text>## Get the Data<jupyter_code>df = pd.read_csv('KNN_Project_Data')<jupyter_output><... | no_license | /K Nearest Neighbors Project.ipynb | hunain-saeed/KNN-pdsml | 10 |
<jupyter_start><jupyter_text><jupyter_code>import pandas as pd
df = pd.read_csv('drug review processed.csv.gz')
df.info()
df.head()
df = df.drop(columns=df.columns[0])
df.head()
df.groupby('vaderSentimentLabel').size()
import matplotlib.pyplot as plt
df.groupby('vaderSentimentLabel').count().plot.bar()
plt.show()
df.g... | no_license | /Opinion_Mining_using_the_UCI_Drug_Review_Data_(Part_2)_Sentiment_Prediction_Using_Machine_Learning_Classification_Algorithms(Scikit_Learn_Implementatition).ipynb | zainabnatiq/Opinion-Mining-using-the-UCI-Drug-Review-Dataset | 1 |
<jupyter_start><jupyter_text><jupyter_code><jupyter_output><empty_output><jupyter_text># Introduction to Regression with Neural Networks in Tensorflow
There are many definitions of a regression problem, but in our case we're going to simplify it: predicting a numerical variable based on some other combination of varia... | permissive | /neural_network_regression_with_tensorflow.ipynb | artms-18/tensorflow_fundamentals | 7 |
<jupyter_start><jupyter_text>
## 2. Linear regression with multiple variables
In this part, you will implement linear regression with multiple variables to
predict the prices of houses. You want to predict the price of a house given its size and number of bedrooms. To achieve this, you need to train a model on data ... | no_license | /Beginners/Week3/codelab/Linear regression with multiple variables.ipynb | Otsebolu/cycle2-resource-materials | 7 |
<jupyter_start><jupyter_text># Preview one original file<jupyter_code>filename = folder+'TRABEAE12903CDD8EE.mp3'
orig, sr = librosa.load(filename, sr=16000)
IPython.display.Audio(orig, rate=sr)<jupyter_output><empty_output><jupyter_text># Sonify mel-spectrogram of that example<jupyter_code>recon, sr = transform_and_res... | non_permissive | /Sonify.ipynb | andrebola/ICASSP2020 | 3 |
<jupyter_start><jupyter_text># MEE - DATA ACQUISITIONFirst attempt using Matheus's phone with n=1 and n=9. This is where we realized we were not accounting for all the uncertainties. INCOMPATIBLE NORMALIZED ERROR.## Importing Packages<jupyter_code>import matplotlib.pyplot as plt
from numpy import array, absolut... | non_permissive | /Aquisicoes/Primeira Tentativa/30.04.2019/Primeira_Tentativa.ipynb | Mathcd/MEE-sensorsProject | 8 |
<jupyter_start><jupyter_text>Preprocessing<jupyter_code>netflix_titles_df = pd.read_csv('netflix_titles.csv')
netflix_titles_df.head()
netflix_titles_df.drop(netflix_titles_df.columns[[0,1,5,6,7,9]], axis=1, inplace=True)
netflix_titles_df.count()
null_rows = len(netflix_titles_df[netflix_titles_df.isna().any(axis=1)])
... | no_license | /Netflix_Shows_Recommendation_API/preprocessing.ipynb | AnantShankhdhar/Recommender-Systems | 2 |
<jupyter_start><jupyter_text># Pretrained BERT models<jupyter_code>import sys
package_dir = "../input/ppbert/pytorch-pretrained-bert/pytorch-pretrained-BERT"
sys.path.append(package_dir)
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import torch.utils.data
... | no_license | /jigsaw-starter-blend.ipynb | OmerBgu/jigsaw-unintended-bias-in-toxicity-classification | 9 |
<jupyter_start><jupyter_text>Table of Contents
1 Code Reuse 1.1 Packages and Modules 1.2 A bit about packages 1.3 Locating Modules 2 Custom modules 2.1 The vector.py module 2.1.1 Vector Add 2.1.2 Vector Subtraction 2.1.3 &... | non_permissive | /17.Pacotes e Módulos.ipynb | zuj2019/mc102 | 10 |
<jupyter_start><jupyter_text># create new fields
adm_df["year_term"] = (
adm_df["ACADEMIC_YEAR"] + "." + adm_df["ACADEMIC_TERM"].str.title()
)
# week_number = (
# lambda r: (r["create_date"].date().isocalendar()[1])
# if (r["create_date"].date() >= date((int(r["ACADEMIC_YEAR"]) - 1), 9, 1))
# else (date... | non_permissive | /funnel/admissions.py_Test_20201217.ipynb | JeffWalton-PSC/Admissions | 2 |
<jupyter_start><jupyter_text>### A Fredkin gate can be obtained with very good fidelity, $\simeq 99.999\%$, starting with all interactions on, and only $\sigma_z$ operators as self-interactions.<jupyter_code>net = QubitNetwork(
num_qubits=4,
# interactions=('all', ['xy', 'xx', 'yy', 'zz']),
interactions='all... | no_license | /notebooks/quantum_gate_learning.ipynb | theneva/quantum-gate-learning | 4 |
<jupyter_start><jupyter_text># AssignmentQ1. Write a NumPy program to create an array of ones and an array
of zeros.
Expected Output:
Create an array of zeros
Default type is float
[[ 0. 0.]]
Type changes to int
[[0 0]]
Create an array of ones
Default type is float
[[ 1. 1.]]
Type c... | no_license | /DL - Subjective Assignment - 6 - Numpy 2.ipynb | lalubasha434/DL-Assignments | 18 |
<jupyter_start><jupyter_text>### Create a database with SQLite
First we are going to create a database<jupyter_code>import sqlite3
conn = sqlite3.connect('todo.db')
conn.execute("CREATE TABLE todo (id INTEGER PRIMARY KEY, task char(100) NOT NULL, status bool NOT NULL)")
conn.execute("INSERT INTO todo (task, status)V... | no_license | /TodoList-Bottle.ipynb | RamonAgramon/Jupyternotebooks | 2 |
<jupyter_start><jupyter_text># Electronic structure through (quantum) annealing
In this project we map the electronic structure Hamiltonian to an Ising Hamiltonian and find the ground state energy. Refer to the following references:
[1] https://arxiv.org/abs/1706.00271
[2] https://arxiv.org/abs/1811.05256
[3] http... | permissive | /Project_4_Ising_Annealer/H2_Ising_Annealing.ipynb | HermanniH/CohortProject_2020_Week2 | 1 |
<jupyter_start><jupyter_text># Loading mixed data (GTZAN dataset + YouTube data)<jupyter_code>classes = {'reggae': 0, 'classical': 1, 'country': 2, 'disco': 3, 'hiphop': 4, 'jazz': 5, 'metal': 6, 'pop': 7, 'reggae': 8, 'rock': 9}
!unzip /content/drive/MyDrive/mel_spec_mix.zip
!unzip /content/drive/MyDrive/mel_spec_test... | no_license | /sourcecode_dataset/CNN source code/ML_Project_CNN_Mixed.ipynb | tuandung2812/Music-genres-classification | 5 |
<jupyter_start><jupyter_text># Scatter plots<jupyter_code># import packages
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
# create the data
N = 1000
x = np.random.randn(N)
y = np.random.randn(N)
# plot with matplotlib
plt.scatter(x, y, marker='o')
plt.show()
plt.scatter(x, y, marker='>')
plt.show()
plt.scatter(x,... | no_license | /Data-Visualization/ScatterPlot.ipynb | Fengql95/Data-Analysis | 1 |
<jupyter_start><jupyter_text># Tutorial for Small FOV Instruments
In this tutorial we combine our skymaps with galaxy catalogs to get a list of galaxies for individual pointings. A note is made that this is only possible with 3D skymaps, which are provided for compact binary merger candidate events.We begin by importin... | non_permissive | /skymaps/Tutorial_for_Small_FOV_Instruments.ipynb | battyone/odw-2018 | 9 |
<jupyter_start><jupyter_text># Physics 91SI: Lab 3
Part 1
------<jupyter_code># Don't edit this function
def load_sample():
"""Return the entire text of Hamlet in a string."""
with open('hamlet.txt') as f:
sample = f.read()
return sample
# Edit this function. "pass" tells Python to do nothing.
def ... | no_license | /lab3.ipynb | physics91si/lab03-Matias-A | 3 |
<jupyter_start><jupyter_text># INF702 - Text Mining in Social Media
/xlsx ( folder)
Ontology_PreSNA.xlsx
ML_Super.xlsx
ML_Unsuper.xlsx
Emerging Technologies.xlsx<jupyter_code>import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import pandas as pd # Importing Pandas
import os
outfolder = 'xlsx'
if(... | no_license | /[Survey]SummaryTable.ipynb | fairhs77/WeeklyTasks | 1 |
<jupyter_start><jupyter_text># Capstone Project
This notebook will mainly be used for the Capstone Project in the IBM Data Science Specialization Course on Coursera.<jupyter_code>import pandas as pd
import numpy as np
print('Hello Capstone Project Course!')<jupyter_output>Hello Capstone Project Course!
| no_license | /capstone.ipynb | lmarien/Coursera_Capstone | 1 |
<jupyter_start><jupyter_text># Read in USV data for all 3 Saildrones
- calculate density and wind speed
- calculate distance between successive obs
- calculate total cumulative distance
- switch from time to cumulative distance as index
- interpolate data onto grid
<jupyter_code>
ds=[]
for iusv in range(3):
fname=s... | permissive | /atomic/Calculate_density_plot.ipynb | agonmer/Saildrone | 5 |
<jupyter_start><jupyter_text># Growing Decision Trees via ID3/C4.5 & Greedy Learning
A from-scratch implementation of a growing decision trees ML algorithm for quick classification. We use entropy / 'information gain' as the metric for generating the decision rules. This notebook is a step-by-step walkthrough, with mo... | no_license | /Growing Decision Trees Implementation.ipynb | Kodyak/Growing-Decision-Trees | 16 |
<jupyter_start><jupyter_text>## INF1608 - Numerical Analysis - 2017.1
## Department of Informatics - PUC-Rio
## Prof. Hélio Lopes - lopes@inf.puc-rio.br
## http://www.inf.puc-rio.br/~lopes
# Solved exercises
R1) Write a function that checks whether an n x n matrix A is strictly diagonally dominant:
... | no_license | /Exercicios lista metodos iterativos- Rafael.ipynb | rafarubim/analise-numerica | 4 |
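Exercise R1 asks for a strict diagonal dominance check: every diagonal entry must exceed, in absolute value, the sum of the absolute values of the other entries in its row. A sketch (the function name is mine, not the course's):

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """|a_ii| > sum over j != i of |a_ij| must hold for every row i."""
    A = np.asarray(A, dtype=float)
    diag = np.abs(np.diag(A))
    off_diag_sums = np.abs(A).sum(axis=1) - diag
    return bool(np.all(diag > off_diag_sums))

print(is_strictly_diagonally_dominant([[4, 1, 1], [1, 5, 2], [0, 1, 3]]))  # True
print(is_strictly_diagonally_dominant([[1, 2], [3, 4]]))                   # False
```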
<jupyter_start><jupyter_text># Sample Jupyter Notebook to call COVID data provider
<jupyter_code>func_url = 'http://localhost:7071/api/covidata'
import pandas as pd
from ipywidgets import Image
import requests
df = pd.read_csv(func_url+"?country=US")
df.head()
Image(value=requests.get(func_url+"?country=US&output=plo... | no_license | /notebooks/sample.ipynb | shwars/e2e_covid | 1 |
<jupyter_start><jupyter_text>## Image pyramids
Take a look at how downsampling with image pyramids works.
First, we'll read in an image then construct and display a few layers of an image pyramid.<jupyter_code>import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
image... | permissive | /1_4_Feature_Vectors/1. Image Pyramids.ipynb | svedagiriml/CVND_Exercises | 1 |
<jupyter_start><jupyter_text># Lesson 1: Introduction to Deep Learning with PyTorch
- [@AlfredoCanziani](https://twitter.com/alfredocanziani)
- [@GokuMohandas](https://twitter.com/GokuMohandas)### Create the data<jupyter_code>import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras ... | no_license | /HW1/Pytorch and Tensorflow/lesson1_Tensorflow.ipynb | tunghoangt/DeepNeuralNet | 5 |
<jupyter_start><jupyter_text># Preparing data in HDFS<jupyter_code>!ls
!hdfs dfs -ls /
!hdfs dfs -mkdir /semantix
!hdfs dfs -ls /
!ls -lha
!hdfs dfs -put NASA* /semantix
!hdfs dfs -ls /semantix<jupyter_output>Found 2 items
-rw-r--r-- 1 root hdfs 167813770 2019-08-17 20:45 /semantix/NASA_access_log_Aug95
-rw-r--r... | no_license | /ProcessoSemantix.ipynb | atworkdante/semantix | 11 |
<jupyter_start><jupyter_text># Tensor Transformations<jupyter_code>from __future__ import print_function
import torch
import numpy as np
from datetime import date
date.today()
torch.__version__
np.__version__<jupyter_output><empty_output><jupyter_text>NOTE on notation
* _x, _y, _z, ...: NumPy 0-d or 1-d arrays
* _X, _Y... | permissive | /0-libraries/pytorch-exercises/Chapter1_Tensors/1-tensor-transformations.ipynb | Christomesh/deep-learning-implementations | 23 |
<jupyter_start><jupyter_text>##### Copyright 2019 The TensorFlow Authors.<jupyter_code>#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# ... | no_license | /TF-Iris-rasp3b/beginner.ipynb | Neelanjan-Goswami/AWS-Sagemaker_-_AWS-IoT-Analytics | 11 |
<jupyter_start><jupyter_text>### Basics
The basics of Python Programming
* #### Printing Stuff<jupyter_code>print("Hello, World!")<jupyter_output>Hello, World!
<jupyter_text>* #### String substitution<jupyter_code>print("Hello, {0}, have a good {1}.".format("Saurav", "day"))
print("Hello, {name}, have a good {time}.".for... | no_license | /CorePython/00_Basics.ipynb | uniquerockrz/learning-python | 11 |
<jupyter_start><jupyter_text>## 1. Import and observe dataset
We all love watching movies! There are some movies we like, some we don't. Most people have a preference for movies of a similar genre. Some of us love watching action movies, while some of us like watching horror. Some of us like watching movies that have n... | no_license | /Projects/Find Movie Similarity from Plot Summaries/Movie_Prediction.ipynb | Gttz/DataCamp_Courses | 12 |
<jupyter_start><jupyter_text>#1. Preprocess your data so that you can feed it into ANN models.
#2. Split your data into training and test sets.<jupyter_code>input_dim = 784 # 28*28
output_dim = nb_classes = 10
batch_size = 128
nb_epoch = 20
X_train = X_train.reshape(60000, input_dim)
X_test = X_test.reshape(10000, i... | no_license | /deep-learning-challenge/deep_learning_challenge.ipynb | ADEnnaco/thinkful-coursework | 11 |
<jupyter_start><jupyter_text># Newly made<jupyter_code>data = []
label2 = []
label3 = []
for i in range(1,25):
a = np.load('/content/drive/My Drive/Newdata/ordered_by_patient/150_6_116_116_' + str(i) +'_flipped.npy')
c = np.load('/content/drive/My Drive/Newdata/ordered_by_patient/two_labels/' + str(i) + '.npy') #两个label
... | no_license | /colab/Colab Notebooks/30到6_Matrix_run_file.ipynb | jackxuxu/Thesis | 2 |
<jupyter_start><jupyter_text>## Challenge
* The Inter-American Development Bank is asking the Kaggle community for help with income qualification for some of the world's poorest families. Are you up for the challenge?
* Here's the backstory: Many social programs have a hard time making sure the right people are given ... | no_license | /datasets/costa-rican-household-poverty-prediction/kernels/0.12---nikitpatel---keras-deeplearning-classification.ipynb | mindis/GDS | 1 |
<jupyter_start><jupyter_text># 1) Derive the descriptive statistics of the data and discuss the points you find remarkable.<jupyter_code>usEducation.head()
fill_list = ["enroll", "total_revenue", "federal_revenue",
"state_revenue", "federal_revenue", "total_expenditure",
"instruction_expendi... | no_license | /explorationPhase1.ipynb | mysticalScientist23/HR-Employee-Attrition | 5 |
<jupyter_start><jupyter_text><jupyter_code>R = 6371.0  # Earth radius in km; multiply by 1000 for metres
atan = math.atan
sin = math.sin
cos = math.cos
pi = math.pi
answer = 4
distancelist = []
for i in range(0,answer):
if i+1 == (answer):
lon1 = latlist[answer-1]
lon2 = latlist[0]
... | no_license | /Section 2 - Optimising Wind Farm Spacing Rosslare Harbour to Kilmore Quay.ipynb | 16431496/optimisation_code | 2 |
<jupyter_start><jupyter_text>## Task 4: **Healthcare Sector Analysis**
Discovering facts from data in the healthcare sector<jupyter_code>import os
import re
import sys
import json
import nltk
import pandas as pd
import numpy as np
from scipy.stats import norm, ttest_ind
from collections import defaultdict
import matplotli... | no_license | /healthcare_sector_analysis.ipynb | rsk2327/PDSG_PayInequality | 16 |
<jupyter_start><jupyter_text><jupyter_code>from google.colab import drive
drive.mount('/content/drive')
from google.colab.patches import cv2_imshow
import cv2
import numpy as np
import os
import math
#drive/MyDrive/new/yolo/
# Load Yolo
#drive/MyDrive/new/yolo/
net = cv2.dnn.readNet("drive/MyDrive/new/yolo/yolov3.wei... | no_license | /YoloVedioCap.ipynb | brkuhgk/yolo | 1 |
<jupyter_start><jupyter_text># Assignment 01
Made by: Neeraj Kumar
Shift: 03:30 P.M. to 06:30 P.M.<jupyter_code># \newline: backslash and newline are ignored
print("Line1 \n Line2 \n Line3") #Example
print("\\") # Backslash (\\) is used to print a single backslash; to print four backslashes we must use eight
#bac... | no_license | /Assignment 01 Sunday 03 30 to 06 30.ipynb | neeraj72/new | 4 |
<jupyter_start><jupyter_text>## ReLU<jupyter_code>model_metadata_file_path = pathlib.Path("..", training_logs["relu"], "metadata.json")
model_metadata = io.read_metadata(str(model_metadata_file_path))
model_metadata
history = model_metadata["history"]
metrics_df = visualize.create_metrics_dataframe(history)
loss_plot =... | permissive | /notebooks/4.1-agj-evaluate-Tomato.ipynb | bitjockey42/ikapati-research | 2 |
<jupyter_start><jupyter_text># Gradient Descent: Step Sizes - Lab
## Introduction
In this lab, we'll practice applying gradient descent. As we know, gradient descent begins with an initial regression line, and moves to a "best fit" regression line by changing values of $m$ and $b$ and evaluating the RSS. So far, we have... | non_permissive | /index.ipynb | Patrickbfuller/dsc-2-14-12-gradient-descent-step-sizes-lab-seattle-ds-career-040119 | 12 |
<jupyter_start><jupyter_text># The Sparks Foundation
## #GRIPJUNE21
## Submitted by MOHD AUSAAF NASIR
Data Science and Business Analytics Tasks
Task 1 - Prediction using Supervised ML
Step 1 - Import the required libraries<jupyter_code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn ... | no_license | /Task 1.ipynb | AusaafNasir/Task-1---The-Sparks-Foundation | 11 |
<jupyter_start><jupyter_text># Object Oriented Programming
## Homework Assignment
#### Problem 1
Fill in the Line class methods to accept coordinates as a pair of tuples and return the slope and distance of the line.<jupyter_code>class Line:
def __init__(self,coor1,coor2):
self.coor1 = coor1
s... | no_license | /05-Object Oriented Programming/02-Object Oriented Programming Homework.ipynb | jackwmarkham/pythonbootcamp | 2 |
<jupyter_start><jupyter_text># Yelp Project
We will be mining Yelp data today to find zip codes that have the highest rated restaurants for different categories of food. We will be using the excellent Yelp Fusion API. An API (application programming interface) is a way for developers to directly interact with a server.... | no_license | /Yelp API Project.ipynb | fh4/Intro-Python-Programming | 9 |
<jupyter_start><jupyter_text>#### Merging dataframes <jupyter_code>daily2 = pd.read_csv('/Users/plarkin/Documents/GA/projects/p5/tanveer/Data/oc_daily.csv')
daily2.head(1)
# to load file
daily = pd.read_csv('/Users/plarkin/Documents/GA/projects/p5/tanveer/clean_daily_oc.csv')
daily.head(1)
daily.shape,daily2.shape
da... | no_license | /classification_with_text/RNN & LSTM Modeling-Only Sentiment.ipynb | plarkin13/stock_prediction_sentiment | 7 |
<jupyter_start><jupyter_text># Predicting eye tracking features
Eye-tracking data from reading represent an important resource for both linguistics and natural language processing. The ability to accurately model gaze features is crucial to advance our understanding of language processing. On one hand, it can tell us ... | no_license | /code/tutorial1/esslli_tutorial1.ipynb | yancong222/ESSLLI2021 | 11 |
<jupyter_start><jupyter_text>### Control Statements
- Conditional Statements
- if-else
- Iterative Statements
- for
- while
- do-while<jupyter_code>year = int(input("Enter a year "))
if year % 400 == 0 or (year % 100 != 0 and year % 4 == 0):
print("Leap Year")
else:
print("Non-Leap Year")
n=int(input("Ent... | no_license | /Workshop_File--Day 2.ipynb | Amogh19/Problem-Solving-and-Programming | 3 |
<jupyter_start><jupyter_text># Exercise 2: Creating Redshift Cluster using the AWS python SDK
## An example of Infrastructure-as-code<jupyter_code>import pandas as pd
import boto3
import json<jupyter_output><empty_output><jupyter_text># STEP 0: Make sure you have an AWS secret and access key
- Create a new IAM user i... | no_license | /cloud_data_warehouses/.ipynb_checkpoints/L3 Exercise 2-checkpoint.ipynb | aaharrison/udacity-data-eng | 11 |
<jupyter_start><jupyter_text># 50 Years of Music Trends
## Objective
* Analyze lyrics from billboard top 100 songs over 50 years to identify trends
* Statement: Have the sentiments of popular lyrics changed over time?
## Hypothesis
* Ha = the sentiments of popular lyrics have become more negative over time
* Ho = no ch... | no_license | /projects/Lyrics-Analyzer/Output/Test Samples/JB_Sample/.ipynb_checkpoints/Project_Notebook_FINAL_JB-checkpoint.ipynb | jeffreybox/GTATL201805DATA3 | 1 |
<jupyter_start><jupyter_text>### Test-train split<jupyter_code>feature_columns = 'average_stars review_count compliment_cool compliment_cute \
compliment_funny compliment_hot compliment_list compliment_more \
compliment_note compliment_photos compliment_plain cool fans \
funny rev_length rev_stars rev_u... | no_license | /Capstone_project_fake_reviewers/Machine_learning/Naivebayes_extratree.ipynb | tannisthamaiti/Fake-reviewers-in-Yelp | 3 |
<jupyter_start><jupyter_text>---
title: "Point-in-time joins with PySpark"
date: 2021-09-09
type: technical_note
draft: false
---
# Point-in-Time (PIT) joins in Hopsworks Feature Store
In order to create a training dataset, data scientists usually have to generate information about the future by putting themselves back... | no_license | /notebooks/featurestore/hsfs/time_travel/point_in_time_join_python.ipynb | robzor92/hops-examples | 16 |
<jupyter_start><jupyter_text># Example: Taxi fare prediction
https://www.kaggle.com/c/new-york-city-taxi-fare-prediction
# [Homework objective]
- Following the example, practice processing the time columns of the taxi fare prediction competition
# [Homework key points]
- Add day of week and week of year as two new features and observe their effect (In[4], Out[4], In[5], Out[5])
- Add yearly-cycle and weekly-cycle features (see the course material) and observe their effect (In[8], Out[8], In[9], Out[9])<jupyter_code># After finishing feature engineering... | no_license | /answers/Day_025_DayTime_Features_Ans.ipynb | Julian-Chu/2nd-ML100Days | 3 |