| content | license | path | repo_name | chain_length |
|---|---|---|---|---|
<jupyter_start><jupyter_text># checking to make sure the data is indeed different for day/night<jupyter_code>plt.hist(day_argmax.flatten() - night_argmax.flatten(), bins=100);
plt.show()
plt.imshow(day_argmax - night_argmax);
np.mean(day_argmax.flatten() - night_argmax.flatten())<jupyter_output><empty_output><jupyter_t... | no_license | /notebooks/fire_season_length-scipy-aggregate.ipynb | joemcglinchy/night_fire | 12 |
<jupyter_start><jupyter_text>Tuples<jupyter_code>tup = (4,5,6)
tup
tuple([1,2,3])
a,b,c = tup
print(a)
seq = [(1,2,3),(4,5,6),(7,8,9)]
for a,b,c in seq:
print('a={0},b={1},c={2}'.format(a,b,c))
a = range(10)
a = list(a)
a.append(10)
a.insert(2,100)
a
a.pop(1)
a
a = [1,2,5,5,8,7,6,3,1,1]
a.sort()
a
import bisect
c = [1,... | no_license | /Mouse_3.ipynb | Mizaoz/test | 1 |
<jupyter_start><jupyter_text>print(results[0])<jupyter_code>import os
from os import path
import matplotlib.pyplot as plt
from wordcloud import WordCloud,STOPWORDS
text=open('Twitter.txt',"r").read()  # generate() expects a string, not a file object
wordcloud = WordCloud().generate(text)
wordcloud = WordCloud(font_path='ARegular.ttf',background_color='white',mode='RGB',width... | no_license | /NLP-Twitter.ipynb | jaypatel333/ML | 1 |
<jupyter_start><jupyter_text># The Workflows of Data-centric AI for Classification with Noisy Labels
In this tutorial, you will learn how to easily incorporate [cleanlab](https://github.com/cleanlab/cleanlab) into your ML development workflows to:
- Automatically find label issues lurking in your classification data.... | non_permissive | /docs/source/tutorials/indepth_overview.ipynb | cgnorthcutt/cleanlab | 21 |
<jupyter_start><jupyter_text># Setting up environment:<jupyter_code>pip install k-means-constrained
from k_means_constrained import KMeansConstrained
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd<jupyter_output><empty_output><jupyter_text># Sample 1:<jupyter_code>X = np.ar... | no_license | /STS_0_4.ipynb | econdavidzh/Optimizing_school_routes_with_Machine_Learning | 4 |
<jupyter_start><jupyter_text># Initialization
Welcome to the first assignment of "Improving Deep Neural Networks".
Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning.
If you completed the previous course of this specialization, ... | no_license | /Neural-Networks-and-Deep-Learning-Assignments/Course 2/Initialization.ipynb | ravi1-7/sally-recruitment | 10 |
<jupyter_start><jupyter_text># Multiclass Support Vector Machine exercise
*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assig... | permissive | /assignment1/svm.ipynb | budmitr/cs231n | 6 |
<jupyter_start><jupyter_text># Data Science Career Guide Probability and Statistics
#### Notebook I am using to follow along with Jose's course and practice LaTeX<jupyter_code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import scipy.stats as scs
import seaborn as sns<jupyte... | no_license | /.ipynb_checkpoints/Data-Science-Career-Guide-Prob-Stats-Exper-ED-checkpoint.ipynb | edeane/ds-interview-prep | 7 |
<jupyter_start><jupyter_text># Sequence Topics (1): Variable Assignment and Input/Output## I. Free Practice**1. Variable Assignment**
Use a variable to store data: the data is placed at some location in the computer's memory, and that location is given a good name.**2. Variables: Three, Two, One**
One special symbol (the equals sign: name = data), two ways of computing (expressions and functions), three data types (text, numbers, and booleans). (1) A name alone is not enough; there must be an equals sign and data. The name goes on the left, the data on the right; an expression on the right also works.<jupyter_code># Practice
Sum = 0
A = 1
B = 2 # The left side of "=" must be a name; it cannot be a constant.
A + B = Sum # The left side of "=" cannot be an expression.
A, B = 1, 2 # This form works; it is Pythoni... | no_license | /謝欣汝_Week5_練習作業.ipynb | a109010169/a109010169 | 6 |
<jupyter_start><jupyter_text># Question 1
* Define Python functions for the two functions $e^{x}$ and $\cos(\cos(x))$ which return a vector (or scalar) value.
* Plot the functions over the interval [−2$\pi$,$4\pi$).
* Discuss periodicity of both functions
* Plot the expected functions from Fourier series<jupyter_cod... | no_license | /Assign3/.ipynb_checkpoints/ass3q1-checkpoint.ipynb | Rohithram/EE2703_Applied_Programming_in_python | 6 |
<jupyter_start><jupyter_text># Exploratory Data Analysis on stock data from 5 companies.
IBM
GE
Procter & Gamble
Coca Cola
Boeing<jupyter_code>%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import datetime
ibm = pd.read_csv('D:/DS/analytics_edge/IBMS... | permissive | /1. EDA/3. Stock Dynamics.ipynb | atheesh1998/analytics-edge | 6 |
<jupyter_start><jupyter_text># HyperParameter Tuning### `keras.wrappers.scikit_learn`
Example adapted from: [https://github.com/fchollet/keras/blob/master/examples/mnist_sklearn_wrapper.py]()## Problem:
Builds simple CNN models on MNIST and uses sklearn's GridSearchCV to find the best model<jupyter_code>import numpy as n... | no_license | /05_keras/keras_lab_2/04 - HyperParameter Tuning.ipynb | ginevracoal/statisticalMachineLearning | 4 |
<jupyter_start><jupyter_text># K-Means Clustering## Importing the libraries<jupyter_code>import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
input_path = "/Users/sayarsamanta/Documents/GitHub/Data-Science-Projects/Clustering/KNN/Data/"<jupyter_output><empty_output><jupyter_text>## Importing the datas... | no_license | /Clustering/KNN/.ipynb_checkpoints/k_means_clustering-checkpoint.ipynb | sayarsamanta/Data-Science-Projects | 5 |
<jupyter_start><jupyter_text># Data Munging## Relational Data
The simplest type of data we have seen might consist of a single table with some columns and some rows. This sort of data is easy to analyze and compute on, and we generally want to reduce our data to a single table before we start running machine learning algori... | no_license | /home/datacourse/data-wrangling/DS_Data_Munging.ipynb | Tobiadefami/WQU-Data-Science-Module-1 | 22 |
<jupyter_start><jupyter_text># Handling Missing Values - AssignmentIn this exercise, you'll apply what you learned in the **Handling missing values** tutorial.
# Setup
The questions below will give you feedback on your work. Run the following cell to set up the feedback system.<jupyter_code>from learntools.core impor... | no_license | /Handling Missing Values.ipynb | amdonatusprince/Handling-Missing-Values---Assignment | 10 |
<jupyter_start><jupyter_text># Import modules<jupyter_code>%matplotlib
import matplotlib.pyplot as plt
import numpy as np
from scipy.linalg import sqrtm, block_diag
from math import pi
from qutip import *
import sympy
import functools, operator
from collections import OrderedDict
import itertools<jupyter_output>Using ... | no_license | /Two-qubit with CR.ipynb | jaseung/python-code-SYR | 17 |
<jupyter_start><jupyter_text>## 1. Google Play Store apps and reviews
Mobile apps are everywhere. They are easy to create and can be lucrative. Because of these two factors, more and more apps are being developed. In this notebook, we will do a comprehensive analysis of the Android app market by comparing over ten thou... | no_license | /notebook.ipynb | nitanitapsari/The-Android-App-Market-on-Google-Play | 9 |
<jupyter_start><jupyter_text># 3 Ways to Program a Neural Network - DOTCSV
## Initial code<jupyter_code>import numpy as np
import scipy as sc
import matplotlib.pyplot as plt
from sklearn.datasets import make_circles
# We create our artificial data, where we will try to classify
# two concentric ri... | no_license | /Notebooks IA/3_Maneras_de_Programar_a_una_Red_Neuronal_DotCSV.ipynb | miguelmontcerv/Artificial-Intelligence | 4 |
<jupyter_start><jupyter_text># How to make a plot with legend entries that are hyperlinks
* See https://github.com/matplotlib/matplotlib/issues/25567
* Works with SVG and PDF<jupyter_code>from matplotlib import pyplot as plt
import numpy as np
# generate SVG images instead of PNG
%config InlineBackend.figure_formats ... | permissive | /plots_with_hyperlinks.ipynb | HDembinski/essays | 1 |
<jupyter_start><jupyter_text># Programming Exercise 4: Neural Networks Learning
## Introduction
In this exercise, you will implement the backpropagation algorithm for neural networks and apply it to the task of hand-written digit recognition. Before starting on the programming exercise, we strongly recommend watchin... | no_license | /Exercise4/exercise4.ipynb | arpit1012/COURSERA-MACHINE-LEARNING-PYTHON-IMPLEMENTATION | 20 |
<jupyter_start><jupyter_text># New plots in preparation for paper* There appears to be no correlation between the radio properties of our sample and eddington ratio in any sense
* Radio flux/luminosity is roughly correlated with bolometric luminosity and absolute magnitudes $M_i(z=2)$
* Loosely speaking, the median b... | no_license | /Notebooks/Random_DataAnalysis/OpticalvsRadio_PreliminaryDataAnalysis.ipynb | RichardsGroup/VLA2018b | 7 |
<jupyter_start><jupyter_text># Working with GitHub
Let's learn how Google Colaboratory integrates with GitHub.## ●What is GitHub?
GitHub is now an indispensable service for developers.
"Git" is a "version control system" widely used in settings such as software development.
GitHub is the name of a web service that uses Git's mechanism to let people all over the world share and publish their own products.
Repositories (something like storehouses) created on GitHub are public to everyone on the free plan, while on a paid plan only designated user... | no_license | /github.ipynb | yuichinambu/Colab_Sample | 1 |
<jupyter_start><jupyter_text># Dynamic WebPage
---<jupyter_code>import requests
from bs4 import BeautifulSoup
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
url = "https://play.google.com/store/movies/top"
headers = {'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:79.0) Gecko/... | no_license | /15_selenium_movie.ipynb | yangguda/Web_Crawling | 1 |
<jupyter_start><jupyter_text>
# Exploratory Data Analysis on Superstore Data### Objectives:-
#### 1) To Perform Exploratory Data Analysis
#### 2) Find out business problems
#### 3) Identify key areas for improving profits.### 1. Importing required packages and dataset<jupyter_code>i... | no_license | /Task5/EDA on SuperStore Data.ipynb | harshit9665/The-Spark-Foundation-GRIP | 26 |
<jupyter_start><jupyter_text>## Lesson-01 Assignment#### Today is January 5, 2020, and the world has gained one more AI engineer today :) `Hello everyone, and welcome to our artificial intelligence course. This course assumes no background in machine learning or artificial intelligence, but we do expect beginner-level Python programming skills. Based on actual feedback from previous students, by the end of this course your abilities can exceed those of 80% of master's students in computer AI/deep learning.`## Contents of this assignment#### 1. Reproduce the classroom code
In this part, you need to reproduce the material by following the classroom code at the GitHub address we provided, combined with the lecture content.#### 2. Assignment deadline
The deadline for this assignment is 2020.01... | no_license | /Assignment_01_Build_Sentence_Generation_System_Using_Syntax_Tree_and_Language_Model.ipynb | samisgood968/NLP | 15 |
<jupyter_start><jupyter_text># Explore Your Environment## Get Latest Code<jupyter_code>%%bash
pull_force_overwrite_local<jupyter_output><empty_output><jupyter_text>## Helper Scripts### Find Script from Anywhere<jupyter_code>!which pull_force_overwrite_local<jupyter_output><empty_output><jupyter_text>### List `/scripts... | permissive | /gpu.ml/notebooks/01_Explore_Environment.ipynb | BrentDorsey/pipeline | 7 |
<jupyter_start><jupyter_text># Quarterback QBR Rank<jupyter_code>import pandas as pd
qbs = pd.read_csv('../Capstone_csv_file/qbs_stats_rank_19-20', index_col = 'NAME')
qbs.head()
qbs.rename(columns=lambda x: x.strip(), inplace = True)
qbs = qbs[['QBR', 'QBR_rank']].copy()
qbs = qbs.sort_values('QBR_rank')
qbs.to_csv('.... | no_license | /Capstone_EDA_Quarterbacks/Capstone_qbs_qbr_rank_19-20.ipynb | ChrisSCorliss/Capstone | 1 |
<jupyter_start><jupyter_text>This project is for Decision Trees
We will use the Pima Indian database<jupyter_code>import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClas... | no_license | /Untitled33.ipynb | oops-git/aiml | 1 |
<jupyter_start><jupyter_text># Newton method with double rootConsider the function
$$
f(x) = (x-1)^2 \sin(x)
$$
for which $x=1$ is a double root.<jupyter_code>%matplotlib inline
%config InlineBackend.figure_format = 'svg'
from numpy import sin,cos,linspace,zeros,abs
from matplotlib.pyplot import plot,xlabel,ylabel,grid... | no_license | /root_finding/newton2.ipynb | animesh1995/na | 3 |
<jupyter_start><jupyter_text># ------------------------------------PLOTS------------------------------------# LC Description<jupyter_code>plot.figure(1)
plot.subplots_adjust(hspace=0.14)
p1=plot.subplot(321)
p1.set_title('GALFORM$_{r<24.8}$')
graph.Density(LC1[LC1_abs[0]],'galaxies per $mag\cdot z$',ylabel='u, absolute... | no_license | /Plots.ipynb | MBravoS/MILL_Codes | 11 |
<jupyter_start><jupyter_text># Hierarchical A* path finding algorithm
Created by - Sanjana Tule
Date - 19/08/2021
* **Implement weighted risk factor**.
Give higher weight to the risk factor than to length, since length = 10 with risk = 2 and length = 2 with risk = 10 should not be treated the same.
Weighted Risk Factor = L... | no_license | /3_2_hierarchical_pathfinding_part_2.ipynb | santule/oomdena_earthquake | 5 |
<jupyter_start><jupyter_text>Still need to do
- figure out how to take out the random little low points for when stations were downloaded
- expand the code to loop across all 7 stations, (wooh!)
- Also run with both baros, and have a nice little toggle switch somewhere to use 1 or 2
- Eventually use those graphs to ... | no_license | /Python/Scripts/.ipynb_checkpoints/exploratory_stream_script_ver3-checkpoint.ipynb | cshuler/Samoa_ASPA_UH_Stream_data_process | 3 |
<jupyter_start><jupyter_text>### Get stations data<jupyter_code>stations = pd.read_json('./bkk-stations.json')
bkk_lat, bkk_lng = stations['lat'].mean(), stations['lng'].mean()
import folium
map_bkk = folium.Map(location=[bkk_lat, bkk_lng], zoom_start=12)
for lat, lng, name in zip(stations['lat'], stations['lng'], st... | no_license | /bangkok.ipynb | Woracheth/Coursera_Capstone | 7 |
<jupyter_start><jupyter_text># Polynomial Regression
Simple regression, covered earlier, is an algorithm that explains the relationship between two variables as a straight line.
Using a polynomial function, a more complex curved regression line can be expressed.<jupyter_code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.linear_model import L... | permissive | /AI_Class/003/Polynomial_regression.ipynb | CodmingOut/AI_Mentoring | 8 |
<jupyter_start><jupyter_text>## Deliverable 3. Create a Travel Itinerary Map.<jupyter_code># Dependencies and Setup
import pandas as pd
import requests
import gmaps
import numpy as np
import gmaps.datasets
# Import API key
from config import g_key
# Configure gmaps
gmaps.configure(api_key=g_key)
# 1. Read the Weather... | no_license | /Vacation_Itinerary/Vacation_Itinerary.ipynb | rtippana1/World_Weather_Analysis | 1 |
<jupyter_start><jupyter_text>## Python Machine Learning
# Logistic Regression- Despite its name, logistic regression is a **classification** algorithm.
- Logistic regression separates the classes with a **line** or a **plane**.<jupyter_code>import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
iris = load_iris()<jupyter_output><empty_output><jup... | no_license | /머신러닝/02_지도학습_03_로지스틱회귀.ipynb | gubosd/lecture13 | 24 |
<jupyter_start><jupyter_text># Population Segmentation with SageMaker
In this notebook, you'll employ two, unsupervised learning algorithms to do **population segmentation**. Population segmentation aims to find natural groupings in population data that reveal some feature-level similarities between different regions ... | permissive | /Population_Segmentation/Pop_Segmentation_Exercise.ipynb | xwilchen/ML_SageMaker_Studies | 42 |
<jupyter_start><jupyter_text>Michael Siripongpibul
CAP4630<jupyter_code>import random
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
%tensorflow_version 2.x
from tensorflow.keras import models
from tensorflow.keras import layers
import tensorflow as tf
<j... | no_license | /HW_3/HW_3.ipynb | Michael-Siri/AI-CAP4630 | 11 |
<jupyter_start><jupyter_text># Decision boundaries<jupyter_code>def plot_decision_bundaries(model, x, h=0.1, cmap='BrBG', torch_model=True, target_class=0):
x1_min, x1_max = x[:, 0].min() - 1, x[:, 0].max() + 1
x2_min, x2_max = x[:, 1].min() - 1, x[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1... | non_permissive | /examples/api_examples/example_pruning_01_xor.ipynb | pietrobarbiero/logic_explainer_networks | 4 |
<jupyter_start><jupyter_text># Data Science Academy - Python Fundamentos - Capítulo 2
## Download: http://github.com/dsacademybr## Strings### Creating a String
To create a string in Python you can use single or double quotes. For example:<jupyter_code># A single word
'Oi'
# A sentence
'Criando uma string em ... | no_license | /Cap02/Notebooks/DSA-Python-Cap02-03-Strings.ipynb | dudolbh/PythonAnaliseDados | 9 |
<jupyter_start><jupyter_text># Data Analysis with Pandas
Today we introduce pandas, a package that has recently become very popular for data analysis, created by Wes McKinney. That Python has become a go-to language for data analysis owes a great deal to the appearance of pandas.
Although pandas is powerful, some parts of it are not that intuitive, which sometimes makes people think it is an arcane package. In fact, you can roughly think of pandas as "Excel for Python", but more powerful, more flexible, and with more possibilities.
The video below basically teaches pandas as if it were Excel; I believe everyone will find it approachable.
https://youtu.be/9d5-Ti6onew<jupyter_code... | no_license | /Unit02_02_Pandas數據分析.ipynb | SIOCHEONG/IMLP342 | 35 |
<jupyter_start><jupyter_text>### The imported libraries include rdkit for cheminformatics, plus some sklearn modules<jupyter_code>import numpy as np
import pandas as pd
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from rdkit.Chem import Draw
from rdkit.Chem import PandasTools
from rdkit.Chem.Draw import IPythonConsole
import matplotlib.pyplot a... | no_license | /classicclassifier.ipynb | SamanthaWangdl/MIT_AIcure_open_drug_task | 13 |
<jupyter_start><jupyter_text>The main goal of Monte Carlo methods applied to Bayesian statistics is
to build samples of the posterior probability density.
In the case of Metropolis-Hastings, the exploration is done by taking steps in parameter space;
the destination of each step (which can ... | no_license | /Repaso_Ejercicios/Ejercicio_7/Ejercicio_7.ipynb | JoseMontanaC/Metodos_Computacionales | 7 |
<jupyter_start><jupyter_text># moving average convergence/divergence (macd) crossover
<jupyter_code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
# load data
df_sp500 = pd.read_csv('sp500_20210219.csv') #, index_col='Date')
df_sp500.rename(columns={... | permissive | /moving_average_convergence-divergence_crossover.ipynb | andrewcchoi/moving-average-convergence-divergence-crossover | 1 |
<jupyter_start><jupyter_text>
# Plot different SVM classifiers in the iris dataset
Comparison of different linear SVM classifiers on a 2D projection of the iris
dataset. We only consider the first 2 features of this dataset:
- Sepal length
- Sepal width
This example shows how to plot the decision surface for four S... | no_license | /ai/sklearn/plot_iris_svc.ipynb | dudajiang/learnpython | 1 |
<jupyter_start><jupyter_text><jupyter_code>!pip install tensorflow-gpu==2.1.0
import os
os.kill(os.getpid(), 9)
from google.colab import drive
from absl import logging
import time
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import (
Add,
... | permissive | /yolov3_tiny_keras.ipynb | anspire/Notebooks | 1 |
<jupyter_start><jupyter_text>End To End Project<jupyter_code># Fetching the data and creating the directory
import os
import tarfile
from six.moves import urllib
DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml/master/"
HOUSING_PATH = "datasets/housing"
HOUSING_URL = DOWNLOAD_ROOT + HOUSING_PATH + "/housing.tgz... | no_license | /Template/Supervised+.ipynb | AlexandreDOMINH/DATA-SCIENCE | 1 |
<jupyter_start><jupyter_text>
# Contourf Hatching
Demo filled contour plots with hatched patterns.
<jupyter_code>import matplotlib.pyplot as plt
import numpy as np
# invent some numbers, turning the x and y arrays into simple
# 2d arrays, which make combining them together easier.
x = np.linspace(-3, 5, 150).reshape... | no_license | /3.1.1/_downloads/95c7fa66a2f3fff1a18165a1bf108519/contourf_hatching.ipynb | matplotlib/matplotlib.github.com | 4 |
<jupyter_start><jupyter_text># Frequencies of words in novels: a Data Science pipeline
Earlier this week, I did a Facebook Live Code along session. In it, we used some basic Natural Language Processing to plot the most frequently occurring words in the novel _Moby Dick_. In doing so, we also see the efficacy of thinki... | permissive | /NLP_datacamp/NLP_FB_live_coding_soln_verbose.ipynb | AllardQuek/Tutorials | 20 |
<jupyter_start><jupyter_text>## Seattle Terry Stops Final Project Submission
* Student name: Rebecca Mih
* Student pace: Part Time Online
* Scheduled project review date/time:
* Instructor name: James Irving
* Blog post URL:
* **Data Source:** https://www.kaggle.com/city-of-seattle/seattle-terry-stops
* Date... | permissive | /.ipynb_checkpoints/Terry Stops v7-checkpoint.ipynb | sn95033/Terry-Stops-Analysis | 23 |
<jupyter_start><jupyter_text># Problem 4: Counting Inversions<jupyter_code>num = int(input())
data = list(map(int,input().split()))
def s(x,y):
count = 0
for i in x:
for j in y:
if i>j:
count = count+1
return count
def w(x):
if len(x) == 2:
if x[0]>x[1]:
return 1
... | no_license | /APCS/.ipynb_checkpoints/201804-checkpoint.ipynb | leomiboy/level1_practice | 1 |
<jupyter_start><jupyter_text>In this assignment you are generating sample images of Simpsons with deep convolutional generative adversarial networks (DCGANs).
You need to do the following:
1- Read and understand this tutorial: https://towardsdatascience.com/image-generator-drawing-cartoons-with-generative-adversaria... | no_license | /Image Generator (DCGAN) Simpson _ A4.ipynb | mayankc7991/GAN-on-Simpsons | 1 |
<jupyter_start><jupyter_text>### TRAIN
The train set, containing the user ids and whether they have churned.
Churn is defined as whether the user did not continue the subscription within 30 days of expiration.
is_churn = 1 means churn,
is_churn = 0 means renewal.<jupyter_code>train_inpu... | no_license | /.ipynb_checkpoints/KKBox - EDA-checkpoint.ipynb | d18124313/dissertation | 7 |
<jupyter_start><jupyter_text>*********************************************************************
MAIN PROGRAM TO COMPUTE A DESIGN MATRIX TO INVERT FOR STRUCTURE --
Copyright (c) 2014-2023: HILARY R. MARTENS, LUIS RIVERA, MARK SIMONS
This file is part of LoadDef.
LoadDef is free software: you can redist... | non_permissive | /desmat/run_dm_structure.ipynb | hrmartens/LoadDef | 55 |
<jupyter_start><jupyter_text>### Open Flights Data Wrangling
To practice, you are going to wrangle data from OpenFlights. You can read about it here:
http://openflights.org/data.html
This includes three main files, one for each airport, one for each airline, and one for each route. They can be merged or joined wi... | no_license | /9-Data Wrangling/open-flights.ipynb | Kevin-Robert/ce599-s17 | 5 |
<jupyter_start><jupyter_text>## Gradient Descent Simulation<jupyter_code>import numpy as np
import matplotlib.pyplot as plt
plot_x = np.linspace(-1, 6, 141)
plot_x
plot_y = (plot_x - 2.5)**2 - 1
plt.plot(plot_x, plot_y)
plt.show()
def dJ(theta):
return 2*(theta - 2.5)
# Loss function
def J(theta):
return (theta - 2.5)**2 - 1
eta = 0.1
epsil... | no_license | /06-Gradient Descent/01-Gradient-Descent-Simulation.ipynb | NOVA-QY/ML-Python | 1 |
<jupyter_start><jupyter_text>## Study of Hierarchical Indexing
### Chapter 8, page 284
>#### Study based on the book 'Python para análise de dados'
> Contact
> * [Linkedin](www.linkedin.com/in/isweluiz)<jupyter_code>data = pd.Series(np.random.randn(9),
index=[['a', 'a', 'a','b','b'... | no_license | /indexação_hierarquica.ipynb | isweluiz/data-science | 11 |
<jupyter_start><jupyter_text>For a while longer I am just trying out a few things from the presentation<jupyter_code>pipeline = Pipeline([
('features', CountVectorizer()),
('clf', LinearSVC())
])
cross_val_score(pipeline, train.text, train.author, cv=3, n_jobs=3)
pipeline.fit(train.text, train.author)
count_vectorizer = ... | no_license | /Spooky Authors.ipynb | dimiturtrz/machine-learning-with-python | 12 |
<jupyter_start><jupyter_text># Class 5: Visualization with the seaborn library
Seaborn is a statistical visualization library built on top of matplotlib. This means it uses all of matplotlib's primitives to produce more attractive visualizations than those imp... | no_license | /3_analisis_exploratorio_y_estadistica/Clase-5.ipynb | bastianabaleiv/diplomado_udd_corfo_2020 | 45 |
<jupyter_start><jupyter_text># COMSW 4995 - Deep Learning Project
## Bingchen Liu, Binwei Xu, Hang Yang
## Columbia University### This is a two-branch MobileNet-Bidirectional LSTM model
The Mobilenet branch is largely based on Beluga's kernel https://www.kaggle.com/gaborfodor/greyscale-mobilenet-lb-0-892
The LSTM br... | no_license | /MobileNet_LSTM.ipynb | bingchen-liu/COMSW4995-Quickdraw-Kaggle-Challenge | 13 |
<jupyter_start><jupyter_text># Assignment: (Kaggle) Titanic Survival Prediction
https://www.kaggle.com/c/titanic# [Assignment goal]
- Try to imitate the example and observe the effect of mean encoding on the Titanic survival prediction# [Assignment focus]
- Following the example, complete label encoding and mean encoding paired with logistic regression prediction
- Observe how label encoding and mean encoding each affect the number of features / logistic regression score / logistic regression runtime (In[3], Out[3], In[4], Out[4]) # Assignment 1
* Following the example, implement mean encoding once for the categorical features in the Titanic example<jupyter_code># All preparation before feature engineering (same as the previous example)
i... | no_license | /homework/Day_023_HW.ipynb | yles9056/2nd-ML100Days | 2 |
<jupyter_start><jupyter_text>
# G-MODE KPFM with Fast Free Force Recovery (F3R)
### Oak Ridge National Laboratory
### *Liam Collins, Anugrah Saxena, Rama Vasudevan and Chris Smith*
#### Additional edits: *Rajiv Giridharagopal, University of Washington*
#### Contacts: collinslf@ornl.gov (primary author) and rgiri@uw.ed... | no_license | /Notebooks/G-Mode F3R-v2-Copy1.ipynb | rajgiriUW/GKPFM | 43 |
<jupyter_start><jupyter_text>Segmenting and Clustering Neighborhoods in Toronto_Part 2## Problem part 2:<jupyter_code>import pandas as pd
import requests
from bs4 import BeautifulSoup
List_url = "https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M"
source = requests.get(List_url).text
soup = BeautifulSoup(... | no_license | /My Neighborhood in Toronto-02.ipynb | innovish/Neighborhoods-in-Toronto | 1 |
<jupyter_start><jupyter_text>Given the dtypes, there is no possibility of negative values in the dataset. <jupyter_code>%matplotlib inline
import os
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.offsetbox import OffsetImage, AnnotationBbox
import numpy as np
from glob import glob
import matplotlib.c... | no_license | /.ipynb_checkpoints/EDA - Fashion Mnist-checkpoint.ipynb | azaelmsousa/LogisticRegression-ANN | 1 |
<jupyter_start><jupyter_text># LINEAR REGRESSION<jupyter_code>x = df['sqft_living'].values.reshape(-1,1)
y = df['price'].values
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size = 0.23, random_state= 19)
model= LinearRegression()
model.fit(x_train,y_train)
predicted = model.predict(x_test)
print("r:",metri... | no_license | /FIRST STEP INTO M.L (chekpoint).ipynb | amaterstu/Formation-AI | 3 |
<jupyter_start><jupyter_text>## 338. Counting Bits
Given a non negative integer number num. For every numbers i in the range 0 ≤ i ≤ num calculate the number of 1's in their binary representation and return them as an array.
Example 1:
```
Input: 2
Output: [0,1,1]
```
Example 2:
```
Input: 5
Output: [0,1,1,2,1,2]
```... | permissive | /leetcode/questions/338-CountingBits.ipynb | subramp-prep/pyLeetcode | 1 |
<jupyter_start><jupyter_text># Introduction
Maps allow us to transform data in a `DataFrame` or `Series` one value at a time for an entire column. However, often we want to group our data, and then do something specific to the group the data is in. We do this with the `groupby` operation.
In these exercises we'll appl... | no_license | /datasets/wine-reviews/kernels/X---ostaski---grouping-and-sorting.ipynb | mindis/GDS | 8 |
<jupyter_start><jupyter_text># enzymeBayes
Final project for Alp Kucukelbir's Machine Learning Probabilistic Programming (COMS6998) by Jiayu Zhang and Kiran Gauthier. ### Familiarizing ourselves with the data All data in this analysis has been graciously provided by Prof. Jennifer Ross, Mengqi Xu, and their collabora... | no_license | /final-project/final-notebook.ipynb | KiranGauthier/enzymeBayes | 9 |
<jupyter_start><jupyter_text># Now You Code 4: Shopping List
Write a simple shopping list program. Use a Python `list` as a shopping list. Function definitions have been written for you, so all you need to do is complete the code inside each function.
The main program loop has a menu allowing you to 1) add to the li... | no_license | /content/lessons/09/Now-You-Code/.ipynb_checkpoints/NYC4-ShoppingList-checkpoint.ipynb | Learn2Code-SummerSyr/2019learn2code-auramnar | 1 |
<jupyter_start><jupyter_text> Academic and Employability Factors influencing placementCampus placement or campus recruiting is a program conducted within universities or other educational institutions to provide jobs to students nearing completion of their studies. In this type of program, the educational institutions ... | no_license | /notebooks/yashvi/campus-recruitment-analysis.ipynb | Sayem-Mohammad-Imtiaz/kaggle-notebooks | 25 |
<jupyter_start><jupyter_text># MNIST Dataset Test
> Example from https://www.tensorflow.org/tutorials/quickstart/beginner## Preparation- Import the necessary packages<jupyter_code>from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import random as rdm
f... | non_permissive | /python/tf/mnist.ipynb | alvin-qh/study-ai | 25 |
<jupyter_start><jupyter_text># Fully-Connected Neural Nets
In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-la... | no_license | /assignment2/.ipynb_checkpoints/FullyConnectedNets-checkpoint.ipynb | willbryk720/cs231_solutions | 18 |
<jupyter_start><jupyter_text>Guided Activity 1 for Optimization Algorithms
David Boñar
https://colab.research.google.com/drive/12Uogb_wSXuY_pVLANr7NY6gxIXHkIvSr
https://github.com/davidbonar1/03MAIR_Algoritmos_de_Optimizacion_2019
Problem 1: Write a function that solves the Towers of Hanoi problem.... | no_license | /David_Boñar_AG1.ipynb | davidbonar1/03MAIR_Algoritmos_de_Optimizacion_2019 | 5 |
<jupyter_start><jupyter_text># Exit Survey Analysis
Organizations typically want to understand why their employees resign. This information is usually gathered by using exit surveys that resigning employees are asked to take.
In this project, we'll work with exit surveys from employees of the Department of Educat... | no_license | /exit_survey_analysis.ipynb | nrabang/DQ-exit-survey-analysis | 23 |
<jupyter_start><jupyter_text># Group 6 Mini Project 2
# 1. Term - Frequency Inverse Document Frequency
1) Remove Stopwords (1 Mark)
2) Remove the punctuation and special characters, and convert the text to lower case. (2 Marks)
3) create bigrams and trigrams for the entire dataset and list down 20 most frequ... | no_license | /Text Mining Course/Mini Project 1/GROUP6_MINIPROJECT2_UNSUPERVISEDLEARNING.ipynb | superchiku/MachineLearningCodeAssignments-BITS | 3 |
<jupyter_start><jupyter_text># Face Recognition
In this assignment, you will build a face recognition system. Many of the ideas presented here are from [FaceNet](https://arxiv.org/pdf/1503.03832.pdf). In lecture, we also talked about [DeepFace](https://research.fb.com/wp-content/uploads/2016/11/deepface-closing-the-ga... | no_license | /Convolutional Neural Networks/Week4/Face_Recognition_v3a.ipynb | vattikutiravi9/Deep-Learning-specialization | 10 |
<jupyter_start><jupyter_text># Tutorial: Bring your own data (Part 3 of 3)
## Introduction
In the previous [Tutorial: Train a model in the cloud](2.train-model.ipynb) article, the CIFAR10 data was downloaded using the builtin `torchvision.datasets.CIFAR10` method in the PyTorch API. However, in many cases you are goi... | permissive | /tutorials/getting-started/3.train-model-cloud-data.ipynb | luisquintanilla/azureml-examples | 2 |
<jupyter_start><jupyter_text># 1. Multi-layer Perceptron
### Train and evaluate a simple MLP on the Reuters newswire topic classification task.
This is a collection of documents that appeared on Reuters newswire in 1987. The documents were assembled and indexed with categories.
Dataset of 11,228 newswires from Reu... | permissive | /notebooks/1. Multi-Layer-Perceptron.ipynb | 3catz/DeepLearning-NLP | 1 |
<jupyter_start><jupyter_text># A study of the probability density function and cumulative distribution function of the standard Gaussian distribution. We examined how the PDF and CDF change as the variance is increased from 1.0 to 4.5 in steps of 0.5.
Here, the $erf$ that appears in the cumulative distribution function is
$erf(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2} dt$, known as the error function<jupyter_code>import numpy as np
from scipy.special import erf
import matplotlib.pyplot as plt
%matplotlib in... | no_license | /codes/code_ipynb/gausu_distribution.ipynb | kiwamizamurai/Lygometry | 3 |
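Since Python's math module ships erf, the CDF of a zero-mean Gaussian, Φ(x) = ½(1 + erf(x/(σ√2))), can be evaluated without scipy; a small sketch that also shows how a larger variance flattens the PDF:

```python
import math

def gauss_pdf(x, sigma=1.0):
    # density of a zero-mean Gaussian with standard deviation sigma
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def gauss_cdf(x, sigma=1.0):
    # Phi(x) = 0.5 * (1 + erf(x / (sigma * sqrt(2))))
    return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

# larger variance -> flatter peak and a more slowly rising CDF
for var in (1.0, 2.0, 4.5):
    s = math.sqrt(var)
    print(var, round(gauss_pdf(0.0, s), 4), round(gauss_cdf(1.0, s), 4))
```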
<jupyter_start><jupyter_text>Linear search<jupyter_code>pos = 0 #global variable for printing position
def search(my_list, n):
global pos
i = 0
    while i < len(my_list): #iterate through list to find element
if my_list[i] == n:
pos = i #updating position
return True
... | no_license | /basic_ML/algorithms.ipynb | rohan-dhere/Neosoft_assignment | 4 |
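For comparison, the same scan is usually written without a global position variable, returning the matching index directly (or -1 when the element is absent); a sketch:

```python
def linear_search(items, target):
    # scan left to right; return the first matching index, or -1 if absent
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

print(linear_search([4, 2, 7, 2], 7))  # -> 2
print(linear_search([4, 2, 7, 2], 9))  # -> -1
```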
<jupyter_start><jupyter_text># Data cleaning<jupyter_code>def remove_non_ascii(words):
"""Remove non-ASCII characters from list of tokenized words"""
new_words = []
for word in words:
new_word = unicodedata.normalize('NFKD', word).encode('ascii', 'ignore').decode('utf-8', 'ignore')
new_w... | no_license | /Proyecto1.ipynb | mg-torres/biProyecto1 | 12 |
<jupyter_start><jupyter_text># PROJECT "STATISTICAL DATA ANALYSIS"**Project description:**
You are an analyst at Megaline, a federal mobile carrier. Customers are offered two calling plans, Smart and Ultra. To adjust the advertising budget, the commercial department wants to understand which plan bring... | no_license | /Определение перспективного тарифа для телеком компании.ipynb | ginger-boy/my_project | 31 |
<jupyter_start><jupyter_text># Age Application
## Requirements:
* Get age of user as input
* Print how many seconds the user has lived
* use input(), int(), and print() functions
* use the format() string method<jupyter_code>age = int(input('Enter your age: '))
print("You have lived for {} seconds.".format(age * 365 *... | no_license | /notebooks/section2_age_app.ipynb | kristakernodle/learning_py_pgsql | 2 |
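The seconds arithmetic (age × 365 days × 24 h × 60 min × 60 s) can be wrapped in a function so it is testable without input(); the age value below is a stand-in for the user's entry:

```python
def seconds_lived(age_years):
    # 365 days/year * 24 h * 60 min * 60 s (leap days ignored)
    return age_years * 365 * 24 * 60 * 60

age = 20  # stand-in for int(input('Enter your age: '))
print("You have lived for {} seconds.".format(seconds_lived(age)))
# -> You have lived for 630720000 seconds.
```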
<jupyter_start><jupyter_text># PoS<jupyter_code>task = "pos"
metric = "Accuracy"<jupyter_output><empty_output><jupyter_text>### mBERT<jupyter_code>short_model_name = "mbert"
stats.analysis_of_variance.one_way(task, short_model_name, metric, experiment, results_path, show_distribution=True)<jupyter_output><empty_output>... | no_license | /analysis/stat_tests/acl/Intra_vs_inter_group.ipynb | jerbarnes/typology_of_crosslingual | 10 |
<jupyter_start><jupyter_text>Main pandas data models are Series (1D) and DataFrame (2D). Series is a subclass of numpy.ndarray.
Index labels do not have to be ordered and duplicates are allowed.
Indexes are for fast lookup and join. Hierarchical indexes.
Data alignment, dataframe manipulation.
In a dataframe, e... | no_license | /.ipynb_checkpoints/Data Analysis with Pandas-checkpoint.ipynb | neo-anderson/datascience-ipython | 7 |
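The data-alignment point (operations match on index labels, not positions) can be illustrated with plain dicts standing in for two Series; this stdlib sketch mimics the semantics, using None where pandas would produce NaN:

```python
def aligned_add(a, b, fill=None):
    # mimic pandas Series addition: align on labels, not positions;
    # labels present on only one side get `fill` (pandas would give NaN)
    out = {}
    for key in sorted(set(a) | set(b)):
        out[key] = a[key] + b[key] if key in a and key in b else fill
    return out

s1 = {"a": 1, "b": 2, "c": 3}
s2 = {"b": 10, "c": 20, "d": 30}
print(aligned_add(s1, s2))
# -> {'a': None, 'b': 12, 'c': 23, 'd': None}
```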
<jupyter_start><jupyter_text># Toronto3
### 1. Import libraries<jupyter_code>import numpy as np # library to handle data in a vectorized manner
import pandas as pd # library for data analysis
pd.set_option("display.max_columns", None)
pd.set_option("display.max_rows", None)
import json # library to handle JSON file... | no_license | /Toronto 3.ipynb | leduc0801/Capstone_project | 28 |
<jupyter_start><jupyter_text>Capstone NotebookThis notebook will be used for developing a capstone project. This project is a part of IBM's data science specialisation.<jupyter_code>import pandas as pd
import numpy as np
print("Hello Capstone Project Course!")<jupyter_output>Hello Capstone Project Course!
| no_license | /Ibm-NoteBook.ipynb | modeware/ibm-assign | 1 |
<jupyter_start><jupyter_text>## 1.0 Import Function<jupyter_code>from META import SA_ALGORITHM_0001
from META_GRAPHICS_LIBRARY import *<jupyter_output><empty_output><jupyter_text>## 2.0 Setup <jupyter_code>SETUP = {'N_REP': 10,
'N_ITER': 10,
'N_POP': 1,
'D': 5,
'X_L': [-30] * 5,
... | no_license | /Algoritmos em organização/SA example.ipynb | wmpjrufg/META_PLATAFORMA | 5 |
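The META module is not shown, so as a stand-in here is a generic stdlib-only simulated-annealing sketch that minimizes the sphere function under box bounds like X_L above; the geometric cooling schedule and Metropolis acceptance rule are standard choices, not necessarily what SA_ALGORITHM_0001 uses:

```python
import math
import random

def sphere(x):
    # classic benchmark: global minimum 0 at the origin
    return sum(v * v for v in x)

def simulated_annealing(f, x_l, x_u, n_iter=2000, t0=10.0, seed=1):
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in zip(x_l, x_u)]
    cost = f(x)
    best, best_cost = x[:], cost
    for k in range(n_iter):
        t = t0 * (0.995 ** k)  # geometric cooling
        cand = [min(max(v + rng.gauss(0.0, 1.0), lo), hi)  # perturb and clip
                for v, lo, hi in zip(x, x_l, x_u)]
        c = f(cand)
        # Metropolis rule: always take improvements, sometimes take worse moves
        if c < cost or rng.random() < math.exp(-(c - cost) / max(t, 1e-12)):
            x, cost = cand, c
            if c < best_cost:
                best, best_cost = cand[:], c
    return best, best_cost

best, best_cost = simulated_annealing(sphere, [-30] * 5, [30] * 5)
print(best_cost)
```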
<jupyter_start><jupyter_text>Data source: Government Open Data Platform<jupyter_code>df = pd.read_csv('https://gis.taiwan.net.tw/od/01_PRD/%E6%AD%B7%E5%B9%B4%E8%A7%80%E5%85%89%E5%A4%96%E5%8C%AF%E6%94%B6%E5%85%A5%E7%B5%B1%E8%A8%88.csv',encoding='big5')
df.head()<jupyter_output><empty_output><jupyter_text>Strip commas<jupyter_code>locale.setlocale(locale.LC_... | no_license | /HW5_1 .ipynb | Athenakk/Pythonhw | 3 |
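The locale call above is one way to parse comma-grouped figures; a dependency-free alternative is to strip the separators before converting:

```python
def parse_grouped(s):
    # "1,234,567" -> 1234567; also tolerates surrounding whitespace
    return int(s.strip().replace(",", ""))

revenue = ["1,234", "56,789", "123"]
print([parse_grouped(v) for v in revenue])
# -> [1234, 56789, 123]
```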
<jupyter_start><jupyter_text> ╔══Alai-DeepLearning════════════════════════════╗
### **✎ Week 5. Machine Learning Basis**
# Section 4. Implementing Linear Regression with TensorFlow
### _Objective_
1. We implement linear regression using TensorFlow.
╚═════════════════════════════════════════╝<jupyter_code>%matplotlib... | no_license | /lecture-codes/5_machine_learning_basis/4_Tensorflow을 이용한 Linear Regression 구현하기.ipynb | anthony0727/ALAI-DL | 20 |
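The TensorFlow cells are truncated here, but the computation they perform, fitting y ≈ wx + b by gradient descent on mean squared error, can be shown in plain Python:

```python
def fit_linear(xs, ys, lr=0.01, epochs=2000):
    # minimize mean squared error of y_hat = w*x + b by gradient descent
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]  # generated by y = 2x + 1
w, b = fit_linear(xs, ys)
print(round(w, 3), round(b, 3))
```

A TF version would express the same loss and let an optimizer compute dw and db automatically.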
<jupyter_start><jupyter_text>y = 2x + 1<jupyter_code>model.predict([[15]])
from sklearn.datasets import fetch_california_housing
california = fetch_california_housing()
X = california.data
df = pd.DataFrame(X, columns = california.feature_names)
Y = california.target
print(df)
i = 3
plt.title(california.feature_names[... | no_license | /C1. Linear Regression/chapte1-practice.ipynb | JHyuk2/ML-DL | 1 |
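If the model really learned y = 2x + 1, predict([[15]]) should return ≈31; the closed-form least-squares fit behind LinearRegression can be checked by hand with the normal equations for a single feature (toy data below):

```python
def least_squares(xs, ys):
    # slope and intercept from the normal equations (single feature)
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    w = num / den
    b = my - w * mx
    return w, b

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]          # exactly y = 2x + 1
w, b = least_squares(xs, ys)
print(w * 15 + b)          # prediction for x = 15 -> 31.0
```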
<jupyter_start><jupyter_text>## loading training data<jupyter_code>x_train=[]
for file in tqdm(train['file']):
img = read_img(file, (224, 224))
x_train.append(img)
x_train=np.array(x_train)
x_train.shape
x_test=[]
for file in tqdm(test['filepath']):
img=read_img(file,(224,224))
x_test.append(... | no_license | /datasets/plant-seedlings-classification/kernels/NULL---amarjeet007---plant-seedlings-classification.ipynb | mindis/GDS | 7 |
<jupyter_start><jupyter_text>I had posted my very naive baseline at https://www.kaggle.com/mhviraf/a-baseline-for-dsb-2019. In that kernel I only used the mode label for each Assessment and I thought it should be very easy to beat. This kernel shows how you can beat that baseline by actually applying a model. In this k... | no_license | /kernels/a-new-baseline-for-dsb-2019-catboost-model.ipynb | enridaga/data-journey | 5 |
<jupyter_start><jupyter_text>**Purpose:**
Inference with a DistilBERT model pretrained on SQuAD
<jupyter_code>%%capture
!pip install transformers
import time
import sys
import os
import contextlib
from transformers import DistilBertTokenizer, DistilBertForQuestionAnswering
import torch
from google.colab import driv... | no_license | /Colab/Colab Notebooks/1_squad_pretrained_distilbert_base_QnA.ipynb | niravraje/Web-QnA | 1 |
<jupyter_start><jupyter_text># Problem
- You want to flatten a multi-level nested sequence into a single flat list
## Solution
- This is easily solved with a recursive generator containing a yield from statement. For example:<jupyter_code>from collections.abc import Iterable
def flatten(items, ignore_types=(str, bytes)):
for x in items:
if isinstance(x, Iterable) and not isinstance(x, ignore_types):
yield from ... | no_license | /PythonCookBook/4_迭代器和生成器/4_14_展开嵌套的序列.ipynb | NAMEs/Python_Note | 3 |
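One natural completion of the truncated recursion, made self-contained (on Python 3.3+, Iterable lives in collections.abc):

```python
from collections.abc import Iterable

def flatten(items, ignore_types=(str, bytes)):
    # recurse into nested iterables, treating strings/bytes as atoms
    for x in items:
        if isinstance(x, Iterable) and not isinstance(x, ignore_types):
            yield from flatten(x, ignore_types)
        else:
            yield x

print(list(flatten([1, [2, [3, 4], 5], "ab", (6, 7)])))
# -> [1, 2, 3, 4, 5, 'ab', 6, 7]
```

Excluding str and bytes from the recursion is what keeps "ab" intact instead of splitting it into characters.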
<jupyter_start><jupyter_text>## Function to generate all the prime numbers in a given range<jupyter_code>def prime(lb):
count=0
for i in range(1,lb+1):
if lb%i==0:
count=count+1
if count==2:
        print(lb,end=",")
lb=int(input())
ub=int(input())
for j in range(lb,ub+1):
prime(j)
... | no_license | /25-09-2019/25-09-2019(class)/25-09-2019(class).ipynb | bindu707/gitbasics-mstp-level1-506 | 4 |
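Counting every divisor costs O(n) per number; trial division up to √n is the usual cheaper primality test. A sketch of the same range-printing behaviour, with fixed bounds standing in for the two input() calls:

```python
def is_prime(n):
    # trial division up to the integer square root
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

lb, ub = 10, 30  # stand-ins for the two input() calls
print(",".join(str(n) for n in range(lb, ub + 1) if is_prime(n)))
# -> 11,13,17,19,23,29
```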
<jupyter_start><jupyter_text>Task 1
You are given a variable holding a dictionary that contains geotags for each user
(a sample of the data structure is given below).
You need to write a program that prints the set of unique geotags across all users.<jupyter_code>ids = {'user1': [213, 2... | no_license | /DZ4/DZ4.ipynb | VeraRomantsova/Vera_Romantsova_ds | 6 |
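The task reduces to a set union over the per-user lists; a sketch with illustrative values completing the truncated ids dict:

```python
# illustrative values: the original ids dict is truncated in the source
ids = {'user1': [213, 213, 213, 15, 213],
       'user2': [54, 54, 119, 119, 119],
       'user3': [213, 98, 98, 35]}

unique_tags = set()
for tags in ids.values():
    unique_tags.update(tags)  # set union discards duplicates
print(unique_tags)
```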
<jupyter_start><jupyter_text># This is a sample notebook
Some text<jupyter_code>library('qt1')
install.packages('qtl', lib='~/R-library/', repos='http://cran.us.r-project.org')
.libPaths('~/R-library/')
.libPaths()
library('qtl')
qtlversion()
help(install.packages)<jupyter_output><empty_output> | no_license | /R-installing-package.ipynb | mbmilligan/msi-ipython-nb-ex | 1 |
<jupyter_start><jupyter_text>### 1. How do you sort the numbers in a list by value?<jupyter_code># http://wuchong.me/blog/2014/02/09/algorithm-sort-summary/
# method 1
# using max, remove
def value_ordering(x):
out=[]
for k in range(len(x)):
#print ((x))
out.append(max(x))
x.remove(max(x))... | no_license | /archived/programming/python/Python_Basics_FAQs.ipynb | yennanliu/CS_basics | 3 |
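Note that the max/remove approach builds the result in descending order and is O(n²), since each remove rescans the list; the built-in sorted() (Timsort, O(n log n)) is the idiomatic answer. A selection-sort sketch makes the quadratic variant explicit without mutating the input:

```python
def selection_sort(xs):
    # repeatedly select the minimum of the unsorted suffix (O(n^2))
    xs = list(xs)  # work on a copy; don't mutate the caller's list
    for i in range(len(xs)):
        j = min(range(i, len(xs)), key=xs.__getitem__)
        xs[i], xs[j] = xs[j], xs[i]
    return xs

data = [3, 1, 4, 1, 5, 9, 2, 6]
print(selection_sort(data))            # ascending
print(sorted(data, reverse=True))      # descending, like the max/remove loop
```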
<jupyter_start><jupyter_text>## Barrowman Method Application
This code is an application of the Barrowman method for determining the center of pressure of each component of the rocket. The method is meant to provide insight into the drag coefficient vs. Mach number behaviour of the LV4 rocket for PSAS.
## References:... | non_permissive | /archive/BarrowmanMethodNotebook/.ipynb_checkpoints/BarrowmanMethod-checkpoint.ipynb | psas/liquid-engine-analysis | 1 |