<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Grading
Step2: Word embedding
Step3: How to work with Google's word2vec embeddings?
Step5: From word to text embeddings
Step6: To check the basic correctness of your implementation, run the function question_to_vec_tests.
Step7: You can submit embeddings for the questions from the file test_embeddings.tsv to earn the points. In this task you don't need to transform the text of the questions in any way.
Step9: Now we have a method to create a representation of any sentence and we are ready for the first evaluation. So, let's check how well our solution (Google's vectors + question_to_vec) will work.
Step11: Test your code on the tiny examples
Step12: Submit results of the functions hits_count and dcg_score for the following examples to earn the points.
Step13: First solution
Step15: We will use cosine distance to rank candidate questions, which you need to implement in the function rank_candidates. The function should return a sorted list of pairs (initial position in the candidates list, question). The index of a pair corresponds to its rank (the first is the best). For example, if the list of candidates was [a, b, c] and the most similar is c, followed by a and then b, the function should return the list [(2, c), (0, a), (1, b)].
Step16: Test your code on the tiny examples
Step17: Now we can test the quality of the current approach. Run the next two cells to get the results. Note that calculating the similarity between vectors takes time; the computation runs for approximately 10 minutes.
Step18: If you did all the steps correctly, you should be frustrated by the results. Let's try to understand why the quality is so low. First of all, when you work with data it is necessary to have an idea of what it looks like. Print several questions from the data
Step19: As you can see, we are dealing with raw data: it contains many punctuation marks, special characters and uppercase letters. In our case, this can lead to situations where we can't find an embedding, e.g. for the word "grid?".
Step20: Now transform all the questions from the validation set
Step21: Let's evaluate the approach again after the preparation
Step22: Now, also prepare the train and test data, because you will need them later.
Step23: Task 4 (W2VTokenizedRanks). For each question from the prepared test.tsv, submit the ranks of the candidates to earn the points. It should take about 3-5 minutes. Note that the function rank_candidates returns a ranking, while in this case you need each candidate's position in that ranking. Ranks should start from 1.
Step24: Advanced solution
Step25: And now we can compare the new embeddings with the previous ones. You can find the trained word vectors in the file [model_file_name].tsv. Load the embeddings from StarSpace into a dict.
Step26: Because these vectors are trained for this particular task on supervised data, you should expect higher quality than with the previous approach. In addition, even though StarSpace's trained vectors have a smaller dimension than word2vec's, they provide better results on this task.
Step27: Authorization & Submission
Step28: If you want to submit these answers, run the cell below
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append("..")
from common.download_utils import download_week3_resources
download_week3_resources()
from grader import Grader
grader = Grader()
import gensim
wv_embeddings = ######### YOUR CODE HERE #############
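# One possible way to load the pretrained Google News vectors (a sketch; the
# file name 'GoogleNews-vectors-negative300.bin' is an assumption, not given here):
# wv_embeddings = gensim.models.KeyedVectors.load_word2vec_format(
#     'GoogleNews-vectors-negative300.bin', binary=True)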
def check_embeddings(embeddings):
error_text = "Something wrong with your embeddings ('%s test isn't correct)."
most_similar = embeddings.most_similar(positive=['woman', 'king'], negative=['man'])
if len(most_similar) < 1 or most_similar[0][0] != 'queen':
return error_text % "Most similar"
doesnt_match = embeddings.doesnt_match(['breakfast', 'cereal', 'dinner', 'lunch'])
if doesnt_match != 'cereal':
return error_text % "Doesn't match"
most_similar_to_given = embeddings.most_similar_to_given('music', ['water', 'sound', 'backpack', 'mouse'])
if most_similar_to_given != 'sound':
return error_text % "Most similar to given"
return "These embeddings look good."
print(check_embeddings(wv_embeddings))
import numpy as np
def question_to_vec(question, embeddings, dim=300):
question: a string
embeddings: dict where the key is a word and a value is its' embedding
dim: size of the representation
result: vector representation for the question
######################################
######### YOUR CODE HERE #############
######################################
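# A minimal sketch of one possible implementation (not the graded solution):
# average the embeddings of the words that are in the vocabulary and fall back
# to a zero vector when none are found, which is what the tests below expect.
def question_to_vec_sketch(question, embeddings, dim=300):
    known = [embeddings[w] for w in question.split() if w in embeddings]
    if not known:
        return np.zeros(dim)
    return np.mean(known, axis=0)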
def question_to_vec_tests():
if (np.zeros(300) != question_to_vec('', wv_embeddings)).any():
return "You need to return zero vector for empty question."
if (np.zeros(300) != question_to_vec('thereisnosuchword', wv_embeddings)).any():
return "You need to return zero vector for the question, which consists only unknown words."
if (wv_embeddings['word'] != question_to_vec('word', wv_embeddings)).any():
return "You need to check the corectness of your function."
if ((wv_embeddings['I'] + wv_embeddings['am']) / 2 != question_to_vec('I am', wv_embeddings)).any():
return "Your function should calculate a mean of word vectors."
return "Basic tests are passed."
print(question_to_vec_tests())
from util import array_to_string
question2vec_result = []
for question in open('data/test_embeddings.tsv'):
question = question.strip()
answer = question_to_vec(question, wv_embeddings)
question2vec_result = np.append(question2vec_result, answer)
grader.submit_tag('Question2Vec', array_to_string(question2vec_result))
def hits_count(best_ranks, k):
best_ranks: list with ranks for each element (the best rank is 1, the worst — len(best_ranks))
k: number of top-ranked elements
result: float number
######################################
######### YOUR CODE HERE #############
######################################
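# A minimal sketch inferred from the tests below (hypothetical helper, not the
# graded solution): Hits@k is the fraction of queries whose duplicate is ranked
# within the top k positions.
def hits_count_sketch(best_ranks, k):
    return np.mean([1.0 if rank <= k else 0.0 for rank in best_ranks])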
def test_hits():
answers = ['woman', 'man']
candidates_ranking = [['woman', 'queen'], ['man', 'king']]
best_ranks = [1, 1]
correct_answers = [1, 1]
for k in range(1, 3):
if not np.isclose(hits_count(best_ranks, k), correct_answers[k - 1]):
return "Check the function."
candidates_ranking = [['woman', 'queen'], ['king', 'man']]
best_ranks = [1, 2]
correct_answers = [0.5, 1]
for k in range(1, 3):
if not np.isclose(hits_count(best_ranks, k), correct_answers[k - 1]):
return "Check the function."
return "Basic tests are passed."
print(test_hits())
def dcg_score(best_ranks, k):
best_ranks: list with ranks for each element (the best rank is 1, the worst — len(best_ranks))
k: number of top-ranked elements
result: float number
######################################
######### YOUR CODE HERE #############
######################################
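# A minimal sketch inferred from the tests below (hypothetical helper, not the
# graded solution): with a single relevant item per query, DCG@k reduces to
# 1 / log2(1 + rank) when rank <= k and 0 otherwise, averaged over queries.
def dcg_score_sketch(best_ranks, k):
    return np.mean([1.0 / np.log2(1.0 + rank) if rank <= k else 0.0
                    for rank in best_ranks])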
def test_dcg():
answers = ['woman', 'man']
candidates_ranking = [['woman', 'queen'], ['man', 'king']]
best_ranks = [1, 1]
correct_answers = [1.0, 1.0]
for k in range(1, 3):
if not np.isclose(dcg_score(best_ranks, k), correct_answers[k - 1]):
return "Check the function."
candidates_ranking = [['woman', 'queen'], ['king', 'man']]
best_ranks = [1, 2]
correct_answers = [0.5, 0.8154]
for k in range(1, 3):
if not np.isclose(dcg_score(best_ranks, k), correct_answers[k - 1], atol=1e-03):
return "Check the function."
return "Basic tests are passed."
print(test_dcg())
test_examples = [
[1],
[1, 2],
[2, 1],
[1, 2, 3],
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
[9, 5, 4, 2, 8, 10, 7, 6, 1, 3],
[4, 3, 5, 1, 9, 10, 7, 8, 2, 6],
[5, 1, 7, 6, 2, 3, 8, 9, 10, 4],
[6, 3, 1, 4, 7, 2, 9, 8, 10, 5],
[10, 9, 8, 7, 6, 5, 4, 3, 2, 1],
]
hits_results = []
for example in test_examples:
for k in range(len(example)):
hits_results.append(hits_count(example, k + 1))
grader.submit_tag('HitsCount', array_to_string(hits_results))
dcg_results = []
for example in test_examples:
for k in range(len(example)):
dcg_results.append(dcg_score(example, k + 1))
grader.submit_tag('DCGScore', array_to_string(dcg_results))
def read_corpus(filename):
data = []
for line in open(filename, encoding='utf-8'):
data.append(line.strip().split('\t'))
return data
validation = ######### YOUR CODE HERE #############
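# One possible fill-in (a sketch; the file name 'data/validation.tsv' is an assumption):
# validation = read_corpus('data/validation.tsv')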
from sklearn.metrics.pairwise import cosine_similarity
def rank_candidates(question, candidates, embeddings, dim=300):
question: a string
candidates: a list of strings (candidates) which we want to rank
embeddings: some embeddings
dim: dimension of the current embeddings
result: a list of pairs (initial position in the list, question)
######################################
######### YOUR CODE HERE #############
######################################
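# A minimal sketch of one possible implementation (it reuses the hypothetical
# question_to_vec_sketch above; not the graded solution): embed the question and
# all candidates, rank by cosine similarity, and return (original index, candidate)
# pairs with the best match first.
def rank_candidates_sketch(question, candidates, embeddings, dim=300):
    q_vec = question_to_vec_sketch(question, embeddings, dim).reshape(1, -1)
    c_vecs = np.array([question_to_vec_sketch(c, embeddings, dim) for c in candidates])
    sims = cosine_similarity(q_vec, c_vecs)[0]
    order = np.argsort(-sims)
    return [(int(i), candidates[i]) for i in order]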
def test_rank_candidates():
questions = ['converting string to list', 'Sending array via Ajax fails']
candidates = [['Convert Google results object (pure js) to Python object',
'C# create cookie from string and send it',
'How to use jQuery AJAX for an outside domain?'],
['Getting all list items of an unordered list in PHP',
'WPF- How to update the changes in list item of a list',
'select2 not displaying search results']]
results = [[(1, 'C# create cookie from string and send it'),
(0, 'Convert Google results object (pure js) to Python object'),
(2, 'How to use jQuery AJAX for an outside domain?')],
[(0, 'Getting all list items of an unordered list in PHP'),
(2, 'select2 not displaying search results'),
(1, 'WPF- How to update the changes in list item of a list')]]
for question, q_candidates, result in zip(questions, candidates, results):
ranks = rank_candidates(question, q_candidates, wv_embeddings, 300)
if not np.all(ranks == result):
return "Check the function."
return "Basic tests are passed."
print(test_rank_candidates())
wv_ranking = []
for line in validation:
q, *ex = line
ranks = rank_candidates(q, ex, wv_embeddings)
wv_ranking.append([r[0] for r in ranks].index(0) + 1)
for k in [1, 5, 10, 100, 500, 1000]:
print("DCG@%4d: %.3f | Hits@%4d: %.3f" % (k, dcg_score(wv_ranking, k), k, hits_count(wv_ranking, k)))
for line in validation[:3]:
q, *examples = line
print(q, *examples[:3])
from util import text_prepare
prepared_validation = []
for line in validation:
######### YOUR CODE HERE #############
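# One possible body for the loop above (a sketch, using text_prepare imported above):
# prepared_validation.append([text_prepare(q) for q in line])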
wv_prepared_ranking = []
for line in prepared_validation:
q, *ex = line
ranks = rank_candidates(q, ex, wv_embeddings)
wv_prepared_ranking.append([r[0] for r in ranks].index(0) + 1)
for k in [1, 5, 10, 100, 500, 1000]:
print("DCG@%4d: %.3f | Hits@%4d: %.3f" % (k, dcg_score(wv_prepared_ranking, k),
k, hits_count(wv_prepared_ranking, k)))
def prepare_file(in_, out_):
out = open(out_, 'w')
for line in open(in_, encoding='utf8'):
line = line.strip().split('\t')
new_line = [text_prepare(q) for q in line]
print(*new_line, sep='\t', file=out)
out.close()
######################################
######### YOUR CODE HERE #############
######################################
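# One possible way to use prepare_file here (a sketch; the file names are assumptions):
# for name in ['train', 'validation', 'test']:
#     prepare_file('data/%s.tsv' % name, 'data/prepared_%s.tsv' % name)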
from util import matrix_to_string
w2v_ranks_results = []
prepared_test_data = ######### YOUR CODE HERE #############
for line in open(prepared_test_data):
q, *ex = line.strip().split('\t')
ranks = rank_candidates(q, ex, wv_embeddings, 300)
ranked_candidates = [r[0] for r in ranks]
w2v_ranks_results.append([ranked_candidates.index(i) + 1 for i in range(len(ranked_candidates))])
grader.submit_tag('W2VTokenizedRanks', matrix_to_string(w2v_ranks_results))
######### TRAINING HAPPENING HERE #############
starspace_embeddings = ######### YOUR CODE HERE #############
ss_prepared_ranking = []
for line in prepared_validation:
q, *ex = line
ranks = rank_candidates(q, ex, starspace_embeddings, 100)
ss_prepared_ranking.append([r[0] for r in ranks].index(0) + 1)
for k in [1, 5, 10, 100, 500, 1000]:
print("DCG@%4d: %.3f | Hits@%4d: %.3f" % (k, dcg_score(ss_prepared_ranking, k),
k, hits_count(ss_prepared_ranking, k)))
starspace_ranks_results = []
prepared_test_data = ######### YOUR CODE HERE #############
for line in open(prepared_test_data):
q, *ex = line.strip().split('\t')
ranks = rank_candidates(q, ex, starspace_embeddings, 100)
ranked_candidates = [r[0] for r in ranks]
starspace_ranks_results.append([ranked_candidates.index(i) + 1 for i in range(len(ranked_candidates))])
grader.submit_tag('StarSpaceRanks', matrix_to_string(starspace_ranks_results))
STUDENT_EMAIL = # EMAIL
STUDENT_TOKEN = # TOKEN
grader.status()
grader.submit(STUDENT_EMAIL, STUDENT_TOKEN)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we move onto the ISPAQ-specific ones. Because of the directory structure and where this example lives, we need to add the main ispaq directory to our path. Then we will be able to import the ISPAQ modules.
Step2: With the modules imported, we now need to set up some variables that will be required to run the ISPAQ code. This includes a logger and an arguments class that contains the fields from the preference file.
Step3: Those will now be used to create a userRequest, which will then be used to create a concierge object. The concierge object will need to later be passed into the method that actually calculates the PDFs.
Step4: Now that we've handled that, we can calculate the PDFs. First, we move into the directory that contains the database, since that's where the ISPAQ code expects us to be. Then we call the calculate_PDF method, which will return dataframes that contain the PDF values along with the modes, maxima, and minima.
Step5: With it in a dataframe, you can now do what you want with it! Manipulate it how you want.
|
<ASSISTANT_TASK:>
Python Code:
import sys
import os
import logging
import pandas as pd
from obspy import UTCDateTime
path_parent = os.path.dirname(os.getcwd())
sys.path.insert(1, f'{path_parent}/ispaq/')
import concierge
from user_request import UserRequest
import PDF_aggregator
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
class args:
db_name = 'ispaq_example.db'
starttime = UTCDateTime('2020-10-01')
endtime = UTCDateTime('2021-10-15')
metrics = 'pdf'
stations = 'IU.ANMO.00.BHZ.M' # The "stations" must refer to a single target, including the quality code (N.S.L.C.Q)
preferences_file = f'{path_parent}/preference_files/default.txt'
station_url = 'IRIS'
dataselect_url = 'IRIS'
event_url = 'IRIS'
resp_dir = ''
output = 'db'
csv_dir = f'{path_parent}/csv/'
sncl_format = 'N.S.L.C.'
sigfigs = 6
pdf_type = 'plot'
pdf_interval = 'aggregated'
plot_include = ''
pdf_dir = f'{path_parent}/pdfs/'
psd_dir = f'{path_parent}/psds/'
user_request = UserRequest(args, logger=logger)
concierge = concierge.Concierge(user_request, logger=logger)
print(concierge, logger)
print(concierge.logger)
os.chdir(path_parent)
[pdfDF,modesDF, maxDF, minDF] = PDF_aggregator.calculate_PDF(pd.DataFrame(), args.stations, args.starttime, args.endtime, concierge)
print(pdfDF)
PDF_aggregator.plot_PDF(args.stations, args.starttime, args.endtime, pdfDF, modesDF, maxDF, minDF, concierge)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here we have to find the 'NaN' values and fill them with the mean. It's probably not the best way to fill in missing values, but at least it roughly preserves the distribution.
Step2: The 'SalePrice' distribution is skewed. We can stabilize it by applying a logarithm, since we know all the values are positive.
Step3: There are a lot of features in this dataset, so we are going to select only the features most correlated with 'SalePrice'.
Step4: KFold
Step5: We implement two methods to plot the PCA in case we need to visualize the information in a 2D graph. We'll need to reduce all the features to only one feature (component).
Step6: Removing 1% of the data as anomalies can give a more stable prediction, but that's not certain. It will probably help, but this step would be removed from the final calculation.
Step7: Model
Step8: Running Models
Step9: Adding Categorical
Step10: Get Predictions
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
#load the files
train = pd.read_csv('input/train.csv')
test = pd.read_csv('input/test.csv')
data = pd.concat([train, test])
#size of training dataset
train_samples = train.shape[0]
test_samples = test.shape[0]
# remove the Id feature
data.drop(['Id'],1, inplace=True);
#data.describe()
datanum = data.select_dtypes([np.number])
datanum = datanum.fillna(datanum.dropna().mean())
import matplotlib.pyplot as plt
%matplotlib inline
# Transforming to non-skewed SalePrice
data.SalePrice = data.SalePrice.apply(np.log)
data.SalePrice.hist(bins=50)
# Correlation features
datanum.corr()['SalePrice'].drop('SalePrice').sort_values(ascending=False).head(10)
def getDataWithHighCorrFeatures(data, numberFeatures=10):
high_corr_feat_names = data.corr()['SalePrice'].drop('SalePrice').sort_values(ascending=False).head(numberFeatures).axes[0].tolist()
#high_corr_feat_names.remove('SalePrice')
return data[high_corr_feat_names]
from sklearn.model_selection import KFold
kf = KFold(n_splits=10, random_state=13)#, shuffle=True)
print(kf)
#plotting PCA
from sklearn.decomposition import PCA
def getX_PCA(X):
pca = PCA(n_components=1)
return pca.fit(X).transform(X)
def plotPCA(X, y):
pca = PCA(n_components=1)
X_r = pca.fit(X).transform(X)
plt.plot(X_r, y, 'x')
from sklearn.covariance import EllipticEnvelope
def removeAnomalies(X_train, y_train, verbose=False):
# fit the model
ee = EllipticEnvelope(contamination=0.01,
assume_centered=True,
random_state=13)
ee.fit(X_train)
pred = ee.predict(X_train)
X_anom = X_train[pred != 1]
y_anom = y_train[pred != 1]
X_no_anom = X_train[pred == 1]
y_no_anom = y_train[pred == 1]
if (verbose):
print("Number samples no anomalies: {}".format(X_no_anom.shape[0]))
#after removing anomalies
#plt.scatter(getX_PCA(X_no_anom), y_no_anom)
#plt.scatter(getX_PCA(X_anom), y_anom)
return X_no_anom, y_no_anom
def idxNotAnomalies(X):
ee = EllipticEnvelope(contamination=0.01,
assume_centered=True,
random_state=13)
ee.fit(X)
pred = ee.predict(X)
return [index[0] for index, x in np.ndenumerate(pred) if x == 1]
# Linear regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
def train(X_train, y_train, verbose=False):
#lr = LinearRegression()
import xgboost as xgb
lr = xgb.XGBRegressor(max_depth=5,
n_estimators=250,
min_child_weight=7,
n_jobs=4)
#
batch = 0
for train_idx, val_idx in kf.split(X_train, y_train):
X_t, X_v = X_train[train_idx], X_train[val_idx]
y_t, y_v = y_train[train_idx], y_train[val_idx]
#training
lr.fit(X_t, y_t)
#calculate costs
t_error = mean_squared_error(y_t, lr.predict(X_t))**0.5
v_error = mean_squared_error(y_v, lr.predict(X_v))**0.5
if verbose:
print("{}) Training error: {:.2f} Validation error: {:.2f} Score: {:.2f}"
.format(batch, t_error, v_error, lr.score(X_v, y_v)))
batch += 1
return lr
def metrics(model, X, y, verbose=False):
#Scores
if verbose:
print("Training score: {:.4f}".format(model.score(X, y)))
#RMSLE
#print(np.count_nonzero(~np.isfinite(model.predict(X))))
rmsle = mean_squared_error(y, model.predict(X))**0.5
if verbose:
print("RMSLE: {:.4f}".format(rmsle))
# Plotting the results
plt.scatter(model.predict(X), y)
return rmsle, model.score(X, y)
# Get polynomial features
from sklearn.preprocessing import PolynomialFeatures
def getPolynomial(X_train, X_no_anom, X_test):
poly = PolynomialFeatures(degree=2)
return poly.fit_transform(X_train), poly.fit_transform(X_no_anom), poly.fit_transform(X_test)
def getKeyWithMinError(X_train, X_no_amon, y_train, y_no_anom, verbose=False):
rmsles = {}
for f in range(1,X_train.shape[1]):
model = train(X_no_anom[:,:f], y_no_anom, verbose=False)
rmsles[f] = metrics(model, X_train[:,:f], y_train, verbose=False)
min_error_key = min(rmsles, key=rmsles.get)
if (verbose):
print("Min error (k={}):{}".format(min_error_key, rmsles[min_error_key]))
#model = train(X_train_pol[:,:min_error_key], y_train)
#metrics(model, X_train_orig_pol[:,:min_error_key], y_train_orig)
#pd.Series(rmsles).plot()
return min_error_key
import warnings
warnings.filterwarnings('ignore')
errors = []
for f in range(1,17):
#print("====Corr feat: {}====".format(f))
datanum_high_corr = getDataWithHighCorrFeatures(datanum, f)
y = np.array(data['SalePrice'])
X = np.array(datanum_high_corr)
#split by idx
idx = train_samples
X_train, X_test = X[:idx], X[idx:]
y_train, y_test = y[:idx], y[idx:]
#print("Shape X train: {}".format(X_train.shape))
X_no_anom, y_no_anom = removeAnomalies(X_train, y_train)
#print("Shape X train (no anom): {}".format(X_no_anom.shape))
X_train, X_no_anom, X_test = getPolynomial(X_train, X_no_anom, X_test)
#print("Shape X train (poly): {}".format(X_no_anom.shape))
key = 1000 #getKeyWithMinError(X_train, X_no_anom, y_train, y_no_anom)
model = train(X_no_anom[:,:key], y_no_anom)
error, score = metrics(model, X_train[:,:key], y_train)
print("f:{} err:{:.3f} score:{:.3f}".format(f, error, score))
errors.append(error)
# show graph
pd.Series(errors).plot()
features = data.select_dtypes([np.object]).axes[1].tolist()
features.append('SalePrice')
datacat = pd.get_dummies(data[features])
datacat = datacat.fillna(datacat.dropna().mean())
datacat.corr()['SalePrice'].drop('SalePrice').sort_values(ascending=False).head(10)
features_order_by_corr = datacat.corr()['SalePrice'].drop('SalePrice').sort_values(ascending=False).axes[0].tolist()
import warnings
warnings.filterwarnings('ignore')
datanum_high_corr = getDataWithHighCorrFeatures(datanum, 15)
Xn = np.array(datanum_high_corr)
#choosing the number of categorical
num_cat = 10
Xc = np.array(datacat[features_order_by_corr[:num_cat]])
y = np.array(data['SalePrice'])
poly = PolynomialFeatures(degree=2)
Xpn = poly.fit_transform(Xn)
X = np.concatenate([Xpn, Xc], axis=1)
no_anom_idx = idxNotAnomalies(Xn[:idx]) #only from numeric features
print(Xpn.shape)
print(Xc.shape)
print(X.shape)
#split by idx
idx = train_samples
X_train, X_test = X[:idx], X[idx:]
y_train, y_test = y[:idx], y[idx:]
print("Shape X train: {}".format(X_train.shape))
print("Shape X test: {}".format(X_test.shape))
print("Shape X train (no anom): {}".format(X_train[no_anom_idx].shape))
X_no_anom = X_train[no_anom_idx]
y_no_anom = y_train[no_anom_idx]
errors = {}
scores = {}
for f in range(15^2, X_train.shape[1]):
modelt = train(X_no_anom[:,:f], y_no_anom, verbose=False)
err, score = metrics(modelt, X_train[:,:f], y_train, verbose=False)
if err > 1e-7:
errors[f] = err
scores[f] = score
else:
break
min_error_key = min(errors, key=errors.get)
max_score_key = max(scores, key=scores.get)
print("Min error: {:.3f}".format(errors[min_error_key]))
print("Max score: {:.3f}".format(scores[max_score_key]))
modelt = train(X_no_anom[:,:235], y_no_anom, verbose=False)
err, score = metrics(modelt, X_train[:,:235], y_train, verbose=False)
print(err)
print(score)
pd.Series(errors).plot()
pd.Series(scores).plot()
print(min_error_key)
import os
from sklearn.model_selection import cross_val_score
from sklearn.metrics import make_scorer
min_error_key = 235
def RMSE(y_true,y_pred):
rmse = mean_squared_error(y_true, y_pred)**0.5
return rmse
modelt = train(X_no_anom[:,:min_error_key], y_no_anom, verbose=False)
err, score = metrics(modelt, X_train[:,:min_error_key], y_train, verbose=False)
print("Err: {:.3f} | R2: {:.3f}".format(err, score))
scores = cross_val_score(modelt, X_train[:,:min_error_key], y_train,
scoring=make_scorer(RMSE, greater_is_better=True), cv=10)
print("Scores: {}".format(scores))
print("Score (mean): {:.3f}".format(scores.mean()))
predict = modelt.predict(X_test[:,:min_error_key])
#predictions are logs, return to the value
predict = np.exp(predict)
file = "Id,SalePrice" + os.linesep
startId = 1461
for i in range(len(X_test)):
file += "{},{}".format(startId, (int)(predict[i])) + os.linesep
startId += 1
# Save to file
with open('attempt.txt', 'w') as f:
f.write(file)
# Using XGRegressor?
#lr = XGBRegressor(max_depth=5, n_estimators=250,min_child_weight=10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step 1.
Step 2.
Step 3.
Step 4.
Step1: 1.3. Descriptive Statistics
Step2: 1.4. Data Cleaning
Step3: 2. Implementation of machine learning pipeline. (25%)
Step4: 3. Evaluation and test of model. (20%)
Step5: 4. Model fine-tuning. (35%)
Step6: 4.2. Machine Learning Model Hyperparameter search
|
<ASSISTANT_TASK:>
Python Code:
# import dependencies for creating a data frame
from pyspark.sql import SparkSession
from pyspark.sql import Row
from pyspark.sql.types import *
import csv
# Create SparkSession
spark = SparkSession.builder.getOrCreate()
# create RDD from csv files
trainRDD = spark.read.csv("hdfs://saltdean/data/data/santander-products/train_ver2.csv",
header=True, mode="DROPMALFORMED", schema=schema)
# alternatively...
# create RDD from csv files
trainRDD = sc.textFile("hdfs://saltdean/data/data/santander-products/train_ver2.csv")
trainRDD = trainRDD.mapPartitions(lambda x: csv.reader(x))
# alternatively... from https://spark.apache.org/docs/latest/sql-programming-guide.html#programmatically-specifying-the-schema
# create RDD from csv files
lines = sc.textFile("hdfs://saltdean/data/data/santander-products/train_ver2.csv")
elements = lines.map(lambda l: l.split(","))
# Each line is converted to a tuple.
clients = elements.map(lambda p: (p[0], p[1].strip(),p[2],...))
# The schema is encoded in a string.
schemaString = "name age ..."
fields = [StructField(field_name, StringType(), True) for field_name in schemaString.split()]
schema = StructType(fields)
# Apply the schema to the RDD and register the DataFrame to be used with Spark SQL.
trainRDD = spark.createDataFrame(clients, schema)
trainRDD.createOrReplaceTempView('trainingset')
# alternatively, as seen in tutorial 8:
lines = sc.textFile("hdfs://saltdean/data/data/santander-products/train_ver2.csv")
parts = lines.map(lambda l: l.split(","))
trainRDD = parts.map(lambda p: Row(userId=int(p[0]), movieId=int(p[1]),
rating=float(p[2]), timestamp=int(p[3])))
# Create DataFrame and register it to be used with Spark SQL.
trainData = spark.createDataFrame(trainRDD)
trainData.createOrReplaceTempView('Clients')
# For testing
print(trainData.describe()) # columns info
print(trainData.count()) # number of instances
# Code modified from
# https://www.kaggle.com/apryor6/santander-product-recommendation/detailed-cleaning-visualization-python/notebook
# import dependencies
import numpy as np
import pandas as pd
# create dataframe 'df'
limit_rows = 7000000
df = pd.read_csv("hdfs://saltdean/data/data/santander-products/train_ver2.csv",
dtype={"sexo":str, "ult_fec_cli_1t":str, "indext":str}, nrows=limit_rows)
unique_ids = pd.Series(df["ncodpers"].unique())
limit_people = 1.2e4
unique_id = unique_ids.sample(n=limit_people)
df = df[df.ncodpers.isin(unique_id)]
df.count() # number of instances
df.describe()
# find missing values
df.isnull().any()
# Remove age outliers and nan from dataframe
df.loc[df.age < 18,"age"] = df.loc[(df.age >= 18) & (df.age <= 30),"age"].mean(skipna=True) # replace outlier con mean
df.loc[df.age > 100,"age"] = df.loc[(df.age >= 30) & (df.age <= 100),"age"].mean(skipna=True) # replace outlier con mean
df["age"].fillna(df["age"].mean(),inplace=True) # replace nan with mean
df["age"] = df["age"].astype(int)
# Replace missing values
df.loc[df["ind_nuevo"].isnull(),"ind_nuevo"] = 1 # new customers id '1'
df.loc[df.antiguedad.isnull(),"antiguedad"] = df.antiguedad.min()
df.loc[df.antiguedad <0, "antiguedad"] = 0 # new customer antiguedad '0'
df.loc[df.indrel.isnull(),"indrel"] = 1
dates=df.loc[:,"fecha_alta"].sort_values().reset_index()
median_date = int(np.median(dates.index.values))
df.loc[df.fecha_alta.isnull(),"fecha_alta"] = dates.loc[median_date,"fecha_alta"] # fill join date missing values
df.loc[df.ind_actividad_cliente.isnull(),"ind_actividad_cliente"] = \
df["ind_actividad_cliente"].median() # fill in customer activity missing
df.loc[df.nomprov.isnull(),"nomprov"] = "UNKNOWN" # known values for city of residence
df.loc[df.indfall.isnull(),"indfall"] = "N" # missing deceased index set to N
df.loc[df.tiprel_1mes.isnull(),"tiprel_1mes"] = "A" # customer status, if missing = active
df.tiprel_1mes = df.tiprel_1mes.astype("category") # customer status as categorical
# Customer type normalization as categorical variable
map_dict = { 1.0:"1", "1.0":"1", "1":"1", "3.0":"3", "P":"P", 3.0:"3", 2.0:"2", "3":"3", "2.0":"2", "4.0":"4", "4":"4", "2":"2"}
df.indrel_1mes.fillna("P",inplace=True)
df.indrel_1mes = df.indrel_1mes.apply(lambda x: map_dict.get(x,x))
df.indrel_1mes = df.indrel_1mes.astype("category")
# Replace missing values in target features with 0
# target features = boolean indicator as to whether or not that product was owned that month
df.loc[df.ind_nomina_ult1.isnull(), "ind_nomina_ult1"] = 0
df.loc[df.ind_nom_pens_ult1.isnull(), "ind_nom_pens_ult1"] = 0
# Elimnate entries with nan values in given variable, eg:
print("Total number of entries before removing nan= ", df.count())
df.renta.isnull().sum()
df.na.drop(subset=["renta","indfall","tiprel_1mes","indrel_1mes"]) # !!!! need to be tested that only nan entries are removed
df.renta.isnull().sum()
print("Total number of entries after removing nan= ", df.count())
# Eliminate redundant variables
df.drop(["tipodom","cod_prov"],axis=1,inplace=True)
# check all missing values are gone
df.isnull().any()
# Convert target features column into integers
feature_cols = df.iloc[:1,].filter(regex="ind_+.*ult.*").columns.values
for col in feature_cols:
df[col] = df[col].astype(int)
# code modified from Spark documentation at:
# https://spark.apache.org/docs/2.1.0/ml-classification-regression.html#random-forest-classifier
# and DataBricks at:
# https://docs.databricks.com/spark/latest/mllib/binary-classification-mllib-pipelines.html
# imports dependencies for Random Forest pipeline
from pyspark.ml import Pipeline
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.feature import IndexToString, StringIndexer, VectorIndexer, OneHotEncoder, StringIndexer, VectorAssembler
# stages in the Pipeline
stages = []
# One-Hot Encoding
categoricalColumns = ["a", "b", "c", "d", "e", "f", "g", "j"]
for categoricalCol in categoricalColumns:
stringIndexer = StringIndexer(inputCol=categoricalCol, outputCol=categoricalCol+"Index") # Category Indexing with StringIndexer
encoder = OneHotEncoder(inputCol=categoricalCol+"Index", outputCol=categoricalCol+"classVec") # Use OneHotEncoder to convert categorical variables into binary SparseVectors
stages += [stringIndexer, encoder] # Add stages to the pipeline
# Convert labels into label indices using the StringIndexer
label_stringIdx = StringIndexer(inputCol = "add here target column in csv file", outputCol = "labels")
stages += [label_stringIdx] # Add stage to the pipeline
# Transform all features into a vector using VectorAssembler
numericCols = ["m", "n", "o", "p", "q", "r"]
assemblerInputs = map(lambda c: c + "classVec", categoricalColumns) + numericCols
assembler = VectorAssembler(inputCols=assemblerInputs, outputCol="features")
stages += [assembler] # Add stage to the pipeline
# Train a RandomForest model.
rf = RandomForestClassifier(labelCol="labels",
featuresCol="features",
numTrees=100, # Number of trees in the random forest
impurity='entropy', # Criterion used for information gain calculation
featureSubsetStrategy="auto",
predictionCol="prediction",
maxDepth=5,
maxBins=32,
minInstancesPerNode=1,
minInfoGain=0.0,
subsamplingRate=1.0)
stages += [rf] # Add stage to the pipeline
# Machine Learning Pipeline
pipeline = Pipeline(stages=stages)
# imports dependencies
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder, TrainValidationSplit
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
# Split data into training set and testing set
[trainData, testData] = trainData.randomSplit([0.8, 0.2], seed = 100)
# Train model in pipeline
rfModel = pipeline.fit(trainData)
# Make predictions for training set and compute training set accuracy
predictions = rfModel.transform(trainData)
evaluator = MulticlassClassificationEvaluator(labelCol="labels",
predictionCol="prediction",
metricName="accuracy")
accuracy = evaluator.evaluate(predictions)
print("Test Error = %g" % (1.0 - accuracy))
print(train_pipeline.stages[0]) # summary
# Run the feature transformations pipeline on the test data set
pipelineModel = prePro_pipeline.fit(testClients) # computes feature statistics
testData = pipelineModel.transform(testClients) # transforms the features
# Make predictions for test set and compute test error
test_predictions = rfModel.transform(testData)
test_accuracy = evaluator.evaluate(test_predictions)
print("Test Error = %g" % (1.0 - test_accuracy))
print('Training set size evaluation')
# size of different training set to be evaluated, and split of training set
sizes = [0.5, 0.1, 0.05, 0.01, 0.001]
data = trainData.randomSplit(sizes, seed = 100)
print('\n=== training set of size 100%')
# Train model in pipeline
tempModel = pipeline.fit(trainData)
# Make predictions for training set and compute training set accuracy
tempPredictions = tempModel.transform(trainData)
tempAccuracy = evaluator.evaluate(tempPredictions)
print("Classification Error = %g" % (1.0 - tempAccuracy))
for frac, subset in zip(sizes, data):
print('\n=== training set of size reduced to %g' % frac)
# Train model in pipeline
tempModel = pipeline.fit(subset)
# Make predictions for training set and compute training set accuracy
tempPredictions = tempModel.transform(subset)
tempAccuracy = evaluator.evaluate(tempPredictions)
print("Classification Error = %g" % (1.0 - tempAccuracy))
# Define hyperparameters and their values to search and evaluate
paramGrid = ParamGridBuilder() \
.addGrid(rf.numTrees, [10,20,50,100,200,500,1000,5000]) \
.addGrid(rf.minInstancesPerNode, [0,1,2,4,6,8,10]) \
.addGrid(rf.maxDepth, [2,5,10,20,50]).build()
# Grid Search and Cross Validation
crossVal = CrossValidator(estimator=rf, estimatorParamMaps=paramGrid, evaluator=evaluator)
print('starting Hyperparameter Grid Search with cross-validation')
rfCrosVal = crossVal.fit(trainData)
print('Grid Search has finished')
print(rfCrosVal.bestModel.rank)
paramMap = list(zip(rfCrosVal.getEstimatorParamMaps(),rfCrosVal.avgMetrics))
paramMax = max(paramMap, key=lambda x: x[1])
print(paramMax)
# Evaluate the model with test data
cvtest_predictions = rfCrosVal.transform(testData)
cvtest_accuracy = evaluator.evaluate(cvtest_predictions)
print("Test Error = %g" % (1.0 - cvtest_accuracy))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Differentiation
Step2: Limits
Step3: Summation
Step4: Substitution
Step5: expression.subs({x
Step6: Equations
Step7: sy.solve(equation, unknown)  # solves the equation; the return value is a list
Step8: sy.roots(expr)  # computes the roots of the single-variable equation expr = 0
Step9: Differential equations can be solved symbolically. The first argument is an expression containing the unknown function, and the second argument is the unknown function to solve for.
|
<ASSISTANT_TASK:>
Python Code:
import sympy as sy
x, y = sy.symbols('x y')  # symbols used throughout the examples below
f = sy.Function('f')      # unknown function used in the differential-equation examples
sy.integrate(6*x**5, x)
sy.integrate(x**3, (x, 0, 10)) # definite integral
sy.integrate(6*x**5+y, x,y) # double indefinite integral
sy.integrate(x**3+y, (x, -1, 1),(y,1,3) ) # double definite integral
print f(x).diff(x)
sy.diff(sy.sin(x), x) # analytic differentiation
sy.diff(sy.sin(2*x), x, 2) # higher-order derivative: diff(func, var, n)
sy.diff(sy.sin(x*y),x,2,y,3) # 2nd-order derivative in x and 3rd-order derivative in y
sy.diff(sy.sin(2*x),x)
sy.sin(2*x).diff(x,1)
t=sy.Derivative(sy.sin(x),x)
# Derivative is the class that represents a derivative: its first argument is the function to differentiate,
# and the second argument is the variable to differentiate with respect to.
# Note: Derivative returns an unevaluated derivative object; it does not actually carry out the differentiation.
sy.limit(sy.sin(x)/x, x, 0) # limit
sy.summation(2*x - 1, (x, 1, y)) # summation
(1 + x*y).subs(x, sy.pi)
(1 + x*y).subs([(x, sy.pi), (y, 2)])
# build the equations with sy.Eq
u_max, rho_max, A, B = sy.symbols('u_max rho_max A B') # symbol objects; they can be constrained, e.g. real=True, positive=True, complex=True, integer=True, ...
eq1 = sy.Eq( 0, u_max*rho_max*(1 - A*rho_max-B*rho_max**2) ) # algebraic equation
eq1 = sy.Eq(f(x).diff(x)+f(x)+f(x)**2,0) # differential equation
print eq1
sy.solve(eq1,x)
sy.solveset(sy.sin(x)+x,x) # 即对 sy.sin(x)+x=0 求解
sy.dsolve(eq1,f(x)) # 微分方程求解 微分方程求解
from sympy.utilities.lambdify import lambdify
lambdify([x],f(x),modules="numpy") # 第一个参数为自变量列表
sy.N(sy.pi,20)
sy.pi.evalf(n=50)
i=sy.Symbol('i')
f = sy.Sum(sy.Indexed('x',i)*sy.cos(i*sy.pi),(i,1,10))
f
func = lambdify(x, f, 'numpy') # returns a numpy-ready function
print func
numpy_array_of_results = func(y)
numpy_array_of_results
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Nomenclature
Step2: The type of an enumeration member is the enumeration it belongs to
Step3: Alternative way to create an Enum
|
<ASSISTANT_TASK:>
Python Code:
from enum import Enum
class MyEnum(Enum):
first = 1
second = 2
third = 3
print(MyEnum.first)
print(repr(MyEnum.first))
type(MyEnum.first)
SecondEnum = Enum('SecondEnum', 'first, second, third')
print(SecondEnum.first)
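# A short illustrative addition (not part of the original notebook): enum members
# expose .name and .value, and an enumeration can be iterated over.
print(SecondEnum.first.name, SecondEnum.first.value)  # prints: first 1
print(list(MyEnum))  # [<MyEnum.first: 1>, <MyEnum.second: 2>, <MyEnum.third: 3>]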
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The focus of this post is to expand on an extremely useful, vectorizable isomorphism between indices that comes up all the time
Step2: This brings us to our first numpy gem of this post, np.triu_indices, which we use to check that our isomorphism is surjective.
Step3: The advantage over indexing into np.triu_indices is of course the scenario where you don't want to fully materialize all edges in memory, such as in frontier expansions for graph search.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
np.random.seed(1234)
x = np.random.choice(10, replace=False, size=10)
s = np.argsort(x)
inverse = np.empty_like(s)
inverse[s] = np.arange(len(s), dtype=int)
np.all(x == inverse)
# an edge index is determined by the isomorphism from
# ([n] choose 2) to [n choose 2]
# drop (i, j) to (i, j - i - 1) first. then:
# (0, 0) (0, 1) (0, 2)
# (1, 0) (1, 1)
# (2, 0)
# isomorphism goes in downward diagonals
# like valence electrons in chemistry
def c2(n):
return n * (n - 1) // 2
def fromtup(i, j):
j = j - i - 1
diagonal = i + j
return c2(diagonal + 1) + i
def totup(x):
# https://math.stackexchange.com/a/1417583 + some int/float rewriting
diagonal = (1 + np.sqrt(8 * x + 1).astype(np.uint64)) // 2 - 1
i = x - c2(diagonal + 1)
j = diagonal - i
j = j + i + 1
return i, j
nverts = 1343
edges = np.arange(c2(nverts), dtype=int)
np.all(fromtup(*totup(edges)) == edges)
left, right = totup(edges)
expected_left, expected_right = np.triu_indices(nverts, k=1)
from collections import Counter
Counter(zip(left, right)) == Counter(zip(expected_left, expected_right))
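# A small illustrative usage (assumption: during something like a frontier
# expansion we only need a handful of edge ids at a time, so we never
# materialize all n-choose-2 pairs):
some_edge_ids = np.array([0, 10, 100000])
print(list(zip(*totup(some_edge_ids))))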
x = 2**53
float(8 * x + 1) == float(8 * x)
def totup_flexible(x):
x = np.asarray(x)
assert np.all(x <= 2 ** 63 - 2**32)
if x > 2 ** 53:
s = np.sqrt(2) * np.sqrt(x)
s = s.astype(np.uint64)
# in principle, the extra multiplication here could require correction
# by at most 1 ulp; luckily (s+1)**2 is representable in u64
# because (sqrt(2)*sqrt(2**63 - 2**32)*(1+3*eps) + 1) is (just square it to see)
s3 = np.stack([s - 1, s, s + 1]).reshape(-1, 3)
s = 2 * s3[np.arange(len(s3)), np.argmin(s3 ** 2 - 2 * x, axis=-1)]
else:
s = np.sqrt(8 * x + 1).astype(np.uint64)
add = 0 if x > 2 ** 53 else 1
diagonal = (1 + s) // 2 - 1
diagonal = diagonal.reshape(x.shape)
i = x - c2(diagonal + 1)
j = diagonal - i
j = j + i + 1
return i, j
x = 2 ** 63 - 2 ** 32
fromtup(*totup_flexible(x)) == x
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define the vehicle class
Step2: Define the Moment Method class
Step3: Run simulation
|
<ASSISTANT_TASK:>
Python Code:
from opentire import OpenTire
from opentire.Core import TireState
import numpy as np
import matplotlib.pyplot as plt
class Vehicle():
def __init__(self):
self._mass = 1000
self._wb = 1
self._wd = 0.5
self._ft = None
self._rt = None
@property
def mass(self):
return self._mass
@mass.setter
def mass(self, value):
if value <= 0:
raise ValueError('Mass must be greater than zero')
self._mass = value
@property
def wheelbase(self):
return self._wb
@wheelbase.setter
def wheelbase(self, value):
if value <= 0:
raise ValueError('Wheelbase must be greater than zero')
self._wb = value
@property
def weight_dist(self):
return self._wd
@weight_dist.setter
def weight_dist(self, value):
if value >= 1 or value <= 0:
raise ValueError('Weight distribution must be a ratio between 0 and 1')
self._wd = value
@property
def front_tire(self):
return self._ft
@front_tire.setter
def front_tire(self, value):
if not isinstance(value, TireModelBase) and value is not None:
raise TypeError('Front tire must be a OpenTire model')
self._ft = value
@property
def rear_tire(self):
return self._rt
@rear_tire.setter
def rear_tire(self, value):
if not isinstance(value, TireModelBase) and value is not None:
raise TypeError('Rear tire must be a OpenTire model')
self._rt = value
@property
def length_a(self):
return self.wheelbase * (1 - self.weight_dist)
@property
def length_b(self):
return self.wheelbase * self.weight_dist
class MomentMethodSolver():
def __init__(self):
self._vehicle = None
self._beta = np.linspace(-12, 12, 25) * 3.14 / 180
self._delta = np.linspace(-12, 12, 25) * 3.14 / 180
self._velocity = 36
@property
def vehicle(self):
return self._vehicle
@vehicle.setter
def vehicle(self, value):
if not isinstance(value, Vehicle) and value is not None:
raise TypeError('Solver vehicle must be of Vehicle type')
self._vehicle = value
@property
def beta_range(self):
return self._beta
@beta_range.setter
def beta_range(self, value):
self._beta = value
@property
def delta_range(self):
return self._delta
@delta_range.setter
def delta_range(self, value):
self._delta = value
@property
def velocity(self):
return self._velocity
@velocity.setter
def velocity(self, value):
self._velocity = value
def solve(self):
fy = np.empty([len(self.beta_range), len(self.delta_range)])
mz = np.empty([len(self.beta_range), len(self.delta_range)])
initial_guess = (0, 0)
for i, beta in enumerate(self.beta_range):
for j, delta in enumerate(self.delta_range):
# Use previous solution as a guess
if j > 0: initial_guess = self._invertSolution(fy[i][j-1], mz[i][j-1])
elif i > 0: initial_guess = self._invertSolution(fy[i-1][j], mz[i-1][j])
else: initial_guess = (0, 0)
result = self._solve(beta, delta, initial_guess)
fy[i][j] = result[0]
mz[i][j] = result[1]
return (fy, mz)
def _solve(self, beta, delta, initial_guess = (0, 0)):
state = TireState()
state['FZ'] = 1500
state['IA'] = 0.0
state['SR'] = 0.0
state['SA'] = 0.0
state['FY'] = 0.0
state['V'] = 10.0
state['P'] = 260000
MAX_ITER = 100
n = 0
error = 9999
tolerance = 0.1
yaw_velocity = 0
front_force = initial_guess[0]
rear_force = initial_guess[1]
while (n < MAX_ITER and abs(error) > tolerance):
# Yaw rate
yaw_velocity = (front_force + rear_force) / (self.vehicle.mass * self.velocity)
error = front_force + rear_force
# Slip Angles
sa_front = beta - delta + yaw_velocity * self.vehicle.length_a / self.velocity
sa_rear = beta - yaw_velocity * self.vehicle.length_b / self.velocity
# Front Tire
state['SA'] = sa_front
state['FZ'] = 0.5 * 9.81 * self.vehicle.mass * self.vehicle.weight_dist
self.vehicle.front_tire.solve(state)
front_force = state['FY']
state['SA'] = -sa_front
self.vehicle.front_tire.solve(state)
front_force -= state['FY']
# Rear Tire
state['SA'] = sa_rear
state['FZ'] = 0.5 * 9.81 * self.vehicle.mass * (1 - self.vehicle.weight_dist)
self.vehicle.rear_tire.solve(state)
rear_force = state['FY']
state['SA'] = -sa_rear
self.vehicle.rear_tire.solve(state)
rear_force -= state['FY']
error -= front_force + rear_force
n += 1
return (front_force + rear_force,
front_force * self.vehicle.length_a - rear_force * self.vehicle.length_b)
def _invertSolution(self, lateral_force, yaw_moment):
front_force = (1 / (self.vehicle.length_a + self.vehicle.length_b)) * (self.vehicle.length_b * lateral_force
+ yaw_moment)
rear_force = (1 / (self.vehicle.length_a + self.vehicle.length_b)) * (self.vehicle.length_a * lateral_force
- yaw_moment)
return (front_force, rear_force)
openTire = OpenTire()
myVehicle = Vehicle()
myVehicle.mass = 1250 # kg
myVehicle.wheelbase = 2.4 # m
myVehicle.weight_dist = 0.47 # ratio
myVehicle.front_tire = openTire.createmodel('PAC2002')
myVehicle.rear_tire = openTire.createmodel('PAC2002')
solver = MomentMethodSolver()
solver.vehicle = myVehicle
solver.beta_range = np.linspace(-15, 16, 31) * 3.14 / 180
solver.delta_range = np.linspace(-15, 16, 31) * 3.14 / 180
solver.velocity = 70 / 3.6
force_moments = solver.solve()
lateral_accel = force_moments[0] / myVehicle.mass / 9.81
yaw_moment = force_moments[1]
plt.plot(lateral_accel[:][:], yaw_moment[:][:], color='black')
plt.plot(np.transpose(lateral_accel[:][:]), np.transpose(yaw_moment[:][:]), color='red')
plt.grid()
plt.xlabel("Lateral Acceleration [g]")
plt.ylabel("Yaw Moment [Nm]")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Liblinear, a library for large- scale linear learning focusing on SVM, is used to do the classification. It supports different solver types.
Step2: We solve ${\bf w}\cdot{\bf x} + \text{b} = 0$ to visualise the separating hyperplane. The methods get_w() and get_bias() are used to get the necessary values.
Step3: The classifier is now applied on a X-Y grid of points to get predictions.
Step4: SVMs using kernels
Step5: Just for fun we compute the kernel matrix and display it. There are clusters visible that are smooth for the gaussian and polynomial kernel and block-wise for the linear one. The gaussian one also smoothly decays from some cluster centre while the polynomial one oscillates within the clusters.
Step6: Prediction using kernel based SVM
Step7: We could now check a number of properties like what the value of the objective function returned by the particular SVM learning algorithm or the explictly computed primal and dual objective function is
Step8: and based on the objectives we can compute the duality gap (have a look at reference [2]), a measure of convergence quality of the svm training algorithm . In theory it is 0 at the optimum and in reality at least close to 0.
Step9: Let's now apply on the X-Y grid data and plot the results.
Step10: Probabilistic Outputs
Step11: Soft margins and slack variables
Step12: You can see that lower value of C causes classifier to sacrifice linear separability in order to gain stability, in a sense that influence of any single datapoint is now bounded by C. For hard margin SVM, support vectors are the points which are "on the margin". In the picture above, C=1000 is pretty close to hard-margin SVM, and you can see the highlighted points are the ones that will touch the margin. In high dimensions this might lead to overfitting. For soft-margin SVM, with a lower value of C, it's easier to explain them in terms of dual (equation $(2)$) variables. Support vectors are datapoints from training set which are are included in the predictor, ie, the ones with non-zero $\alpha_i$ parameter. This includes margin errors and points on the margin of the hyperplane.
Step13: Now lets plot the contour output on a $-5...+5$ grid for
Step14: And voila! The SVM decision rule reasonably distinguishes the red from the blue points. Despite being optimized for learning the discriminative function maximizing the margin, the SVM output quality wise remotely resembles the original distribution of the gaussian mixture model.
Step15: Kernel Normalizers
Step16: Multiclass classification
Step17: Let us try the multiclass classification for different kernels.
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
import matplotlib.patches as patches
#To import all shogun classes
import modshogun as sg
import numpy as np
#Generate some random data
X = 2 * np.random.randn(10,2)
traindata=np.r_[X + 3, X + 7].T
feats_train=sg.RealFeatures(traindata)
trainlab=np.concatenate((np.ones(10),-np.ones(10)))
labels=sg.BinaryLabels(trainlab)
# Plot the training data
plt.figure(figsize=(6,6))
plt.gray()
_=plt.scatter(traindata[0, :], traindata[1,:], c=labels, s=50)
plt.title("Training Data")
plt.xlabel('attribute1')
plt.ylabel('attribute2')
p1 = patches.Rectangle((0, 0), 1, 1, fc="k")
p2 = patches.Rectangle((0, 0), 1, 1, fc="w")
plt.legend((p1, p2), ["Class 1", "Class 2"], loc=2)
plt.gray()
#prameters to svm
#parameter C is described in a later section.
C=1
epsilon=1e-3
svm=sg.LibLinear(C, feats_train, labels)
svm.set_liblinear_solver_type(sg.L2R_L2LOSS_SVC)
svm.set_epsilon(epsilon)
#train
svm.train()
w=svm.get_w()
b=svm.get_bias()
#solve for w.x+b=0
x1=np.linspace(-1.0, 11.0, 100)
def solve (x1):
return -( ( (w[0])*x1 + b )/w[1] )
x2=map(solve, x1)
#plot
plt.figure(figsize=(6,6))
plt.gray()
plt.scatter(traindata[0, :], traindata[1,:], c=labels, s=50)
plt.plot(x1,x2, linewidth=2)
plt.title("Separating hyperplane")
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.gray()
size=100
x1_=np.linspace(-5, 15, size)
x2_=np.linspace(-5, 15, size)
x, y=np.meshgrid(x1_, x2_)
#Generate X-Y grid test data
grid=sg.RealFeatures(np.array((np.ravel(x), np.ravel(y))))
#apply on test grid
predictions = svm.apply(grid)
#Distance from hyperplane
z=predictions.get_values().reshape((size, size))
#plot
plt.jet()
plt.figure(figsize=(16,6))
plt.subplot(121)
plt.title("Classification")
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.gray()
plt.scatter(traindata[0, :], traindata[1,:], c=labels, s=50)
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.jet()
#Class predictions
z=predictions.get_labels().reshape((size, size))
#plot
plt.subplot(122)
plt.title("Separating hyperplane")
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.gray()
plt.scatter(traindata[0, :], traindata[1,:], c=labels, s=50)
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.gray()
gaussian_kernel=sg.GaussianKernel(feats_train, feats_train, 100)
#Polynomial kernel of degree 2
poly_kernel=sg.PolyKernel(feats_train, feats_train, 2, True)
linear_kernel=sg.LinearKernel(feats_train, feats_train)
kernels=[linear_kernel, poly_kernel, gaussian_kernel]
plt.jet()
def display_km(kernels, svm):
plt.figure(figsize=(20,6))
plt.suptitle('Kernel matrices for different kernels', fontsize=12)
for i, kernel in enumerate(kernels):
plt.subplot(1, len(kernels), i+1)
plt.title(kernel.get_name())
km=kernel.get_kernel_matrix()
plt.imshow(km, interpolation="nearest")
plt.colorbar()
display_km(kernels, svm)
C=1
epsilon=1e-3
svm=sg.LibSVM(C, gaussian_kernel, labels)
_=svm.train()
libsvm_obj=svm.get_objective()
primal_obj, dual_obj=svm.compute_svm_primal_objective(), svm.compute_svm_dual_objective()
print libsvm_obj, primal_obj, dual_obj
print "duality_gap", dual_obj-primal_obj
out=svm.apply(sg.RealFeatures(grid))
z=out.get_values().reshape((size, size))
#plot
plt.jet()
plt.figure(figsize=(16,6))
plt.subplot(121)
plt.title("Classification")
c=plt.pcolor(x1_, x2_, z)
plt.contour(x1_ , x2_, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.gray()
plt.scatter(traindata[0, :], traindata[1,:], c=labels, s=50)
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.jet()
z=out.get_labels().reshape((size, size))
plt.subplot(122)
plt.title("Decision boundary")
c=plt.pcolor(x1_, x2_, z)
plt.contour(x1_ , x2_, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.scatter(traindata[0, :], traindata[1,:], c=labels, s=50)
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.gray()
n=10
x1t_=np.linspace(-5, 15, n)
x2t_=np.linspace(-5, 15, n)
xt, yt=np.meshgrid(x1t_, x2t_)
#Generate X-Y grid test data
test_grid=sg.RealFeatures(np.array((np.ravel(xt), np.ravel(yt))))
labels_out=svm.apply(sg.RealFeatures(test_grid))
#Get values (Distance from hyperplane)
values=labels_out.get_values()
#Get probabilities
labels_out.scores_to_probabilities()
prob=labels_out.get_values()
#plot
plt.gray()
plt.figure(figsize=(10,6))
p1=plt.scatter(values, prob)
plt.title('Probabilistic outputs')
plt.xlabel('Distance from hyperplane')
plt.ylabel('Probability')
plt.legend([p1], ["Test samples"], loc=2)
def plot_sv(C_values):
plt.figure(figsize=(20,6))
plt.suptitle('Soft and hard margins with varying C', fontsize=12)
for i in range(len(C_values)):
plt.subplot(1, len(C_values), i+1)
linear_kernel=sg.LinearKernel(feats_train, feats_train)
svm1=sg.LibSVM(C_values[i], linear_kernel, labels)
svm1.train()
vec1=svm1.get_support_vectors()
X_=[]
Y_=[]
new_labels=[]
for j in vec1:
X_.append(traindata[0][j])
Y_.append(traindata[1][j])
new_labels.append(trainlab[j])
out1=svm1.apply(sg.RealFeatures(grid))
z1=out1.get_labels().reshape((size, size))
plt.jet()
c=plt.pcolor(x1_, x2_, z1)
plt.contour(x1_ , x2_, z1, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.gray()
plt.scatter(X_, Y_, c=new_labels, s=150)
plt.scatter(traindata[0, :], traindata[1,:], c=labels, s=20)
plt.title('Support vectors for C=%.2f'%C_values[i])
plt.xlabel('attribute1')
plt.ylabel('attribute2')
C_values=[0.1, 1000]
plot_sv(C_values)
num=50;
dist=1.0;
gmm=sg.GMM(2)
gmm.set_nth_mean(np.array([-dist,-dist]),0)
gmm.set_nth_mean(np.array([dist,dist]),1)
gmm.set_nth_cov(np.array([[1.0,0.0],[0.0,1.0]]),0)
gmm.set_nth_cov(np.array([[1.0,0.0],[0.0,1.0]]),1)
gmm.set_coef(np.array([1.0,0.0]))
xntr=np.array([gmm.sample() for i in xrange(num)]).T
gmm.set_coef(np.array([0.0,1.0]))
xptr=np.array([gmm.sample() for i in xrange(num)]).T
traindata=np.concatenate((xntr,xptr), axis=1)
trainlab=np.concatenate((-np.ones(num), np.ones(num)))
#shogun format features
feats_train=sg.RealFeatures(traindata)
labels=sg.BinaryLabels(trainlab)
gaussian_kernel=sg.GaussianKernel(feats_train, feats_train, 10)
#Polynomial kernel of degree 2
poly_kernel=sg.PolyKernel(feats_train, feats_train, 2, True)
linear_kernel=sg.LinearKernel(feats_train, feats_train)
kernels=[gaussian_kernel, poly_kernel, linear_kernel]
#train machine
C=1
svm=sg.LibSVM(C, gaussian_kernel, labels)
_=svm.train()
size=100
x1=np.linspace(-5, 5, size)
x2=np.linspace(-5, 5, size)
x, y=np.meshgrid(x1, x2)
grid=sg.RealFeatures(np.array((np.ravel(x), np.ravel(y))))
grid_out=svm.apply(grid)
z=grid_out.get_labels().reshape((size, size))
plt.jet()
plt.figure(figsize=(16,5))
z=grid_out.get_values().reshape((size, size))
plt.subplot(121)
plt.title('Classification')
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.subplot(122)
plt.title('Original distribution')
gmm.set_coef(np.array([1.0,0.0]))
gmm.set_features(grid)
grid_out=gmm.get_likelihood_for_all_examples()
zn=grid_out.reshape((size, size))
gmm.set_coef(np.array([0.0,1.0]))
grid_out=gmm.get_likelihood_for_all_examples()
zp=grid_out.reshape((size, size))
z=zp-zn
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
def plot_outputs(kernels):
plt.figure(figsize=(20,5))
plt.suptitle('Binary Classification using different kernels', fontsize=12)
for i in range(len(kernels)):
plt.subplot(1,len(kernels),i+1)
plt.title(kernels[i].get_name())
svm.set_kernel(kernels[i])
svm.train()
grid_out=svm.apply(grid)
z=grid_out.get_values().reshape((size, size))
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.scatter(traindata[0,:], traindata[1,:], c=trainlab, s=35)
plot_outputs(kernels)
f = open(os.path.join(SHOGUN_DATA_DIR, 'uci/ionosphere/ionosphere.data'))
mat = []
labels = []
# read data from file
for line in f:
words = line.rstrip().split(',')
mat.append([float(i) for i in words[0:-1]])
if str(words[-1])=='g':
labels.append(1)
else:
labels.append(-1)
f.close()
mat_train=mat[:30]
mat_test=mat[30:110]
lab_train=sg.BinaryLabels(np.array(labels[:30]).reshape((30,)))
lab_test=sg.BinaryLabels(np.array(labels[30:110]).reshape((len(labels[30:110]),)))
feats_train = sg.RealFeatures(np.array(mat_train).T)
feats_test = sg.RealFeatures(np.array(mat_test).T)
#without normalization
gaussian_kernel=sg.GaussianKernel()
gaussian_kernel.init(feats_train, feats_train)
gaussian_kernel.set_width(0.1)
C=1
svm=sg.LibSVM(C, gaussian_kernel, lab_train)
_=svm.train()
output=svm.apply(feats_test)
Err=sg.ErrorRateMeasure()
error=Err.evaluate(output, lab_test)
print 'Error:', error
#set normalization
gaussian_kernel=sg.GaussianKernel()
# TODO: currently there is a bug that makes it impossible to use Gaussian kernels and kernel normalisers
# See github issue #3504
#gaussian_kernel.set_normalizer(sg.SqrtDiagKernelNormalizer())
gaussian_kernel.init(feats_train, feats_train)
gaussian_kernel.set_width(0.1)
svm.set_kernel(gaussian_kernel)
svm.train()
output=svm.apply(feats_test)
Err=sg.ErrorRateMeasure()
error=Err.evaluate(output, lab_test)
print 'Error with normalization:', error
num=30;
num_components=4
means=np.zeros((num_components, 2))
means[0]=[-1.5,1.5]
means[1]=[1.5,-1.5]
means[2]=[-1.5,-1.5]
means[3]=[1.5,1.5]
covs=np.array([[1.0,0.0],[0.0,1.0]])
gmm=sg.GMM(num_components)
[gmm.set_nth_mean(means[i], i) for i in range(num_components)]
[gmm.set_nth_cov(covs,i) for i in range(num_components)]
gmm.set_coef(np.array([1.0,0.0,0.0,0.0]))
xntr=np.array([gmm.sample() for i in xrange(num)]).T
xnte=np.array([gmm.sample() for i in xrange(5000)]).T
gmm.set_coef(np.array([0.0,1.0,0.0,0.0]))
xntr1=np.array([gmm.sample() for i in xrange(num)]).T
xnte1=np.array([gmm.sample() for i in xrange(5000)]).T
gmm.set_coef(np.array([0.0,0.0,1.0,0.0]))
xptr=np.array([gmm.sample() for i in xrange(num)]).T
xpte=np.array([gmm.sample() for i in xrange(5000)]).T
gmm.set_coef(np.array([0.0,0.0,0.0,1.0]))
xptr1=np.array([gmm.sample() for i in xrange(num)]).T
xpte1=np.array([gmm.sample() for i in xrange(5000)]).T
traindata=np.concatenate((xntr,xntr1,xptr,xptr1), axis=1)
testdata=np.concatenate((xnte,xnte1,xpte,xpte1), axis=1)
l0 = np.array([0.0 for i in xrange(num)])
l1 = np.array([1.0 for i in xrange(num)])
l2 = np.array([2.0 for i in xrange(num)])
l3 = np.array([3.0 for i in xrange(num)])
trainlab=np.concatenate((l0,l1,l2,l3))
testlab=np.concatenate((l0,l1,l2,l3))
plt.title('Toy data for multiclass classification')
plt.jet()
plt.scatter(traindata[0,:], traindata[1,:], c=trainlab, s=75)
feats_train=sg.RealFeatures(traindata)
labels=sg.MulticlassLabels(trainlab)
gaussian_kernel=sg.GaussianKernel(feats_train, feats_train, 2)
poly_kernel=sg.PolyKernel(feats_train, feats_train, 4, True)
linear_kernel=sg.LinearKernel(feats_train, feats_train)
kernels=[gaussian_kernel, poly_kernel, linear_kernel]
svm=sg.GMNPSVM(1, gaussian_kernel, labels)
_=svm.train(feats_train)
size=100
x1=np.linspace(-6, 6, size)
x2=np.linspace(-6, 6, size)
x, y=np.meshgrid(x1, x2)
grid=sg.RealFeatures(np.array((np.ravel(x), np.ravel(y))))
def plot_outputs(kernels):
plt.figure(figsize=(20,5))
plt.suptitle('Multiclass Classification using different kernels', fontsize=12)
for i in range(len(kernels)):
plt.subplot(1,len(kernels),i+1)
plt.title(kernels[i].get_name())
svm.set_kernel(kernels[i])
svm.train(feats_train)
grid_out=svm.apply(grid)
z=grid_out.get_labels().reshape((size, size))
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.scatter(traindata[0,:], traindata[1,:], c=trainlab, s=35)
plot_outputs(kernels)
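# Hedged extra check (an addition, not part of the original notebook): estimate the
# training accuracy of the multiclass SVM by comparing predicted labels with trainlab.
# Only objects defined above are used; re-training with the Gaussian kernel here is an
# assumption for illustration, and get_labels() is used as in the plotting cells above.
svm.set_kernel(gaussian_kernel)
svm.train(feats_train)
train_pred = svm.apply(feats_train).get_labels()
print 'Training accuracy: %.3f' % np.mean(train_pred == trainlab)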
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Exploration
Step2: Question 1 - Feature Observation
Step4: Answer
Step5: Question 2 - Goodness of Fit
Step6: Answer
Step7: Question 3 - Training and Testing
Step8: Question 4 - Learning the Data
Step10: Question 5 - Bias-Variance Tradeoff
Step11: Making Predictions
Step12: Answer
Step13: Answer
|
<ASSISTANT_TASK:>
Python Code:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from sklearn.cross_validation import ShuffleSplit
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the Boston housing dataset
data = pd.read_csv('housing.csv')
prices = data['MEDV']
features = data.drop('MEDV', axis = 1)
# Success
print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape)
# TODO: Minimum price of the data
minimum_price = np.min(data['MEDV'])
# TODO: Maximum price of the data
maximum_price = np.max(data['MEDV'])
# TODO: Mean price of the data
mean_price = np.mean(data['MEDV'])
# TODO: Median price of the data
median_price = np.median(data['MEDV'])
# TODO: Standard deviation of prices of the data
std_price = np.std(data['MEDV'])
# Show the calculated statistics
print "Statistics for Boston housing dataset:\n"
print "Minimum price: ${:,.2f}".format(minimum_price)
print "Maximum price: ${:,.2f}".format(maximum_price)
print "Mean price: ${:,.2f}".format(mean_price)
print "Median price ${:,.2f}".format(median_price)
print "Standard deviation of prices: ${:,.2f}".format(std_price)
import matplotlib.pyplot as plt
import numpy as np
for col in features.columns:
fig, ax = plt.subplots()
fit = np.polyfit(features[col],prices,deg=1)
plt.plot(features[col], prices,'o',color='black')
plt.plot(features[col], fit[0]*features[col]+fit[1], color ='red', linewidth = 3)
plt.title("Prices vs %s"%(col))
plt.xlabel(col)
plt.ylabel("prices")
# TODO: Import 'r2_score'
from sklearn.metrics import r2_score as r2
def performance_metric(y_true, y_predict):
"""Calculates and returns the performance score between
true and predicted values based on the metric chosen."""
# TODO: Calculate the performance score between 'y_true' and 'y_predict'
score = r2(y_true,y_predict)
# Return the score
return score
# Calculate the performance of this model
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print "Model has a coefficient of determination, R^2, of {:.3f}.".format(score)
# TODO: Import 'train_test_split'
from sklearn import cross_validation
# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = cross_validation.train_test_split(features,prices,test_size=0.2,random_state=0)
# Success
print "Training and testing split was successful."
# Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices)
vs.ModelComplexity(X_train, y_train)
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.metrics import make_scorer
from sklearn import grid_search
from sklearn.tree import DecisionTreeRegressor
from sklearn import cross_validation
def fit_model(X, y):
"""Performs grid search over the 'max_depth' parameter for a
decision tree regressor trained on the input data [X, y]."""
# Create cross-validation sets from the training data
cv_sets = ShuffleSplit(X.shape[0], n_iter = 10, test_size = 0.20, random_state = 0)
# TODO: Create a decision tree regressor object
regressor = DecisionTreeRegressor(random_state=0)
# TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
params = {'max_depth':range(1,11)}
# TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
scoring_fnc = make_scorer(performance_metric)
# TODO: Create the grid search object
grid = grid_search.GridSearchCV(regressor, params, cv= cv_sets, scoring=scoring_fnc)
# Fit the grid search object to the data to compute the optimal model
grid = grid.fit(X, y)
# Return the optimal model after fitting the data
return grid.best_estimator_
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
# Produce the value for 'max_depth'
print "Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth'])
# Produce a matrix for client data
client_data = [[5, 17, 15], # Client 1
[4, 32, 22], # Client 2
[8, 3, 12]] # Client 3
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
print "Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price)
vs.PredictTrials(features, prices, fit_model, client_data)
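# Hedged extra check (an addition, not required by the project): score the fitted
# model on the held-out test split using the same R^2 metric defined above.
test_score = performance_metric(y_test, reg.predict(X_test))
print "R^2 score on the test set: {:.3f}".format(test_score)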
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TFRecord and tf.train.Example
Step5: tf.train.Example
Step6: Note
Step7: All proto messages can be serialized to a binary-string using the .SerializeToString method
Step8: Creating a tf.train.Example message
Step10: Each of these features can be coerced into a tf.train.Example-compatible type using one of _bytes_feature, _float_feature, _int64_feature. You can then create a tf.train.Example message from these encoded features
Step11: For example, suppose you have a single observation from the dataset, [False, 4, bytes('goat'), 0.9876]. You can create and print the tf.train.Example message for this observation using create_message(). Each single observation will be written as a Features message as per the above. Note that the tf.train.Example message is just a wrapper around the Features message
Step12: To decode the message use the tf.train.Example.FromString method.
Step13: TFRecords format details
Step14: Applied to a tuple of arrays, it returns a dataset of tuples
Step15: Use the tf.data.Dataset.map method to apply a function to each element of a Dataset.
Step16: Apply this function to each element in the dataset
Step17: And write them to a TFRecord file
Step18: Reading a TFRecord file
Step19: At this point the dataset contains serialized tf.train.Example messages. When iterated over it returns these as scalar string tensors.
Step20: These tensors can be parsed using the function below. Note that the feature_description is necessary here because tf.data.Datasets use graph-execution, and need this description to build their shape and type signature
Step21: Alternatively, use tf.parse example to parse the whole batch at once. Apply this function to each item in the dataset using the tf.data.Dataset.map method
Step22: Use eager execution to display the observations in the dataset. There are 10,000 observations in this dataset, but you will only display the first 10. The data is displayed as a dictionary of features. Each item is a tf.Tensor, and the numpy element of this tensor displays the value of the feature
Step23: Here, the tf.parse_example function unpacks the tf.train.Example fields into standard tensors.
Step24: Reading a TFRecord file
Step25: That returns a tf.train.Example proto which is difficult to use as is, but it's fundamentally a representation of a dictionary mapping feature names to their values, as the following cell shows
Step26: Walkthrough
Step27: Write the TFRecord file
Step28: Notice that all of the features are now stored in the tf.train.Example message. Next, functionalize the code above and write the example messages to a file named images.tfrecords
Step29: Read the TFRecord file
Step30: Recover the images from the TFRecord file
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
import numpy as np
import IPython.display as display
# The following functions can be used to convert a value to a type compatible
# with tf.train.Example.
def _bytes_feature(value):
"""Returns a bytes_list from a string / byte."""
if isinstance(value, type(tf.constant(0))):
value = value.numpy() # BytesList won't unpack a string from an EagerTensor.
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _float_feature(value):
"""Returns a float_list from a float / double."""
return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))
def _int64_feature(value):
"""Returns an int64_list from a bool / enum / int / uint."""
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
print(_bytes_feature(b'test_string'))
print(_bytes_feature(u'test_bytes'.encode('utf-8')))
print(_float_feature(np.exp(1)))
print(_int64_feature(True))
print(_int64_feature(1))
feature = _float_feature(np.exp(1))
feature.SerializeToString()
# The number of observations in the dataset.
n_observations = int(1e4)
# Boolean feature, encoded as False or True.
feature0 = np.random.choice([False, True], n_observations)
# Integer feature, random from 0 to 4.
feature1 = np.random.randint(0, 5, n_observations)
# String feature.
strings = np.array([b'cat', b'dog', b'chicken', b'horse', b'goat'])
feature2 = strings[feature1]
# Float feature, from a standard normal distribution.
feature3 = np.random.randn(n_observations)
def serialize_example(feature0, feature1, feature2, feature3):
"""Creates a tf.train.Example message ready to be written to a file."""
# Create a dictionary mapping the feature name to the tf.train.Example-compatible
# data type.
feature = {
'feature0': _int64_feature(feature0),
'feature1': _int64_feature(feature1),
'feature2': _bytes_feature(feature2),
'feature3': _float_feature(feature3),
}
# Create a Features message using tf.train.Example.
example_proto = tf.train.Example(features=tf.train.Features(feature=feature))
return example_proto.SerializeToString()
# This is an example observation from the dataset.
example_observation = []
serialized_example = serialize_example(False, 4, b'goat', 0.9876)
serialized_example
example_proto = tf.train.Example.FromString(serialized_example)
example_proto
tf.data.Dataset.from_tensor_slices(feature1)
features_dataset = tf.data.Dataset.from_tensor_slices((feature0, feature1, feature2, feature3))
features_dataset
# Use `take(1)` to only pull one example from the dataset.
for f0,f1,f2,f3 in features_dataset.take(1):
print(f0)
print(f1)
print(f2)
print(f3)
def tf_serialize_example(f0,f1,f2,f3):
tf_string = tf.py_function(
serialize_example,
(f0, f1, f2, f3), # Pass these args to the above function.
tf.string) # The return type is `tf.string`.
return tf.reshape(tf_string, ()) # The result is a scalar.
tf_serialize_example(f0, f1, f2, f3)
serialized_features_dataset = features_dataset.map(tf_serialize_example)
serialized_features_dataset
def generator():
for features in features_dataset:
yield serialize_example(*features)
serialized_features_dataset = tf.data.Dataset.from_generator(
generator, output_types=tf.string, output_shapes=())
serialized_features_dataset
filename = 'test.tfrecord'
writer = tf.data.experimental.TFRecordWriter(filename)
writer.write(serialized_features_dataset)
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
for raw_record in raw_dataset.take(10):
print(repr(raw_record))
# Create a description of the features.
feature_description = {
'feature0': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature1': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature2': tf.io.FixedLenFeature([], tf.string, default_value=''),
'feature3': tf.io.FixedLenFeature([], tf.float32, default_value=0.0),
}
def _parse_function(example_proto):
# Parse the input `tf.train.Example` proto using the dictionary above.
return tf.io.parse_single_example(example_proto, feature_description)
parsed_dataset = raw_dataset.map(_parse_function)
parsed_dataset
for parsed_record in parsed_dataset.take(10):
print(repr(parsed_record))
# Write the `tf.train.Example` observations to the file.
with tf.io.TFRecordWriter(filename) as writer:
for i in range(n_observations):
example = serialize_example(feature0[i], feature1[i], feature2[i], feature3[i])
writer.write(example)
!du -sh {filename}
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
for raw_record in raw_dataset.take(1):
example = tf.train.Example()
example.ParseFromString(raw_record.numpy())
print(example)
result = {}
# example.features.feature is the dictionary
for key, feature in example.features.feature.items():
# The values are the Feature objects which contain a `kind` which contains:
# one of three fields: bytes_list, float_list, int64_list
kind = feature.WhichOneof('kind')
result[key] = np.array(getattr(feature, kind).value)
result
cat_in_snow = tf.keras.utils.get_file(
'320px-Felis_catus-cat_on_snow.jpg',
'https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg')
williamsburg_bridge = tf.keras.utils.get_file(
'194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg',
'https://storage.googleapis.com/download.tensorflow.org/example_images/194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg')
display.display(display.Image(filename=cat_in_snow))
display.display(display.HTML('Image cc-by: <a "href=https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg">Von.grzanka</a>'))
display.display(display.Image(filename=williamsburg_bridge))
display.display(display.HTML('<a "href=https://commons.wikimedia.org/wiki/File:New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg">From Wikimedia</a>'))
image_labels = {
cat_in_snow : 0,
williamsburg_bridge : 1,
}
# This is an example, just using the cat image.
image_string = open(cat_in_snow, 'rb').read()
label = image_labels[cat_in_snow]
# Create a dictionary with features that may be relevant.
def image_example(image_string, label):
image_shape = tf.io.decode_jpeg(image_string).shape
feature = {
'height': _int64_feature(image_shape[0]),
'width': _int64_feature(image_shape[1]),
'depth': _int64_feature(image_shape[2]),
'label': _int64_feature(label),
'image_raw': _bytes_feature(image_string),
}
return tf.train.Example(features=tf.train.Features(feature=feature))
for line in str(image_example(image_string, label)).split('\n')[:15]:
print(line)
print('...')
# Write the raw image files to `images.tfrecords`.
# First, process the two images into `tf.train.Example` messages.
# Then, write to a `.tfrecords` file.
record_file = 'images.tfrecords'
with tf.io.TFRecordWriter(record_file) as writer:
for filename, label in image_labels.items():
image_string = open(filename, 'rb').read()
tf_example = image_example(image_string, label)
writer.write(tf_example.SerializeToString())
!du -sh {record_file}
raw_image_dataset = tf.data.TFRecordDataset('images.tfrecords')
# Create a dictionary describing the features.
image_feature_description = {
'height': tf.io.FixedLenFeature([], tf.int64),
'width': tf.io.FixedLenFeature([], tf.int64),
'depth': tf.io.FixedLenFeature([], tf.int64),
'label': tf.io.FixedLenFeature([], tf.int64),
'image_raw': tf.io.FixedLenFeature([], tf.string),
}
def _parse_image_function(example_proto):
# Parse the input tf.train.Example proto using the dictionary above.
return tf.io.parse_single_example(example_proto, image_feature_description)
parsed_image_dataset = raw_image_dataset.map(_parse_image_function)
parsed_image_dataset
for image_features in parsed_image_dataset:
image_raw = image_features['image_raw'].numpy()
display.display(display.Image(data=image_raw))
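# Hedged sanity check (an addition, not in the original walkthrough): count the raw
# records in the image TFRecord file; with two labeled images it should print 2.
num_records = sum(1 for _ in tf.data.TFRecordDataset(record_file))
print('Records in {}: {}'.format(record_file, num_records))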
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We follow the state-action pairs formulation approach.
Step2: The backward induction algorithm for finite horizon dynamic programs is offered
Step3: In the returns, vs is an $(N+1) \times n$ array that contains the optimal value functions,
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
from scipy import sparse
import matplotlib.pyplot as plt
import quantecon as qe
from quantecon.markov import DiscreteDP, backward_induction, sa_indices
T = 0.5 # Time expiration (years)
vol = 0.2 # Annual volatility
r = 0.05 # Annual interest rate
strike = 2.1 # Strike price
p0 = 2 # Current price
N = 100 # Number of periods to expiration
# Time length of a period
tau = T/N
# Discount factor
beta = np.exp(-r*tau)
# Up-jump factor
u = np.exp(vol*np.sqrt(tau))
# Up-jump probability
q = 1/2 + np.sqrt(tau)*(r - (vol**2)/2)/(2*vol)
# Possible price values
ps = u**np.arange(-N, N+1) * p0
# Number of states
n = len(ps) + 1 # State n-1: "the option has been exercised"
# Number of actions
m = 2 # 0: hold, 1: exercise
# Number of feasible state-action pairs
L = n*m - 1 # At state n-1, there is only one action "do nothing"
# Arrays of state and action indices
s_indices, a_indices = sa_indices(n, m)
s_indices, a_indices = s_indices[:-1], a_indices[:-1]
# Reward vector
R = np.empty((n, m))
R[:, 0] = 0
R[:-1, 1] = strike - ps
R = R.ravel()[:-1]
# Transition probability array
Q = sparse.lil_matrix((L, n))
for i in range(L-1):
if a_indices[i] == 0:
Q[i, min(s_indices[i]+1, len(ps)-1)] = q
Q[i, max(s_indices[i]-1, 0)] = 1 - q
else:
Q[i, n-1] = 1
Q[L-1, n-1] = 1
# Create a DiscreteDP
ddp = DiscreteDP(R, Q, beta, s_indices, a_indices)
vs, sigmas = backward_induction(ddp, N)
v = vs[0]
max_exercise_price = ps[sigmas[::-1].sum(-1)-1]
fig, axes = plt.subplots(1, 2, figsize=(12, 4))
axes[0].plot([0, strike], [strike, 0], 'k--')
axes[0].plot(ps, v[:-1])
axes[0].set_xlim(0, strike*2)
axes[0].set_xticks(np.linspace(0, 4, 5, endpoint=True))
axes[0].set_ylim(0, strike)
axes[0].set_yticks(np.linspace(0, 2, 5, endpoint=True))
axes[0].set_xlabel('Asset Price')
axes[0].set_ylabel('Premium')
axes[0].set_title('Put Option Value')
axes[1].plot(np.linspace(0, T, N), max_exercise_price)
axes[1].set_xlim(0, T)
axes[1].set_ylim(1.6, strike)
axes[1].set_xlabel('Time to Maturity')
axes[1].set_ylabel('Asset Price')
axes[1].set_title('Put Option Optimal Exercise Boundary')
axes[1].tick_params(right='on')
plt.show()
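# Hedged follow-up (an addition to the original lecture code): the current price p0
# corresponds to index N of `ps`, so v[N] is the estimated premium of the put at p0.
print("Option value at the current price p0 = {:.4f}".format(v[N]))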
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: DCP by example with CVXPy
Step2: CVXPy
Step3: Check the solution
Step4: Check the DCP rules
Step5: Autodiff by example with PyTorch
|
<ASSISTANT_TASK:>
Python Code:
import cvxpy as cvx
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
print(cvx.installed_solvers())
USE_COLAB = False
if USE_COLAB == False:
plt.rc("text", usetex=True)
n = 1000
m = 10
x_true = np.random.randn(n)
x_true[np.abs(x_true) > 0.05] = 0
print("Num of nnz in x = {}".format(np.sum(x_true != 0)))
A = np.random.randn(m, n)
b = A.dot(x_true)
x = cvx.Variable(n)
objective = cvx.norm1(x)
constr = [A*x == b]
problem = cvx.Problem(cvx.Minimize(objective), constr)
problem.solve(verbose=True, max_iter=3000)
tol= 1e-4
print(np.linalg.norm(A.dot(x.value) - b))
print(np.linalg.norm(A[:, np.abs(x.value) > tol].dot(x.value[np.abs(x.value) > tol]) - b))
print(np.linalg.norm(x_true - x.value))
print("Num nnz = {}".format(np.sum(np.abs(x.value) > tol)))
plt.figure(figsize=(10, 8))
plt.plot(x.value, label=r"$x^*$")
plt.plot(x_true, label=r"True $x$")
plt.legend(fontsize=26)
plt.yticks(fontsize=26)
plt.figure(figsize=(9, 7))
plt.semilogy(np.sort(np.abs(x.value))[::-1])
plt.ylabel("$|x_i|$", fontsize=24)
plt.yticks(fontsize=24)
plt.xlabel(r"Sorted index of $x^*$", fontsize=24)
plt.xticks(fontsize=24)
# Non-affine equality constraint
y = cvx.Variable(1)
obj = cvx.Minimize(cvx.power(y, 3))
problem = cvx.Problem(obj, [cvx.power(y - 3, 2) == 0])
problem.solve(verbose=True)
# Non-convex objective function
y = cvx.Variable(2)
obj = cvx.Minimize(y[0]**2 - y[1]**2)
problem = cvx.Problem(obj)
problem.solve(verbose=True)
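# A gentler way to check DCP compliance (hedged addition, not in the original cells):
# Problem.is_dcp() reports whether the problem follows the DCP rules without solving it.
y = cvx.Variable(2)
obj = cvx.Minimize(y[0]**2 - y[1]**2)
print(cvx.Problem(obj).is_dcp())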
import torch
n = 5
A = torch.randn((n, n), requires_grad=True)
b = torch.randn((n,))
x = torch.randn((n,), requires_grad=True)
f = 0.5 * x @ A @ x - b @ x
f.backward()
print(f)
print(f.item())
manual_grad_x = 0.5 * (A + A.t()) @ x - b
print(manual_grad_x.data)
print(x.grad.data)
manual_grad_A = 0.5 * torch.ger(x, x)
print(manual_grad_A.data)
print(A.grad.data)
print(torch.norm(manual_grad_A.data - A.grad.data).item())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Contour plots of 2d wavefunctions
Step3: The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction
Step4: Next make a visualization using one of the pcolor functions
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
def well2d(x, y, nx, ny, L=1.0):
"""Compute the 2d quantum well wave function."""
psi = 2*np.sin(nx*np.pi*x/L)*np.sin(ny*np.pi*y/L)/L
return psi
psi = well2d(np.linspace(0,1,10), np.linspace(0,1,10), 1, 1)
psi
psi = well2d(np.linspace(0,1,10), np.linspace(0,1,10), 1, 1)
assert len(psi)==10
assert psi.shape==(10,)
x = np.linspace(0,1,100)
y = np.linspace(0,1,100)
fld = np.meshgrid(x, y) # Sets up coordinates
psi = well2d(fld[0],fld[1],3,2) # Creates 2-d array with well2d's values
f = plt.figure(figsize=(5,5))
plt.contourf(x,y,psi)
plt.tick_params(which='both',direction='out', top=False,right=False)
plt.set_cmap('copper')
plt.colorbar()
assert True # use this cell for grading the contour plot
plt.figure(figsize=(5,5))
plt.pcolormesh(x,y,psi)
plt.tick_params(which='both',direction='out', top=False, right=False)
plt.set_cmap('winter')
plt.colorbar()
assert True # use this cell for grading the pcolor plot
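# Hedged numerical check (an addition, not required by the exercise): the wave function
# should be normalized, i.e. the double integral of psi**2 over the unit square should
# be close to 1. np.trapz is applied twice to approximate the 2-d integral on the grid.
norm = np.trapz(np.trapz(psi**2, x, axis=1), y)
print(norm)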
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Quick Look at the Data
Step2: Calibration Data
Step3: Perspective Transform
Step4: Color Thresholding
Step5: Coordinate Transformations
Step6: Read in saved data and ground truth map of the world
Step7: Write a function to process stored images
Step9: Make a video from processed image data
|
<ASSISTANT_TASK:>
Python Code:
%%HTML
<style> code {background-color : orange !important;} </style>
%matplotlib inline
#%matplotlib qt # Choose %matplotlib qt to plot to an interactive window (note it may show up behind your browser)
# Make some of the relevant imports
import cv2 # OpenCV for perspective transform
import numpy as np
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import scipy.misc # For saving images as needed
import glob # For reading in a list of images from a folder
path = '../test_dataset/IMG/*'
img_list = glob.glob(path)
# Grab a random image and display it
idx = np.random.randint(0, len(img_list)-1)
image = mpimg.imread(img_list[idx])
plt.imshow(image)
# In the simulator you can toggle on a grid on the ground for calibration
# You can also toggle on the rock samples with the 0 (zero) key.
# Here's an example of the grid and one of the rocks
example_grid = '../calibration_images/example_grid1.jpg'
example_rock = '../calibration_images/example_rock1.jpg'
grid_img = mpimg.imread(example_grid)
rock_img = mpimg.imread(example_rock)
fig = plt.figure(figsize=(12,3))
plt.subplot(121)
plt.imshow(grid_img)
plt.subplot(122)
plt.imshow(rock_img)
# Define a function to perform a perspective transform
# I've used the example grid image above to choose source points for the
# grid cell in front of the rover (each grid cell is 1 square meter in the sim)
# Define a function to perform a perspective transform
def perspect_transform(img, src, dst):
M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]))# keep same size as input image
return warped
# Define calibration box in source (actual) and destination (desired) coordinates
# These source and destination points are defined to warp the image
# to a grid where each 10x10 pixel square represents 1 square meter
# The destination box will be 2*dst_size on each side
dst_size = 5
# Set a bottom offset to account for the fact that the bottom of the image
# is not the position of the rover but a bit in front of it
# this is just a rough guess, feel free to change it!
bottom_offset = 6
source = np.float32([[14, 140], [301 ,140],[200, 96], [118, 96]])
destination = np.float32([[image.shape[1]/2 - dst_size, image.shape[0] - bottom_offset],
[image.shape[1]/2 + dst_size, image.shape[0] - bottom_offset],
[image.shape[1]/2 + dst_size, image.shape[0] - 2*dst_size - bottom_offset],
[image.shape[1]/2 - dst_size, image.shape[0] - 2*dst_size - bottom_offset],
])
warped = perspect_transform(grid_img, source, destination)
plt.imshow(warped)
#scipy.misc.imsave('../output/warped_example.jpg', warped)
# Identify pixels above the threshold
# Threshold of RGB > 160 does a nice job of identifying ground pixels only
def color_thresh(img, rgb_thresh=(160, 160, 160)):
# Create an array of zeros same xy size as img, but single channel
color_select = np.zeros_like(img[:,:,0])
# Require that each pixel be above all three threshold values in RGB
# above_thresh will now contain a boolean array with "True"
# where threshold was met
above_thresh = (img[:,:,0] > rgb_thresh[0]) \
& (img[:,:,1] > rgb_thresh[1]) \
& (img[:,:,2] > rgb_thresh[2])
# Index the array of zeros with the boolean array and set to 1
color_select[above_thresh] = 1
# Return the binary image
return color_select
threshed = color_thresh(warped)
plt.imshow(threshed, cmap='gray')
#scipy.misc.imsave('../output/warped_threshed.jpg', threshed*255)
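# Hedged addition (not part of the provided notebook, see the rock calibration image
# above): a similar threshold tuned for the yellow rock samples. The RGB levels below
# are an assumption and may need tuning for your own recorded data.
def find_rocks(img, levels=(110, 110, 50)):
    rock_pix = (img[:,:,0] > levels[0]) \
             & (img[:,:,1] > levels[1]) \
             & (img[:,:,2] < levels[2])
    rock_map = np.zeros_like(img[:,:,0])
    rock_map[rock_pix] = 1
    return rock_map
rock_map = find_rocks(rock_img)
plt.imshow(rock_map, cmap='gray')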
def rover_coords(binary_img):
# Identify nonzero pixels
ypos, xpos = binary_img.nonzero()
# Calculate pixel positions with reference to the rover position being at the
# center bottom of the image.
x_pixel = np.absolute(ypos - binary_img.shape[0]).astype(np.float)
y_pixel = -(xpos - binary_img.shape[0]).astype(np.float)
return x_pixel, y_pixel
# Define a function to convert to radial coords in rover space
def to_polar_coords(x_pixel, y_pixel):
# Convert (x_pixel, y_pixel) to (distance, angle)
# in polar coordinates in rover space
# Calculate distance to each pixel
dist = np.sqrt(x_pixel**2 + y_pixel**2)
# Calculate angle away from vertical for each pixel
angles = np.arctan2(y_pixel, x_pixel)
return dist, angles
# Define a function to map rover space pixels to world space
def pix_to_world(xpix, ypix, x_rover, y_rover, yaw_rover, world_size, scale):
# Map pixels from rover space to world coords
yaw = yaw_rover * np.pi / 180
# Perform rotation, translation and clipping all at once
x_pix_world = np.clip(np.int_((((xpix * np.cos(yaw)) - (ypix * np.sin(yaw)))/scale) + x_rover),
0, world_size - 1)
y_pix_world = np.clip(np.int_((((xpix * np.sin(yaw)) + (ypix * np.cos(yaw)))/scale) + y_rover),
0, world_size - 1)
return x_pix_world, y_pix_world
# Grab another random image
idx = np.random.randint(0, len(img_list)-1)
image = mpimg.imread(img_list[idx])
warped = perspect_transform(image, source, destination)
threshed = color_thresh(warped)
# Calculate pixel values in rover-centric coords and distance/angle to all pixels
xpix, ypix = rover_coords(threshed)
dist, angles = to_polar_coords(xpix, ypix)
mean_dir = np.mean(angles)
# Do some plotting
fig = plt.figure(figsize=(12,9))
plt.subplot(221)
plt.imshow(image)
plt.subplot(222)
plt.imshow(warped)
plt.subplot(223)
plt.imshow(threshed, cmap='gray')
plt.subplot(224)
plt.plot(xpix, ypix, '.')
plt.ylim(-160, 160)
plt.xlim(0, 160)
arrow_length = 100
x_arrow = arrow_length * np.cos(mean_dir)
y_arrow = arrow_length * np.sin(mean_dir)
plt.arrow(0, 0, x_arrow, y_arrow, color='red', zorder=2, head_width=10, width=2)
# Import pandas and read in csv file as a dataframe
import pandas as pd
# Change this path to your data directory
df = pd.read_csv('../test_dataset/robot_log.csv')
img_list_sorted = df["Path"].tolist() # Create list of image pathnames
# Read in ground truth map and create a 3-channel image with it
ground_truth = mpimg.imread('../calibration_images/map_bw.png')
ground_truth_3d = np.dstack((ground_truth*0, ground_truth, ground_truth*0)).astype(np.float)
# Creating a class to be the data container
# Will read in saved data from csv file and populate this object
# Worldmap is instantiated as 200 x 200 grids corresponding
# to a 200m x 200m space (same size as the ground truth map: 200 x 200 pixels)
# This encompasses the full range of output position values in x and y from the sim
class Databucket():
def __init__(self):
self.images = img_list_sorted
self.xpos = df["X_Position"].values
self.ypos = df["Y_Position"].values
self.yaw = df["Yaw"].values
self.count = 0
self.worldmap = np.zeros((200, 200, 3)).astype(np.float)
self.ground_truth = ground_truth_3d # Ground truth worldmap
# Instantiate a Databucket().. this will be a global variable/object
# that you can refer to in the process_image() function below
data = Databucket()
# Define a function to pass stored images to
# reading rover position and yaw angle from csv file
# This function will be used by moviepy to create an output video
def process_image(img):
# Example of how to use the Databucket() object defined in the previous cell
# print(data.xpos[0], data.ypos[0], data.yaw[0])
warp = perspect_transform(img, source, destination)
output_image = np.zeros((img.shape[0] + data.worldmap.shape[0], img.shape[1]*2, 3))
# Example
output_image[0:img.shape[0], 0:img.shape[1]] = img
output_image[0:img.shape[0], img.shape[1]:] = warp
cv2.putText(output_image,"Populate this image with your analyses to make a video!", (20, 20),
cv2.FONT_HERSHEY_COMPLEX, 0.4, (255, 255, 255), 1)
data.count += 1 # Keep track of the index in the Databucket()
return output_image
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from moviepy.editor import ImageSequenceClip
# Define pathname to save the output video
output = '../output/test_mapping.mp4'
clip = ImageSequenceClip(data.images, fps=60)
new_clip = clip.fl_image(process_image) #NOTE: this function expects color images!!
%time new_clip.write_videofile(output, audio=False)
# This last cell should function as an inline video player
# If it fails to render the video, you can simply have a look
# at the saved mp4 in your `/output` folder
from IPython.display import HTML
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(output))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Examples
Step2: Example 1
Step3: Example 2
Step4: Example 3
Step5: Equation
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
def dftmatrix(N):
x = np.arange(N).reshape(N,1)
u = x
Wn = np.exp(-1j*2*np.pi/N)
A = (1./np.sqrt(N)) * (Wn ** u.dot(x.T))
return A
testing = (__name__ == "__main__")
if testing:
! jupyter nbconvert --to python dftmatrix.ipynb
import numpy as np
import sys,os
import matplotlib.image as mpimg
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
if testing:
A = ia.dftmatrix(128)
ia.adshow(ia.normalize(A.real),'A.real')
ia.adshow(ia.normalize(A.imag),'A.imag')
if testing:
A = ia.dftmatrix(4)
print('A=\n', A.round(1))
print('A-A.T=\n', A - A.T)
print((np.abs(np.linalg.inv(A)-np.conjugate(A))).max() < 10E-15)
if testing:
u = x = np.arange(10).reshape(10,1)
print('u xT=\n', u.dot(x.T))
if testing:
print('testing dftmatrix')
print(repr(np.floor(0.5 + 10E4*ia.dftmatrix(4).real) / 10E4) == repr(np.array(
[[ 0.5, 0.5, 0.5, 0.5],
[ 0.5, 0. , -0.5, 0. ],
[ 0.5, -0.5, 0.5, -0.5],
[ 0.5, 0. , -0.5, 0. ]])))
print(repr(np.floor(0.5 + 10E4*ia.dftmatrix(4).imag) / 10E4) == repr(np.array(
[[ 0. , 0. , 0. , 0. ],
[ 0. , -0.5, 0. , 0.5],
[ 0. , 0. , 0. , 0. ],
[ 0. , 0.5, 0. , -0.5]])))
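if testing:
    # Hedged extra check (an addition): the normalized DFT matrix should be unitary,
    # so A times its conjugate transpose should be close to the identity.
    A8 = ia.dftmatrix(8)
    print('Unitary:', np.allclose(A8.dot(np.conjugate(A8).T), np.eye(8)))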
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Filter logs for public items
Step2: Make the data set manageably smaller by filtering out users with short/long review histories
Step3: Load the filtered logs and compute basic stats summarizing the data set
Step5: Apply more filters and format the log data into an InteractionHistory that can be understood by lentil
Step7: Perform analagous preprocessing steps for the MTurk data set
Step9: Pre-process the dutch_big dataset
Step11: Apply additional data filters
Step14: Explore basic stats about filtered, formatted interaction history
|
<ASSISTANT_TASK:>
Python Code:
# Imports needed by this notebook (restored here). The task description says the logs
# are formatted for lentil, so `from lentil import datatools` is assumed; `pgv` is the
# usual alias for pygraphviz, which is used for the flow graph further below.
from collections import defaultdict
import json
import os
import pickle
import random
import sys
import xml.etree.ElementTree

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pygraphviz as pgv

from lentil import datatools

public_itemids = defaultdict(set)
fs = [x for x in os.listdir(os.path.join('data', 'shared_decks')) if '.xml' in x]
for f in fs:
try:
e = xml.etree.ElementTree.parse(os.path.join('data', 'shared_decks', f)).getroot()
for x in e.findall('log'):
public_itemids[x.get('o_id')].add(f)
except:
continue
len(public_itemids)
num_logs_of_file = defaultdict(int)
num_logs_of_item = defaultdict(int)
logged_itemids = set()
num_public_logs = 0
with open(os.path.join('data', 'mnemosynelogs_itemids_full.csv'), 'rb') as f:
f.readline()
for line in f:
line = line.replace('\r\n', '')
if line != '':
if line in public_itemids:
num_public_logs += 1
num_logs_of_item[line] += 1
for f in public_itemids[line]:
num_logs_of_file[f] += 1
logged_itemids.add(line)
num_public_logs
len(logged_itemids)
sum(1 for x in public_itemids if x in logged_itemids)
sorted(num_logs_of_item.items(), key=lambda (k,v): v, reverse=True)[:500]
sorted(num_logs_of_file.items(), key=lambda (k,v): v, reverse=True)[:50]
def contents_of_items_in_file(f):
e = xml.etree.ElementTree.parse(os.path.join('data', 'shared_decks', f)).getroot()
D = {}
M = {}
for x in e.findall('log'):
if x.get('type') == '16':
b = x.find('b')
if b is None:
b = x.find('m_1')
f = x.find('f')
if b is not None or f is not None:
D[x.get('o_id')] = (b.text if b is not None else None, f.text if f is not None else None)
elif x.get('type') == '6':
M[x.get('o_id')] = x.get('fact')
return {k: D[v] for k, v in M.iteritems()}
contents_of_item_id = {}
for f in os.listdir(os.path.join('data', 'shared_decks')):
if '.xml' in f:
try:
contents_of_item_id.update(contents_of_items_in_file(f))
except:
pass
len(contents_of_item_id)
contents_of_item_id
with open(os.path.join('data', 'content_features.pkl'), 'wb') as f:
pickle.dump(contents_of_item_id, f, pickle.HIGHEST_PROTOCOL)
with open(os.path.join('data', 'mnemosynelogs_full.csv'), 'rb') as f:
with open(os.path.join('data', 'mnemosynelogs_full_filtered.csv'), 'wb') as g:
g.write(f.readline())
for line in f:
fields = line.split(',')
if fields[4] != '' and fields[3] in contents_of_item_id:
g.write(line)
unfiltered_logs = pd.read_table(os.path.join('data', 'mnemosynelogs_full_filtered.csv'), delimiter=',')
num_ixns_of_user = unfiltered_logs['user_id'].value_counts()
user_ids = unfiltered_logs['user_id'].unique()
mn = 10
mx = 50000
len(user_ids), sum(1 for x in user_ids if num_ixns_of_user[x] > mn and num_ixns_of_user[x] < mx), sum(num_ixns_of_user[x] for x in user_ids if num_ixns_of_user[x] > mn and num_ixns_of_user[x] < mx)
user_ids = {x for x in user_ids if num_ixns_of_user[x] > mn and num_ixns_of_user[x] < mx}
filtered_logs = unfiltered_logs[unfiltered_logs['user_id'].isin(user_ids)]
filtered_logs.to_csv(os.path.join('data', 'mnemosynelogs_full_filtered_pruned.csv'), index=False)
df = pd.read_csv(os.path.join('data', 'mnemosynelogs_full_filtered_pruned.csv'), delimiter=',')
print '\n'.join(df.columns)
len(df[~np.isnan(df['grade'])])
print "Number of interactions = %d" % len(df)
print "Number of unique students = %d" % len(df['user_id'].unique())
print "Number of unique modules = %d" % len(df['object_id'].unique())
av = np.array(df['actual_interval'].values)
sv = np.array(df['scheduled_interval'].values)
av, sv = zip(*[(x, y) for x, y in zip(av, sv) if x>0 and y>0 and not np.isnan(x) and not np.isnan(y)])
av = np.array(av)
sv = np.array(sv)
plt.xlabel('log10(Scheduled interval) (log10-milliseconds)')
plt.ylabel('Frequency (number of interactions)')
plt.hist(np.log10(sv+1), bins=20)
plt.show()
plt.xlabel('log10(Scheduled interval) (log10-milliseconds)')
plt.ylabel('log10(Actual interval) (log10-milliseconds)')
plt.scatter(np.log10(sv+1), np.log10(av+1), alpha=0.005)
#plt.savefig(os.path.join('figures', 'mnemosyne', 'scheduled-vs-actual-intervals.pdf'))
plt.show()
v = np.array(df['user_id'].value_counts().values)
plt.xlabel('log10(Number of interactions per student)')
plt.ylabel('Frequency (number of students)')
plt.hist(np.log10(v))
plt.show()
v = np.array(df['object_id'].value_counts().values)
plt.xlabel('log10(Number of interactions per problem)')
plt.ylabel('Frequency (number of problems)')
plt.hist(np.log10(v))
plt.show()
grades = np.array(df['grade'].values)
plt.xlabel('Grade')
plt.ylabel('Frequency (number of interactions)')
plt.hist(grades[~np.isnan(grades)])
plt.show()
def interaction_history_from_mnemosyne_data_set(data):
"""Parse Mnemosyne data set into an interaction history

:param pd.DataFrame data: A dataframe of raw log data
:rtype: datatools.InteractionHistory
:return: An interaction history object
"""
data = data[data['grade'].apply(lambda x: not np.isnan(x))]
data = data[['user_id', 'student_id', 'object_id', 'grade', 'timestamp', 'thinking_time', 'actual_interval', 'scheduled_interval']]
data.columns = ['user_id', 'student_id', 'module_id', 'outcome', 'timestamp', 'duration', 'actual_interval', 'scheduled_interval']
data['outcome'] = data['outcome'].apply(lambda x: x > 1)
student_timesteps = defaultdict(int)
timesteps = [None] * len(data)
for i, (_, ixn) in enumerate(data.iterrows()):
student_timesteps[ixn['student_id']] += 1
timesteps[i] = student_timesteps[ixn['student_id']]
data['timestep'] = timesteps
data['module_type'] = [datatools.AssessmentInteraction.MODULETYPE] * len(data)
return datatools.InteractionHistory(data, sort_by_timestep=True)
df.sort('timestamp', inplace=True)
# this is helpful for splitting histories by user-item pair (instead of by user) in lentil.evaluate
df['student_id'] = [str(x['user_id'])+'-'+str(x['object_id']) for _, x in df.iterrows()]
unfiltered_history = interaction_history_from_mnemosyne_data_set(df)
unfiltered_history.data['outcome'].value_counts()
data = []
with open(os.path.join('data', 'first_mturk_experiment.dataset'), 'rb') as f:
for line in f:
data.append(json.loads(line))
df = pd.DataFrame(data)
df['delta_t'] = df['delta_t'] * 4 * 60 * 60 # seconds
num_ixns_per_user_item = {k: defaultdict(list) for k in df['user'].unique()}
for _, ixn in df.iterrows():
num_ixns_per_user_item[ixn['user']][ixn['item']].append(ixn['delta_t'])
start_time_of_user_item = {}
for user, num_ixns_per_item in num_ixns_per_user_item.iteritems():
start_time = 0
for item, delta_ts in num_ixns_per_item.iteritems():
start_time_of_user_item[(user, item)] = start_time
start_time += sum(delta_ts)
df.sort('n_reps', inplace=True)
timestamps = []
for _, ixn in df.iterrows():
user_item = (ixn['user'], ixn['item'])
start_time_of_user_item[user_item] += ixn['delta_t']
timestamps.append(start_time_of_user_item[user_item])
df['timestamp'] = timestamps
df.sort('timestamp', inplace=True)
def interaction_history_from_mturk_data_set(data):
"""Parse MTurk data set into an interaction history

:param pd.DataFrame data: A dataframe of raw log data
:rtype: datatools.InteractionHistory
:return: An interaction history object
"""
data = data[['user', 'user', 'item', 'bin_score', 'timestamp']]
data.columns = ['user_id', 'student_id', 'module_id', 'outcome', 'timestamp']
data['outcome'] = data['outcome'].apply(lambda x: x == 1)
student_timesteps = defaultdict(int)
timesteps = [None] * len(data)
for i, (_, ixn) in enumerate(data.iterrows()):
student_timesteps[ixn['student_id']] += 1
timesteps[i] = student_timesteps[ixn['student_id']]
data['timestep'] = timesteps
data['module_type'] = [datatools.AssessmentInteraction.MODULETYPE] * len(data)
return datatools.InteractionHistory(data, sort_by_timestep=True)
unfiltered_history = interaction_history_from_mturk_data_set(df)
data = []
with open(os.path.join('data', 'dutch_big.dump'), 'rb') as f:
for line in f:
data.append((line.split('\t')[0], json.loads(line.split('\t')[1])))
original_of_module_id = {}
for _, h in data:
for x in h:
original_of_module_id[x['foreign']] = x['original']
with open(os.path.join('data', 'original_of_module_id.pkl'), 'wb') as f:
pickle.dump(original_of_module_id, f, pickle.HIGHEST_PROTOCOL)
ixns = []
timestamp_of_student = defaultdict(int)
for student_id, h in data:
for ixn in h:
timestamp_of_student[student_id] += 1
ixns.append(
{'student_id' : student_id, 'module_id' : ixn['foreign'],
'outcome' : ixn['score'] > 2, 'timestamp' : timestamp_of_student[student_id]})
df = pd.DataFrame(ixns)
df['user_id'] = df['student_id']
df['student_id'] = df['user_id'] + '-' + df['module_id']
len(df)
df.sort('timestamp', inplace=True)
def interaction_history_from_dutch_big_data_set(data):
"""Parse the dutch_big data set into an interaction history

:param pd.DataFrame data: A dataframe of raw log data
:rtype: datatools.InteractionHistory
:return: An interaction history object
"""
data = data[['user_id', 'student_id', 'module_id', 'outcome', 'timestamp']]
data.columns = ['user_id', 'student_id', 'module_id', 'outcome', 'timestamp']
student_timesteps = defaultdict(int)
timesteps = [None] * len(data)
for i, (_, ixn) in enumerate(data.iterrows()):
student_timesteps[ixn['student_id']] += 1
timesteps[i] = student_timesteps[ixn['student_id']]
data['timestep'] = timesteps
data['module_type'] = [datatools.AssessmentInteraction.MODULETYPE] * len(data)
return datatools.InteractionHistory(data, sort_by_timestep=True)
unfiltered_history = interaction_history_from_dutch_big_data_set(df)
def filter_history(history, min_num_ixns=5, max_num_ixns=sys.maxint):
"""Filter history for students with histories of bounded length,
and modules with enough interactions

:param datatools.InteractionHistory history: An interaction history
:param int min_num_ixns: Minimum number of timesteps in student history,
    and minimum number of interactions for module
:param int max_num_ixns: Maximum number of timesteps in student history
:rtype: datatools.InteractionHistory
:return: A filtered interaction history
"""
students = set(history.data['student_id'][(
history.data['timestep'] > min_num_ixns) & (
history.data['module_type']==datatools.AssessmentInteraction.MODULETYPE)])
students -= set(history.data['student_id'][history.data['timestep'] >= max_num_ixns])
modules = {module_id for module_id, group in history.data.groupby('module_id') if len(group) > min_num_ixns}
return datatools.InteractionHistory(
history.data[(history.data['student_id'].isin(students)) & (
history.data['module_id'].isin(modules))],
reindex_timesteps=True,
size_of_test_set=0.2)
# apply the filter a couple of times, since removing student histories
# may cause certain modules to drop below the min_num_ixns threshold,
# and removing modules may cause student histories to drop below
# the min_num_ixns threshold
REPEATED_FILTER = 3 # number of times to repeat filtering
history = reduce(
lambda acc, _: filter_history(acc, min_num_ixns=2, max_num_ixns=10000),
range(REPEATED_FILTER), unfiltered_history)
history.data.sort('timestamp', inplace=True)
deck_of_student_item = {}
tlast_of_student_item = {}
nreps_of_student_item = {}
deck = []
tlast = []
nreps = []
for _, ixn in history.data.iterrows():
student_item = (ixn['user_id'], ixn['module_id'])
d = deck_of_student_item.get(student_item, 1)
deck.append(d)
if ixn['outcome']:
deck_of_student_item[student_item] = d + 1
else:
deck_of_student_item[student_item] = max(1, d-1)
n = nreps_of_student_item.get(student_item, 1)
nreps.append(n)
nreps_of_student_item[student_item] = n + 1
tlast.append(tlast_of_student_item.get(student_item, np.nan))
tlast_of_student_item[student_item] = ixn['timestamp']
history.data['deck'] = deck
history.data['nreps'] = nreps
history.data['tlast'] = tlast
# path to pickled interaction history file
history_path = os.path.join('data', 'mnemosyne_history_v2.pkl')
# serialize history
with open(history_path, 'wb') as f:
pickle.dump(history, f, pickle.HIGHEST_PROTOCOL)
# load history from file
with open(history_path, 'rb') as f:
history = pickle.load(f)
df = history.data
print "Number of interactions = %d" % len(df)
print "Number of unique students: %d" % len(df['user_id'].unique())
print "Number of unique assessments: %d" % history.num_assessments()
value_counts = df['outcome'].value_counts()
num_passes = value_counts.get(True, 0)
num_fails = value_counts.get(False, 0)
print "Overall pass rate: %f" % (num_passes / (num_passes + num_fails))
df.sort('timestamp', inplace=True)
v = []
for _, g in df.groupby(['user_id', 'module_id']):
ts = g['timestamp'].values
v.extend([nt-t for t, nt in zip(ts[:-1], ts[1:])])
v = np.array(v)
plt.xlabel('Time between reviews (log10-seconds)')
plt.ylabel('Frequency (number of reviews)')
plt.hist(np.log10(v+1), bins=20)
#plt.savefig(os.path.join('figures', 'mnemosyne', 'time-between-reviews.pdf'))
plt.show()
grouped = df.groupby(['user_id', 'module_id'])
pairs = [x for x, g in grouped if len(g) > 20]
len(pairs)
g = grouped.get_group(random.choice(pairs))
ts = g['timestamp'].values
intervals = [y-x for x, y in zip(ts[:-1], ts[1:])]
plt.xlabel('Number of reviews')
plt.ylabel('Time until next review (seconds)')
plt.title('Review intervals for a single user-item pair')
outcomes = g['outcome'].values
outcomes = outcomes[:-1]
plt.bar(range(len(outcomes)), [max(intervals)] * len(outcomes), width=1, color=['green' if x else 'red' for x in outcomes], alpha=0.25, linewidth=0.)
plt.step(range(len(intervals)+1), intervals+[intervals[-1]], where='post')
plt.yscale('log')
plt.xlim([0, len(intervals)])
plt.ylim([0, max(intervals)])
#plt.savefig(os.path.join('figures', 'mnemosyne', 'review-history-example.pdf'))
plt.show()
counts = df['user_id'].value_counts().values
plt.xlabel('Number of interactions per student')
plt.ylabel('Frequency (number of students)')
plt.hist(counts)
plt.yscale('log')
#plt.savefig(os.path.join('figures', 'mnemosyne', 'num_ixns_per_student.pdf'))
plt.show()
counts = df['module_id'][df['module_type'] == datatools.AssessmentInteraction.MODULETYPE].value_counts().values
plt.xlabel('Number of interactions per item')
plt.ylabel('Frequency (number of items)')
plt.hist(counts)
plt.yscale('log')
#plt.savefig(os.path.join('figures', 'mnemosyne', 'num_ixns_per_item.pdf'))
plt.show()
counts = df.groupby(['user_id', 'module_id']).size().values
plt.xlabel('Number of interactions per student per item')
plt.ylabel('Frequency (number of student-item pairs)')
plt.hist(counts)
plt.yscale('log')
#plt.savefig(os.path.join('figures', 'mnemosyne', 'num_ixns_per_student_per_item.pdf'))
plt.show()
num_students_per_module = [len(group['user_id'].unique()) for _, group in df.groupby('module_id')]
plt.xlabel('Number of students per item')
plt.ylabel('Frequency (number of items)')
plt.hist(num_students_per_module)
plt.yscale('log')
#plt.savefig(os.path.join('figures', 'mnemosyne', 'num-students-per-item.pdf'))
plt.show()
def get_pass_rates(grouped):
"""Get pass rate for each group

:param pd.GroupBy grouped: A grouped dataframe
:rtype: dict[str, float]
:return: A dictionary mapping group name to pass rate
"""
pass_rates = {}
for name, group in grouped:
vc = group['outcome'].value_counts()
if True not in vc:
pass_rates[name] = 0
else:
pass_rates[name] = vc[True] / len(group)
return pass_rates
grouped = df[df['module_type']==datatools.AssessmentInteraction.MODULETYPE].groupby('user_id')
plt.xlabel('Student pass rate')
plt.ylabel('Frequency (number of students)')
plt.hist(get_pass_rates(grouped).values())
plt.yscale('log')
#plt.savefig(os.path.join('figures', 'mnemosyne', 'student-pass-rates.pdf'))
plt.show()
grouped = df[df['module_type']==datatools.AssessmentInteraction.MODULETYPE].groupby('module_id')
plt.xlabel('Assessment pass rate')
plt.ylabel('Frequency (number of assessments)')
plt.hist(get_pass_rates(grouped).values())
plt.yscale('log')
#plt.savefig(os.path.join('figures', 'mnemosyne', 'assessment-pass-rates.pdf'))
plt.show()
def make_flow_graph(interaction_logs):
"""Create a graphviz object for the graph of
module transitions across all student paths

:param pd.DataFrame interaction_logs: An interaction history
:rtype: pgv.AGraph
:return: Graph of module transitions in student paths
"""
G = pgv.AGraph(directed=True)
for module_id in interaction_logs['module_id'].unique():
G.add_node(module_id)
E = defaultdict(set)
grouped = interaction_logs.groupby('user_id')
for student_id, group in grouped:
module_ids_in_student_path = group['module_id']
for source_node, target_node in zip(module_ids_in_student_path[:-1], module_ids_in_student_path[1:]):
if source_node != target_node: # stationary
E[(source_node, target_node)] |= {student_id}
for (source_node, target_node), students_that_made_transition in E.iteritems():
G.add_edge(
source_node,
target_node,
weight=len(students_that_made_transition))
return G
G = make_flow_graph(df)
G.write(os.path.join('figures', 'mnemosyne', 'mnemosyne_flow_graph.dot'))
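# Hedged extra summary (an addition, not in the original analysis): empirical pass rate
# as a function of the Leitner-style 'deck' position computed earlier, using pandas only.
print df.groupby('deck')['outcome'].mean()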
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bayesian optimization or sequential model-based optimization uses a surrogate model to approximate the expensive-to-evaluate objective function and decide where to sample next
Step2: This shows the value of the two-dimensional branin function and the three minima.
Step3: Note that this can take a few minutes.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
np.random.seed(123)
%matplotlib inline
import matplotlib.pyplot as plt
plt.set_cmap("viridis")
from skopt.benchmarks import branin as _branin
def branin(x, noise_level=0.):
return _branin(x) + noise_level * np.random.randn()
from matplotlib.colors import LogNorm
def plot_branin():
fig, ax = plt.subplots()
x1_values = np.linspace(-5, 10, 100)
x2_values = np.linspace(0, 15, 100)
x_ax, y_ax = np.meshgrid(x1_values, x2_values)
vals = np.c_[x_ax.ravel(), y_ax.ravel()]
fx = np.reshape([branin(val) for val in vals], (100, 100))
cm = ax.pcolormesh(x_ax, y_ax, fx,
norm=LogNorm(vmin=fx.min(),
vmax=fx.max()))
minima = np.array([[-np.pi, 12.275], [+np.pi, 2.275], [9.42478, 2.475]])
ax.plot(minima[:, 0], minima[:, 1], "r.", markersize=14, lw=0, label="Minima")
cb = fig.colorbar(cm)
cb.set_label("f(x)")
ax.legend(loc="best", numpoints=1)
ax.set_xlabel("X1")
ax.set_xlim([-5, 10])
ax.set_ylabel("X2")
ax.set_ylim([0, 15])
plot_branin()
from functools import partial
from skopt import gp_minimize, forest_minimize, dummy_minimize
func = partial(branin, noise_level=2.0)
bounds = [(-5.0, 10.0), (0.0, 15.0)]
n_calls = 80
def run(minimizer, n_iter=20):
return [minimizer(func, bounds, n_calls=n_calls, random_state=n)
for n in range(n_iter)]
# Random search
dummy_res = run(dummy_minimize)
# Gaussian processes
gp_res = run(gp_minimize)
# Random forest
rf_res = run(partial(forest_minimize, base_estimator="RF"))
# Extra trees
et_res = run(partial(forest_minimize, base_estimator="ET"))
from skopt.plots import plot_convergence
plot = plot_convergence(("dummy_minimize", dummy_res),
("gp_minimize", gp_res),
("forest_minimize('rf')", rf_res),
("forest_minimize('et)", et_res),
true_minimum=0.397887, yscale="log")
plot.legend(loc="best", prop={'size': 6}, numpoints=1);
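# Hedged follow-up (an assumption about what to report): the best objective value and
# location found by gp_minimize across the repeated runs; skopt results expose .x and .fun.
best = min(gp_res, key=lambda r: r.fun)
print(best.x, best.fun)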
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set model run-time parameters
Step2: Investigate a couple of model variables
Step3: Run for a number of timesteps
|
<ASSISTANT_TASK:>
Python Code:
# First import the model. Here we use the HBV version
%pylab inline
from wflow.wflow_hbv import *
import IPython
from IPython.display import display, clear_output
#clear_output = IPython.core.display.clear_output
# define start and stop time of the run
startTime = 1
stopTime = 200
currentTime = 1
# set runid, cl;onemap and casename. Also define the ini file
runId = "memtest"
#configfile="wflow_hbv_mem.ini"
configfile="wflow_hbv.ini"
wflow_cloneMap = 'wflow_subcatch.map'
# the casename points to the complete model setup with both static and dynamic input
caseName="../examples/wflow_rhine_hbv"
#make a usermodel object
myModel = WflowModel(wflow_cloneMap, caseName,runId,configfile)
# initialise the framework
dynModelFw = wf_DynamicFramework(myModel, stopTime,startTime)
dynModelFw.createRunId(NoOverWrite=False,level=logging.ERROR)
dynModelFw.setQuiet(1)
# Run the initial part of the model (reads parameters and sets initial values)
dynModelFw._runInitial() # Runs initial part
dynModelFw._runResume() # gets the state variables from disk
# Get list of variables supplied by the model
print dynModelFw.wf_supplyVariableNamesAndRoles()
#
sm = dynModelFw.wf_supplyMapAsNumpy("SoilMoisture")
sm[sm == -999] = np.nan
uz = dynModelFw.wf_supplyMapAsNumpy("UpperZoneStorage")
uz[uz == -999] = np.nan
lz = dynModelFw.wf_supplyMapAsNumpy("LowerZoneStorage")
lz[lz == -999] = np.nan
imshow(sm)
title("Soilmoisture")
colorbar()
imshow(uz)
title("UpperZoneStorage")
colorbar()
imshow(lz)
title("LowerZoneStorage")
colorbar()
dynModelFw.wf_setValueLdd("TopoLdd",5.0,6.46823,51.6821)
npmap0 = dynModelFw.wf_supplyMapAsNumpy("TopoLdd")
ltt = dynModelFw.wf_supplyMapAsList("SurfaceRunoff")
f, ax = plt.subplots(1,3,figsize=(16, 6))
plotar = []
for ts in range(20,80):
# Get value at pit
inflowQ = dynModelFw.wf_supplyScalar("SurfaceRunoff",6.46823,51.6821)
plotar.append(inflowQ)
dynModelFw.wf_setValues("PET",scalar(2.0))
    dynModelFw._runDynamic(ts,ts) # run the model for this single timestep (ts to ts)
npmap1 = dynModelFw.wf_supplyMapAsNumpy("P")
run = dynModelFw.wf_supplyMapAsNumpy("SurfaceRunoff")
uz = dynModelFw.wf_supplyMapAsNumpy("UpperZoneStorage")
sm = dynModelFw.wf_supplyMapAsNumpy("SoilMoisture")
sm[sm == -999] = np.nan
#dynModelFw.wf_setValues("UpperZoneStorage",uz * 10.1)
uz[uz == -999] = np.nan
run[run == -999] = np.nan
ax[0].imshow(uz)
ax[1].imshow(sm)
#ax[2].imshow(log(run))
ax[2].plot(plotar,'b')
ax[2].set_title(str(ts))
clear_output()
display(f)
plt.close()
dynModelFw._runSuspend() # saves the state variables
dynModelFw._wf_shutdown()
run = dynModelFw.wf_supplyMapAsNumpy("SurfaceRunoff")
run[run == -999] = np.nan
imshow(log(run))
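# (Optional addition, not in the original notebook.) Persist the final runoff map
# so it can be inspected later without re-running the model; this assumes the
# 'run' array from the cell above is still in memory.
import os
np.save(os.path.join(caseName, runId + "_SurfaceRunoff_final.npy"), run)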
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Steppermotor ST4118S0206-A settings
Step2: Max Frequency calculation
Step3: Max Speed calculations
Step4: Max Acceleration calculations
Step5: Speed intended calculations
Step6: Max Step Frequency
|
<ASSISTANT_TASK:>
Python Code:
# Function to calculate the number of bits needed to represent a given unsigned number
def unsigned_num_bits(num):
    _nbits = 1
    _n = num
    while(_n > 1):
        _nbits = _nbits + 1
        _n = _n // 2  # integer halving, e.g. 255 -> 8 bits, 256 -> 9 bits
    return _nbits
rev_distance = 0.5 # mm
step_angle = 1.8 # °
# Calculate the distance travelled in one step
step_distance = rev_distance/360*step_angle
print("Step Distance = {} mm".format(step_distance))
print("Step Distance = {} um".format(step_distance*1000))
# Calculate the max and min register positions
RegBitNb = 32
regval_max = 2**(RegBitNb-1)-1
regval_min = -2**(RegBitNb-1)
step_distance_max = regval_max*step_distance
step_distance_min = regval_min*step_distance
print("Register Position Values = {} ... {}".format(regval_max, regval_min))
print("Position Register distances = {} m ... {} m".format(step_distance_max/1000, step_distance_min/1000))
speed_max = 60# rev/min
step_angle = 1.8 # °
steps_per_rev = 360/step_angle
speed_max_sec = speed_max/60 # rev/sec
f_max = speed_max_sec * steps_per_rev
print("Max Frequency of Steppermotor is {} Hz".format(f_max))
speed_resolution = 2**8 # different speed values
clk_freq = 100e6 # Hz
speed_max = 120*1/60 # rev/min * min/s = rev/s
steps_per_rev = 200 # steps per revolution
g_max_speed = ((speed_resolution-1)*clk_freq)/(speed_max*steps_per_rev)
print("g_MAX_SPEED = {} needs {} Bits".format(int(g_max_speed), unsigned_num_bits(int(g_max_speed))))
speed_resolution = 2**8 # different speed values
clk_freq = 100e6 # Hz
speed_max = 60*1/60 # rev/min * min/s = rev/s
max_acceleration_time = 2.0 # seconds from 0 to max speed
max_acceleration_rev = speed_max/max_acceleration_time # rev/s^2
max_decceleration_time = 1.0 # seconds from max to 0 speed
max_decceleration_rev = speed_max/max_decceleration_time # rev/s^2
g_max_acceleration = (speed_max*clk_freq)/((speed_resolution-1)*max_acceleration_rev)
g_max_decceleration = (speed_max*clk_freq)/((speed_resolution-1)*max_decceleration_rev)
print("g_MAX_ACCELERATION = {} needs {} Bits".format(int(g_max_acceleration),unsigned_num_bits(int(g_max_acceleration))))
print("g_MAX_DECCELERATION = {} needs {} Bits".format(int(g_max_decceleration),unsigned_num_bits(int(g_max_decceleration))))
import math
speed_resolution = 2**8 # different speed values
speed_max = 120*1/60 # rev/min * min/s = rev/s
max_acceleration_time = 2.0 # seconds from 0 to max speed
max_acceleration_rev = speed_max/max_acceleration_time # rev/s^2
def calc_speed_intended(max_acceleration_rev, position_difference):
# return round(math.sqrt(2*64*max_acceleration_rev*position_difference))
return round(41*math.log(max_acceleration_rev*position_difference+1))
for position_difference in [0,1,2,4,8,16,32,64,128,256,512,1024,2048,4096,8192,16384,32768,65536]:
speed_intended = calc_speed_intended(max_acceleration_rev, position_difference)
if speed_intended > speed_resolution-1:
speed_intended = speed_resolution-1
print("speed_intended: {:3} @ position_difference: {:5}".format(int(speed_intended),position_difference))
# Draw Plot
import numpy as np
import pylab as pl
pl.clf()
nbrOfPoints = 600
position_difference = np.linspace(0,nbrOfPoints,nbrOfPoints)
speed_intended = np.empty(shape=[len(position_difference)], dtype=np.float64)
for i in range(len(position_difference)):
speed_intended[i] = calc_speed_intended(max_acceleration_rev, position_difference[i])
if speed_intended[i] > speed_resolution-1:
speed_intended[i] = speed_resolution-1
# Plot graph
pl.plot(position_difference,speed_intended, label="Acceleration")
speed_intended = np.empty(shape=[len(position_difference)], dtype=np.float64)
for i in range(len(position_difference)):
speed_intended[i] = 255-calc_speed_intended(max_acceleration_rev, position_difference[i])
if speed_intended[i] <= 0:
speed_intended[i] = 0
# Plot graph
pl.plot(position_difference,speed_intended, label="Decceleration")
# Place legend, Axis and Title
pl.legend(loc='best')
pl.xlabel("PositionDifference [Steps]")
pl.ylabel("Speed [0-255]")
pl.title("Acceleration & Deccleration")
f_clk = 100e6 # Hz
f_step_max = 100e3 # Hz
g_step_freq = f_clk/f_step_max
print("Number of steps for max step frequency: {} needs {} Bits".format(int(g_step_freq), unsigned_num_bits(g_step_freq)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initialize Oorb
Step2: Read in some orbits.
Step3: Generate ephemerides
Step4: Transform orbital elements
Step5: There can be larger differences in the tPeri, as this is degenerate (previous or next orbit?).
|
<ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import pyoorb as oo
# Initialize oorb
oo.pyoorb.oorb_init()
timeScales = {'UTC': 1, 'UT1': 2, 'TT': 3, 'TAI': 4}
elemType = {'CART': 1, 'COM': 2, 'KEP': 3, 'DEL': 4, 'EQX': 5}
# Set up some orbits
# orb is id, 6 elements, epoch_mjd, H, G, element type index
# keplerian appears to be element type index 3
# orbits = numpy.array([0.,1.,2.,3.,4.,5.,6.,5373.,1.,1.,3.])
o = np.loadtxt('test.des', dtype=([('objid', np.str_, 20), ('format', np.str_, 3),
('q', float), ('e', float), ('inc', float), ('Omega', float),
('argperi', float), ('tperi', float), ('H', float), ('epoch', float),
('index', int), ('npar', int), ('moid', float), ('compcode', np.str_, 10)]),
skiprows=1)
orbits = np.zeros([len(o), 12], dtype=np.double, order='F')
orbits[:, 0] = np.arange(0, len(o), 1, dtype=float)
orbits[:, 1] = o['q']
orbits[:, 2] = o['e']
orbits[:, 3] = o['inc']
orbits[:, 4] = o['Omega']
orbits[:, 5] = o['argperi']
orbits[:, 6] = o['tperi']
orbits[:, 7] = np.zeros(len(o), float) + elemType['COM']
orbits[:, 8] = o['epoch']
orbits[:, 9] = np.zeros(len(o), float) + timeScales['TT']
orbits[:, 10] = o['H']
orbits[:, 11] = np.zeros(len(o)) + 0.15
for i in range(3, 6):
orbits[:, i] = np.radians(orbits[:, i])
orbits
offset = 30.
times = [orbits[0][8], orbits[0][8] + offset]
timescale = [timeScales['UTC']] * len(times)
ephem_dates = np.array(list(zip(times, timescale)), dtype=np.double, order='F')
ephs, err = oo.pyoorb.oorb_ephemeris_full(in_orbits=orbits,
in_obscode='I11',
in_date_ephems=ephem_dates,
in_dynmodel='N')
if err != 0:
print(err)
i = 0
start = ephs[i][0]
end = ephs[i][1]
print("@ %f" % (ephem_dates[0][0]), start)
print("@ %f" % (ephem_dates[1][0]), end)
type2 = 'CART'
print("Transforming from COM to %s" % (type2))
newElems, err = oo.pyoorb.oorb_element_transformation(in_orbits=orbits, in_element_type=elemType[type2])
if err != 0:
print("error", err)
print("Input and output orbital elements (COM / CART)")
print("in ", orbits[i])
print("out", newElems[i])
print("Transforming back from %s to COM" % (type2))
newElems2, err = oo.pyoorb.oorb_element_transformation(in_orbits=newElems, in_element_type=elemType['COM'])
if err != 0:
print("error", err)
print("Input and output orbital elements (CART / COM)")
print("in ", newElems[i])
print("out", newElems2[i])
print('Differences in the initial/final COM orb elements')
diffs = newElems2 - orbits
print(diffs)
print(newElems2[1,3])
print(orbits[1,3])
print(diffs[1,3])
print("Propagating orbits by %d days" % (offset))
epoch_orig = orbits[i][8]
epoch_new = epoch_orig + offset
newEpoch = np.array([epoch_new, timeScales['TT']], dtype='double', order='F')
# Note that currently you have to use CART format elements.
elems = newElems
newOorbElems, err = oo.pyoorb.oorb_propagation(in_orbits=elems, in_epoch=newEpoch, in_dynmodel='N')
if err != 0:
print("error", err)
print("Input and output orbital elements (0 and %d days)" % (offset))
print("in ", orbits[i])
print("out", newOorbElems[i])
print("Transforming new orbital elements to COM")
newElems3, err = oo.pyoorb.oorb_element_transformation(in_orbits=newOorbElems, in_element_type=elemType['COM'])
if err != 0:
print("error", err)
print("Input and output orbital elements (CART/COM)")
print("in ", orbits[i])
print("out", newElems3[i])
print('epoch', epoch_new, newElems3[i][8])
# Generate ephemerides with these new elements, to compare.
ephs2, err = oo.pyoorb.oorb_ephemeris_full(in_orbits=newElems3, in_obscode='I11',
in_date_ephems=ephem_dates, in_dynmodel='N')
for i in range(len(newElems3)):
print('Object %d' % i)
print(' Orbit: ', orbits[i] )
diffs = ephs[i] - ephs2[i]
#print("Difference in ephemerides from ephemeris generation vs propagation + ephemeris generation")
#print(diffs)
print("Just difference in mas in RA and then Dec")
deltaRA = diffs[:,1] * 3600. * 1000.
deltaDec = diffs[:,2] * 3600. * 1000.
print(' ', deltaRA, deltaDec)
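# (Optional summary, not in the original example.) Worst-case differences over all
# objects and epochs between direct ephemeris generation and propagation +
# ephemeris generation, converted to milliarcseconds as above.
max_dra = max(np.max(np.abs((ephs[i] - ephs2[i])[:, 1])) for i in range(len(newElems3))) * 3600. * 1000.
max_ddec = max(np.max(np.abs((ephs[i] - ephs2[i])[:, 2])) for i in range(len(newElems3))) * 3600. * 1000.
print('max |dRA| = %.3f mas, max |dDec| = %.3f mas' % (max_dra, max_ddec))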
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Using Bio.Entrez to list available databases
Step2: The variable record contains a list of the available databases at NCBI, which you can see by executing the cell below
Step3: You may recognise some of the database names, such as pubmed, nuccore, assembly, sra, and taxonomy.
Step4: The returned information can be viewed by running the cell below.
Step5: For now, we are interested in the list of database identifiers, in record['IdList']. We will use these to get information from the assembly database.
Step6: Linking across databases
Step8: The links variable may contain links to more than one version of the genome (NCBI keep third-party managed genome data in GenBank/INSDC records, and NCBI-'owned' data in RefSeq records).
Step9: You will use the extract_insdc() function to get the accession IDs for the sequences in this Ralstonia solanacearum genome, in the cell below.
Step10: Fetching sequence records from NCBI
Step11: By running the cell below, you can see that each sequence in the Ralstonia solanacearum assembly has been downloaded into a SeqRecord, and that it contains useful metadata, describing the sequence assembly and properties of the annotation.
Step12: Writing sequence data with Biopython
Step13: If you inspect the newly-created ralstonia.gbk file, you should see that it contains complete GenBank records, describing this genome.
|
<ASSISTANT_TASK:>
Python Code:
# This line imports the Bio.Entrez module, and makes it available
# as 'Entrez'.
from Bio import Entrez
# The line below imports the Bio.SeqIO module, which allows reading
# and writing of common bioinformatics sequence formats.
from Bio import SeqIO
# This line sets the variable 'Entrez.email' to the specified
# email address. You should substitute your own address for the
# example address provided below. Please do not provide a
# fake name.
Entrez.email = "[email protected]"
# This line sets the name of the tool that is making the queries
Entrez.tool = "01-genome_data.ipynb"
# The line below uses the Entrez.einfo() function to
# ask NCBI what databases are available. The result is
# 'stored' in a variable called 'handle'
handle = Entrez.einfo()
# In the line below, the response from NCBI is read
# into a record, that organises NCBI's response into
# something you can work with.
record = Entrez.read(handle)
print(record["DbList"])
# The line below carries out a search of the `assembly` database at NCBI,
# using the phrase `Ralstonia solanacearum` as the search query,
# and asks NCBI to return up to the first 100 results
handle = Entrez.esearch(db="assembly", term="Ralstonia solanacearum", retmax=100)
# This line converts the returned information from NCBI into a form we
# can use, as before.
record = Entrez.read(handle)
# This line prints the downloaded information from NCBI, so
# we can read it.
print(record)
# The line below takes the first value in the list of
# database accessions record["IdList"], and places it in
# the variable 'accession'
accession = record["IdList"][0]
# Show the contents of the variable 'accession'
print(accession)
# The line below requests the identifiers (UIDs) for all
# records in the `nucleotide` database that correspond to the
# assembly UID that is stored in the variable 'accession'
handle = Entrez.elink(dbfrom="assembly", db="nucleotide",
from_uid=accession)
# We place the downloaded information in the variable 'links'
links = Entrez.read(handle)
# The code below provides a function that extracts nucleotide
# database accessions for INSDC data from the result of an
# Entrez.elink() query.
def extract_insdc(links):
Returns the link UIDs for RefSeq entries, from the
passed Elink search results
# Work only with INSDC accession UIDs
linkset = [ls for ls in links[0]['LinkSetDb'] if
ls['LinkName'] == 'assembly_nuccore_insdc']
if 0 == len(linkset): # There are no INSDC UIDs
raise ValueError("Elink() output has no assembly_nuccore_insdc data")
# Make a list of the INSDC UIDs
uids = [i['Id'] for i in linkset[0]['Link']]
return uids
# The line below uses the extract_insdc() function to get INSDC/GenBank
# accession UIDs for the components of the genome/assembly referred to
# in the 'links' variable. These will be stored in the variable
# 'nuc_uids'
nuc_uids = extract_insdc(links)
# Show the contents of 'nuc_uids'
print(nuc_uids)
# The lines below retrieve (fetch) the GenBank records for
# each database entry specified in `nuc_uids`, in plain text
# format. These are parsed with Biopython's SeqIO module into
# SeqRecords, which structure the data into a usable format.
# The SeqRecords are placed in the variable 'records'.
records = []
for nuc_uid in nuc_uids:
handle = Entrez.efetch(db="nucleotide", rettype="gbwithparts", retmode="text",
id=nuc_uid)
records.append(SeqIO.read(handle, 'genbank'))
# Show the contents of each downloaded `SeqRecord`.
for record in records:
print(record, "\n")
# The line below writes the sequence data in 'seqdata' to
# the local file "data/ralstonia.gbk", in GenBank format.
# The function returns the number of sequences that were written to file
SeqIO.write(records, "data/ralstonia.gbk", "genbank")
# The line below writes the sequence data in 'seqdata' to
# the local file "data/ralstonia.fasta", in FASTA format.
SeqIO.write(records, "data/ralstonia.fasta", "fasta")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: SVR with a polynomial kernel
Step2: SVR with an RBF kernel
Step3: SVR with a sigmoid kernel
Step4: SVR defaults
Step5: SVR generator
|
<ASSISTANT_TASK:>
Python Code:
def svr_linear_config():
return {
'kernel': ('linear',),
'tol': (0.001,), # TODO add relevant range
'C': (1.0,), # ditto
'epsilon': (0.1,), # ditto
'shrinking': (True, False),
'max_iter': (-1,), # TODO add relevant range
}
def svr_poly_config():
return {
'kernel': ('poly',),
'degree': (3,), # TODO add relevant range
'gamma': ('auto',), # ditto
'coef0': (0.0,), # ditto
'tol': (0.001,), # ditto
'C': (1.0,),
'epsilon': (0.1,),
'shrinking': (True, False),
'max_iter': (-1,), # TODO add relevant range
}
def svr_rbf_config():
return {
'kernel': ('rbf',),
'gamma': ('auto',), # TODO add relevant range
'tol': (0.001,), # ditto
'C': (1.0,),
'epsilon': (0.1,),
'shrinking': (True, False),
'max_iter': (-1,), # TODO add relevant range
}
def svr_sigmoid_config():
return {
'kernel': ('sigmoid',),
'gamma': ('auto',), # TODO add relevant range
'coef0': (0.0,), # ditto
'tol': (0.001,), # ditto
'C': (1.0,),
'epsilon': (0.1,),
'shrinking': (True, False),
'max_iter': (-1,), # TODO add relevant range
}
import modules.utils as utils
svr_defaults = [('kernel', 'S10', 'rbf'),
('degree', 'i4', 3),
('gamma', 'S5', 'auto'), # TODO auto or float
('coef0', 'f4', 0.0),
('tol', 'f4', 0.001),
('C', 'f4', 1.0),
('epsilon', 'f4', 0.1),
('shrinking', '?', True),
('cache_size', 'f4', 200),
('verbose', '?', False),
('max_iter', 'i4', -1)]
svr_configs = (svr_linear_config(),
svr_poly_config(),
svr_rbf_config(),
svr_sigmoid_config())
import modules.utils as utils
import modules.values as values
utils.generate_parameters(values.parameters_path + 'svr', svr_configs, svr_defaults)
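# (Optional sketch, not part of the original notebook; assumes scikit-learn is
# available.) One way to expand a single config dict into concrete SVR instances.
from sklearn.model_selection import ParameterGrid
from sklearn.svm import SVR
linear_models = [SVR(**params) for params in ParameterGrid(svr_linear_config())]
print("linear kernel grid size:", len(linear_models))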
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Simple Math in the Jupyter Notebook
Step2: uncomment this to download the data
Step3: Loading Data with Pandas
Step4: Now we can use the read_csv command to read the comma-separated-value data
Step5: The shape attribute shows us the number of elements
Step6: The columns attribute gives us the column names
Step7: 4. Manipulating data with pandas
Step8: Mathematical operations on columns happen element-wise
Step9: Columns can be created (or overwritten) with the assignment operator.
Step10: The simplest version of a groupby looks like this, and you can use almost any aggregation function you wish (mean, median, sum, minimum, maximum, standard deviation, count, etc.)
Step11: Now we can simply call the plot() method of any series or dataframe to get a reasonable view of the data
Step12: Adjusting the Plot Style
|
<ASSISTANT_TASK:>
Python Code:
a = 1
a_list = [1, 'a', [1,2]]
a_list.append(2)
a_list
dir(a_list)
a_list.count(1)
a = 2
!ls
#!curl -o pronto.csv https://data.seattle.gov/api/views/tw7j-dfaw/rows.csv?accessType=DOWNLOAD
import pandas as pd
df = pd.read_csv('pronto.csv')
df.head()
df.columns
df.shape
df.dtypes
df_small = df[ 'stoptime']
type(df_small)
trip_duration_hours = df['tripduration']/3600
trip_duration_hours[:3]
df['trip_duration_hours'] = df['tripduration']/3600
del df['trip_duration_hours']
df.head()
df.loc[[0,1],:]
df_long_trips = df[df['tripduration'] >10000]
sel = df['tripduration'] >10000
df_long_trips = df[sel]
len(df)
# Make a copy of a slice
df_subset = df[['starttime', 'stoptime']].copy()
df_subset['trip_hours'] = df['tripduration']/3600
df.head()
df_count = df.groupby(['from_station_id']).count()
df_count.head()
df_count1 = df_count[['trip_id']]
df_count2 = df_count1.rename(columns={'trip_id': 'count'})
df_count2['new'] = 1
df_count2.head()
df_mean = df.groupby(['from_station_id']).mean()
df_mean.head()
dfgroup = df.groupby(['from_station_id'])
dfgroup.groups
%matplotlib inline
import matplotlib.pyplot as plt
df['tripduration'].hist()
# A script for creating a dataframe with counts of the occurrence of a columns' values
df_count = df.groupby('from_station_id').count()
df_count1 = df_count[['trip_id']]
df_count2 = df_count1.rename(columns={'trip_id': 'count'})
df_count2.head()
def make_table_count(df_arg, groupby_column):
df_count = df_arg.groupby(groupby_column).count()
    column_name = df_arg.columns[0]
df_count1 = df_count[[column_name]]
df_count2 = df_count1.rename(columns={column_name: 'count'})
return df_count2
dff = make_table_count(df, 'from_station_id')
dff.head()
import table_modifiers as tm
dir(tm)
tm.table_counter(df, 'from_station_id')
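# (Optional, not in the original notebook.) The same counting can be done directly
# with pandas' value_counts, without the helper module.
counts = df['from_station_id'].value_counts().rename('count').to_frame()
counts.head()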
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Lorenz system
Step4: Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
Step6: Write a function plot_lorentz that
Step7: Use interact to explore your plot_lorenz function with
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
def lorentz_derivs(yvec, t, sigma, rho, beta):
    Compute the derivatives for the Lorenz system at yvec(t).
x = yvec[0]
y = yvec[1]
z = yvec[2]
dx = sigma*(y-x)
dy = x*(rho-z)-y
dz = x*y - beta*z
return np.array([dx,dy,dz])
assert np.allclose(lorentz_derivs((1,1,1),0, 1.0, 1.0, 2.0),[0.0,-1.0,-1.0])
def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
Solve the Lorenz system for a single initial condition.
Parameters
----------
ic : array, list, tuple
Initial conditions [x,y,z].
max_time: float
The max time to use. Integrate with 250 points per time unit.
sigma, rho, beta: float
Parameters of the differential equation.
Returns
-------
soln : np.ndarray
The array of the solution. Each row will be the solution vector at that time.
t : np.ndarray
The array of time points used.
t = np.linspace(0,max_time,int(max_time*250))
soln = odeint(lorentz_derivs,ic,t,args=(sigma,rho,beta))
return soln,t
assert True # leave this to grade solve_lorenz
N = 5
colors = plt.cm.hot(np.linspace(0,1,N))
for i in range(N):
# To use these colors with plt.plot, pass them as the color argument
print(colors[i])
def plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
Plot [x(t),z(t)] for the Lorenz system.
Parameters
----------
N : int
Number of initial conditions and trajectories to plot.
max_time: float
Maximum time to use.
sigma, rho, beta: float
Parameters of the differential equation.
np.random.seed(1)
ic = np.random.rand(N,3)*30-15
plt.figure(figsize=(10,7))
for n in ic:
plt.plot(solve_lorentz(n,max_time,sigma,rho,beta)[0][:,0],solve_lorentz(n,max_time,sigma,rho,beta)[0][:,2])
plt.xlabel('x(t)')
plt.ylabel('z(t)')
plt.title('x(t) vs. z(t)')
plt.tick_params(top=False,right=False)
plt.ylim(-20,65)
plt.xlim(-30,30)
plot_lorentz()
assert True # leave this to grade the plot_lorenz function
interact(plot_lorentz,N=(1,50),max_time=(1,10),sigma=(0.0,50.0),rho=(0.0,50.0),beta=fixed(8.0/3.0));
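# (Optional extra, not required by the exercise.) A quick time-series view of a
# single trajectory, complementing the x-z phase-space plots above.
soln, t = solve_lorentz([1., 1., 1.])
plt.figure()
plt.plot(t, soln[:, 0])
plt.xlabel('t')
plt.ylabel('x(t)');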
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Correlation analysis
Step2: The P-value is very small and the KS statistic is fairly large, so FR I and FR II are considered somewhat separable; that is, the null hypothesis that the FR I and FR II radio and optical luminosities follow the same distribution is rejected.
|
<ASSISTANT_TASK:>
Python Code:
lumo_fr1_typical = lumo[idx2_same] * 10**-22
lumo_fr2_typical = lumo[idx3_same] * 10**-22
mag_fr1_typical = mag_abs[idx2_same]
mag_fr2_typical = mag_abs[idx3_same]
lumo_fr1_like = lumo[idx_fr1] * 10**-22
lumo_fr2_like = lumo[idx_fr2] * 10**-22
mag_fr1_like = mag_abs[idx_fr1]
mag_fr2_like = mag_abs[idx_fr2]
mag_fr1 = np.hstack([mag_abs[idx_fr1], mag_abs[idx2_same]])
mag_fr2 = np.hstack([mag_abs[idx_fr2], mag_abs[idx3_same]])
lumo_fr1 = np.hstack([lumo[idx_fr1], lumo[idx2_same]]) * 10 ** -22
lumo_fr2 = np.hstack([lumo[idx_fr2], lumo[idx3_same]]) * 10 ** -22
import scipy.stats.stats as stats
from sklearn.feature_selection import chi2
# ks test
# https://docs.scipy.org/doc/scipy-0.7.x/reference/generated/scipy.stats.ks_2samp.html#scipy.stats.ks_2sam
lumo_ks_D_t,lumo_ks_p_t = stats.ks_2samp(lumo_fr1_typical,lumo_fr2_typical)
print("KS statistic of lumo: typical %.5f" % lumo_ks_D_t)
print("P-value of lumo: typical %.5e" % lumo_ks_p_t)
mag_ks_D_t,mag_ks_p_t = stats.ks_2samp(mag_fr1_typical,mag_fr2_typical)
print("KS statistic of Mr: typical %.5f" % mag_ks_D_t)
print("P-value of Mr: typical %.5e" % mag_ks_p_t)
# FR like
lumo_ks_D_l,lumo_ks_p_l = stats.ks_2samp(lumo_fr1_like,lumo_fr2_like)
print("KS statistic of lumo: like %.5f" % lumo_ks_D_l)
print("P-value of lumo: like %.5e" % lumo_ks_p_l)
mag_ks_D_l,mag_ks_p_l = stats.ks_2samp(mag_fr1_like,mag_fr2_like)
print("KS statistic of Mr: like %.5f" % mag_ks_D_l)
print("P-value of Mr: like %.5e" % mag_ks_p_l)
# FR
lumo_ks_D,lumo_ks_p = stats.ks_2samp(lumo_fr1,lumo_fr2)
print("KS statistic of lumo: %.5f" % lumo_ks_D)
print("P-value of lumo: %.5e" % lumo_ks_p)
mag_ks_D,mag_ks_p = stats.ks_2samp(mag_fr1,mag_fr2)
print("KS statistic of Mr: %.5f" % mag_ks_D)
print("P-value of Mr: %.5e" % mag_ks_p)
x_lumo = np.hstack((lumo_fr1,lumo_fr2))
x_lumo.shape
x_lumo = np.log10(np.hstack((lumo_fr1,lumo_fr2)))
x_mag = np.hstack((mag_fr1,mag_fr2))
x_lumo_norm = (x_lumo - x_lumo.min()) / (x_lumo.max() - x_lumo.min())
x_mag_norm = (x_mag - x_mag.min()) / (x_mag.max() - x_mag.min())
x = np.vstack([x_lumo_norm,x_mag_norm])
x = x.transpose()
y = np.zeros(len(mag_abs))
y[idx2_same] = 1
y[idx_fr1] = 1
y[idx3_same] = 2
y[idx_fr2] = 2
y = y[np.where(y > 0)]
scores, pvalues = chi2(x, y)
pvalues
from scipy.stats import chisquare
chisquare(x_lumo_norm, y)
np.random.seed(12222222)
x = np.random.normal(0,1,size=(20000,))
y = np.random.normal(0,1,size=(20000,))
stats.ks_2samp(x,y)
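# (Optional, not in the original analysis.) The cell above is a control check: two
# samples drawn from the same normal distribution should give a small KS statistic
# and a large p-value, in contrast with the FR I / FR II comparison above.
D_ctrl, p_ctrl = stats.ks_2samp(x, y)
print("control sample: D = %.4f, p = %.3f" % (D_ctrl, p_ctrl))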
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: train_test split
Step2: QDA
Step3: LDA
Step4: solver
Step5: NB
Step6: BernoulliNB cannot be used => not only the target variable but also the independent variables would have to take only 0 or 1 values
Step7: Decision Tree
Step8: Perceptron
Step9: SVM
Step10: Model comparison
|
<ASSISTANT_TASK:>
Python Code:
df1 = df.ix[:,0:8]
df1.tail() # the instructor seems to have filled in the trailing columns for explanation, so everything from 'who' onward was dropped
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
df1['sex']= le.fit_transform(df1['sex'])
df1['embarked'] = le.fit_transform(df1['embarked'])
from sklearn.preprocessing import Imputer
imp = Imputer(missing_values='NaN', strategy = 'median', axis = 0)
df2 = imp.fit_transform(df1)
df2
df3 = pd.DataFrame(df2, columns = ['survived', 'pclass','sex','age','sibsp','parch','fare','embarked'])
df3.tail()
df3.info()
y = df3.ix[:,0]
x = df3.ix[:,1:]
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.33, random_state= 42 )
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
qda = QuadraticDiscriminantAnalysis(store_covariances = True).fit(x,y)
qda.priors_
549.0/(549 + 342)
qda.means_
qda.covariances_
qda_score_train = qda.score(x_train,y_train)
print "Train Score :", qda_score_train
print "=" * 40
qda_score_test = qda.score(x_test, y_test)
print "Test Score :", qda_score_test
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
LDA = LinearDiscriminantAnalysis(n_components = 2, solver = 'svd', store_covariance = True).fit(x,y)
LDA.coef_
LDA.intercept_
LDA.covariance_
LDA.means_
LDA.n_components
LDA.explained_variance_ratio_
LDA_score_train = LDA.score(x_train,y_train)
print "Train Score :", LDA_score_train
print "=" * 40
LDA_score_test = LDA.score(x_test, y_test)
print "Test Score :", LDA_score_test
from sklearn.naive_bayes import GaussianNB
NB = GaussianNB().fit(x,y)
NB.classes_
NB.class_count_
NB.class_prior_
NB.theta_
NB_score_train = NB.score(x_train,y_train)
print "Train Score :", NB_score_train
print "=" * 40
NB_score_test = NB.score(x_test, y_test)
print "Test Score :", NB_score_test
from sklearn.naive_bayes import MultinomialNB
MNB = MultinomialNB().fit(x,y)
MNB.classes_
MNB.class_count_
MNB_score_train = MNB.score(x_train,y_train)
print "Train Score :", MNB_score_train
print "=" * 40
MNB_score_test = MNB.score(x_test, y_test)
print "Test Score :", MNB_score_test
from sklearn.pipeline import Pipeline
from sklearn.metrics import classification_report
clf_1 = Pipeline([
('clf', MultinomialNB())
])
clf_1.fit(x_train, y_train)
print(classification_report(y_test, clf_1.predict(x_test), digits = 4))
from sklearn.tree import DecisionTreeClassifier
tree1 = DecisionTreeClassifier(criterion = 'entropy', max_depth = 1, random_state = 0).fit(x,y)
from sklearn.tree import DecisionTreeClassifier
model = DecisionTreeClassifier(criterion='entropy', max_depth=3, min_samples_leaf=5).fit(x_train, y_train)
from sklearn import tree
decision = tree.DecisionTreeClassifier()
decision.fit(x_train, y_train)
decision_score_train = decision.score(x_train, y_train)
print "Train score : ",decision_score_train
decision_score_test = decision.score(x_test, y_test)
print "-" * 40
print "Test score : ",decision_score_test
from sklearn.linear_model import Perceptron
perceptron = Perceptron(n_iter = 1, eta0 = 0.3, random_state= 1).fit(x,y)
perceptron_score_train = perceptron.score(x_train, y_train)
print "Train score : ", perceptron_score_train
print "=" *40
perceptron_score_test = perceptron.score(x_test, y_test)
print "Test score : ", perceptron_score_test
perceptron = Perceptron(n_iter = 5, eta0 = 0.3, random_state= 1).fit(x,y)
perceptron_score_train = perceptron.score(x_train, y_train)
print "Train score : ", perceptron_score_train
print "=" *40
perceptron_score_test = perceptron.score(x_test, y_test)
print "Test score : ", perceptron_score_test
perceptron = Perceptron(n_iter = 10, eta0 = 0.3, random_state= 1).fit(x,y)
perceptron_score_train = perceptron.score(x_train, y_train)
print "Train score : ", perceptron_score_train
print "=" *40
perceptron_score_test = perceptron.score(x_test, y_test)
print "Test score : ", perceptron_score_test
perceptron = Perceptron(n_iter = 20, eta0 = 0.3, random_state= 1).fit(x,y)
perceptron_score_train = perceptron.score(x_train, y_train)
print "Train score : ", perceptron_score_train
print "=" *40
perceptron_score_test = perceptron.score(x_test, y_test)
print "Test score : ", perceptron_score_test
perceptron = Perceptron(n_iter = 100, eta0 = 0.3, random_state= 1).fit(x,y)
perceptron_score_train = perceptron.score(x_train, y_train)
print "Train score : ", perceptron_score_train
print "=" *40
perceptron_score_test = perceptron.score(x_test, y_test)
print "Test score : ", perceptron_score_test
perceptron = Perceptron(n_iter = 150, eta0 = 0.3, random_state= 1).fit(x,y)
perceptron_score_train = perceptron.score(x_train, y_train)
print "Train score : ", perceptron_score_train
print "=" *40
perceptron_score_test = perceptron.score(x_test, y_test)
print "Test score : ", perceptron_score_test
from sklearn.svm import SVC
svm = SVC() # the default kernel is 'rbf'
svm.fit(x_train, y_train)
svm_score_train = svm.score(x_train, y_train)
print "Train Score : ", svm_score_train
svm_score_test = svm.score(x_test, y_test)
print "-" * 40
print "Test Score : ", svm_score_test
from sklearn.svm import SVC
svm = SVC(kernel = 'linear')
svm.fit(x_train, y_train)
svm_score_train = svm.score(x_train, y_train)
print "Train Score : ", svm_score_train
svm_score_test = svm.score(x_test, y_test)
print "-" * 40
print "Test Score : ", svm_score_test
from sklearn.svm import SVC
svm = SVC(kernel = 'sigmoid')
svm.fit(x_train, y_train)
svm_score_train = svm.score(x_train, y_train)
print "Train Score : ", svm_score_train
svm_score_test = svm.score(x_test, y_test)
print "-" * 40
print "Test Score : ", svm_score_test
from sklearn.svm import SVC
svm = SVC(kernel = 'poly')
svm.fit(x_train, y_train)
svm_score_train = svm.score(x_train, y_train)
print "Train Score : ", svm_score_train
svm_score_test = svm.score(x_test, y_test)
print "-" * 40
print "Test Score : ", svm_score_test
models = pd.DataFrame({
    'Model' : ['QDA', 'LDA', 'GaussianNB','MultinomialNB', 'DecisionTree','SVM'],
'Train_Score' : [qda_score_train, LDA_score_train, NB_score_train, MNB_score_train, decision_score_train, svm_score_train],
'Test_Score' : [qda_score_test, LDA_score_test, NB_score_test, MNB_score_test, decision_score_test, svm_score_test]
})
models.sort_values(by='Test_Score', ascending=True)
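# (Optional, not part of the original notebook.) A quick visual comparison of the
# train/test scores collected above; pandas pulls in matplotlib for the bar plot.
models.set_index('Model')[['Train_Score', 'Test_Score']].plot(kind='bar', ylim=(0, 1))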
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: All we need to do is
|
<ASSISTANT_TASK:>
Python Code:
# Update data
# autoupdate.autoupdate() # Comment in if needed, and loop if needed
# manip.get_5v5_player_log(2017, force_create) # Comment in if needed, and loop if needed
log = pd.concat([manip.get_5v5_player_log(season).assign(Season=season) for season in range(2012, 2018)])
sch = pd.concat([schedules.get_season_schedule(season).assign(Season=season) for season in range(2012, 2018)])
log.head()
# Teams
teamtoi = log.assign(TOI=log.TOION + log.TOIOFF) \
[['Season', 'Game', 'TOI']] \
.groupby(['Season', 'Game'], as_index=False) \
.max() # take max to avoid floating point errors that may fell drop_duplicates
teamtoi = sch[['Season', 'Game', 'Home', 'Road']] \
.melt(id_vars=['Season', 'Game'], var_name='HR', value_name='TeamID') \
.merge(teamtoi, how='inner', on=['Season', 'Game']) \
.drop_duplicates()
# Make names into str, and convert TOI from hours to minutes
teamtoi.loc[:, 'Team'] = teamtoi.TeamID.apply(lambda x: team_info.team_as_str(x))
teamtoi.loc[:, 'TOI(min)'] = teamtoi.TOI * 60
teamtoi = teamtoi.drop(['TeamID', 'TOI'], axis=1)
teamtoi.head()
# Individuals
indivtoi = log[['Season', 'Game', 'PlayerID', 'TOION', 'TeamID']]
# IDs to names and TOI from hours to minutes
indivtoi.loc[:, 'Player'] = players.playerlst_as_str(indivtoi.PlayerID.values)
indivtoi.loc[:, 'Team'] = indivtoi.TeamID.apply(lambda x: team_info.team_as_str(x))
indivtoi.loc[:, 'TOI(min)'] = indivtoi.TOION * 60
indivtoi = indivtoi.drop(['TeamID', 'TOION', 'PlayerID'], axis=1)
indivtoi.head()
# Write to file
teamtoi.to_csv('/Users/muneebalam/Desktop/teamtoi.csv')
indivtoi.to_csv('/Users/muneebalam/Desktop/indivtoi.csv')
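# (Optional check, not in the original script.) Summarise the total on-ice minutes
# of a team's players per game, as a rough sanity check on the merged tables.
print(indivtoi.groupby(['Season', 'Game', 'Team'])['TOI(min)'].sum().describe())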
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model
Step2: check rates
Step3: It slightly underestimates heterogeneity, but is close for max rate
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import ABCPRC as prc
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
def ibm(*ps):
m0,k = ps[0],ps[1]
T0 = 0.5
#measurements in regular increments throughout the year
ms,ts = np.zeros(100),np.linspace(0,1,100)
ms = (m0/2)*(np.sin(np.pi/2 + (ts-T0)*2*np.pi)+1)
ps = k/(k+ms)
outs = np.array([stats.nbinom(n=k,p=p).rvs() if p>0 else 0 for m,p in zip(ms,ps)])
return outs
m0,k = 50.0,1.0
xs = ibm(m0,k)
plt.plot(xs,'ko')
#measurements in regular increments throughout the year
ms,ts = np.zeros(100),np.linspace(0,1,100)
ms = (m0/2)*(np.sin(np.pi/2 + (ts-0.5)*2*np.pi)+1)
plt.plot(ts,ms)
plt.ylabel('rate')
plt.xlabel('time (yrs)')
m = prc.ABC()
priors = [stats.expon(scale=100.0).rvs,stats.expon(scale=0.5).rvs]
m.setup(modelFunc=ibm,xs=xs,priors=priors)
m.fit(sample_size=100)
m.run(1000)
res = m.trace()
plt.figure()
print('Initial Distribution')
m.trace(plot=True,tol=0)
plt.figure()
print('Middle Tolerance')
m.trace(plot=True,tol=5)
plt.figure()
print('Final Distribution')
m.trace(plot=True,tol=-1)
ps = np.round(m.paramMAP(),decimals=2)
print('MAP for max rate is : {}, MAP for heterogeneity is {}'.format(*ps))
res = m.fitSummary()
m.save('ecology_model_example')
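# (Optional, not in the original notebook.) Compare the MAP estimates against the
# true parameters used to simulate the data at the top of the notebook.
print('true values  : max rate m0 = {}, heterogeneity k = {}'.format(m0, k))
print('MAP estimates: max rate = {}, heterogeneity = {}'.format(*ps))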
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The sgrid object
Step2: The object knows about sgrid conventions
Step3: Being generic is nice! This is an improvement upon my first design ;-) ...
Step4: (Don't be scared, you do not need the sgrid object to get the variables. This just shows that there is a one-to-one mapping from the sgrid object to the netCDF4 object.)
Step5: ... but we need a better way to deal with the slice of the slice!
Step6: Same thing for the angle information
Step7: Average velocity vectors to cell centers
Step8: Rotate vectors by angles
Step9: Speed
Step10: Lon, lat of the center grid
Step11: Plotting
|
<ASSISTANT_TASK:>
Python Code:
from netCDF4 import Dataset
url = ('http://geoport.whoi.edu/thredds/dodsC/clay/usgs/users/'
'jcwarner/Projects/Sandy/triple_nest/00_dir_NYB05.ncml')
nc = Dataset(url)
import pysgrid
# The object creation is a little bit slow. Can we defer some of the loading/computations?
sgrid = pysgrid.from_nc_dataset(nc)
sgrid # We need a better __repr__ and __str__ !!!
sgrid.edge1_coordinates, sgrid.edge1_dimensions, sgrid.edge1_padding
u_var = sgrid.u
u_var.center_axis, u_var.node_axis
v_var = sgrid.v
v_var.center_axis, v_var.node_axis
u_var.center_slicing
v_var.center_slicing
u_velocity = nc.variables[u_var.variable]
v_velocity = nc.variables[v_var.variable]
from datetime import datetime, timedelta
from netCDF4 import date2index
t_var = nc.variables['ocean_time']
start = datetime(2012, 10, 30, 0, 0)
time_idx = date2index(start, t_var, select='nearest')
v_idx = 0
# Slice of the slice!
u_data = u_velocity[time_idx, v_idx, u_var.center_slicing[-2], u_var.center_slicing[-1]]
v_data = v_velocity[time_idx, v_idx, v_var.center_slicing[-2], v_var.center_slicing[-1]]
angle = sgrid.angle
angles = nc.variables[angle.variable][angle.center_slicing]
from pysgrid.processing_2d import avg_to_cell_center
u_avg = avg_to_cell_center(u_data, u_var.center_axis)
v_avg = avg_to_cell_center(v_data, v_var.center_axis)
from pysgrid.processing_2d import rotate_vectors
u_rot, v_rot = rotate_vectors(u_avg, v_avg, angles)
from pysgrid.processing_2d import vector_sum
uv_vector_sum = vector_sum(u_rot, v_rot)
grid_cell_centers = sgrid.centers # Array of lon, lat pairs.
lon_var_name, lat_var_name = sgrid.face_coordinates
sg_lon = getattr(sgrid, lon_var_name)
sg_lat = getattr(sgrid, lat_var_name)
lon_data = grid_cell_centers[..., 0][sg_lon.center_slicing]
lat_data = grid_cell_centers[..., 1][sg_lat.center_slicing]
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
from cartopy.io import shapereader
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
def make_map(projection=ccrs.PlateCarree(), figsize=(9, 9)):
fig, ax = plt.subplots(figsize=figsize,
subplot_kw=dict(projection=projection))
gl = ax.gridlines(draw_labels=True)
gl.xlabels_top = gl.ylabels_right = False
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
return fig, ax
sub = 10
scale = 0.06
fig, ax = make_map()
kw = dict(scale=1.0/scale, pivot='middle', width=0.003, color='black')
q = plt.quiver(lon_data[::sub, ::sub], lat_data[::sub, ::sub],
u_rot[::sub, ::sub], v_rot[::sub, ::sub], zorder=2, **kw)
cs = plt.pcolormesh(lon_data[::sub, ::sub],
lat_data[::sub, ::sub],
uv_vector_sum[::sub, ::sub], zorder=1, cmap=plt.cm.rainbow)
_ = ax.coastlines('10m')
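# (Optional, not in the original notebook.) Add a colorbar for the speed field;
# the units are whatever the source dataset uses for u and v (typically m/s).
cbar = fig.colorbar(cs, ax=ax, shrink=0.7)
cbar.set_label('Current speed')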
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Multiply a list of complex numbers given as strings of the form "a+bi" and print the resulting product string.
|
<ASSISTANT_TASK:>
Python Code:
def findnum(s1):
    # Parse a complex-number string of the form "a+bi" into [a, b]
    v = []
    a = 0
    b = 0
    sa = 0
    sb = 0
    i = 0
    if(s1[0] == '-'):
        sa = 1
        i = 1
    while(s1[i].isdigit()):
        a = a * 10 + int(s1[i])
        i += 1
    if(s1[i] == '+'):
        sb = 0
        i += 1
    if(s1[i] == '-'):
        sb = 1
        i += 1
    while(i < len(s1) and s1[i].isdigit()):
        b = b * 10 + int(s1[i])
        i += 1
    if(sa):
        a *= -1
    if(sb):
        b *= -1
    v.append(a)
    v.append(b)
    return v
def complexNumberMultiply(v):
    # Multiply the complex numbers in v pairwise until a single product string remains
    while(len(v) != 1):
        v1 = findnum(v[0])
        v2 = findnum(v[1])
        del v[0]
        del v[0]
        r = (v1[0] * v2[0] - v1[1] * v2[1])
        img = v1[0] * v2[1] + v1[1] * v2[0]
        res = ""
        res += str(r)
        res += '+'
        res += str(img) + 'i'
        v.insert(0, res)
    return v[0]
if __name__ == '__main__':
    n = 3
    v = ["3+1i", "2+1i", "-5+-7i"]
    print(complexNumberMultiply(v))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Creating the parameter file
Step4: Let's describe the bands we will use. This must be a superset (ideally the union) of all the bands involved in the training and target sets, including cross-validation.
Step6: Let's now describe the system of SED templates to use (needed for the mean function of the GP, for simulating objects, and for the template fitting routines).
Step8: The next section is for simulating a photometric catalogue from the templates.
Step10: We now describe the training file.
Step12: The section of the target catalog has very similar structure and parameters. The catFile, bandOrder, referenceBand, and extraFracFluxError have the same meaning as for the training, but of course don't have to be the same.
Step14: Finally, there are various other parameters related to the method itself.
Step15: Let's write this to a file.
Step16: Running Delight
Step17: Second, we will process the library of SEDs and project them onto the filters,
Step18: Third, we will make some mock data with those filters and SEDs
Step19: Train and apply
Step20: Analyze the outputs
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats
import sys
sys.path.append('../')
from delight.io import *
from delight.utils import *
from delight.photoz_gp import PhotozGP
%cd ..
paramfile_txt =
# DELIGHT parameter file
# Syntactic rules:
# - You can set parameters with : or =
# - Lines starting with # or ; will be ignored
# - Multiple values (band names, band orders, confidence levels)
# must beb separated by spaces
# - The input files should contain numbers separated with spaces.
# - underscores mean unused column
paramfile_txt +=
[Bands]
names: U_SDSS G_SDSS R_SDSS I_SDSS Z_SDSS
directory: data/FILTERS
paramfile_txt +=
[Templates]
directory: ./data/CWW_SEDs
names: El_B2004a Sbc_B2004a Scd_B2004a SB3_B2004a SB2_B2004a Im_B2004a ssp_25Myr_z008 ssp_5Myr_z008
p_t: 0.27 0.26 0.25 0.069 0.021 0.11 0.0061 0.0079
p_z_t:0.23 0.39 0.33 0.31 1.1 0.34 1.2 0.14
lambdaRef: 4.5e3
paramfile_txt +=
[Simulation]
numObjects: 1000
noiseLevel: 0.03
trainingFile: data/galaxies-fluxredshifts.txt
targetFile: data/galaxies-fluxredshifts2.txt
paramfile_txt +=
[Training]
catFile: data/galaxies-fluxredshifts.txt
bandOrder: U_SDSS U_SDSS_var G_SDSS G_SDSS_var _ _ I_SDSS I_SDSS_var Z_SDSS Z_SDSS_var redshift
referenceBand: I_SDSS
extraFracFluxError: 1e-4
paramFile: data/galaxies-gpparams.txt
crossValidate: False
CVfile: data/galaxies-gpCV.txt
crossValidationBandOrder: _ _ _ _ R_SDSS R_SDSS_var _ _ _ _ _
numChunks: 1
paramfile_txt +=
[Target]
catFile: data/galaxies-fluxredshifts2.txt
bandOrder: U_SDSS U_SDSS_var G_SDSS G_SDSS_var _ _ I_SDSS I_SDSS_var Z_SDSS Z_SDSS_var redshift
referenceBand: I_SDSS
extraFracFluxError: 1e-4
redshiftpdfFile: data/galaxies-redshiftpdfs.txt
redshiftpdfFileTemp: data/galaxies-redshiftpdfs-cww.txt
metricsFile: data/galaxies-redshiftmetrics.txt
metricsFileTemp: data/galaxies-redshiftmetrics-cww.txt
useCompression: False
Ncompress: 10
compressIndicesFile: data/galaxies-compressionIndices.txt
compressMargLikFile: data/galaxies-compressionMargLikes.txt
redshiftpdfFileComp: data/galaxies-redshiftpdfs-comp.txt
paramfile_txt +=
[Other]
rootDir: ./
zPriorSigma: 0.2
ellPriorSigma: 0.5
fluxLuminosityNorm: 1.0
alpha_C: 1.0e3
V_C: 0.1
alpha_L: 1.0e2
V_L: 0.1
lines_pos: 6500 5002.26 3732.22
lines_width: 20.0 20.0 20.0
redshiftMin: 0.1
redshiftMax: 1.101
redshiftNumBinsGPpred: 100
redshiftBinSize: 0.001
redshiftDisBinSize: 0.2
confidenceLevels: 0.1 0.50 0.68 0.95
with open('tests/parametersTest.cfg','w') as out:
out.write(paramfile_txt)
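# (Optional check, not part of the original tutorial; assumes Python 3.) Parse the
# file we just wrote with the standard library to confirm the sections are well formed.
import configparser
cfg = configparser.ConfigParser()
cfg.read('tests/parametersTest.cfg')
print(cfg.sections())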
%run ./scripts/processFilters.py tests/parametersTest.cfg
%run ./scripts/processSEDs.py tests/parametersTest.cfg
%run ./scripts/simulateWithSEDs.py tests/parametersTest.cfg
%run ./scripts/templateFitting.py tests/parametersTest.cfg
%run ./scripts/delight-learn.py tests/parametersTest.cfg
%run ./scripts/delight-apply.py tests/parametersTest.cfg
# First read a bunch of useful stuff from the parameter file.
params = parseParamFile('tests/parametersTest.cfg', verbose=False)
bandCoefAmplitudes, bandCoefPositions, bandCoefWidths, norms\
= readBandCoefficients(params)
bandNames = params['bandNames']
numBands, numCoefs = bandCoefAmplitudes.shape
fluxredshifts = np.loadtxt(params['target_catFile'])
fluxredshifts_train = np.loadtxt(params['training_catFile'])
bandIndices, bandNames, bandColumns, bandVarColumns, redshiftColumn,\
refBandColumn = readColumnPositions(params, prefix='target_')
redshiftDistGrid, redshiftGrid, redshiftGridGP = createGrids(params)
dir_seds = params['templates_directory']
dir_filters = params['bands_directory']
lambdaRef = params['lambdaRef']
sed_names = params['templates_names']
nt = len(sed_names)
f_mod = np.zeros((redshiftGrid.size, nt, len(params['bandNames'])))
for t, sed_name in enumerate(sed_names):
f_mod[:, t, :] = np.loadtxt(dir_seds + '/' + sed_name + '_fluxredshiftmod.txt')
# Load the PDF files
metricscww = np.loadtxt(params['metricsFile'])
metrics = np.loadtxt(params['metricsFileTemp'])
# Those of the indices of the true, mean, stdev, map, and map_std redshifts.
i_zt, i_zm, i_std_zm, i_zmap, i_std_zmap = 0, 1, 2, 3, 4
i_ze = i_zm
i_std_ze = i_std_zm
pdfs = np.loadtxt(params['redshiftpdfFile'])
pdfs_cww = np.loadtxt(params['redshiftpdfFileTemp'])
pdfatZ_cww = metricscww[:, 5] / pdfs_cww.max(axis=1)
pdfatZ = metrics[:, 5] / pdfs.max(axis=1)
nobj = pdfatZ.size
#pdfs /= pdfs.max(axis=1)[:, None]
#pdfs_cww /= pdfs_cww.max(axis=1)[:, None]
pdfs /= np.trapz(pdfs, x=redshiftGrid, axis=1)[:, None]
pdfs_cww /= np.trapz(pdfs_cww, x=redshiftGrid, axis=1)[:, None]
ncol = 4
fig, axs = plt.subplots(5, ncol, figsize=(7, 6), sharex=True, sharey=False)
axs = axs.ravel()
z = fluxredshifts[:, redshiftColumn]
sel = np.random.choice(nobj, axs.size, replace=False)
lw = 2
for ik in range(axs.size):
k = sel[ik]
print(k, end=" ")
axs[ik].plot(redshiftGrid, pdfs_cww[k, :],lw=lw, label='Standard template fitting')# c="#2ecc71",
axs[ik].plot(redshiftGrid, pdfs[k, :], lw=lw, label='New method') #, c="#3498db"
axs[ik].axvline(fluxredshifts[k, redshiftColumn], c="k", lw=1, label=r'Spec-$z$')
ymax = np.max(np.concatenate((pdfs[k, :], pdfs_cww[k, :])))
axs[ik].set_ylim([0, ymax*1.2])
axs[ik].set_xlim([0, 1.1])
axs[ik].set_yticks([])
axs[ik].set_xticks([0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4])
for i in range(ncol):
axs[-i-1].set_xlabel('Redshift', fontsize=10)
axs[0].legend(ncol=3, frameon=False, loc='upper left', bbox_to_anchor=(0.0, 1.4))
fig.tight_layout()
fig.subplots_adjust(wspace=0.1, hspace=0.1, top=0.96)
fig, axs = plt.subplots(2, 2, figsize=(7, 7))
zmax = 1.5
rr = [[0, zmax], [0, zmax]]
nbins = 30
h = axs[0, 0].hist2d(metricscww[:, i_zt], metricscww[:, i_zm], nbins, cmap='Greys', range=rr)
hmin, hmax = np.min(h[0]), np.max(h[0])
axs[0, 0].set_title('CWW z mean')
axs[0, 1].hist2d(metricscww[:, i_zt], metricscww[:, i_zmap], nbins, cmap='Greys', range=rr, vmax=hmax)
axs[0, 1].set_title('CWW z map')
axs[1, 0].hist2d(metrics[:, i_zt], metrics[:, i_zm], nbins, cmap='Greys', range=rr, vmax=hmax)
axs[1, 0].set_title('GP z mean')
axs[1, 1].hist2d(metrics[:, i_zt], metrics[:, i_zmap], nbins, cmap='Greys', range=rr, vmax=hmax)
axs[1, 1].set_title('GP z map')
axs[0, 0].plot([0, zmax], [0, zmax], c='k')
axs[0, 1].plot([0, zmax], [0, zmax], c='k')
axs[1, 0].plot([0, zmax], [0, zmax], c='k')
axs[1, 1].plot([0, zmax], [0, zmax], c='k')
fig.tight_layout()
fig, axs = plt.subplots(1, 2, figsize=(7, 3.5))
chi2s = ((metrics[:, i_zt] - metrics[:, i_ze])/metrics[:, i_std_ze])**2
axs[0].errorbar(metrics[:, i_zt], metrics[:, i_ze], yerr=metrics[:, i_std_ze], fmt='o', markersize=5, capsize=0)
axs[1].errorbar(metricscww[:, i_zt], metricscww[:, i_ze], yerr=metricscww[:, i_std_ze], fmt='o', markersize=5, capsize=0)
axs[0].plot([0, zmax], [0, zmax], 'k')
axs[1].plot([0, zmax], [0, zmax], 'k')
axs[0].set_xlim([0, zmax])
axs[1].set_xlim([0, zmax])
axs[0].set_ylim([0, zmax])
axs[1].set_ylim([0, zmax])
axs[0].set_title('New method')
axs[1].set_title('Standard template fitting')
fig.tight_layout()
cmap = "coolwarm_r"
vmin = 0.0
alpha = 0.9
s = 5
fig, axs = plt.subplots(1, 2, figsize=(10, 3.5))
vs = axs[0].scatter(metricscww[:, i_zt], metricscww[:, i_zmap],
s=s, c=pdfatZ_cww, cmap=cmap, linewidth=0, vmin=vmin, alpha=alpha)
vs = axs[1].scatter(metrics[:, i_zt], metrics[:, i_zmap],
s=s, c=pdfatZ, cmap=cmap, linewidth=0, vmin=vmin, alpha=alpha)
clb = plt.colorbar(vs, ax=axs.ravel().tolist())
clb.set_label('Normalized probability at spec-$z$')
for i in range(2):
axs[i].plot([0, zmax], [0, zmax], c='k', lw=1, zorder=0, alpha=1)
axs[i].set_ylim([0, zmax])
axs[i].set_xlim([0, zmax])
axs[i].set_xlabel('Spec-$z$')
axs[0].set_ylabel('MAP photo-$z$')
axs[0].set_title('Standard template fitting')
axs[1].set_title('New method')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Improving Reading Ability
Step3: Exercise
Step9: Paintball
Step10: Exercise
Step11: Exercise
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
% matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import math
import numpy as np
from thinkbayes2 import Pmf, Cdf, Suite, Joint
import thinkplot
import pandas as pd
df = pd.read_csv('drp_scores.csv', skiprows=21, delimiter='\t')
df.head()
grouped = df.groupby('Treatment')
for name, group in grouped:
print(name, group.Response.mean())
from scipy.stats import norm
class Normal(Suite, Joint):
def Likelihood(self, data, hypo):
data: sequence of test scores
hypo: mu, sigma
mu, sigma = hypo
likes = norm.pdf(data, mu, sigma)
return np.prod(likes)
from itertools import product
mus = np.linspace(20, 80, 101)
sigmas = np.linspace(5, 30, 101)
control = Normal(product(mus, sigmas))
data = df[df.Treatment=='Control'].Response
control.Update(data)
thinkplot.Contour(control, pcolor=True)
pmf_mu0 = control.Marginal(0)
thinkplot.Pdf(pmf_mu0)
pmf_sigma0 = control.Marginal(1)
thinkplot.Pdf(pmf_sigma0)
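# (Optional, not part of the original text.) Summarise the control-group posterior
# with the posterior means of mu and sigma.
print('posterior mean of mu   :', pmf_mu0.Mean())
print('posterior mean of sigma:', pmf_sigma0.Mean())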
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
class Paintball(Suite, Joint):
Represents hypotheses about the location of an opponent.
def __init__(self, alphas, betas, locations):
Makes a joint suite of parameters alpha and beta.
Enumerates all pairs of alpha and beta.
Stores locations for use in Likelihood.
alphas: possible values for alpha
betas: possible values for beta
locations: possible locations along the wall
self.locations = locations
pairs = [(alpha, beta)
for alpha in alphas
for beta in betas]
Suite.__init__(self, pairs)
def Likelihood(self, data, hypo):
Computes the likelihood of the data under the hypothesis.
hypo: pair of alpha, beta
data: location of a hit
Returns: float likelihood
alpha, beta = hypo
x = data
pmf = MakeLocationPmf(alpha, beta, self.locations)
like = pmf.Prob(x)
return like
def MakeLocationPmf(alpha, beta, locations):
Computes the Pmf of the locations, given alpha and beta.
Given that the shooter is at coordinates (alpha, beta),
the probability of hitting any spot is inversely proportionate
to the strafe speed.
alpha: x position
beta: y position
locations: x locations where the pmf is evaluated
Returns: Pmf object
pmf = Pmf()
for x in locations:
prob = 1.0 / StrafingSpeed(alpha, beta, x)
pmf.Set(x, prob)
pmf.Normalize()
return pmf
def StrafingSpeed(alpha, beta, x):
Computes strafing speed, given location of shooter and impact.
alpha: x location of shooter
beta: y location of shooter
x: location of impact
Returns: derivative of x with respect to theta
theta = math.atan2(x - alpha, beta)
speed = beta / math.cos(theta)**2
return speed
alphas = range(0, 31)
betas = range(1, 51)
locations = range(0, 31)
suite = Paintball(alphas, betas, locations)
suite.UpdateSet([15, 16, 18, 21])
locations = range(0, 31)
alpha = 10
betas = [10, 20, 40]
thinkplot.PrePlot(num=len(betas))
for beta in betas:
pmf = MakeLocationPmf(alpha, beta, locations)
pmf.label = 'beta = %d' % beta
thinkplot.Pdf(pmf)
thinkplot.Config(xlabel='Distance',
ylabel='Prob')
marginal_alpha = suite.Marginal(0, label='alpha')
marginal_beta = suite.Marginal(1, label='beta')
print('alpha CI', marginal_alpha.CredibleInterval(50))
print('beta CI', marginal_beta.CredibleInterval(50))
thinkplot.PrePlot(num=2)
thinkplot.Cdf(Cdf(marginal_alpha))
thinkplot.Cdf(Cdf(marginal_beta))
thinkplot.Config(xlabel='Distance',
ylabel='Prob')
betas = [10, 20, 40]
thinkplot.PrePlot(num=len(betas))
for beta in betas:
cond = suite.Conditional(0, 1, beta)
cond.label = 'beta = %d' % beta
thinkplot.Pdf(cond)
thinkplot.Config(xlabel='Distance',
ylabel='Prob')
thinkplot.Contour(suite.GetDict(), contour=False, pcolor=True)
thinkplot.Config(xlabel='alpha',
ylabel='beta',
axis=[0, 30, 0, 20])
d = dict((pair, 0) for pair in suite.Values())
percentages = [75, 50, 25]
for p in percentages:
interval = suite.MaxLikeInterval(p)
for pair in interval:
d[pair] += 1
thinkplot.Contour(d, contour=False, pcolor=True)
thinkplot.Text(17, 4, '25', color='white')
thinkplot.Text(17, 15, '50', color='white')
thinkplot.Text(17, 30, '75')
thinkplot.Config(xlabel='alpha',
ylabel='beta',
legend=False)
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The correct way to define a function
Step2: Example of using functions
Step3: and we want to compute the average for each one; one way to do it would be the following, computing the average for each student individually and printing the results to the screen
Step4: But as you can see, this makes the code repetitive and long; a cleaner approach is to create a function that computes the average, using the following syntax
Step5: Now, with the promedio (average) function defined, the code can be simplified as follows
Step6: Optional parameters
Step7: Default parameters
|
<ASSISTANT_TASK:>
Python Code:
print(func(3))  # calling the function before defining it raises a NameError
def func(num):
return(num**num+num)
def func(num):
return(num**num+num)
func(3)
alex=[90,70,80,60,90]
kate=[60,70,90,70,90]
david=[90,60,80,90,80]
# Average for Alex
i=0
sumatoria = 0
while i < len(alex):
sumatoria += alex[i]
i += 1
promedio=sumatoria/len(alex)
print(promedio)
# Average for Kate
i=0
sumatoria = 0
while i < len(kate):
sumatoria += kate[i]
i += 1
promedio=sumatoria/len(kate)
print(promedio)
# Average for David
i=0
sumatoria = 0
while i < len(david):
sumatoria += david[i]
i += 1
promedio=sumatoria/len(david)
print(promedio)
def promedio(alumno):
n = len(alumno)
i = 0
sumatoria = 0
while i < n:
sumatoria += alumno[i]
i += 1
resultado= sumatoria/n
print(resultado)
promedio(alex)
promedio(kate)
promedio(david)
def saludo():
print("Hola!!!")
saludo()
# Define the function with the 'hora' parameter defaulting to True
def saludo(hora=True):
if hora==True:
print("Hola, buen día!")
else:
print("Hola, buenas noches")
# Call the function without passing an argument
saludo()
# Call the function passing an argument
t=False
saludo(t)
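# (Extra example, not in the original.) The parameter can also be passed by keyword.
saludo(hora=False)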
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Saving and loading fields
Step2: Running the simulation
Step3: (Optional) Plotting
|
<ASSISTANT_TASK:>
Python Code:
from crpropa import *
randomSeed = 42
turbSpectrum = SimpleTurbulenceSpectrum(Brms=8*nG, lMin = 60*kpc, lMax=800*kpc, sIndex=5./3.)
gridprops = GridProperties(Vector3d(0), 256, 30*kpc)
BField = SimpleGridTurbulence(turbSpectrum, gridprops, randomSeed)
# print some properties of our field
print('Lc = {:.1f} kpc'.format(BField.getCorrelationLength() / kpc)) # correlation length
print('sqrt(<B^2>) = {:.1f} nG'.format(BField.getBrms() / nG)) # RMS
print('<|B|> = {:.1f} nG'.format(BField.getMeanFieldStrength() / nG)) # mean
print('B(10 Mpc, 0, 0) =', BField.getField(Vector3d(10,0,0) * Mpc) / nG, 'nG')
# save the field
# format: (Bx, By, Bz)(x, y, z) with z changing the quickest.
#dumpGrid(BField.getGrid(), 'myfield.dat') # binary, single precision
#dumpGridToTxt(Bfield.getGrid(), 'myfield.txt') # ASCII
# load your own field
#vgrid=Grid3f(gridprops)
#loadGrid(vgrid, 'myfield.dat')
#loadGridFromTxt(vgrid, 'myfield.txt')
sim = ModuleList()
sim.add(PropagationCK(BField))
sim.add(PhotoPionProduction(CMB()))
sim.add(PhotoPionProduction(IRB_Kneiske04()))
sim.add(PhotoDisintegration(CMB()))
sim.add(PhotoDisintegration(IRB_Kneiske04()))
sim.add(ElectronPairProduction(CMB()))
sim.add(ElectronPairProduction(IRB_Kneiske04()))
sim.add(NuclearDecay())
sim.add(MaximumTrajectoryLength(25 * Mpc))
output = TextOutput('trajectory.txt', Output.Trajectory3D)
sim.add(output)
x = Vector3d(0,0,0) # position
p = Vector3d(1,1,0) # direction
c = Candidate(nucleusId(16, 8), 100 * EeV, x, p)
sim.run(c, True)
%matplotlib inline
from pylab import *
from mpl_toolkits.mplot3d import axes3d
output.close()
data = genfromtxt('trajectory.txt', names=True)
# trajectory points
x, y, z = data['X'], data['Y'], data['Z']
# translate particle ID to charge number
Z = [chargeNumber(int(Id)) for Id in data['ID'].astype(int)]
# translate the charge number to color and size
# --> protons are blue, Helium is green, everything else is red
colorDict = {0:'k', 1:'b', 2:'g', 3:'r', 4:'r', 5:'r', 6:'r', 7:'r', 8:'r'}
sizeDict = {0:4, 1:4, 2:8, 3:10, 4:10, 5:10, 6:10, 7:10, 8:10}
colors = [colorDict[z] for z in Z]
sizes = [sizeDict[z] for z in Z]
fig = plt.figure(figsize=(12, 5))#plt.figaspect(0.5))
ax = fig.gca(projection='3d')# , aspect='equal'
ax.scatter(x,y,z+6, 'o', s=sizes, color=colors)
ax.set_xlabel('x / Mpc', fontsize=18)
ax.set_ylabel('y / Mpc', fontsize=18)
ax.set_zlabel('z / Mpc', fontsize=18)
ax.set_xlim((-1, 16))
ax.set_ylim((-1, 16))
ax.set_zlim((-1, 16))
ax.xaxis.set_ticks((0, 5, 10, 15))
ax.yaxis.set_ticks((0, 5, 10, 15))
ax.zaxis.set_ticks((0, 5, 10, 15))
show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import required modules
Step2: Target Configuration
Step3: Workload Execution and Functions Profiling Data Collection
Step4: Parse Trace and Profiling Data
Step5: Report Functions Profiling Data
Step6: Plot Functions Profiling Data
|
<ASSISTANT_TASK:>
Python Code:
import logging
from conf import LisaLogging
LisaLogging.setup()
# Generate plots inline
%matplotlib inline
import json
import os
# Support to access the remote target
import devlib
from env import TestEnv
from executor import Executor
# RTApp configurator for generation of PERIODIC tasks
from wlgen import RTA, Ramp
# Support for trace events analysis
from trace import Trace
# Setup target configuration
my_conf = {
# Target platform and board
"platform" : 'linux',
"board" : 'juno',
"host" : '192.168.0.1',
"password" : 'juno',
# Folder where all the results will be collected
"results_dir" : "TraceAnalysis_FunctionsProfiling",
# Define devlib modules to load
"exclude_modules" : [ 'hwmon' ],
# FTrace events to collect for all the tests configuration which have
# the "ftrace" flag enabled
"ftrace" : {
"functions" : [
"pick_next_task_fair",
"select_task_rq_fair",
"enqueue_task_fair",
"update_curr_fair",
"dequeue_task_fair",
],
"buffsize" : 100 * 1024,
},
# Tools required by the experiments
"tools" : [ 'trace-cmd', 'rt-app' ],
# Comment this line to calibrate RTApp in your own platform
"rtapp-calib" : {"0": 360, "1": 142, "2": 138, "3": 352, "4": 352, "5": 353},
}
# Initialize a test environment using:
te = TestEnv(my_conf, wipe=False, force_new=True)
target = te.target
def experiment(te):
# Create and RTApp RAMP task
rtapp = RTA(te.target, 'ramp', calibration=te.calibration())
rtapp.conf(kind='profile',
params={
'ramp' : Ramp(
start_pct = 60,
end_pct = 20,
delta_pct = 5,
time_s = 0.5).get()
})
# FTrace the execution of this workload
te.ftrace.start()
rtapp.run(out_dir=te.res_dir)
te.ftrace.stop()
# Collect and keep track of the trace
trace_file = os.path.join(te.res_dir, 'trace.dat')
te.ftrace.get_trace(trace_file)
# Collect and keep track of the Kernel Functions performance data
stats_file = os.path.join(te.res_dir, 'trace.stats')
te.ftrace.get_stats(stats_file)
# Dump platform descriptor
te.platform_dump(te.res_dir)
experiment(te)
# Base folder where tests folder are located
res_dir = te.res_dir
logging.info('Content of the output folder %s', res_dir)
!tree {res_dir}
with open(os.path.join(res_dir, 'platform.json'), 'r') as fh:
platform = json.load(fh)
print json.dumps(platform, indent=4)
logging.info('LITTLE cluster max capacity: %d',
platform['nrg_model']['little']['cpu']['cap_max'])
trace = Trace(platform, res_dir, events=[])
# Get the DataFrame for the specified list of kernel functions
df = trace.data_frame.functions_stats(['enqueue_task_fair', 'dequeue_task_fair'])
df
# Get the DataFrame for the single specified kernel function
df = trace.data_frame.functions_stats('select_task_rq_fair')
df
# Plot Average and Total execution time for the specified
# list of kernel functions
trace.analysis.functions.plotProfilingStats(
functions = [
'select_task_rq_fair',
'enqueue_task_fair',
'dequeue_task_fair'
],
metrics = [
# Average completion time per CPU
'avg',
# Total execution time per CPU
'time',
]
)
# Plot Average execution time for the single specified kernel function
trace.analysis.functions.plotProfilingStats(
functions = 'update_curr_fair',
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <table align="left">
Step2: Selection Bias
Step3: The treated group has a lower outcome mean than that of the control group, but the difference is not necessarily attributed to the causal effect of the treatment.
Step4: The distributions of covariates or transformed covariates don't completely overlap between the treated and control groups.
Step5: Correctly Specified Model
Step6: With correctly specified covariates to match ($Z1, \dots, Z4$),
Step7: Misspecified Model
Step8: Adding transformations of covariates
Step9: Adding extra covariates
Step10: Benchmark Execution Time
|
<ASSISTANT_TASK:>
Python Code:
#@title Copyright 2019 The Empirical Calibration Authors.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import patsy
import seaborn as sns
sns.set_style('whitegrid')
%config InlineBackend.figure_format='retina'
from google.colab import widgets
# install and import ec
!pip install -q git+https://github.com/google/empirical_calibration
import empirical_calibration as ec
np.random.seed(123)
simulation = ec.data.kang_schafer.Simulation(size=2000)
df = pd.DataFrame(
np.column_stack([
simulation.treatment, simulation.covariates,
simulation.transformed_covariates, simulation.outcome
]))
df.columns = [
"treatment", "z1", "z2", "z3", "z4", "x1", "x2", "x3", "x4", "outcome"
]
print(df.groupby("treatment").mean().T)
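# Background note (added, hedged): in the Kang & Schafer (2007) design the outcome
# is generated as y = 210 + 27.4*z1 + 13.7*(z2 + z3 + z4) + noise, with z1..z4
# standard normal, so the true population mean outcome is 210 -- the value used as
# ground truth in the bias/RMSE calculations later in this notebook.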
def show_hist(name):
plt.figure(figsize=(6, 2))
plt.hist(
df.loc[df['treatment'] == 1, name],
bins=20,
alpha=0.4,
color='#00BFC4',
label='treated',
edgecolor='none')
plt.hist(
df.loc[df['treatment'] == 0, name],
bins=20,
alpha=0.4,
color='#F8766D',
label='control',
edgecolor='none')
plt.xlabel(name)
plt.legend(loc='upper left', prop={'size': 12})
plt.show()
tb = widgets.TabBar(['covariates', 'transformed_covariates', 'outcome'])
with tb.output_to('covariates'):
for name in ["z1", "z2", "z3", "z4"]:
show_hist(name)
with tb.output_to('transformed_covariates'):
for name in ["x1", "x2", "x3", "x4"]:
show_hist(name)
with tb.output_to('outcome'):
show_hist("outcome")
def estimate_mean(formula):
simulation = ec.data.kang_schafer.Simulation(size=1000)
t = simulation.treatment
y = simulation.outcome
df = pd.DataFrame(
np.column_stack(
[simulation.covariates, simulation.transformed_covariates]))
df.columns = ["z1", "z2", "z3", "z4", "x1", "x2", "x3", "x4"]
x = patsy.dmatrix(formula, df, return_type="dataframe").values
weights = ec.from_formula(formula=formula,
df=df.loc[t==1],
target_df=df)[0]
return np.mean(np.sum(y[t == 1] * weights))
def show_estimates(estimates):
estimates = pd.Series(estimates)
ax = estimates.hist(bins=20, alpha=0.8, edgecolor='none')
plt.axvline(estimates.mean(), linestyle='dashed', color='red')
# True population mean is 210.
print('bias is {}'.format(estimates.mean() - 210.))
print('rmse is {}'.format(np.sqrt(np.mean((estimates - 210.) ** 2))))
estimates_correct = [estimate_mean("-1 + z1 + z2 + z3 + z4") for i in xrange(1000)]
show_estimates(estimates_correct)
estimates_miss = [estimate_mean("-1 + x1 + x2 + x3 + x4") for i in xrange(1000)]
show_estimates(estimates_miss)
formula = ("-1 + (x1 + x2 + x3 + x4)**2 + I(np.log(x1)) + I(np.log(x2)) + "
"I(np.log(x3)) + I(np.log(x4))")
estimates_expanded = [estimate_mean(formula) for i in xrange(1000)]
show_estimates(estimates_expanded)
formula = "-1 + z1 + z2 + z3 + z4 + x1 + x2 + x3 + x4"
estimates_redundant = [estimate_mean(formula) for i in xrange(1000)]
show_estimates(estimates_redundant)
np.random.seed(123)
simulation = ec.data.kang_schafer.Simulation(size=2000)
x1 = simulation.covariates[simulation.treatment == 1]
x0 = simulation.covariates[simulation.treatment == 0]
%timeit weights = ec.maybe_exact_calibrate(x0, x1)[0]
np.random.seed(123)
simulation = ec.data.kang_schafer.Simulation(size=20000)
x1 = simulation.covariates[simulation.treatment == 1]
x0 = simulation.covariates[simulation.treatment == 0]
%timeit weights = ec.maybe_exact_calibrate(x0, x1)[0]
np.random.seed(123)
simulation = ec.data.kang_schafer.Simulation(size=200000)
x1 = simulation.covariates[simulation.treatment == 1]
x0 = simulation.covariates[simulation.treatment == 0]
%timeit weights = ec.maybe_exact_calibrate(x0, x1)[0]
np.random.seed(123)
simulation = ec.data.kang_schafer.Simulation(size=2000000)
x1 = simulation.covariates[simulation.treatment == 1]
x0 = simulation.covariates[simulation.treatment == 0]
%timeit weights = ec.maybe_exact_calibrate(x0, x1)[0]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model Creation
Step2: The model is set up by default with a meridional diffusion term.
Step3: Create new subprocess
Step4: Note that the model's whole state dictionary is given as input to the subprocess. If only the temperature field ebm_budyko.state['Ts'] were given, a new state dictionary would be created that holds the surface temperature under the key 'default'. That raises an error because the Budyko transport process refers to the temperature with the key 'Ts'.
Step5: Model integration & Plotting
Step6: The following code plots the current surface temperature, albedo and energy budget
Step7: The two right-hand plots show that the model is not in equilibrium. The net radiation reveals that the model currently gains heat and therefore warms up at the poles and loses heat at the equator. From the Energy plot we can see that the latitudinal energy balance is not met.
Step8: Now we can see that the latitudinal energy balance is satisfied. Each latitude gains as much heat (net radiation) as is transported out of it (diffusion transport). There is a net radiation surplus in the equator region, so more shortwave radiation is absorbed there than is emitted through longwave radiation. At the poles there is a net radiation deficit. That imbalance is compensated by the diffusive energy transport term.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import climlab
from climlab import constants as const
# model creation
ebm_budyko = climlab.EBM()
# print model states and suprocesses
print(ebm_budyko)
# create Budyko subprocess
budyko_transp = climlab.dynamics.BudykoTransport(b=3.81,
state=ebm_budyko.state,
**ebm_budyko.param)
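# Note (added, hedged): Budyko-type transport parameterizes the heating rate at each
# latitude as b * (global mean of Ts - Ts), so latitudes warmer than the global mean
# lose heat and colder ones gain it; b (in W/m2/degC) sets how strongly the transport
# relaxes the temperature field toward its global mean.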
# add the new transport subprocess
ebm_budyko.add_subprocess('budyko_transport',budyko_transp)
# remove the old diffusion subprocess
ebm_budyko.remove_subprocess('diffusion')
print(ebm_budyko)
# integrate model for a single timestep
ebm_budyko.step_forward()
# creating plot figure
fig = plt.figure(figsize=(15,10))
# Temperature plot
ax1 = fig.add_subplot(221)
ax1.plot(ebm_budyko.lat,ebm_budyko.Ts)
ax1.set_xticks([-90,-60,-30,0,30,60,90])
ax1.set_xlim([-90,90])
ax1.set_title('Surface Temperature', fontsize=14)
ax1.set_ylabel('(degC)', fontsize=12)
ax1.grid()
# Albedo plot
ax2 = fig.add_subplot(223, sharex = ax1)
ax2.plot(ebm_budyko.lat,ebm_budyko.albedo)
ax2.set_title('Albedo', fontsize=14)
ax2.set_xlabel('latitude', fontsize=10)
ax2.set_ylim([0,1])
ax2.grid()
# Net Radiation plot
ax3 = fig.add_subplot(222, sharex = ax1)
ax3.plot(ebm_budyko.lat, ebm_budyko.OLR, label='OLR',
color='cyan')
ax3.plot(ebm_budyko.lat, ebm_budyko.ASR, label='ASR',
color='magenta')
ax3.plot(ebm_budyko.lat, ebm_budyko.ASR-ebm_budyko.OLR,
label='net radiation',
color='red')
ax3.set_title('Net Radiation', fontsize=14)
ax3.set_ylabel('(W/m$^2$)', fontsize=12)
ax3.legend(loc='best')
ax3.grid()
# Energy Balance plot
net_rad = ebm_budyko.net_radiation
transport = ebm_budyko.subprocess['budyko_transport'].heating_rate['Ts']
ax4 = fig.add_subplot(224, sharex = ax1)
ax4.plot(ebm_budyko.lat, net_rad, label='net radiation',
color='red')
ax4.plot(ebm_budyko.lat, transport, label='heat transport',
color='blue')
ax4.plot(ebm_budyko.lat, net_rad+transport, label='balance',
color='black')
ax4.set_title('Energy', fontsize=14)
ax4.set_xlabel('latitude', fontsize=10)
ax4.set_ylabel('(W/m$^2$)', fontsize=12)
ax4.legend(loc='best')
ax4.grid()
plt.show()
# integrate model until solution converges
ebm_budyko.integrate_converge()
# creating plot figure
fig = plt.figure(figsize=(15,10))
# Temperature plot
ax1 = fig.add_subplot(221)
ax1.plot(ebm_budyko.lat,ebm_budyko.Ts)
ax1.set_xticks([-90,-60,-30,0,30,60,90])
ax1.set_xlim([-90,90])
ax1.set_title('Surface Temperature', fontsize=14)
ax1.set_ylabel('(degC)', fontsize=12)
ax1.grid()
# Albedo plot
ax2 = fig.add_subplot(223, sharex = ax1)
ax2.plot(ebm_budyko.lat,ebm_budyko.albedo)
ax2.set_title('Albedo', fontsize=14)
ax2.set_xlabel('latitude', fontsize=10)
ax2.set_ylim([0,1])
ax2.grid()
# Net Radiation plot
ax3 = fig.add_subplot(222, sharex = ax1)
ax3.plot(ebm_budyko.lat, ebm_budyko.OLR, label='OLR',
color='cyan')
ax3.plot(ebm_budyko.lat, ebm_budyko.ASR, label='ASR',
color='magenta')
ax3.plot(ebm_budyko.lat, ebm_budyko.ASR-ebm_budyko.OLR,
label='net radiation',
color='red')
ax3.set_title('Net Radiation', fontsize=14)
ax3.set_ylabel('(W/m$^2$)', fontsize=12)
ax3.legend(loc='best')
ax3.grid()
# Energy Balance plot
net_rad = ebm_budyko.net_radiation
transport = ebm_budyko.subprocess['budyko_transport'].heating_rate['Ts']
ax4 = fig.add_subplot(224, sharex = ax1)
ax4.plot(ebm_budyko.lat, net_rad, label='net radiation',
color='red')
ax4.plot(ebm_budyko.lat, transport, label='heat transport',
color='blue')
ax4.plot(ebm_budyko.lat, net_rad+transport, label='balance',
color='black')
ax4.set_title('Energy', fontsize=14)
ax4.set_xlabel('latitude', fontsize=10)
ax4.set_ylabel('(W/m$^2$)', fontsize=12)
ax4.legend(loc='best')
ax4.grid()
plt.show()
print('The global mean temperature is %.2f deg C.' %climlab.global_mean(ebm_budyko.Ts))
print('The modeled ice edge is at %.2f deg latitude.' %np.max(ebm_budyko.icelat))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's evaluate how much the resting membrane potential depends on input resistance, sag, and the membrane time constant
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import pandas as pd
#mypath = '/fs3/group/jonasgrp/MachineLearning/Cell_types.xlsx'
mypath = './Cell_types.xlsx'
df = pd.read_excel(io=mypath, sheetname='PFC', skiprows=1)
df.head()
df.columns
df['CellID']
for key in df.columns:
if df[key].dtype != object:
print"%-10s = %2.4f + %2.4f" %(key ,df[key].mean(), df[key].std())
means = df[ df.columns ].mean()
means
df_means = pd.DataFrame(means).T
df_means.head()
df.InputR
df['Vrest'].mean()
df['Vrest'].unique() # get NumPy array
df['Sag'].plot(marker='o', color='red'); # plots
plt.scatter(df['InputR'], df['rheobase'], color='black')
plt.xlabel('Input Resistance (M$\Omega$)'), plt.ylabel('Rheobase (pA)')
plt.xlim(0,400), plt.ylim(0,400);
x = df[['InputR', 'Sag','Tau_mb']]
y = df[['Vrest']]
# import standard regression models (sm)
import statsmodels.api as sm
K = sm.add_constant(x) # k0, k1, k2 and k3...
# get estimation
est = sm.OLS(y, K).fit() # ordinary least square regression
est.summary() # need more data for kurtosis :)
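# Added sketch: est is a statsmodels regression-results object, so we can also
# inspect the fitted coefficients and compute fitted values for the same design
# matrix K (.params and .predict are standard RegressionResults members).
print(est.params)
Vrest_fitted = est.predict(K)
print(Vrest_fitted[:5])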
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Damped, driven nonlinear pendulum
Step4: Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven harmonic oscillator. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$.
Step5: Simple pendulum
Step7: Damped pendulum
Step8: Here is an example of the output of your plot_pendulum function that should show a decaying spiral.
Step9: Use interact to explore the plot_pendulum function with different values of the parameters $a$, $b$ and $\omega_0$.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
g = 9.81 # m/s^2
l = 0.5 # length of pendulum, in meters
tmax = 50. # seconds
t = np.linspace(0, tmax, int(100*tmax))
def derivs(y, t, a, b, omega0):
    """Compute the derivatives of the damped, driven pendulum.

    Parameters
    ----------
    y : ndarray
        The solution vector at the current time t[i]: [theta[i],omega[i]].
    t : float
        The current time t[i].
    a, b, omega0: float
        The parameters in the differential equation.

    Returns
    -------
    dy : ndarray
        The vector of derivatives at t[i]: [dtheta[i],domega[i]].
    """
    theta, omega = y
    dtheta = omega
    domega = -(g/l)*np.sin(theta) - a*omega - b*np.sin(omega0*t)
    dy = [dtheta, domega]
    return dy
assert np.allclose(derivs(np.array([np.pi,1.0]), 0, 1.0, 1.0, 1.0), [1.,-1.])
def energy(y):
    """Compute the energy for the state array y.

    The state array y can have two forms:

    1. It could be an ndim=1 array of np.array([theta,omega]) at a single time.
    2. It could be an ndim=2 array where each row is the [theta,omega] at a
       single time.

    Parameters
    ----------
    y : ndarray, list, tuple
        A solution vector

    Returns
    -------
    E/m : float (ndim=1) or ndarray (ndim=2)
        The energy per mass.
    """
    y = np.asarray(y)
    theta = y.T[0]
    omega = y.T[1]
    Em = g*l*(1 - np.cos(theta)) + 0.5*l**2*omega**2
    return Em
a=np.ones((10,2))
print(a[::1,0])
#assert np.allclose(energy(np.array([np.pi,0])),g)
assert np.allclose(energy(np.ones((10,2))), np.ones(10)*energy(np.array([1,1])))
# YOUR CODE HERE
raise NotImplementedError()
# YOUR CODE HERE
raise NotImplementedError()
# YOUR CODE HERE
raise NotImplementedError()
assert True # leave this to grade the two plots and their tuning of atol, rtol.
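# Added sketch (not the graded solution): one way to use derivs with odeint is to
# integrate the simple, undamped pendulum (a = b = omega0 = 0) and check that the
# energy per mass stays nearly constant; tighter atol/rtol reduce the drift.
ic = np.array([np.pi/4, 0.0])  # initial [theta, omega]
sol = odeint(derivs, ic, t, args=(0.0, 0.0, 0.0), atol=1e-9, rtol=1e-9)
plt.plot(t, energy(sol))
plt.xlabel('t (s)')
plt.ylabel('E/m')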
def plot_pendulum(a=0.0, b=0.0, omega0=0.0):
    """Integrate the damped, driven pendulum and make a phase plot of the solution."""
# YOUR CODE HERE
raise NotImplementedError()
plot_pendulum(0.5, 0.0, 0.0)
# YOUR CODE HERE
raise NotImplementedError()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Genome version
Step2: Products
|
<ASSISTANT_TASK:>
Python Code:
bsmaploc="/Applications/bioinfo/BSMAP/bsmap-2.74/"
!curl \
ftp://ftp.ensemblgenomes.org/pub/release-32/metazoa/fasta/crassostrea_gigas/dna/Crassostrea_gigas.GCA_000297895.1.dna_sm.toplevel.fa.gz \
> /Volumes/caviar/wd/data/Crassostrea_gigas.GCAz_000297895.1.dna_sm.toplevel.fa.gz
!curl ftp://ftp.ensemblgenomes.org/pub/release-32/metazoa/fasta/crassostrea_gigas/dna/CHECKSUMS
!ls /Volumes/caviar/wd/data/
!md5 /Volumes/caviar/wd/data/Crassostrea_gigas.GCAz_000297895.1.dna_sm.toplevel.fa.gz
cd /Volumes/caviar/wd/
mkdir $(date +%F)
ls
ls /Volumes/web/nightingales/C
!curl \
http://owl.fish.washington.edu/nightingales/C_gigas/9_GATCAG_L001_R1_001.fastq.gz \
> /Volumes/caviar/wd/2016-10-11/9_GATCAG_L001_R1_001.fastq.gz
!curl \
http://owl.fish.washington.edu/nightingales/C_gigas/10_TAGCTT_L001_R1_001.fastq.gz \
> /Volumes/caviar/wd/2016-10-11/10_TAGCTT_L001_R1_001.fastq.gz
cd 2016-10-11/
!cp 9_GATCAG_L001_R1_001.fastq.gz M2.fastq.gz
!cp 10_TAGCTT_L001_R1_001.fastq.gz M3.fastq.gz
for i in ("M2","M3"):
!{bsmaploc}bsmap \
-a {i}.fastq.gz \
-d ../data/Crassostrea_gigas.GCAz_000297895.1.dna_sm.toplevel.fa \
-o bsmap_out_{i}.sam \
-p 6
for i in ("M2","M3"):
!python {bsmaploc}methratio.py \
-d ../data/Crassostrea_gigas.GCAz_000297895.1.dna_sm.toplevel.fa \
-u -z -g \
-o methratio_out_{i}.txt \
-s {bsmaploc}samtools \
bsmap_out_{i}.sam \
!head /Volumes/caviar/wd/2016-10-11/methratio_out_M2.txt
!curl https://raw.githubusercontent.com/che625/olson-ms-nb/master/scripts/mr3x.awk \
> /Users/sr320/git-repos/sr320.github.io/jupyter/scripts/mr3x.awk
!curl https://raw.githubusercontent.com/che625/olson-ms-nb/master/scripts/mr_gg.awk.sh \
> /Users/sr320/git-repos/sr320.github.io/jupyter/scripts/mr_gg.awk.sh
#first methratio files are converted to filter for CG context, 3x coverage (mr3x.awk), and reformatting (mr_gg.awk.sh).
#due to issue passing variable to awk, simple scripts were used (included in repository)
for i in ("M2","M3"):
!echo {i}
!grep "[A-Z][A-Z]CG[A-Z]" <methratio_out_{i}.txt> methratio_out_{i}CG.txt
!awk -f /Users/sr320/git-repos/sr320.github.io/jupyter/scripts/mr3x.awk methratio_out_{i}CG.txt \
> mr3x.{i}.txt
!awk -f /Users/sr320/git-repos/sr320.github.io/jupyter/scripts/mr_gg.awk.sh \
mr3x.{i}.txt > mkfmt_{i}.txt
#first methratio files are converted to filter for CG context, 3x coverage (mr3x.awk), and reformatting (mr_gg.awk.sh).
#due to issue passing variable to awk, simple scripts were used (included in repository)
for i in ("M2","M3"):
!echo {i}
!grep -i "[A-Z][A-Z]CG[A-Z]" <methratio_out_{i}.txt> methratio_out_{i}CGi.txt
!awk -f /Users/sr320/git-repos/sr320.github.io/jupyter/scripts/mr3x.awk methratio_out_{i}CGi.txt \
> mr3xi.{i}.txt
!awk -f /Users/sr320/git-repos/sr320.github.io/jupyter/scripts/mr_gg.awk.sh \
mr3xi.{i}.txt > mkfmti_{i}.txt
#maybe we need to ignore case
!md5 mkfmt_M2.txt mkfmti_M2.txt | head
#nope
!head -100 mkfmt_M2.txt
cd /Users/sr320/git-repos/sr320.github.io/jupyter
mkdir analyses
mkdir analyses/$(date +%F)
for i in ("M2","M3"):
!cp /Volumes/caviar/wd/2016-10-11/mkfmt_{i}.txt analyses/$(date +%F)/mkfmt_{i}.txt
!head analyses/$(date +%F)/*
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Where in your code does this error come from?
Step2: Stack trace
Step3: Printing during execution
Step4: Printing attributes of a variable
Step5: Most Frequent NaN Causes
Step6: Test Value
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import theano
import theano.tensor as T
x = T.vector()
y = T.vector()
z = x + x
z = z * y
f = theano.function([x, y], z)
f(np.ones((2,)), np.ones((3,)))
# TODO: finish to define the mode below
mode=...
import numpy as np
import theano
import theano.tensor as T
x = T.vector()
y = T.vector()
z = x + x
z.name = "z1"
z = z * y
z.name = "z2"
f = theano.function([x, y], z, mode=mode)
f(np.ones((2,)), np.ones((3,)))
import theano
import theano.tensor as T
from theano.tests.breakpoint import PdbBreakpoint
input, target = T.fvectors(['x', 'y'])
mse = (input - target) ** 2
# Conditional breakpoint to be activated if the total
# MSE > 100. The breakpoint will monitor the inputs,
# targets as well as the individual error values
breakpointOp = PdbBreakpoint("MSE too high")
condition = T.gt(mse.sum(), 100)
mse, monitored_input, monitored_target = breakpointOp(
condition, mse, input, target)
# Compile the theano function
fct = theano.function([input, target], mse)
# Use the function
print fct([10, 0], [10, 5]) # Will NOT activate the breakpoint
print fct([0, 0], [10, 5]) # Will activate the breakpoint
import theano
x = theano.tensor.vector()
o = theano.printing.Print("a message")(x)
f = theano.function([x], o)
d = f([3, 4])
o = theano.printing.Print("Attributes of x:", attrs=('min', 'mean', 'max'))(x)
f = theano.function([x], o)
d = f([3, 1, 4, 9])
import numpy
import theano
import theano.compile.nanguardmode
from theano import tensor as T
x = T.matrix()
w = theano.shared(numpy.random.randn(5, 7).astype(theano.config.floatX))
y = T.dot(x, w)
mode=theano.compile.nanguardmode.NanGuardMode(nan_is_error=True,
inf_is_error=True,
big_is_error=True)
fun = theano.function(
[x], y, mode=mode)
infa = numpy.tile(
(numpy.asarray(100.) ** 1000000), (3, 5))
fun(infa)
# Can also be 'off', 'ignore', 'raise', 'pdb'
theano.config.compute_test_value = 'warn'
# input which will be of shape (5, 10)
x, y = T.matrices('xy')
# provide Theano with a default test-value
x.tag.test_value = numpy.random.rand(5, 10)
y.tag.test_value = numpy.random.rand(4, 10)
x + y # warn about the shape error
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Run a pre-trained Transformer
Step2: 2. Features and resources
Step3: Gradients can be calculated using trax.fastmath.grad.
Step4: Layers
Step9: Models
Step10: Data
Step11: Using the trax.data module you can create input processing pipelines, e.g., to tokenize and shuffle your data. You create data pipelines using trax.data.Serial and they are functions that you apply to streams to create processed streams.
Step12: Supervised training
Step13: After training the model, run it like any layer to get results.
|
<ASSISTANT_TASK:>
Python Code:
#@title
# Copyright 2020 Google LLC.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import numpy as np
#@title
# Import Trax
!pip install -q -U trax
import trax
# Create a Transformer model.
# Pre-trained model config in gs://trax-ml/models/translation/ende_wmt32k.gin
model = trax.models.Transformer(
input_vocab_size=33300,
d_model=512, d_ff=2048,
n_heads=8, n_encoder_layers=6, n_decoder_layers=6,
max_len=2048, mode='predict')
# Initialize using pre-trained weights.
model.init_from_file('gs://trax-ml/models/translation/ende_wmt32k.pkl.gz',
weights_only=True)
# Tokenize a sentence.
sentence = 'It is nice to learn new things today!'
tokenized = list(trax.data.tokenize(iter([sentence]), # Operates on streams.
vocab_dir='gs://trax-ml/vocabs/',
vocab_file='ende_32k.subword'))[0]
# Decode from the Transformer.
tokenized = tokenized[None, :] # Add batch dimension.
tokenized_translation = trax.supervised.decoding.autoregressive_sample(
model, tokenized, temperature=0.0) # Higher temperature: more diverse results.
# De-tokenize,
tokenized_translation = tokenized_translation[0][:-1] # Remove batch and EOS.
translation = trax.data.detokenize(tokenized_translation,
vocab_dir='gs://trax-ml/vocabs/',
vocab_file='ende_32k.subword')
print(translation)
from trax.fastmath import numpy as fastnp
trax.fastmath.use_backend('jax') # Can be 'jax' or 'tensorflow-numpy'.
matrix = fastnp.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(f'matrix =\n{matrix}')
vector = fastnp.ones(3)
print(f'vector = {vector}')
product = fastnp.dot(vector, matrix)
print(f'product = {product}')
tanh = fastnp.tanh(product)
print(f'tanh(product) = {tanh}')
def f(x):
return 2.0 * x * x
grad_f = trax.fastmath.grad(f)
print(f'grad(2x^2) at 1 = {grad_f(1.0)}')
print(f'grad(2x^2) at -2 = {grad_f(-2.0)}')
from trax import layers as tl
# Create an input tensor x.
x = np.arange(15)
print(f'x = {x}')
# Create the embedding layer.
embedding = tl.Embedding(vocab_size=20, d_feature=32)
embedding.init(trax.shapes.signature(x))
# Run the layer -- y = embedding(x).
y = embedding(x)
print(f'shape of y = {y.shape}')
model = tl.Serial(
tl.Embedding(vocab_size=8192, d_feature=256),
tl.Mean(axis=1), # Average on axis 1 (length of sentence).
tl.Dense(2), # Classify 2 classes.
)
# You can print model structure.
print(model)
train_stream = trax.data.TFDS('imdb_reviews', keys=('text', 'label'), train=True)()
eval_stream = trax.data.TFDS('imdb_reviews', keys=('text', 'label'), train=False)()
print(next(train_stream)) # See one example.
data_pipeline = trax.data.Serial(
trax.data.Tokenize(vocab_file='en_8k.subword', keys=[0]),
trax.data.Shuffle(),
trax.data.FilterByLength(max_length=2048, length_keys=[0]),
trax.data.BucketByLength(boundaries=[ 32, 128, 512, 2048],
batch_sizes=[512, 128, 32, 8, 1],
length_keys=[0]),
trax.data.AddLossWeights()
)
train_batches_stream = data_pipeline(train_stream)
eval_batches_stream = data_pipeline(eval_stream)
example_batch = next(train_batches_stream)
print(f'shapes = {[x.shape for x in example_batch]}') # Check the shapes.
from trax.supervised import training
# Training task.
train_task = training.TrainTask(
labeled_data=train_batches_stream,
loss_layer=tl.WeightedCategoryCrossEntropy(),
optimizer=trax.optimizers.Adam(0.01),
n_steps_per_checkpoint=500,
)
# Evaluaton task.
eval_task = training.EvalTask(
labeled_data=eval_batches_stream,
metrics=[tl.WeightedCategoryCrossEntropy(), tl.WeightedCategoryAccuracy()],
n_eval_batches=20 # For less variance in eval numbers.
)
# Training loop saves checkpoints to output_dir.
output_dir = os.path.expanduser('~/output_dir/')
!rm -rf {output_dir}
training_loop = training.Loop(model,
train_task,
eval_tasks=[eval_task],
output_dir=output_dir)
# Run 2000 steps (batches).
training_loop.run(2000)
example_input = next(eval_batches_stream)[0][0]
example_input_str = trax.data.detokenize(example_input, vocab_file='en_8k.subword')
print(f'example input_str: {example_input_str}')
sentiment_log_probs = model(example_input[None, :]) # Add batch dimension.
print(f'Model returned sentiment probabilities: {np.exp(sentiment_log_probs)}')
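# Added sketch (hedged): to turn the log-probabilities into a class prediction we
# can take the argmax; in the tfds `imdb_reviews` labels, 0 is negative and 1 is
# positive (treat this mapping as an assumption if your data differs).
predicted_class = int(np.argmax(sentiment_log_probs[0]))
print(f'predicted class: {predicted_class}')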
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1) create a vector of $d$ parameters
Step2: 2) Create an array of covariates
Step3: 3) Construct the conditional intensity $\lambda(t)$
Step4: Generate the random variables
Step5: Given events at times $t_1,...,t_N$ the conditional likelihood is
Step6: The gradient is
Step7: Set an initial guess
Step8: Maximize the likelihood
Step9: Calculate a prediction of the response.
|
<ASSISTANT_TASK:>
Python Code:
N = 10000# number of observations
d = 5 # number of covariates
theta = np.random.normal(size = (d,))
X = 0.1*np.random.normal(size = (d,N))
# X = linalg.orth(X.T).T
# X = np.eye((d))
l = np.exp(np.dot(X.T,theta))
dt = 0.001 # discretization step
u = np.random.uniform(size = len(l))
y = 1*(l*dt>u)
fig, ax = plt.subplots(figsize=(12, 2))
fig = ax.matshow(np.reshape(y,(1,len(y))),cmap = 'Greys',aspect = 3)
def negLogL(theta,X,y,dt):
# calculate the intensity of the Poisson process
intensity = np.exp(np.dot(X.T,theta))
# bins with spikes vs no spikes
l = sum(intensity)*dt - sum(y*np.log(intensity))
return(l)
def grad(theta,X,y,dt):
intensity = np.exp(np.dot(X.T,theta))
g = np.dot(X,intensity)*dt - np.dot(X,y)
return(g)
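# Added note (sketch): up to an additive constant that does not depend on theta,
# the discretized negative log-likelihood is
#   -log L(theta) = sum_t lambda(t)*dt - sum_{bins with an event} log lambda(t),
# with lambda(t) = exp(theta . x(t)); its gradient is X*lambda*dt - X*y, which is
# what `grad` returns. A quick finite-difference check of the analytic gradient:
eps = 1e-6
num_grad = np.array([(negLogL(theta + eps*np.eye(d)[k], X, y, dt) -
                      negLogL(theta - eps*np.eye(d)[k], X, y, dt)) / (2*eps)
                     for k in range(d)])
print(np.allclose(num_grad, grad(theta, X, y, dt), atol=1e-4))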
theta_0 = np.random.normal(size = (d,))
opts = {'disp':True,'maxiter': 1000}
theta_MLE = optimize.minimize(negLogL, theta_0,jac = grad, args = (X,y,dt),options = opts).x
print ('Error: ' + str(np.sqrt(sum((theta_MLE - theta)**2))))
theta_MLE = optimize.minimize(negLogL, theta_0, args = (X,y,dt),options = opts).x
print ('Error: ' + str(np.sqrt(sum((theta_MLE - theta)**2))))
plt.hold(False)
plt1, = plt.plot(theta,'ro',label = 'theta true')
plt.hold(True)
plt2, = plt.plot(theta_MLE,'o',label = 'theta MLE')
plt.legend(handles = [plt1,plt2])
l_MLE = np.exp(np.dot(X.T,theta_MLE))
u = np.random.uniform(size = len(l_MLE))
y_predicted = 1*(l_MLE*dt>u)
fig, ax = plt.subplots(figsize=(12, 2))
fig = ax.matshow(np.reshape(y,(1,len(y))),cmap = 'Greys',aspect = 3)
fig, ax = plt.subplots(figsize=(12, 2))
fig = ax.matshow(np.reshape(y_predicted,(1,len(y))),cmap = 'Greys',aspect = 3)
mismatch_error = sum(np.abs(y-y_predicted))/sum(y+y_predicted)
print(mismatch_error)
fig = plt.figure(figsize = (12,2))
fig = plt.plot(l)
plt.title('True Intensity')
fig = plt.figure(figsize = (12,2))
fig = plt.plot(l_MLE)
plt.title('Estimated Intensity')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ScrollZoomToggler
Step2: MarkerCluster
Step3: Terminator
Step4: Leaflet.boatmarker
Step5: Leaflet.TextPath
|
<ASSISTANT_TASK:>
Python Code:
# This is to import the repository's version of folium ; not the installed one.
import sys, os
sys.path.insert(0,'..')
import folium
from folium import plugins
import numpy as np
import json
%load_ext autoreload
%autoreload 2
m = folium.Map([45.,3.], zoom_start=4)
plugins.ScrollZoomToggler().add_to(m)
m
N = 100
data = np.array([
np.random.uniform(low=35,high=60, size=N), # random latitudes in Europe
np.random.uniform(low=-12,high=30, size=N), # random longitudes in Europe
range(N), # popups are simple numbers
]).T
m = folium.Map([45.,3.], zoom_start=4)
plugins.MarkerCluster(data).add_to(m)
m
m = folium.Map([45.,3.], zoom_start=1)
plugins.Terminator().add_to(m)
plugins.ScrollZoomToggler().add_to(m)
m
m = folium.Map([30.,0.], zoom_start=3)
plugins.BoatMarker((34,-43), heading=45, wind_heading=150, wind_speed=45, color="#8f8").add_to(m)
plugins.BoatMarker((46,-30), heading=-20, wind_heading=46, wind_speed=25, color="#88f").add_to(m)
m
m = folium.Map([20.,0.], zoom_start=3)
wind_locations = [[59.3556, -31.99219], [55.17887, -42.89062], [47.7541, -43.94531], [38.27269, -37.96875],
[27.05913, -41.13281], [16.29905, -36.5625], [8.40717, -30.23437], [1.05463, -22.5],
[-8.75479, -18.28125], [-21.61658, -20.03906], [-31.35364, -24.25781], [-39.90974, -30.9375],
[-43.83453, -41.13281], [-47.7541, -49.92187], [-50.95843, -54.14062], [-55.9738, -56.60156]]
wind_line = folium.PolyLine(wind_locations, weight=15, color='#8EE9FF').add_to(m)
attr = {'fill': '#007DEF', 'font-weight': 'bold', 'font-size': '24'}
plugins.PolyLineTextPath(wind_line, ") ", repeat=True, offset=7, attributes=attr).add_to(m)
danger_line = folium.PolyLine([[-40.311, -31.952], [-12.086, -18.727]], weight=10, color='orange', opacity=0.8).add_to(m)
attr = {'fill': 'red'}
plugins.PolyLineTextPath(danger_line, "\u25BA", repeat=True, offset=6, attributes=attr).add_to(m)
plane_line = folium.PolyLine([[-49.38237, -37.26562], [-1.75754, -14.41406], [51.61802, -23.20312]], weight=1, color='black').add_to(m)
attr = {'font-weight':'bold', 'font-size':'24'}
plugins.PolyLineTextPath(plane_line, "\u2708 ", repeat=True, offset=8, attributes=attr).add_to(m)
line_to_new_dehli = folium.PolyLine([[46.67959447, 3.33984375],
[46.5588603, 29.53125],
[42.29356419, 51.328125],
[35.74651226, 68.5546875],
[28.65203063, 76.81640625]]).add_to(m)
line_to_hanoi = folium.PolyLine([[28.76765911, 77.60742188],
[27.83907609, 88.72558594],
[25.68113734, 97.3828125],
[21.24842224, 105.77636719]]).add_to(m)
plugins.PolyLineTextPath(line_to_new_dehli, "To New Delhi", offset=-5).add_to(m)
plugins.PolyLineTextPath(line_to_hanoi, "To Hanoi", offset=-5).add_to(m)
m
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read the height and weight data (weights_heights.csv, attached to the assignment) into a Pandas DataFrame object
Step2: Most often, the first thing to do after reading the data is to look at the first few records. This way you can catch data-reading errors (for example, if instead of 10 columns you got a single one whose name contains 9 semicolons). It also lets you get acquainted with the data and, at the very least, look at the features and their nature (quantitative, categorical, etc.).
Step3: Arguments
Step4: One effective method of exploratory data analysis is displaying pairwise dependencies between features. This creates $m \times m$ plots (m is the number of features), with histograms of the feature distributions on the diagonal and scatter plots of pairs of features off the diagonal. This can be done with the $scatter_matrix$ method of a Pandas DataFrame or with pairplot from the Seaborn library.
Step5: [3]. Build a figure that displays the pairwise dependencies of the features 'Height', 'Weight' and 'BMI'. Use the pairplot method of the Seaborn library.
Step6: Often in exploratory data analysis you need to examine how a quantitative feature depends on a categorical one (say, salary on an employee's gender). Box-and-whisker plots - boxplots from the Seaborn library - help with this. A box plot is a compact way to show the statistics of a real-valued feature (mean and quartiles) across the values of a categorical feature. It also helps spot "outliers" - observations whose value of the given real-valued feature differs strongly from the rest.
Step7: [5]. Build a scatter plot of height versus weight using the plot method of a Pandas DataFrame with the argument kind='scatter'. Give the plot a title.
Step8: Task 2. Minimizing the squared error
Step9: So, we are solving the following problem
Step10: Minimizing a quadratic error function is a relatively simple problem, since the function is convex. Many optimization methods exist for such problems. Let's look at how the error function depends on one parameter (the slope of the line) when the other parameter (the intercept) is fixed.
Step11: Now let's use an optimization method to find the "optimal" slope of the line approximating the dependence of height on weight, with the coefficient fixed at $w_0 = 50$.
Step12: When analyzing multidimensional data, one often wants to get an intuitive feel for the nature of the data through visualization. Alas, with more than 3 features such pictures cannot be drawn. In practice, to visualize data in 2D and 3D, 2 or, respectively, 3 principal components are extracted from the data (we will see exactly how this is done later in the course) and the data are displayed on a plane or in a volume.
Step13: Create objects of type matplotlib.figure.Figure (a figure) and matplotlib.axes._subplots.Axes3DSubplot (an axis).
Step14: [10]. Build a 3D plot of the error function computed in item 6 as a function of the parameters $w_0$ and $w_1$. Label the $x$ axis "Intercept", the $y$ axis "Slope", and the $z$ axis "Error".
Step15: [11]. Using the minimize method from scipy.optimize, find the minimum of the function defined in item 6 for values of the parameter $w_0$ in the range [-100,100] and $w_1$ in the range [-5, 5]. The starting point is ($w_0$, $w_1$) = (0, 0). Use the L-BFGS-B optimization method (the method argument of minimize). On the plot from item 5 of Task 1, draw the line corresponding to the found optimal values of the parameters $w_0$ and $w_1$. Label the axes and the plot.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
data = pd.read_csv('weights_heights.csv', index_col='Index')
data.plot(y='Height', kind='hist',
color='red', title='Height (inch.) distribution')
data.head(n=5)
data.plot(y='Weight', kind='hist',
color='green', title='Weight distribution')
def make_bmi(height_inch, weight_pound):
METER_TO_INCH, KILO_TO_POUND = 39.37, 2.20462
return (weight_pound / KILO_TO_POUND) / \
(height_inch / METER_TO_INCH) ** 2
data['BMI'] = data.apply(lambda row: make_bmi(row['Height'],
row['Weight']), axis=1)
sns.pairplot(data)
def weight_category(weight):
if weight < 120:
return 1
elif weight >= 150:
return 3
else:
return 2
data['weight_cat'] = data['Weight'].apply(weight_category)
ax = sns.boxplot(x='weight_cat', y='Height', data=data, )
ax.set_xlabel('Весовая категория')
ax.set_ylabel('Рост')
plt.plot()
data.plot(x='Weight', y='Height', kind='scatter', title='Зависимость роста от веса')
def error(w0, w1):
return np.sum(np.square((w1 * data['Weight'] + w0) - data['Height']))
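# Added note: `error` implements the least-squares objective
#   error(w0, w1) = sum_i ((w1 * Weight_i + w0) - Height_i)^2,
# i.e. we fit Height ~ w0 + w1 * Weight by minimizing the total squared error.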
def func(w0, w1, x):
return w1 * x + w0
plt.scatter(x=data['Weight'], y=data['Height'])
x_points = np.linspace(75, 175)
y1 = [func(60, 0.05, x) for x in x_points]
y2 = [func(50, 0.16, x) for x in x_points]
plt.plot(x_points, y1, color='red')
plt.plot(x_points, y2, color='green')
plt.xlim((75, 175))
plt.ylim((55, 80))
plt.xlabel('Вес')
plt.ylabel('Рост')
plt.title('Зависимость роста от веса')
W1 = np.linspace(-10, 10)
y = np.array([error(50, w1) for w1 in W1])
plt.plot(W1, y)
plt.xlabel('$w_1$')
plt.ylabel('Value of error')
plt.title('Зависимость ошибки от значения $w_1$')
plt.show()
import scipy.optimize as opt
error_wrap = lambda x: error(50, x)
result = opt.minimize_scalar(error_wrap, bounds=[-5, 5])
result
plt.scatter(x=data['Weight'], y=data['Height'])
x_points = np.linspace(75, 175)
y1 = [func(50, result.x, x) for x in x_points]
plt.plot(x_points, y1, color='red')
plt.xlim((75, 175))
plt.ylim((55, 80))
plt.xlabel('Вес')
plt.ylabel('Рост')
plt.title('Зависимость роста от веса')
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.gca(projection='3d') # get current axis
# Create NumPy arrays with the point coordinates along the X and Y axes.
# We use meshgrid, which builds coordinate matrices from coordinate
# vectors. Define the desired function Z(x, y).
X = np.arange(-5, 5, 0.25)
Y = np.arange(-5, 5, 0.25)
X, Y = np.meshgrid(X, Y)
Z = np.sin(np.sqrt(X**2 + Y**2))
# Finally, use the *plot_surface* method of the
# Axes3DSubplot object. Also label the axes.
surf = ax.plot_surface(X, Y, Z)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
fig = plt.figure()
ax = fig.gca(projection='3d') # get current axis
W0 = np.arange(-5, 5, 0.25)
W1 = np.arange(-5, 5, 0.25)
ERROR = np.array([[error(w0, w1) for w0 in W0] for w1 in W1])
W0, W1 = np.meshgrid(W0, W1)
surf = ax.plot_surface(W0, W1, ERROR)
ax.set_xlabel('Intercept')
ax.set_ylabel('Slope')
ax.set_zlabel('Error')
plt.show()
def error(w):
return np.sum(np.square((w[1] * data['Weight'] + w[0]) - data['Height']))
result = opt.minimize(error, [0, 0], method='L-BFGS-B')
result
plt.scatter(x=data['Weight'], y=data['Height'])
x_points = np.linspace(75, 175)
y1 = [func(result.x[0], result.x[1], x) for x in x_points]
plt.plot(x_points, y1, color='red')
plt.xlim((75, 175))
plt.ylim((55, 80))
plt.xlabel('Вес')
plt.ylabel('Рост')
plt.title('Зависимость роста от веса')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: So far the systems we have studied have been physical in the sense that they exist in the world, but they have not been physics, in the sense of what physics classes are usually about. In the next few chapters, we'll do some physics, starting with thermal systems, that is, systems where the temperature of objects changes as heat transfers from one to another.
Step2: The values of T_init, volume, and t_end come from the statement of the problem
Step3: I chose the value of r arbitrarily for now; we will figure out how to estimate it soon.
Step4: We can test it with the initial temperature of the coffee, like this
Step5: With dt=1 minute, the temperature drops by about 0.7 °C/min, at least for this value of r.
Step6: This function is similar to previous versions of run_simulation.
Step7: The result is a TimeSeries with one row per time step.
Step8: Here's what the results look like.
Step9: The temperature after 30 minutes is 72.3 °C, which is a little higher than what's stated in the problem, 70 °C.
Step10: By trial and error, we could find the value of r where the final temperature is precisely 70 °C.
Step11: Now we call root_scalar like this
Step12: The first argument is the function whose roots we want. The second is a bracket, an interval that should contain a root; root_scalar requires the function to have opposite signs at the two ends of the bracket.
Step13: If we provide a different interval, we find a different root.
Step14: If the interval doesn't contain a root, you'll get a ValueError and a message like "f(a) and f(b) must have different signs".
Step15: This is called an "error function" because it returns the difference between what we got and what we wanted, that is, the difference between the final temperature and the target of 70 °C.
Step16: The result is an error of 2.3 °C, because the final temperature with r=0.01 is about 2.3 °C higher than the 70 °C we want.
Step17: With r=0.02, the error is about -11°C, which means that the final temperature is too low. So we know that the correct value must be in between.
Step18: The first argument is the error function.
Step19: In this example, r_coffee turns out to be about 0.0115, in units of min$^{-1}$ (inverse minutes).
Step20: The final temperature is very close to 70 °C.
Step21: Exercise
|
<ASSISTANT_TASK:>
Python Code:
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
def make_system(T_init, volume, r, t_end):
return System(T_init=T_init,
T_final=T_init,
volume=volume,
r=r,
t_end=t_end,
T_env=22,
t_0=0,
dt=1)
coffee = make_system(T_init=90, volume=300, r=0.01, t_end=30)
def change_func(T, t, system):
r, T_env, dt = system.r, system.T_env, system.dt
return -r * (T - T_env) * dt
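# Added note: change_func encodes Newton's law of cooling, dT/dt = -r * (T - T_env),
# discretized with a forward Euler step: delta_T = -r * (T - T_env) * dt.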
change_func(coffee.T_init, 0, coffee)
def run_simulation(system, change_func):
t_array = linrange(system.t_0, system.t_end, system.dt)
n = len(t_array)
series = TimeSeries(index=t_array)
series.iloc[0] = system.T_init
for i in range(n-1):
t = t_array[i]
T = series.iloc[i]
series.iloc[i+1] = T + change_func(T, t, system)
system.t_end = t_array[-1]
system.T_final = series.iloc[-1]
return series
results = run_simulation(coffee, change_func)
results.head()
results.plot(label='coffee')
decorate(xlabel='Time (minute)',
ylabel='Temperature (C)',
title='Coffee Cooling')
coffee.T_final
def func(x):
return (x-1) * (x-2) * (x-3)
res = root_scalar(func, bracket=[1.5, 2.5])
res
res.root
res = root_scalar(func, bracket=[2.5, 3.5])
res.root
def error_func(r, system):
system.r = r
results = run_simulation(system, change_func)
return system.T_final - 70
coffee = make_system(T_init=90, volume=300, r=0.01, t_end=30)
error_func(0.01, coffee)
error_func(0.02, coffee)
res = root_scalar(error_func, coffee, bracket=[0.01, 0.02])
res.flag
r_coffee = res.root
r_coffee
coffee.r = res.root
run_simulation(coffee, change_func)
coffee.T_final
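# Added sketch: Newton's law of cooling has the closed-form solution
# T(t) = T_env + (T_init - T_env) * exp(-r * t), so the decay rate that hits
# exactly 70 C after t_end minutes can be computed directly and compared with
# the root-finding result (small differences come from the Euler discretization).
import numpy as np
r_analytic = -np.log((70 - coffee.T_env) / (coffee.T_init - coffee.T_env)) / coffee.t_end
r_analytic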
# Solution
milk = make_system(T_init=5, t_end=15, r=0.1, volume=50)
results_milk = run_simulation(milk, change_func)
milk.T_final
# Solution
results_milk.plot(color='C1', label='milk')
decorate(xlabel='Time (minutes)',
ylabel='Temperature (C)')
# Solution
def error_func2(r, system):
system.r = r
results = run_simulation(system, change_func)
return system.T_final - 20
# Solution
res = root_scalar(error_func2, milk, bracket=[0.1, 0.2])
res
# Solution
run_simulation(milk, change_func)
milk.T_final
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note
Step2: Lesson
Step3: Project 1
Step4: We'll create three Counter objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
Step5: TODO
Step6: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
Step7: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
Step8: Examine the ratios you've calculated for a few words
Step9: Looking closely at the values you just calculated, we see the following
Step10: Examine the new ratios you've calculated for the same words from before
Step11: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.
Step12: End of Project 1.
Step13: Project 2
Step14: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074
Step15: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer.
Step16: TODO
Step17: Run the following cell. It should display (1, 74074)
Step18: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
Step20: TODO
Step21: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.
Step23: TODO
Step24: Run the following two cells. They should print out'POSITIVE' and 1, respectively.
Step25: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.
Step30: End of Project 2.
Step31: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1.
Step32: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
Step33: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
Step34: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network.
Step35: That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network.
Step36: With a learning rate of 0.001, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
Step41: Project 4
Step42: Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1.
Step43: That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.
Step44: End of Project 4.
Step48: Project 5
Step49: Run the following cell to recreate the network and train it once again.
Step50: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
Step51: End of Project 5.
Step55: Project 6
Step56: Run the following cell to train your network with a small polarity cutoff.
Step57: And run the following cell to test its performance. It should be
Step58: Run the following cell to train your network with a much larger polarity cutoff.
Step59: And run the following cell to test its performance.
Step60: End of Project 6.
|
<ASSISTANT_TASK:>
Python Code:
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
from collections import Counter
import numpy as np
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for review, label in zip(reviews, labels):
words = review.split(' ')
if label == 'POSITIVE':
positive_counts.update(words)
else:
negative_counts.update(words)
total_counts.update(words)
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
# Create Counter object to store positive/negative ratios
pos_neg_ratios = Counter()
pos_neg_ratios.update({
word: positive_counts[word] / float(negative_counts[word]+1)
for word, val in positive_counts.items() if val > 100
})
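# Added note: the "+1" in the denominator is a simple smoothing term that prevents
# division by zero for words that never appear in negative reviews.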
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
for x, val in pos_neg_ratios.items():
pos_neg_ratios[x] = np.log(val)
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
# TODO: Create set named "vocab" containing all of the words from all of the reviews
vocab = set(total_counts.keys())
vocab_size = len(vocab)
print(vocab_size)
from IPython.display import Image
Image(filename='sentiment_network_2.png')
# TODO: Create layer_0 matrix with dimensions 1 by vocab_size, initially filled with zeros
layer_0 = np.array([np.zeros(vocab_size, dtype=int)])
layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png')
# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {x: i for i, x in enumerate(vocab)}
word2index
def update_input_layer(review):
    """Modify the global layer_0 to represent the vector form of review.

    The element at a given index of layer_0 should represent
    how many times the given word occurs in the review.

    Args:
        review(string) - the string of the review
    Returns:
        None
    """
global layer_0
# clear out previous state by resetting the layer to be all 0s
layer_0 *= 0
for word in review.split(' '):
layer_0[0,word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
def get_target_for_label(label):
    """Convert a label to `0` or `1`.

    Args:
        label(string) - Either "POSITIVE" or "NEGATIVE".
    Returns:
        `0` or `1`.
    """
return 0 if label == 'NEGATIVE' else 1
labels[0]
get_target_for_label(labels[0])
labels[1]
get_target_for_label(labels[1])
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
        """Create a SentimentNetwork with the given settings.

        Args:
            reviews(list) - List of reviews used for training
            labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
            hidden_nodes(int) - Number of nodes to create in the hidden layer
            learning_rate(float) - Learning rate to use while training
        """
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab), hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set(' '.join(reviews).split(' '))
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set(labels)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {x: i for i, x in enumerate(self.review_vocab)}
# Create a dictionary of labels mapped to index positions
self.label2index = {x: i for i, x in enumerate(self.label_vocab)}
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1, input_nodes))
def update_input_layer(self, review):
Update input layer with word counts
self.layer_0 *= 0
for word in review.split(' '):
if word in self.word2index:
self.layer_0[0, self.word2index[word]] += 1
def get_target_for_label(self,label):
return self.label2index[label]
def sigmoid(self, x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self, output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
n_records = len(training_reviews)
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i, (review, label) in enumerate(zip(training_reviews, training_labels)):
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
self.update_input_layer(review)
label = self.label2index[label]
layer_0 = self.layer_0 # (1, n_i)
layer_1 = self.layer_0 @ self.weights_0_1 # (1, n_i)@(n_i, n_h) = (1, n_h)
# the hidden layer doesn't change the input
hidden_output = layer_1
output = self.sigmoid(hidden_output @ self.weights_1_2)
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
error = label - output # output error
output_error_term = error * self.sigmoid_output_2_derivative(output)
hidden_error_term = self.weights_1_2 @ output_error_term # * 1
self.weights_0_1 += self.learning_rate * (layer_0.T @ hidden_error_term.T)
self.weights_1_2 += self.learning_rate * (layer_1.T @ output_error_term.T)
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
if abs(error[0,0]) < 0.5:
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
print("Progress: %.4f%% Speed(reviews/sec): %.1f "
"Correct: %s #Trained: %s "
"Training Accuracy: %s" % (
100 * i/n_records, reviews_per_second,
correct_so_far, i+1,
100 * correct_so_far/(i+1)
), end="\r")
if(i and i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
Returns a POSITIVE or NEGATIVE prediction for the given review.
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
self.update_input_layer(review.lower())
# input to (and output from) the hidden layer
hidden_output = self.layer_0 @ self.weights_0_1
# final output
output = self.sigmoid(hidden_output @ self.weights_1_2)
return 'POSITIVE' if output[0] > 0.5 else 'NEGATIVE'
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.test(reviews[-1000:],labels[-1000:])
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common()
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab), hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set(' '.join(reviews).split(' '))
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set(labels)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {x: i for i, x in enumerate(self.review_vocab)}
# Create a dictionary of labels mapped to index positions
self.label2index = {x: i for i, x in enumerate(self.label_vocab)}
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1, input_nodes))
def update_input_layer(self, review):
Update input layer with word counts
self.layer_0 *= 0
for word in set(review.split(' ')):
if word in self.word2index:
self.layer_0[0, self.word2index[word]] = 1
def get_target_for_label(self,label):
return self.label2index[label]
def sigmoid(self, x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self, output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
n_records = len(training_reviews)
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i, (review, label) in enumerate(zip(training_reviews, training_labels)):
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
self.update_input_layer(review)
label = self.label2index[label]
layer_0 = self.layer_0 # (1, n_i)
layer_1 = self.layer_0 @ self.weights_0_1 # (1, n_i)@(n_i, n_h) = (1, n_h)
# the hidden layer doesn't change the input
hidden_output = layer_1
output = self.sigmoid(hidden_output @ self.weights_1_2)
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
error = label - output # output error
output_error_term = error * self.sigmoid_output_2_derivative(output)
hidden_error_term = self.weights_1_2 @ output_error_term # * 1
self.weights_0_1 += self.learning_rate * (layer_0.T @ hidden_error_term.T)
self.weights_1_2 += self.learning_rate * (layer_1.T @ output_error_term.T)
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
if abs(error[0,0]) < 0.5:
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
print("Progress: %.4f%% Speed(reviews/sec): %.1f "
"Correct: %s #Trained: %s "
"Training Accuracy: %.5f" % (
100 * i/n_records, reviews_per_second,
correct_so_far, i+1,
100 * correct_so_far/(i+1)
), end="\r")
if(i and i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
Returns a POSITIVE or NEGATIVE prediction for the given review.
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
self.update_input_layer(review.lower())
# input to (and output from) the hidden layer
hidden_output = self.layer_0 @ self.weights_0_1
# final output
output = self.sigmoid(hidden_output @ self.weights_1_2)
return 'POSITIVE' if output[0] > 0.5 else 'NEGATIVE'
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
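# Quick check of the equivalence shown above: summing only the selected weight rows
# matches the full dot product with the mostly-zero layer_0 vector.
assert np.allclose(layer_0.dot(weights_0_1), layer_1)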
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab), hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set(' '.join(reviews).split(' '))
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set(labels)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {x: i for i, x in enumerate(self.review_vocab)}
# Create a dictionary of labels mapped to index positions
self.label2index = {x: i for i, x in enumerate(self.label_vocab)}
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Generate weights
self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
self.weights_1_2 = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.layer_1 = np.zeros((1, self.hidden_nodes))
def get_target_for_label(self,label):
return self.label2index[label]
def sigmoid(self, x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self, output):
return output * (1 - output)
def train(self, training_reviews_raw, training_labels):
training_reviews = [[self.word2index[x] for x in set(review.split(' '))]
for review in training_reviews_raw]
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct = 0
n_records = len(training_reviews)
# Remember when we started for printing time statistics
start = time.time()
layer_1 = self.layer_1
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i, (words, label) in enumerate(zip(training_reviews, training_labels)):
label = self.label2index[label]
layer_1 *= 0
for index in words:
# 1 is the input value
# shape of weights_0_1: (vocab_size, n_hidden_nodes)
# .[index] get the weights for this specific word
layer_1 += self.weights_0_1[index]
# the hidden layer doesn't change the input
hidden_output = layer_1
output = self.sigmoid(hidden_output @ self.weights_1_2)
error = label - output # output error
output_error_term = error * self.sigmoid_output_2_derivative(output)
hidden_error_term = (self.weights_1_2 @ output_error_term).T # * 1
for index in words:
self.weights_0_1[index] += self.learning_rate * hidden_error_term[0]
self.weights_1_2 += self.learning_rate * (layer_1.T @ output_error_term)
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
if abs(error[0,0]) < 0.5:
correct += 1
self.print_progress(n_records, start, correct, i)
def print_progress(self, n_records, start, correct, i):
# For debug purposes, print out our prediction accuracy and speed
# throughout the process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
print("\rProgress:%2.2f%% Speed:%4.2f/s Correct:%5d #Trained:%5s Accuracy:%2.4f" %
(100 * i/n_records, reviews_per_second, correct, i+1, 100 * correct/(i+1)),
end='')
if i != 0 and i % 2500 == 0:
print('\n', end='')
def test(self, testing_reviews, testing_labels):
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
n_records = len(testing_reviews)
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(n_records):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
self.print_progress(n_records, start, correct, i)
def run(self, review):
Returns a POSITIVE or NEGATIVE prediction for the given review.
layer_1 = self.layer_1
layer_1 *= 0
for word in set(review.lower().split(' ')):
if word in self.word2index:
layer_1 += self.weights_0_1[self.word2index[word]]
# final output
output = self.sigmoid(layer_1 @ self.weights_1_2)
# round to the nearest label
label_index = 0 if output[0,0] < 0.5 else 1
return self.label_vocab[label_index]
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
print('')
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:], labels[-1000:])
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
# TODO: -Copy the SentimentNetwork class from Project 5 lesson
# -Modify it according to the above instructions
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels,
hidden_nodes=10, learning_rate=0.1,
min_count=10, polarity_cutoff=1):
Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels, min_count, polarity_cutoff)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab), hidden_nodes, 1, learning_rate)
# def pre_process_data(self, reviews, labels, min_count, polarity_cutoff):
# ## ----------------------------------------
# ## New for Project 6: Calculate positive-to-negative ratios for words before
# # building vocabulary
# #
# positive_counts = Counter()
# negative_counts = Counter()
# total_counts = Counter()
# for i in range(len(reviews)):
# if(labels[i] == 'POSITIVE'):
# for word in reviews[i].split(" "):
# positive_counts[word] += 1
# total_counts[word] += 1
# else:
# for word in reviews[i].split(" "):
# negative_counts[word] += 1
# total_counts[word] += 1
# pos_neg_ratios = Counter()
# for term,cnt in list(total_counts.most_common()):
# if(cnt >= 50):
# pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
# pos_neg_ratios[term] = pos_neg_ratio
# for word,ratio in pos_neg_ratios.most_common():
# if(ratio > 1):
# pos_neg_ratios[word] = np.log(ratio)
# else:
# pos_neg_ratios[word] = -np.log((1 / (ratio + 0.01)))
# #
# ## end New for Project 6
# ## ----------------------------------------
# # populate review_vocab with all of the words in the given reviews
# review_vocab = set()
# for review in reviews:
# for word in review.split(" "):
# ## New for Project 6: only add words that occur at least min_count times
# # and for words with pos/neg ratios, only add words
# # that meet the polarity_cutoff
# if(total_counts[word] > min_count):
# if(word in pos_neg_ratios.keys()):
# if((pos_neg_ratios[word] >= polarity_cutoff) or (pos_neg_ratios[word] <= -polarity_cutoff)):
# review_vocab.add(word)
# else:
# review_vocab.add(word)
# # Convert the vocabulary set to a list so we can access words via indices
# self.review_vocab = list(review_vocab)
# # populate label_vocab with all of the words in the given labels.
# label_vocab = set()
# for label in labels:
# label_vocab.add(label)
# # Convert the label vocabulary set to a list so we can access labels via indices
# self.label_vocab = list(label_vocab)
# # Store the sizes of the review and label vocabularies.
# self.review_vocab_size = len(self.review_vocab)
# self.label_vocab_size = len(self.label_vocab)
# # Create a dictionary of words in the vocabulary mapped to index positions
# self.word2index = {}
# for i, word in enumerate(self.review_vocab):
# self.word2index[word] = i
# # Create a dictionary of labels mapped to index positions
# self.label2index = {}
# for i, label in enumerate(self.label_vocab):
# self.label2index[label] = i
def pre_process_data(self, reviews, labels, min_count, polarity_cutoff):
# Convert the label vocabulary set to a list so we can access labels via indices
label_vocab = set(labels)
self.label_vocab = list(label_vocab)
self.label_vocab_size = len(self.label_vocab)
pos_count, neg_count, total_count = Counter(), Counter(), Counter()
n_pos, n_neg = 0, 0
for review, label in zip(reviews, labels):
if label == 'POSITIVE':
c = pos_count
n_pos += 1
else:
c = neg_count
n_neg += 1
for word in review.split(' '):
c[word] += 1
total_count[word] += 1
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = [
x for x, count in total_count.items()
if (count > min_count and
abs(np.log(
((pos_count[x] + 0.5) / n_pos) /
((neg_count[x] + 0.5) / n_neg)
)) > polarity_cutoff)
]
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {x: i for i, x in enumerate(self.review_vocab)}
# Create a dictionary of labels mapped to index positions
self.label2index = {x: i for i, x in enumerate(self.label_vocab)}
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Generate weights
self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
self.weights_1_2 = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.layer_1 = np.zeros((1, self.hidden_nodes))
def get_target_for_label(self,label):
return self.label2index[label]
def sigmoid(self, x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self, output):
return output * (1 - output)
def train(self, training_reviews_raw, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews_raw) == len(training_labels))
training_reviews = [
[self.word2index[x]
for x in set(review.split(' '))
if x in self.word2index]
for review in training_reviews_raw
]
# Keep track of correct predictions to display accuracy during training
correct = 0
n_records = len(training_reviews)
# Remember when we started for printing time statistics
start = time.time()
layer_1 = self.layer_1
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i, (words, label) in enumerate(zip(training_reviews, training_labels)):
label = self.label2index[label]
layer_1 *= 0
for index in words:
# 1 is the input value
# shape of weights_0_1: (vocab_size, n_hidden_nodes)
# .[index] get the weights for this specific word
layer_1 += self.weights_0_1[index]
# the hidden layer doesn't change the input
hidden_output = layer_1
output = self.sigmoid(hidden_output @ self.weights_1_2)
error = label - output # output error
output_error_term = error * self.sigmoid_output_2_derivative(output)
hidden_error_term = (self.weights_1_2 @ output_error_term).T # * 1
for index in words:
self.weights_0_1[index] += self.learning_rate * hidden_error_term[0]
self.weights_1_2 += self.learning_rate * (layer_1.T @ output_error_term)
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
if abs(error[0,0]) < 0.5:
correct += 1
self.print_progress(n_records, start, correct, i)
def print_progress(self, n_records, start, correct, i):
# For debug purposes, print out our prediction accuracy and speed
# throughout the process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
if i % 100 == 0 or i + 1 == n_records:
print("Progress:%2.2f%% Speed:%4.2f/s Correct:%5d #Trained:%5s Accuracy:%2.4f" %
(100 * i/n_records, reviews_per_second, correct, i+1, 100 * correct/(i+1)),
end='\r')
def test(self, testing_reviews, testing_labels):
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
n_records = len(testing_reviews)
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(n_records):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
self.print_progress(n_records, start, correct, i)
def run(self, review):
Returns a POSITIVE or NEGATIVE prediction for the given review.
layer_1 = self.layer_1
layer_1 *= 0
for word in set(review.lower().split(' ')):
if word in self.word2index:
layer_1 += self.weights_0_1[self.word2index[word]]
# final output
output = self.sigmoid(layer_1 @ self.weights_1_2)
# round to the nearest label
label_index = 0 if output[0,0] < 0.5 else 1
return self.label_vocab[label_index]
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],
mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize,
color=colors_list))
p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color")
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Are we dealing with form or metric information here?
Step2: Revise and edit to best convey the scientific result.
|
<ASSISTANT_TASK:>
Python Code:
import pickle
data = pickle.load(open('data/correlation_map.pkl', 'rb'))
data.keys()
type(data['excitation energy'])
data['excitation energy'].shape
data['correlation'].shape
import matplotlib
%matplotlib inline
matplotlib.style.use('ggplot')
import matplotlib.pyplot as plt
plt.imshow(data['correlation'])
matplotlib.__version__
from matplotlib.cm import magma, inferno, plasma, viridis
# from colormaps import magma, inferno, plasma, viridis
correlation = data['correlation']
excitation = data['excitation energy']
emission = data['emission energy']
plt.imshow(correlation,
origin='lower',
cmap=magma)
plt.imshow(correlation,
origin='lower',
extent=[excitation.min(), excitation.max(), emission.min(), emission.max()],
cmap=magma)
f, ax = plt.subplots(1, 1, figsize=(5, 5))
map0 = ax.imshow(correlation,
origin='lower',
extent=[excitation.min(), excitation.max(), emission.min(), emission.max()],
cmap=magma)
ax.plot(excitation, excitation)
ax.set_xlabel('Excitation Energy (eV)')
f.colorbar(map0, ax=ax)
f, ax = plt.subplots(1, 1, figsize=(5, 5))
ax.imshow(correlation,
origin='lower',
extent=[excitation.min(), excitation.max(), emission.min(), emission.max()],
cmap=viridis)
ax.plot(excitation, excitation)
ax.set_xlabel('Excitation Energy (eV)')
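# Optional finishing touches for this panel (assuming the rows of `correlation`
# correspond to emission energy): label the y-axis and attach a colorbar.
ax.set_ylabel('Emission Energy (eV)')
f.colorbar(ax.images[0], ax=ax)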
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Training a multi-layer perceptron
Step2: Neural networks tend to perform better when the inputs are scaled to have zero mean and unit variance. Use sklearn.preprocessing.StandardScaler to appropriately scale the training and test sets.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import expit
from sklearn import datasets, mixture
xs = np.linspace(-5, 5)
fig = plt.figure(figsize=(20, 5))
## Plot relu
ax1 = fig.add_subplot(1, 3, 1)
ax1.plot(xs, np.maximum(0, xs))
## Plot sigmoid
ax2 = fig.add_subplot(1, 3, 2)
ax2.plot(xs, expit(xs))
## Plot tanh
ax3 = fig.add_subplot(1, 3, 3)
ax3.plot(xs, np.tanh(xs))
plt.show()
iris = datasets.load_iris()
xs = iris.data[:, 0:2]
ys = iris.target
X, y = datasets.make_classification(n_samples=250, n_features=2, n_informative=2, n_redundant=0, n_classes=3, n_clusters_per_class=1)
gmm = mixture.GaussianMixture(n_components=3).fit(X, y)
xx, yy = np.meshgrid(np.arange(-5, 5, 0.2), np.arange(-5, 5, 0.2))
Z = gmm.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Accent, marker='.')
plt.contourf(xx, yy, Z, cmap=plt.cm.Accent, alpha=.6)
plt.show()
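# Step2 of the description asks for feature scaling, which is not implemented above;
# a minimal sketch using scikit-learn, assuming a simple train/test split of X and y:
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_train)        # fit on the training set only
X_train_scaled = scaler.transform(X_train)    # zero mean, unit variance
X_test_scaled = scaler.transform(X_test)      # reuse the training statistics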
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Scatter plots
Step2: Histograms
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(15,10))
ax = fig.add_subplot(111)
## the data
N = 5
menMeans = [18, 35, 30, 35, 27]
menStd = [2, 3, 4, 1, 2]
womenMeans = [25, 32, 34, 20, 25]
womenStd = [3, 5, 2, 3, 3]
## necessary variables
ind = np.arange(N) # the x locations for the groups
width = 0.35 # the width of the bars
## the bars
rects1 = ax.bar(ind, menMeans, width,
color='black',
yerr=menStd,
error_kw=dict(elinewidth=2,ecolor='red'))
rects2 = ax.bar(ind+width, womenMeans, width,
color='red',
yerr=womenStd,
error_kw=dict(elinewidth=2,ecolor='black'))
# axes and labels
ax.set_xlim(-width,len(ind)+width)
ax.set_ylim(0,45)
ax.set_ylabel('Scores')
ax.set_title('Scores by group and gender')
xTickMarks = ['Group'+str(i) for i in range(1,6)]
ax.set_xticks(ind+width)
xtickNames = ax.set_xticklabels(xTickMarks)
plt.setp(xtickNames, rotation=45, fontsize=10)
## add a legend
ax.legend( (rects1[0], rects2[0]), ('Men', 'Women') )
plt.show()
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure(figsize=(15,7))
ax1 = fig.add_subplot(121)
## the data
N=1000
x = np.random.randn(N)
y = np.random.randn(N)
## left panel
ax1.scatter(x,y,color='blue',s=5,edgecolor='none')
ax1.set_aspect(1./ax1.get_data_ratio()) # make axes square
## right panel
ax2 = fig.add_subplot(122)
props = dict(alpha=0.5, edgecolors='none' )
handles = []
colors = ['blue', 'green', 'magenta', 'cyan']
for color in colors:
x = np.random.randn(N)
y = np.random.randn(N)
s = np.random.randint(50,200)
handles.append(ax2.scatter(x, y, c=color, s=s, **props))
ax2.set_ylim([-5,11])
ax2.set_xlim([-5,11])
ax2.legend(handles, colors)
ax2.grid(True)
ax2.set_aspect(1./ax2.get_data_ratio())
plt.show()
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(15,10))
ax = fig.add_subplot(111)
x = np.random.normal(0,1,1000)
numBins = 50
ax.hist(x,numBins,color='green',alpha=0.8)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Elves in accounting are thankful for your help; one of them even offers you a starfish coin they had left over from a past vacation. They offer you a second one if you can find three numbers in your expense report that meet the same criteria.
|
<ASSISTANT_TASK:>
Python Code:
for a,b in itertools.permutations(list(map(int, data)), 2):
if a+b == 2020:
print(a*b)
break
for a,b,c in itertools.permutations(list(map(int, data)), 3):
if a+b+c == 2020:
print(a*b*c)
break
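# A possible refinement: permutations re-checks every ordering of the same numbers,
# so combinations does the same job with fewer iterations. `data` is assumed to be
# the list of puzzle-input lines used above.
from itertools import combinations

entries = list(map(int, data))
for triple in combinations(entries, 3):
    if sum(triple) == 2020:
        print(triple[0] * triple[1] * triple[2])
        break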
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import section specific modules
Step2: 3.3 Horizontal Coordinates (ALT,AZ)
Step3: Figure 3.3.3
Step4: Figure 3.3.4
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
from IPython.display import HTML
HTML('../style/code_toggle.html')
import ephem
import matplotlib
%pylab inline
pylab.rcParams['figure.figsize'] = (15, 10)
#Creating the observer: KAT-7
KAT7 = ephem.Observer()
KAT7.lat = '-30:43:17'
KAT7.lon = '21:25:40.08'
KAT7.elevation = 0.0
KAT7.date = '2016/5/30 00:00:00' #UTC
#Creating the celestial bodies
star_names = np.array(["Rigel","Thuban","Mimosa","Procyon","Sirius","Achernar","Menkar","Zaurak","Aldebaran","Betelgeuse"])
star_objects = np.empty((len(star_names),),dtype=object)
for k in xrange(len(star_names)):
star_objects[k] = ephem.star(star_names[k],KAT7)
#Creating the time-strings at which we observe
hours = np.empty((96,),dtype=object)
minutes = np.empty((96,),dtype=object)
alt_az_mat = np.zeros((len(star_names),len(hours)+1,2),dtype=float) #(sources,hours,horz_coord)
hours_c = 0
for k in xrange(len(hours)):
if k % 4 == 0:
if hours_c < 10:
hours[k] = '0'+str(hours_c)
else:
hours[k] = str(hours_c)
minutes[k] = "00"
elif k % 4 == 1:
if hours_c < 10:
hours[k] = '0'+str(hours_c)
else:
hours[k] = str(hours_c)
minutes[k] = "15"
elif k % 4 == 2:
if hours_c < 10:
hours[k] = '0'+str(hours_c)
else:
hours[k] = str(hours_c)
minutes[k] = "30"
elif k % 4 == 3:
if hours_c < 10:
hours[k] = '0'+str(hours_c)
else:
hours[k] = str(hours_c)
hours_c = hours_c + 1
minutes[k] = "45"
#Compute the alt/az for different stars observed by KAT-7 at different times on 2016/5/30
for k in xrange(len(hours)):
#Set new time
n_date = '2016/5/30 ' + hours[k] + ':' + minutes[k] + ':00'
KAT7.date = n_date
#Calculate new alt/az
for j in xrange(len(star_names)):
star_objects[j].compute(KAT7)
alt_az_mat[j,k,0] = float(star_objects[j].alt)
alt_az_mat[j,k,1] = float(star_objects[j].az)
#Copy first value to last value
alt_az_mat[:,-1,:] = alt_az_mat[:,0,:]
time_v = np.linspace(0,24,len(hours)+1,endpoint=True)
#Plot alt
matplotlib.rcParams.update({'font.size': 13.75})
fig, ax = plt.subplots()
c = ["r","b","g","y","m","c","k"]
l = ["-","--"]
l_ind = 0
c_ind = 0
for k in xrange(len(star_names)):
if c_ind == 7:
c_ind = 0
l_ind = 1
mask = np.logical_not(np.logical_and(alt_az_mat[k,:,0]*(180/np.pi)>-5,alt_az_mat[k,:,0]*(180/np.pi)<5))
new_curve_y = alt_az_mat[k,mask,0]*(180/np.pi)
new_curve_x = time_v[mask]
ax.plot(new_curve_x,new_curve_y,c[c_ind]+l[l_ind],label=star_names[k],lw=2,zorder=k)
c_ind = c_ind +1
ax.fill_between(time_v, -5, 5, facecolor='k',alpha=1,zorder=k+1)
ax.annotate("HORIZON", xy = (11.5,5), xytext=(11.5, 15),arrowprops=dict(facecolor="b", shrink=1))
ax.legend()
ax.set_xlim([0,24])
ax.set_ylim([-90,90])
ticks = np.array([-90,-80,-70,-60,-50,-40,-30,-20,-10,0,10,20,30,40,50,60,70,80,90])
plt.yticks(ticks)
ticks = np.array([0,2,4,6,8,10,12,14,16,18,20,22,24])
plt.xticks(ticks)
plt.xlabel("UTC [$h$]")
plt.ylabel("Altitude [$^{\circ}$]")
plt.title("KAT-7: 2016/5/30")
labels = [item.get_text() for item in ax.get_yticklabels()]
labels = np.array(["-90$^{\circ}$","-80$^{\circ}$","-70$^{\circ}$","-60$^{\circ}$","-50$^{\circ}$","-40$^{\circ}$","-30$^{\circ}$","-20$^{\circ}$","-10$^{\circ}$","0$^{\circ}$","10$^{\circ}$","20$^{\circ}$","30$^{\circ}$","40$^{\circ}$","50$^{\circ}$","60$^{\circ}$","70$^{\circ}$","80$^{\circ}$","90$^{\circ}$"])
ax.set_yticklabels(labels)
ax.grid('on')
#Plot az
matplotlib.rcParams.update({'font.size': 13.75})
fig, ax = plt.subplots()
c = ["r","b","g","y","m","c","k"]
l = ["-","--"]
l_ind = 0
c_ind = 0
for i in xrange(10):
if c_ind == 7:
c_ind = 0
l_ind = 1
plt.plot(time_v,alt_az_mat[i,:,1]*(180/np.pi),c[c_ind]+l[l_ind],lw=2,label=star_names[i])
c_ind = c_ind +1
ax.legend()
ax.set_xlim([0,24])
ax.set_ylim([0,360])
ticks = np.array([0,60,120,180,240,300,360])
plt.yticks(ticks)
ticks = np.array([0,2,4,6,8,10,12,14,16,18,20,22,24])
plt.xticks(ticks)
plt.xlabel("UTC [$h$]")
plt.ylabel("Azimuth [$^{\circ}$]")
plt.title("KAT-7: 2016/5/30")
labels = [item.get_text() for item in ax.get_yticklabels()]
labels = np.array(["0$^{\circ}$","60$^{\circ}$","120$^{\circ}$","180$^{\circ}$","240$^{\circ}$","300$^{\circ}$","360$^{\circ}$"])
ax.set_yticklabels(labels)
ax.grid('on')
#Preliminaries
matplotlib.rcParams.update({'font.size': 13.75})
observatories = ["LOFAR","KAT7","MWA","VLA","ALMA","GMRT"]
lat_v = ["52:54:32","-30:43:17","-26:42:12","34:04:43","-23:01:09","19:05:47"]
lon_v = ["06:52:08","21:25:40.08","116:40:16","-107:37:05","-67:45:12","74:02:59"]
alt_az = np.zeros((len(observatories),2),dtype=float)
#Loading different observatories and calculating alt/az of Betelgeuse for each of them
for k in xrange(len(observatories)):
obs = ephem.Observer()
obs.lat = lat_v[k]
obs.lon = lon_v[k]
obs.elevation = 0.0
obs.date = '2016/5/30 00:00:00' #UTC
betelgeuse = ephem.star("Betelgeuse",obs)
alt_az[k,0] = float(betelgeuse.alt)
alt_az[k,1] = float(betelgeuse.az)
#Plotting
cluster = ['o','^','>','s','*','v']
col = ['b','r','g','k','c','m']
fig, ax = plt.subplots()
for xp, yp, m, n, col_v in zip(alt_az[:,0]*(180/np.pi), alt_az[:,1]*(180/np.pi), cluster, observatories,col):
ax.plot([xp],[yp], marker=m, c = col_v, label = n, markersize = 20, linestyle='None')
ax.legend(numpoints=1)
ax.set_xlim([-90,90])
ax.set_ylim([0,360])
ticks = np.array([0,60,120,180,240,300,360])
plt.yticks(ticks)
ticks = np.array([-90,-80,-70,-60,-50,-40,-30,-20,-10,0,10,20,30,40,50,60,70,80,90])
plt.xticks(ticks)
labels = [item.get_text() for item in ax.get_yticklabels()]
labels = np.array(["0$^{\circ}$","60$^{\circ}$","120$^{\circ}$","180$^{\circ}$","240$^{\circ}$","300$^{\circ}$","360$^{\circ}$"])
ax.set_yticklabels(labels)
labels = [item.get_text() for item in ax.get_xticklabels()]
labels = np.array(["-90$^{\circ}$","-80$^{\circ}$","-70$^{\circ}$","-60$^{\circ}$","-50$^{\circ}$","-40$^{\circ}$","-30$^{\circ}$","-20$^{\circ}$","-10$^{\circ}$","0$^{\circ}$","10$^{\circ}$","20$^{\circ}$","30$^{\circ}$","40$^{\circ}$","50$^{\circ}$","60$^{\circ}$","70$^{\circ}$","80$^{\circ}$","90$^{\circ}$"])
ax.set_xticklabels(labels)
plt.xlabel("Altitude [$^{\circ}$]")
plt.ylabel("Azimuth [$^{\circ}$]")
plt.title("Betelgeuse: 2016/5/30 - 00:00:00 UTC")
ax.grid('on')
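# pyephem handles the spherical trigonometry internally; for reference, a rough
# sketch of the altitude relation (phi = observer latitude, delta = declination,
# H = hour angle, all in radians, refraction ignored):
def approx_altitude(H, delta, phi):
    # sin(alt) = sin(delta)*sin(phi) + cos(delta)*cos(phi)*cos(H)
    return np.arcsin(np.sin(delta)*np.sin(phi) + np.cos(delta)*np.cos(phi)*np.cos(H))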
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bonus Material - Softmax Regression
Step2: First, we want to encode the class labels into a format that we can more easily work with; we apply one-hot encoding
Step3: A sample that belongs to class 0 (the first row) has a 1 in the first cell, a sample that belongs to class 1 has a 1 in the second cell of its row, and so forth.
Step4: To compute the net input, we multiply the 4x2 feature matrix X with the 2x3 (n_features x n_classes) weight matrix W, which yields a 4x3 output matrix (n_samples x n_classes) to which we then add the bias unit
Step5: Now, it's time to compute the softmax activation that we discussed earlier
Step6: As we can see, the values for each sample (row) nicely sum up to 1 now. E.g., we can say that the first sample
Step7: As we can see, our predictions are terribly wrong, since the correct class labels are [0, 1, 2, 2]. Now, in order to train our logistic model (e.g., via an optimization algorithm such as gradient descent), we need to define a cost function $J(\cdot)$ that we want to minimize
Step15: In order to learn our softmax model -- determining the weight coefficients -- via gradient descent, we then need to compute the derivative
Step16: Example 1 - Gradient Descent
Step17: Continue training for another 800 epochs by calling the fit method with init_params=False.
Step18: Predicting Class Labels
Step19: Predicting Class Probabilities
Step20: Example 2 - Stochastic Gradient Descent
|
<ASSISTANT_TASK:>
Python Code:
%load_ext watermark
%watermark -a '' -u -d -v -p matplotlib,numpy,scipy
# to install watermark just uncomment the following line:
#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
%matplotlib inline
import numpy as np
y = np.array([0, 1, 2, 2])
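# The next line builds the one-hot matrix via broadcasting: np.arange(3) has shape (3,),
# y[:, None] has shape (4, 1), and the elementwise comparison yields a (4, 3) boolean
# array that is then cast to float.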
y_enc = (np.arange(np.max(y) + 1) == y[:, None]).astype(float)
print('one-hot encoding:\n', y_enc)
X = np.array([[0.1, 0.5],
[1.1, 2.3],
[-1.1, -2.3],
[-1.5, -2.5]])
W = np.array([[0.1, 0.2, 0.3],
[0.1, 0.2, 0.3]])
bias = np.array([0.01, 0.1, 0.1])
print('Inputs X:\n', X)
print('\nWeights W:\n', W)
print('\nbias:\n', bias)
def net_input(X, W, b):
return (X.dot(W) + b)
net_in = net_input(X, W, bias)
print('net input:\n', net_in)
def softmax(z):
return (np.exp(z.T) / np.sum(np.exp(z), axis=1)).T
smax = softmax(net_in)
print('softmax:\n', smax)
def to_classlabel(z):
return z.argmax(axis=1)
print('predicted class labels: ', to_classlabel(smax))
def cross_entropy(output, y_target):
return - np.sum(np.log(output) * (y_target), axis=1)
xent = cross_entropy(smax, y_enc)
print('Cross Entropy:', xent)
def cost(output, y_target):
return np.mean(cross_entropy(output, y_target))
J_cost = cost(smax, y_enc)
print('Cost: ', J_cost)
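# Step15 refers to the derivative needed for gradient descent; for the softmax
# cross-entropy cost it reduces to the difference between the predicted probabilities
# and the one-hot targets (a sketch, consistent with the training loop in the class below):
grad_W = X.T.dot(smax - y_enc)           # shape: (n_features, n_classes)
grad_b = np.sum(smax - y_enc, axis=0)    # shape: (n_classes,)
print('Gradient w.r.t. W:\n', grad_W)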
# Sebastian Raschka 2016
# Implementation of the mulitnomial logistic regression algorithm for
# classification.
# Author: Sebastian Raschka <sebastianraschka.com>
#
# License: BSD 3 clause
import numpy as np
from time import time
#from .._base import _BaseClassifier
#from .._base import _BaseMultiClass
class SoftmaxRegression(object):
Softmax regression classifier.
Parameters
------------
eta : float (default: 0.01)
Learning rate (between 0.0 and 1.0)
epochs : int (default: 50)
Passes over the training dataset.
Prior to each epoch, the dataset is shuffled
if `minibatches > 1` to prevent cycles in stochastic gradient descent.
l2 : float
Regularization parameter for L2 regularization.
No regularization if l2=0.0.
minibatches : int (default: 1)
The number of minibatches for gradient-based optimization.
If 1: Gradient Descent learning
If len(y): Stochastic Gradient Descent (SGD) online learning
If 1 < minibatches < len(y): SGD Minibatch learning
n_classes : int (default: None)
A positive integer to declare the number of class labels
if not all class labels are present in a partial training set.
Gets the number of class labels automatically if None.
random_seed : int (default: None)
Set random state for shuffling and initializing the weights.
Attributes
-----------
w_ : 2d-array, shape={n_features, 1}
Model weights after fitting.
b_ : 1d-array, shape={1,}
Bias unit after fitting.
cost_ : list
List of floats, the average cross_entropy for each epoch.
def __init__(self, eta=0.01, epochs=50,
l2=0.0,
minibatches=1,
n_classes=None,
random_seed=None):
self.eta = eta
self.epochs = epochs
self.l2 = l2
self.minibatches = minibatches
self.n_classes = n_classes
self.random_seed = random_seed
def _fit(self, X, y, init_params=True):
if init_params:
if self.n_classes is None:
self.n_classes = np.max(y) + 1
self._n_features = X.shape[1]
self.b_, self.w_ = self._init_params(
weights_shape=(self._n_features, self.n_classes),
bias_shape=(self.n_classes,),
random_seed=self.random_seed)
self.cost_ = []
y_enc = self._one_hot(y=y, n_labels=self.n_classes, dtype=np.float)
for i in range(self.epochs):
for idx in self._yield_minibatches_idx(
n_batches=self.minibatches,
data_ary=y,
shuffle=True):
# givens:
# w_ -> n_feat x n_classes
# b_ -> n_classes
# net_input, softmax and diff -> n_samples x n_classes:
net = self._net_input(X[idx], self.w_, self.b_)
softm = self._softmax(net)
diff = softm - y_enc[idx]
mse = np.mean(diff, axis=0)
# gradient -> n_features x n_classes
grad = np.dot(X[idx].T, diff)
# update in opp. direction of the cost gradient
self.w_ -= (self.eta * grad +
self.eta * self.l2 * self.w_)
self.b_ -= (self.eta * np.sum(diff, axis=0))
# compute cost of the whole epoch
net = self._net_input(X, self.w_, self.b_)
softm = self._softmax(net)
cross_ent = self._cross_entropy(output=softm, y_target=y_enc)
cost = self._cost(cross_ent)
self.cost_.append(cost)
return self
def fit(self, X, y, init_params=True):
Learn model from training data.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
init_params : bool (default: True)
Re-initializes model parametersprior to fitting.
Set False to continue training with weights from
a previous model fitting.
Returns
-------
self : object
if self.random_seed is not None:
np.random.seed(self.random_seed)
self._fit(X=X, y=y, init_params=init_params)
self._is_fitted = True
return self
def _predict(self, X):
probas = self.predict_proba(X)
return self._to_classlabels(probas)
def predict(self, X):
Predict targets from X.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
Returns
----------
target_values : array-like, shape = [n_samples]
Predicted target values.
if not self._is_fitted:
raise AttributeError('Model is not fitted, yet.')
return self._predict(X)
def predict_proba(self, X):
Predict class probabilities of X from the net input.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
Returns
----------
Class probabilties : array-like, shape= [n_samples, n_classes]
net = self._net_input(X, self.w_, self.b_)
softm = self._softmax(net)
return softm
def _net_input(self, X, W, b):
return (X.dot(W) + b)
def _softmax(self, z):
return (np.exp(z.T) / np.sum(np.exp(z), axis=1)).T
def _cross_entropy(self, output, y_target):
return - np.sum(np.log(output) * (y_target), axis=1)
def _cost(self, cross_entropy):
L2_term = self.l2 * np.sum(self.w_ ** 2)
cross_entropy = cross_entropy + L2_term
return 0.5 * np.mean(cross_entropy)
def _to_classlabels(self, z):
return z.argmax(axis=1)
def _init_params(self, weights_shape, bias_shape=(1,), dtype='float64',
scale=0.01, random_seed=None):
Initialize weight coefficients.
if random_seed:
np.random.seed(random_seed)
w = np.random.normal(loc=0.0, scale=scale, size=weights_shape)
b = np.zeros(shape=bias_shape)
return b.astype(dtype), w.astype(dtype)
def _one_hot(self, y, n_labels, dtype):
Returns a matrix where each sample in y is represented
as a row, and each column represents the class label in
the one-hot encoding scheme.
Example:
y = np.array([0, 1, 2, 3, 4, 2])
mc = _BaseMultiClass()
mc._one_hot(y=y, n_labels=5, dtype='float')
np.array([[1., 0., 0., 0., 0.],
[0., 1., 0., 0., 0.],
[0., 0., 1., 0., 0.],
[0., 0., 0., 1., 0.],
[0., 0., 0., 0., 1.],
[0., 0., 1., 0., 0.]])
mat = np.zeros((len(y), n_labels))
for i, val in enumerate(y):
mat[i, val] = 1
return mat.astype(dtype)
def _yield_minibatches_idx(self, n_batches, data_ary, shuffle=True):
indices = np.arange(data_ary.shape[0])
if shuffle:
indices = np.random.permutation(indices)
if n_batches > 1:
remainder = data_ary.shape[0] % n_batches
if remainder:
minis = np.array_split(indices[:-remainder], n_batches)
minis[-1] = np.concatenate((minis[-1],
indices[-remainder:]),
axis=0)
else:
minis = np.array_split(indices, n_batches)
else:
minis = (indices,)
for idx_batch in minis:
yield idx_batch
def _shuffle_arrays(self, arrays):
Shuffle arrays in unison.
r = np.random.permutation(len(arrays[0]))
return [ary[r] for ary in arrays]
from mlxtend.data import iris_data
from mlxtend.evaluate import plot_decision_regions
import matplotlib.pyplot as plt
# Loading Data
X, y = iris_data()
X = X[:, [0, 3]] # sepal length and petal width
# standardize
X[:,0] = (X[:,0] - X[:,0].mean()) / X[:,0].std()
X[:,1] = (X[:,1] - X[:,1].mean()) / X[:,1].std()
lr = SoftmaxRegression(eta=0.01, epochs=10, minibatches=1, random_seed=0)
lr.fit(X, y)
plot_decision_regions(X, y, clf=lr)
plt.title('Softmax Regression - Gradient Descent')
plt.show()
plt.plot(range(len(lr.cost_)), lr.cost_)
plt.xlabel('Iterations')
plt.ylabel('Cost')
plt.show()
lr.epochs = 800
lr.fit(X, y, init_params=False)
plot_decision_regions(X, y, clf=lr)
plt.title('Softmax Regression - Gradient Descent')
plt.show()
plt.plot(range(len(lr.cost_)), lr.cost_)
plt.xlabel('Iterations')
plt.ylabel('Cost')
plt.show()
y_pred = lr.predict(X)
print('Last 3 Class Labels: %s' % y_pred[-3:])
y_pred = lr.predict_proba(X)
print('Last 3 Class Labels:\n %s' % y_pred[-3:])
from mlxtend.data import iris_data
from mlxtend.evaluate import plot_decision_regions
from mlxtend.classifier import SoftmaxRegression
import matplotlib.pyplot as plt
# Loading Data
X, y = iris_data()
X = X[:, [0, 3]] # sepal length and petal width
# standardize
X[:,0] = (X[:,0] - X[:,0].mean()) / X[:,0].std()
X[:,1] = (X[:,1] - X[:,1].mean()) / X[:,1].std()
lr = SoftmaxRegression(eta=0.05, epochs=200, minibatches=len(y), random_seed=0)
lr.fit(X, y)
plot_decision_regions(X, y, clf=lr)
plt.title('Softmax Regression - Stochastic Gradient Descent')
plt.show()
plt.plot(range(len(lr.cost_)), lr.cost_)
plt.xlabel('Iterations')
plt.ylabel('Cost')
plt.show()
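# Added illustration (sketch, not in the original notebook): inspect the fitted
# parameters and re-score the training data with the model trained above.
print('Weights shape: %s, bias shape: %s' % (lr.w_.shape, lr.b_.shape))
print('Training accuracy: %.2f' % (lr.predict(X) == y).mean())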
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a Raster Model Grid
Step2: Visualize Basin topography
Step3: Create a Network Model Grid
Step4: Let's plot our network
Step5: As are the links,
Step6: Reduce the number of nodes in channel segments
Step7: As an example, the following will retain only the downstream and upstream node of each segment.
Step8: To reduce to a maximum number of nodes per segment (including end nodes), use AtMostNodes,
Step9: Another option is to specify a spacing at each node. The spacing can be either a scalar, meaning the spacing is the same for every node, or can vary from node to node. An example of the latter is a spacing that depends on upstream area at each node.
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
# landlab modules
from landlab.plot.graph import plot_nodes, plot_links
from landlab.io import read_esri_ascii
# Package for plotting raster data
from landlab.plot.imshow import imshow_grid
from landlab.grid.create_network import network_grid_from_raster
grid, z = read_esri_ascii("hugo_site.asc", name="topographic__elevation")
grid.status_at_node[grid.nodes_at_right_edge] = grid.BC_NODE_IS_FIXED_VALUE
grid.status_at_node[np.isclose(z, -9999.0)] = grid.BC_NODE_IS_CLOSED
imshow_grid(
grid,
"topographic__elevation",
plot_name="Basin topography",
color_for_closed=None,
colorbar_label="$z$ [m]",
)
network_grid = network_grid_from_raster(
grid,
minimum_channel_threshold=12000.0,
include=["drainage_area", "topographic__elevation"],
)
plot_nodes(network_grid, markersize=4)
plt.title("Nodes");
plot_nodes(network_grid, with_id=False, markersize=4)
plot_links(network_grid)
plt.title("Links");
imshow_grid(
grid,
"topographic__elevation",
plot_name="Basin topography",
color_for_closed=None,
colorbar_label="$z$ [m]",
)
plot_links(network_grid, with_id=False)
if len(network_grid.nodes_at_link) > (len(network_grid.nodes) - 1):
print("there are duplicate links")
else:
print("link count looks good!")
from landlab.grid.create_network import (
AtMostNodes,
JustEndNodes,
SpacingAtLeast,
spacing_from_drainage_area,
)
network_grid = network_grid_from_raster(
grid,
reducer=JustEndNodes(),
minimum_channel_threshold=12000.0,
include=["drainage_area", "topographic__elevation"],
)
plot_nodes(network_grid, markersize=4)
plot_links(network_grid)
plt.title("Nodes and Links");
network_grid = network_grid_from_raster(
grid,
reducer=AtMostNodes(count=4),
minimum_channel_threshold=12000.0,
include=["drainage_area", "topographic__elevation"],
)
plot_nodes(network_grid, markersize=4)
plot_links(network_grid)
plt.title("Nodes and Links");
spacing = spacing_from_drainage_area(grid.at_node["drainage_area"])
network_grid = network_grid_from_raster(
grid,
reducer=SpacingAtLeast(grid.xy_of_node, spacing),
minimum_channel_threshold=12000.0,
include=["drainage_area", "topographic__elevation"],
)
plot_nodes(network_grid, with_id=False, markersize=4)
plot_links(network_grid, with_id=False)
plt.title("Nodes and Links");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download weather data
Step2: Cleaning the weather dataset
Step3: Merge weather and NYPD MVC datasets
Step4: Make some nice data analysis
Step5: Now let's try to find out if there are any conditions that cause more incidents than others. We do this by plotting out heatmaps to get an idea of the distributions in the NYC area
Step6: Finding the ratio between conditions that resulted in an incident. Borough level
Step7: Let's try to look at zip codes in Manhattan only
Step8: Looking at weather based causes
Step9: There seem to be a lot of incidents caused by slippery pavement. Let's look at the weather conditions for those incidents.
Step10: Okay, overcast and clear weather are still the top 2. The assumption that this type of incident is caused by weather conditions might still hold true. It could be that the top 2 are caused by pavement conditions independent of the weather, such as water or oil on the roads. In any case, let's try to plot out where these incidents occur.
Step11: Looking at the intersections, we can find the most dangerous ones based on the number of incidents happening there that are in some way caused by slippery pavement.
Step12: Ignoring incidents happening outside intersections, the top 3 looks like this
Step13: fin
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import datetime
from datetime import date
from dateutil.rrule import rrule, DAILY
from __future__ import division
import geoplotlib as glp
from geoplotlib.utils import BoundingBox, DataAccessObject
pd.set_option('display.max_columns', None)
%matplotlib inline
start_date = date(2012, 7, 1)
end_date = date(2016, 2, 29)
# data = pd.DataFrame()
frames = []
url_template = 'https://www.wunderground.com/history/airport/KNYC/%s/%s/%s/DailyHistory.html?req_city=New+York&req_state=NY&req_statename=New+York&reqdb.zip=10001&reqdb.magic=4&reqdb.wmo=99999&format=1.csv'
month = ""
for dt in rrule(DAILY, dtstart=start_date, until=end_date):
if (month != dt.strftime("%m")):
month = dt.strftime("%m")
print 'Downloading to memory: ' + dt.strftime("%Y-%m")
frames.append(pd.read_csv(url_template % (dt.strftime("%Y"),dt.strftime("%m"), dt.strftime("%d"))))
print "Saving data to csv..."
data = pd.concat(frames)
data.to_csv('weather_data_nyc.csv', sep=',')
from datetime import datetime
from dateutil import tz
weather = pd.read_csv('datasets/weather_data_nyc_clean.csv')
def UTCtoActual(utcDate):
from_zone = tz.gettz('UTC')
to_zone = tz.gettz('America/New_York')
utc = datetime.strptime(utcDate.DateUTC, '%m/%d/%Y %H:%M:%S')\
.replace(tzinfo=from_zone)\
.astimezone(to_zone)
s = pd.Series([utc.year, utc.month, utc.day, utc.hour])
s.columns = ['Year', 'Month', 'Day', 'Hour']
return s
#weather['DateActual'] = weather.DateUTC.map()
weather[['Year', 'Month', 'Day', 'Hour']] = weather.apply(UTCtoActual, axis=1)
weather.to_csv('datasets/weather_data_nyc_clean2.csv')
incidents = pd.read_csv('datasets/NYPD_Motor_Vehicle_Collisions.csv')
weather = pd.read_csv('datasets/weather_data_nyc_clean2.csv')
weather.head(1)
weather[(weather.Year == 2015) & (weather.Month == 11) & (weather.Day == 27)]
features0 = ['Conditions', 'TemperatureC']
features = ['Conditions', 'Precipitationmm',\
'TemperatureC', 'VisibilityKm']
def lookup_weather2(year, month, day, hour):
w = weather[(weather.Year == year) & (weather.Month == month) & (weather.Day == day) & (weather.Hour == hour)]
return w
def lookup_weather(date, time):
month = int(date.split('/')[0])
day = int(date.split('/')[1])
year = int(date.split('/')[2])
hour = int(time.split(':')[0])
d = lookup_weather2(year, month, day, hour).head(1)
if (d.empty):
dt_back = datetime.datetime(year, month, day, hour) - datetime.timedelta(hours=1)
dt_forward = datetime.datetime(year, month, day, hour) + datetime.timedelta(hours=1)
d_back = lookup_weather2(dt_back.year, dt_back.month, dt_back.day, dt_back.hour)
if (not d_back.empty): return d_back
d_forward = lookup_weather2(dt_forward.year, dt_forward.month, dt_forward.day, dt_forward.hour)
if (not d_forward.empty): return d_forward
return d
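# Hypothetical usage sketch (added; the date and time below are just examples):
# look up the hourly weather record for a single date/time.
sample_weather = lookup_weather('01/15/2015', '8:30')
print sample_weather[['Conditions', 'TemperatureC']]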
def merge_weather(incident):
date = incident.DATE
time = incident.TIME
#print "0"
w = lookup_weather(date, time)
#[unnamed, condition, dateUTC, Dew, Events, Gust, Humidity,Precipitationmm,Sea_Level_PressurehPa, TemperatureC] = w.values[0]
#print "1"
try:
#print "2"
#print w
con = "-"
temp = "-"
rainmm = "-"
viskm = "-"
#print "2.5"
if (not pd.isnull(w['Conditions'].iloc[0])):
con = w['Conditions'].iloc[0]
if (not pd.isnull(w['TemperatureC'].iloc[0])):
temp = w['TemperatureC'].iloc[0]
if (not pd.isnull(w['Precipitationmm'].iloc[0])):
rainmm = w['Precipitationmm'].iloc[0]
if (not pd.isnull(w['VisibilityKm'].iloc[0])):
viskm = w['VisibilityKm'].iloc[0]
#print 'con %s, temp %s, rainmm %s, viskm %s' % (con, temp, rainmm, viskm)
#print "2.75"
s = pd.Series([con, rainmm, temp, viskm])
#print "3"
#print str(len(w.values[0]))
#s = pd.Series(w.values[0])
#s = pd.Series([w['Conditions'].iloc[0], w['Dew PointC'].iloc[0], w['Gust SpeedKm/h'].iloc[0]])
#s.columns = features
return s
except:
#print "4"
print date + "x" + time
s = pd.Series([None,None,None,None])
#s = pd.Series(["1","2","3","4","5","6","7","8","9"])
#s = pd.Series([])
#s.columns = features
return s
#lookup_weather2(2016, 2, 14, 7)
#lookup_weather('03/14/2016', '3:27').values[0]
#[unnamed, condition, dateUTC, Dew, Events, Gust, Humidity,Precipitationmm,Sea_Level_PressurehPa, TemperatureC] = lookup_weather('01/27/2016', '3:27').values[0]
print "Applying weather data to incidents..."
incidents[features] = incidents[incidents.DATE.str.split('/').str.get(2) != '2016'].apply(merge_weather, axis=1)
print "Saving weather in-riched incident data..."
incidents.to_csv('datasets/NYPD_Motor_Vehicle_Collisions_weather3.csv', sep=',')
incidents[incidents.DATE.str.split('/').str.get(2) == '2016']
# Read dataset
incidents = pd.read_csv('datasets/NYPD_Motor_Vehicle_Collisions_weather3.csv')
# Filter 2016 incidents
incidents = incidents[(incidents.DATE.str.split('/').str.get(2) != '2016')
& (pd.notnull(incidents.Conditions))]
# Distribution of incidents by weather conditions
ys = []
xs = []
for c in incidents.Conditions.unique():
mask = (incidents.Conditions == c)
filtered_incidents = incidents[mask]
ys.append(len(filtered_incidents.index))
xs.append(c)
df = pd.DataFrame(pd.Series(ys, index=xs, name="Incidents by weather conditions").sort_values())
df.plot(kind='barh', figsize=(8,8))
df
def plot_zip_weather(condition, data):
ys = []
xs = []
for z in data['ZIP CODE'].unique():
mask = (data['ZIP CODE'] == z)
filtered_incidents = data[mask]
ys.append(len(filtered_incidents.index))
xs.append(z)
df = pd.DataFrame(pd.Series(ys, index=xs, name="%s incidents by zip code" % condition).sort_values())
df.plot(kind='barh', figsize=(8,32))
def draw_kde(data):
bbox = BoundingBox(north=data.LATITUDE.max()-0.055,\
west=data.LONGITUDE.min()+0.055,\
south=data.LATITUDE.min()-0.055,\
east=data.LONGITUDE.max()+0.055)
coords = {'lat': data.LATITUDE.values.tolist(), 'lon': data.LONGITUDE.values.tolist()}
glp.kde(coords, bw=5, cut_below=1e-4)
glp.set_bbox(bbox)
glp.inline()
def plot_stuff(conditions, data):
print "%s conditions" % conditions
plot_zip_weather(conditions, data)
draw_kde(data)
snowy = incidents[incidents['Conditions'].str.contains('Snow')]
rainy = incidents[incidents['Conditions'].str.contains('Rain')]
clear = incidents[incidents['Conditions'].str.contains('Clear')]
cloudy = incidents[(incidents['Conditions'].str.contains('Cloud')) | (incidents['Conditions'].str.contains('Overcast'))]
haze = incidents[incidents['Conditions'].str.contains('Haze')]
plot_stuff("Snowy", snowy)
plot_stuff("Rainy", rainy)
plot_stuff("Clear", clear)
plot_stuff("Cloudy", cloudy)
plot_stuff("Hazy", haze)
# What is the probability of an incident based on the weather condition?
# Normalize incidents based on the conditions.
from collections import Counter
ConditionIncidentCounter = Counter(incidents.Conditions.values)
p_incident = {}
for k,v in ConditionIncidentCounter.most_common():
p_incident[k] = v/len(incidents)
p_incident
# Do the same again but for individual areas of NYC
p_incident_district = {}
l = len(incidents)
for district in incidents[pd.notnull(incidents.BOROUGH)].BOROUGH.unique():
filtered = incidents[incidents.BOROUGH == district]
counter = Counter(filtered.Conditions.values)
p_incident_district[district] = {}
for k,v in counter.most_common():
p_incident_district[district][k] = v / len(list(counter.elements()));
p_incident_district
# Are there any areas in NYC that experience incidents based
# on a condition unusually higher or lower compared to other areas?
# Calculate the ratio of incidents based on the condition.
def calcRatioForDistrict(districtCounter, overAllCounter, district):
ys = []
xs = []
for con in incidents.Conditions.unique():
ys.append(districtCounter[con] / overAllCounter[con])
xs.append(con)
return pd.Series(ys, index=xs)
series = {}
for b in incidents[pd.notnull(incidents.BOROUGH)].BOROUGH.unique():
series[b] = calcRatioForDistrict(p_incident_district[b], p_incident, b)
df = pd.DataFrame(series)
df.plot(kind="bar", subplots=True, figsize=(14,14),layout=(7,2), legend=False,sharey=True)
# What is the probability of an incident based on the weather condition?
# Normalize incidents based on the conditions.
from collections import Counter
borough = incidents[incidents.BOROUGH == 'MANHATTAN']
ConditionIncidentCounter = Counter(borough.Conditions.values)
p_incident = {}
for k,v in ConditionIncidentCounter.most_common():
p_incident[k] = v/len(borough)
p_incident
# Do the same again but for individual areas of NYC
p_incident_borough_zip = {}
l = len(borough)
for z in borough[pd.notnull(incidents['ZIP CODE'])]['ZIP CODE'].unique():
filtered = borough[incidents['ZIP CODE'] == z]
counter = Counter(filtered.Conditions.values)
# z = str(z).split(".")[0]
p_incident_borough_zip[z] = {}
for k,v in counter.most_common():
p_incident_borough_zip[z][k] = v / len(list(counter.elements()));
p_incident_borough_zip
# Are there any areas in NYC that experience incidents based
# on a condition unusually higher or lower compared to other areas?
# Calculate the ratio of incidents based on the condition.
def calcRatioForDistrict(districtCounter, overAllCounter, district):
ys = []
xs = []
for con in incidents.Conditions.unique():
if (con in districtCounter):
ys.append(districtCounter[con] / overAllCounter[con])
else:
ys.append(0)
xs.append(con)
return pd.Series(ys, index=xs)
series = {}
for z in borough[pd.notnull(incidents['ZIP CODE'])]['ZIP CODE'].unique():
series[z] = calcRatioForDistrict(p_incident_borough_zip[z], p_incident, b)
df = pd.DataFrame(series)
df.plot(kind="bar", subplots=True, figsize=(14,100), layout=(50,2), legend=False, sharey=False)
worst_day = incidents.DATE.value_counts().index[0]
worst_day_count = incidents.DATE.value_counts()[0]
incidents[incidents.DATE == worst_day]
incidents.DATE.value_counts()
incidents['CONTRIBUTING FACTOR VEHICLE 1'].unique()
# Read dataset
incidents = pd.read_csv('datasets/NYPD_Motor_Vehicle_Collisions_weather4.csv')
# Filter 2016 incidents
incidents = incidents[(incidents.DATE.str.split('/').str.get(2) != '2016')
& (pd.notnull(incidents.Conditions))]
def count_contributing(cont):
temp = incidents[(incidents['CONTRIBUTING FACTOR VEHICLE 1'] == cont) | \
(incidents['CONTRIBUTING FACTOR VEHICLE 2'] == cont) | \
(incidents['CONTRIBUTING FACTOR VEHICLE 3'] == cont) | \
(incidents['CONTRIBUTING FACTOR VEHICLE 4'] == cont) | \
(incidents['CONTRIBUTING FACTOR VEHICLE 5'] == cont) ]
return temp.shape[0]
print "Accidents caused by Pavement Slippery: %s" % count_contributing('Pavement Slippery')
print "Accidents caused by Glare: %s " % count_contributing('Glare')
print "Accidents caused by Pavement Defective: %s " % count_contributing('Pavement Defective')
weather_incidents = incidents[(incidents['CONTRIBUTING FACTOR VEHICLE 1'] == 'Pavement Slippery') | \
(incidents['CONTRIBUTING FACTOR VEHICLE 2'] == 'Pavement Slippery') | \
(incidents['CONTRIBUTING FACTOR VEHICLE 3'] == 'Pavement Slippery') | \
(incidents['CONTRIBUTING FACTOR VEHICLE 4'] == 'Pavement Slippery') | \
(incidents['CONTRIBUTING FACTOR VEHICLE 5'] == 'Pavement Slippery') ]
# Distribution of incidents by weather conditions
ys = []
xs = []
for c in weather_incidents.Conditions.unique():
mask = (weather_incidents.Conditions == c)
filtered_incidents = weather_incidents[mask]
ys.append(filtered_incidents.shape[0])
xs.append(c)
df = pd.DataFrame(pd.Series(ys, index=xs, name="Weather conditions during 'slippery pavement' based incidents").sort_values())
df.plot(kind='barh', figsize=(8,8))
# Export to json for d3 viz
from collections import OrderedDict
import json
with open('datasets/slippery_pavement.json', 'w') as fp:
json.dump(OrderedDict(sorted(dict(zip(xs, ys)).items(), key=lambda x: x[1], reverse=True)), fp)
def draw_dot(data, type_color):
bbox = BoundingBox(north=incidents.LATITUDE.max()-0.055,\
west=incidents.LONGITUDE.min()+0.055,\
south=incidents.LATITUDE.min()-0.055,\
east=incidents.LONGITUDE.max()+0.055)
gridDots = {'lat': data.LATITUDE.values.tolist(), 'lon': data.LONGITUDE.values.tolist()}
glp.set_bbox(bbox)
glp.dot(gridDots, color=type_color)
def get_spaced_colors(n):
max_value = 16581375 #255**3
interval = int(max_value / n)
colors = [hex(I)[2:].zfill(6) for I in range(0, max_value, interval)]
return [[int(i[:2], 16), int(i[2:4], 16), int(i[4:], 16), 255] for i in colors]
colormap = get_spaced_colors(weather_incidents['Conditions'].unique().size)
for idx, wi in enumerate(weather_incidents['Conditions'].unique().tolist()):
filtered = weather_incidents[weather_incidents['Conditions'] == wi]
print "%s %s" % (wi, str(len(filtered.index)))
draw_dot(filtered, colormap[idx])
draw_dot(filtered, 'r')
glp.inline()
#glp.inline()
bbox = BoundingBox(north=incidents.LATITUDE.max()-0.055,\
west=incidents.LONGITUDE.min()+0.055,\
south=incidents.LATITUDE.min()-0.055,\
east=incidents.LONGITUDE.max()+0.055)
glp.set_bbox(bbox)
glp.kde({'lat': weather_incidents.LATITUDE.values.astype('float'), 'lon': weather_incidents.LONGITUDE.values.astype('float')},bw=5, cut_below=1e-4)
glp.inline()
top10 = weather_incidents.LOCATION.value_counts()[:20]
top10.to_csv('datasets/top20slippery')
locations = weather_incidents[weather_incidents.LOCATION.isin(top10.index)].drop_duplicates('LOCATION','first')\
[['TIME','BOROUGH','ZIP CODE','LATITUDE','LONGITUDE','LOCATION','ON STREET NAME','CROSS STREET NAME']]
loca = locations.copy()
def m(r):
return top10[top10.index == r.LOCATION].iloc[0]
loca['COUNT'] = loca.apply(m, axis=1)
loca.sort_values(by='COUNT', ascending=False).to_csv('../app/datasets/slippery.csv', sep=',')
lightsnow = incidents[incidents['Conditions'] == 'Light Snow']
print "Accidents happening because of light snow: %s" % str(lightsnow.size)
print "Injuries: %s" % lightsnow['NUMBER OF PERSONS INJURED'].sum()
print "Killed: %s" % lightsnow['NUMBER OF PERSONS KILLED'].sum()
print "Top intersections:"
lightsnow.LOCATION.value_counts()[:3]
lightrain = incidents[incidents['Conditions'] == 'Light Rain']
print "Accidents happening because of light rain: %s" % str(lightrain.size)
print "Injuries: %s" % lightrain['NUMBER OF PERSONS INJURED'].sum()
print "Killed: %s" % lightrain['NUMBER OF PERSONS KILLED'].sum()
print "Top intersections:"
lightrain.LOCATION.value_counts()[:3]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup and imports
Step2: Define your indico API key
Step4: Convenience function for making batches of examples
Step5: Check that the requested paths exist
Step6: Query indico API to make sure everything is plumbed up correctly
Step7: Read data into a list of dictionary objects
Step8: Quick sanity check on the data, is everything as expected?
Step9: Strategy A
Step10: Fit the classifier
Step11: Get predictions
Step12: Compute and plot ROC and AUC
Step13: Put examples data into batches, for APIs
Step14: Prepare batches of test examples
Step15: Strategy B. Pre-trained sentiment HQ
Step16: Strategy C. Custom model using general text features.
Step17: Draw ROC plot and compute metrics for the custom collection
Step18: Strategy D. Custom model using sentiment features from the pretrained deep neural network.
Step19: Get predictions for custom collection with sentiment domain text features
Step20: Compute metrics and plot
Step21: Sanity check on results for all 4 strategies
Step22: Compute overall metrics and plot
Step23: Accuracy metrics
|
<ASSISTANT_TASK:>
Python Code:
seed = 3 # for reproducibility across experiments, just pick something
train_num = 100 # number of training examples to use
test_num = 100 # number of examples to use for testing
base_model_name = "sentiment_train%s_test%s" % (train_num, test_num)
lab2bin = {'pos': 1, 'neg': 0} # label -> binary class
pos_path = "~DATASETS/aclImdb/train/pos/" # filepath to the positive examples
neg_path = "~DATASETS/aclImdb/train/neg/" # file path to the negative examples
output_path = "OUTPUT" # path where output file should go
batchsize = 25 # send this many requests at once
max_num_examples = 25000.0 # for making subsets below
import os, io, glob, random, time
# from itertools import islice, chain, izip_longest
import numpy as np
import pandas as pd
from tqdm import tqdm
import pprint
pp = pprint.PrettyPrinter(indent=4)
import indicoio
from indicoio.custom import Collection
from indicoio.custom import collections as check_status
import sklearn
from sklearn import metrics
from sklearn import linear_model
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
import matplotlib.pyplot as plt # for plotting results
%matplotlib inline
import seaborn # just for the colors
indicoio.config.api_key = "" # Add your API key here
def batcher(seq, stride = 4):
    """Generator strides across the input sequence,
    combining the elements between each stride."""
for pos in xrange(0, len(seq), stride):
yield seq[pos : pos + stride]
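# Quick illustration (added): batching a 5-element list with stride 2 yields
# [0, 1], [2, 3] and finally the leftover [4].
print([b for b in batcher(list(range(5)), stride=2)])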
# for making subsets below
train_subset = (train_num / 25000.0)
test_subset = (test_num / 25000.0)
random.seed(seed)
np.random.seed(seed)
# check that paths exist
for p in [pos_path, neg_path]:
abs_path = os.path.abspath(p)
if not os.path.exists(abs_path):
os.makedirs(abs_path)
print(abs_path)
for p in [output_path]:
abs_path = os.path.abspath(p)
if not os.path.exists(abs_path): # and make output path if necessary
os.makedirs(abs_path)
print(abs_path)
# pre_status = check_status()
# pp.pprint(pre_status)
train_data = [] # these lists will contain a bunch of little dictionaries, one for each example
test_data = []
# Positive examples (train)
examples = glob.glob(os.path.join(pos_path, "*")) # find all the positive examples, and read them
i = 0
for ex in examples:
d = {}
with open(ex, 'rb') as f:
d['label'] = 'pos' # label as "pos"
t = f.read().lower() # these files are already ascii text, so just lowercase them
d['text'] = t
d['pred_label'] = None # placeholder for predicted label
d['prob_pos'] = None # placeholder for predicted probability of a positive label
train_data.append(d) # add example to the list of training data
i +=1
print("Read %d positive training examples, of %d available (%.2f%%)" % (i, len(examples), (100.0*i)/len(examples)))
# Negative examples (train)
examples = glob.glob(os.path.join(neg_path, "*")) # find all the negative examples and read them
i = 0
for ex in examples:
d = {}
with open(ex, 'rb') as f:
d['label'] = 'neg'
t = f.read().lower()
d['text'] = t
d['pred_label'] = None
d['prob_pos'] = None
train_data.append(d)
i +=1
print("Read %d negative training examples, of %d available (%.2f%%)" % (i, len(examples), (100.0*i)/len(examples)))
# Positive examples (test)
examples = glob.glob(os.path.join(pos_path, "*"))
i = 0
for ex in examples:
d = {}
with open(ex, 'rb') as f:
d['label'] = 'pos'
t = f.read().lower() # these files are already ascii text
d['text'] = t
d['pred_label'] = None
d['prob_pos'] = None
test_data.append(d)
i +=1
print("Read %d positive test examples, of %d available (%.2f%%)" % (i, len(examples), (100.0*i)/len(examples)))
# Negative examples (test)
examples = glob.glob(os.path.join(neg_path, "*"))
i = 0
for ex in examples:
d = {}
with open(ex, 'rb') as f:
d['label'] = 'neg'
t = f.read().lower() # these files are already ascii text
d['text'] = t
d['pred_label'] = None
d['prob_pos'] = None
test_data.append(d)
i +=1
print("Read %d negative examples, of %d available (%.2f%%)" % (i, len(examples), (100.0*i)/len(examples)))
# Populate a dataframe, shuffle, and subset as required
df_train = pd.DataFrame(train_data)
df_train = df_train.sample(frac = train_subset) # shuffle (by sampling everything randomly)
print("After resampling, down to %d training records" % len(df_train))
df_test = pd.DataFrame(test_data)
df_test = df_test.sample(frac = test_subset) # shuffle (by sampling everything randomly)
print("After resampling, down to %d test records" % len(df_test))
df_train.head(10) # sanity check
df_train.tail(10)
df_test.tail(10)
vectorizer = sklearn.feature_extraction.text.TfidfVectorizer(
max_features = int(1e5), # max vocab size (pretty large)
max_df = 0.50,
sublinear_tf = True,
use_idf = True,
encoding = 'ascii',
decode_error = 'replace',
analyzer = 'word',
ngram_range = (1,3),
stop_words = 'english',
lowercase = True,
norm = 'l2',
smooth_idf = True,
)
lr = linear_model.SGDClassifier(
alpha = 1e-5,
average = 10,
class_weight = 'balanced',
epsilon = 0.15,
eta0 = 0.0,
fit_intercept = True,
l1_ratio = 0.15,
learning_rate = 'optimal',
loss = 'log',
n_iter = 5,
n_jobs = -1,
penalty = 'l2',
power_t = 0.5,
random_state = seed,
shuffle = True,
verbose = 0,
warm_start = False,
)
classifier = Pipeline([('vectorizer', vectorizer),
('logistic_regression', lr)
])
_ = classifier.fit(df_train['text'], df_train['label'])
pred_sk = classifier.predict(df_test['text'])
y_true_sk = [lab2bin[ex] for ex in df_test['label']]
proba_sk = classifier.predict_proba(df_test['text']) # also get probas
cname = base_model_name + "_sklearn"
plt.figure(figsize=(8,8))
probas_sk = []
y_pred_labels_sk = []
y_pred_sk = []
# get predictions
for i, pred in enumerate(pred_sk):
proba_pos = proba_sk[i][1]
probas_sk.append(proba_pos)
if float(proba_pos) >= 0.50:
pred_label = "pos"
elif float(proba_pos) < 0.50:
pred_label = "neg"
else:
print("ERROR on example %d" % i) # if this happens, need to fix something
y_pred_labels_sk.append(pred_label)
y_pred_sk.append(lab2bin[pred_label])
# compute ROC
fpr, tpr, thresholds = metrics.roc_curve(y_true_sk, probas_sk)
roc_auc = metrics.auc(fpr, tpr)
plt.plot(fpr, tpr, lw = 1, label = "ROC area = %0.3f" % (roc_auc))
plt.plot([0, 1], [0, 1], '--', color=(0.5, 0.5, 0.5), label='random')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title("%d training examples" % len(examples))
plt.legend(loc="lower right")
plt.savefig(os.path.abspath(os.path.join(output_path, cname + "_ROC" + ".png")))
plt.show()
acc = metrics.accuracy_score(y_true_sk, y_pred_sk)
print("Accuracy: %.4f" % (acc))
examples = [list(ex) for ex in zip(df_train['text'], df_train['label'])]
batches = [b for b in batcher(examples, batchsize)] # stores in memory, but the texts are small so no problem
test_examples = [list(ex) for ex in zip(df_test['text'], df_test['label'])] # test data
test_batches = [b for b in batcher(test_examples, batchsize)]
# get predictions from sentiment-HQ API
cname = base_model_name + "hq"
predictions_hq = []
for batch in tqdm(test_batches):
labels = [x[1] for x in batch]
texts = [x[0] for x in batch]
results = indicoio.sentiment_hq(texts)
for i, result in enumerate(results):
r = {}
r['label'] = labels[i]
r['text'] = texts[i]
r['proba'] = result
predictions_hq.append(r)
cname = base_model_name + "_hq"
plt.figure(figsize=(8,8))
# y_true = [df_test['label']]
probas = []
y_true = []
y_pred_labels = []
y_pred = []
for i, pred in enumerate(predictions_hq):
y_true.append(lab2bin[pred['label']])
proba = pred['proba']
probas.append(proba)
if float(proba) >= 0.50:
pl = 'pos'
elif float(proba) < 0.50:
pl= 'neg'
else:
print("Error. Check proba value and y_true logic")
pred_label = pl # pick the most likely class by predicted proba
y_pred_labels.append(pred_label)
y_pred.append(lab2bin[pred_label])
fpr, tpr, thresholds = metrics.roc_curve(y_true, probas)
roc_auc = metrics.auc(fpr, tpr)
plt.plot(fpr, tpr, lw = 1, label = "ROC area = %0.3f" % (roc_auc))
plt.plot([0, 1], [0, 1], '--', color=(0.5, 0.5, 0.5), label='random')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
# plt.title("ROC plot model: '%s'" % cname)
plt.title("%d training examples" % len(examples))
plt.legend(loc="lower right")
# plt.savefig(os.path.abspath(cname + "_hq_ROC" + ".png"))
plt.savefig(os.path.abspath(os.path.join(output_path, cname + "_ROC" + ".png")))
plt.show()
acc = metrics.accuracy_score(y_true, y_pred)
print("Accuracy: %.4f" % (acc))
cname = base_model_name
print("This model will be cached as an indico custom collection using the name: '%s'" % cname)
collection = Collection(cname)
try:
collection.clear() # delete any previous data in this collection
collection.info()
collection = Collection(cname)
except:
print(" Error, probably because a collection with the given name didn't exist. Continuing...")
print(" Submitting %d training examples in %d batches..." % (len(examples), len(batches)))
for batch in tqdm(batches):
try:
collection.add_data(batch)
except Exception as e:
print("Exception: '%s' for batch:" % e)
pp.pprint(batch)
print(" training model: '%s'" % cname)
collection.train()
collection.wait() # blocks until the model is trained
# get predictions from the trained API model
predictions = []
cname = base_model_name
collection = Collection(cname)
for batch in tqdm(test_batches):
labels = [x[1] for x in batch]
texts = [x[0] for x in batch]
results = collection.predict(texts)
for i, result in enumerate(results):
r = {}
r['indico_result'] = result
r['label'] = labels[i]
r['text'] = texts[i]
r['proba'] = result['pos']
predictions.append(r)
pp.pprint(predictions[0]) # sanity check
plt.figure(figsize=(8,8))
probas = []
y_true = []
y_pred_labels = []
y_pred = []
for i, pred in enumerate(predictions):
y_true.append(lab2bin[pred['label']])
probas.append(pred['indico_result']['pos'])
pred_label = max(pred['indico_result'].keys(), key = (lambda x: pred['indico_result'][x])) # pick the most likely class by predicted proba
y_pred_labels.append(pred_label)
y_pred.append(lab2bin[pred_label])
fpr, tpr, thresholds = metrics.roc_curve(y_true, probas)
roc_auc = metrics.auc(fpr, tpr)
plt.plot(fpr, tpr, lw = 1, label = "ROC area = %0.3f" % (roc_auc))
plt.plot([0, 1], [0, 1], '--', color=(0.5, 0.5, 0.5), label='random')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title("%d training examples" % len(examples))
plt.legend(loc="lower right")
plt.savefig(os.path.abspath(os.path.join(output_path, cname + "_cc_ROC" + ".png")))
plt.show()
acc = metrics.accuracy_score(y_true, y_pred)
print("Accuracy: %.4f" % (acc))
cname = base_model_name + "_domain"
print("This model will be cached as an indico custom collection using the name: '%s'" % cname)
collection = Collection(cname, domain = "sentiment")
try:
collection.clear() # delete any previous data in this collection
collection.info()
collection = Collection(cname, domain = "sentiment")
except:
print(" Error, probably because a collection with the given name didn't exist. Continuing...")
print(" Submitting %d training examples in %d batches..." % (len(examples), len(batches)))
for batch in tqdm(batches):
try:
collection.add_data(batch, domain = "sentiment")
except Exception as e:
print("Exception: '%s' for batch:" % e)
pp.pprint(batch)
print(" training model: '%s'" % cname)
collection.train()
collection.wait()
# get predictions from trained API
predictions_domain = []
cname = base_model_name + "_domain"
collection = Collection(cname, domain = "sentiment")
for batch in tqdm(test_batches):
labels = [x[1] for x in batch]
texts = [x[0] for x in batch]
results = collection.predict(texts, domain = "sentiment")
# batchsize = len(batch)
for i, result in enumerate(results):
r = {}
r['indico_result'] = result
r['label'] = labels[i]
r['text'] = texts[i]
r['proba'] = result['pos']
predictions_domain.append(r)
cname = base_model_name + "_domain"
plt.figure(figsize=(8,8))
# y_true = [df_test['label']]
probas = []
y_true = []
y_pred_labels = []
y_pred = []
for i, pred in enumerate(predictions_domain):
y_true.append(lab2bin[pred['label']])
probas.append(pred['indico_result']['pos'])
pred_label = max(pred['indico_result'].keys(), key = (lambda x: pred['indico_result'][x])) # pick the most likely class by predicted proba
y_pred_labels.append(pred_label)
y_pred.append(lab2bin[pred_label])
fpr, tpr, thresholds = metrics.roc_curve(y_true, probas)
roc_auc = metrics.auc(fpr, tpr)
plt.plot(fpr, tpr, lw = 1, label = "ROC area = %0.3f" % (roc_auc))
plt.plot([0, 1], [0, 1], '--', color=(0.5, 0.5, 0.5), label='random')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
# plt.title("ROC plot model: '%s'" % cname)
plt.title("%d training examples" % len(examples))
plt.legend(loc="lower right")
# plt.savefig(os.path.abspath(cname + "_cc_domain_ROC" + ".png"))
plt.savefig(os.path.abspath(os.path.join(output_path, cname + "_ROC" + ".png")))
plt.show()
acc = metrics.accuracy_score(y_true, y_pred)
print("Accuracy: %.4f" % (acc))
print("Strategy A. Custom sklearn model using n-grams, TFIDF, LR:")
print(y_true_sk[0])
print(pred_sk[0])
print(proba_sk[0])
print("")
print("Strategy B. Sentiment HQ:")
pp.pprint(predictions_hq[0])
print("Strategy C. Custom collection using general text features:")
pp.pprint(predictions[0])
print("")
print("Strategy D. Custom collection using sentiment features:")
pp.pprint(predictions_domain[0])
print("")
plt.figure(figsize=(10,10))
cname = base_model_name
# compute and draw curve for sklearn LR built from scratch
probas_sk = []
y_pred_labels_sk = []
y_pred_sk = []
for i, pred in enumerate(pred_sk):
proba_pos = proba_sk[i][1]
probas_sk.append(proba_pos)
if float(proba_pos) >= 0.50:
pred_label = "pos"
elif float(proba_pos) < 0.50:
pred_label = "neg"
else:
print("ERROR on example %d" % i)
y_pred_labels_sk.append(pred_label)
y_pred_sk.append(lab2bin[pred_label])
fpr_sk, tpr_sk, thresholds_sk = metrics.roc_curve(y_true_sk, probas_sk)
roc_auc_sk = metrics.auc(fpr_sk, tpr_sk)
plt.plot(fpr_sk, tpr_sk, lw = 2, color = "#a5acaf", label = "A. Custom sklearn ngram LR model; area = %0.3f" % roc_auc_sk)
# compute and draw curve for sentimentHQ
probas_s = []
y_true_s = []
y_pred_labels_s = []
y_pred_s = []
for i, pred in enumerate(predictions_hq):
y_true_s.append(lab2bin[pred['label']])
probas_s.append(pred['proba'])
if float(pred['proba']) >= 0.50:
pred_label = "pos"
elif float(pred['proba']) < 0.50:
pred_label = "neg"
else:
print("ERROR on example %d" % i)
y_pred_labels_s.append(pred_label)
y_pred_s.append(lab2bin[pred_label])
fpr_s, tpr_s, thresholds_s = metrics.roc_curve(y_true_s, probas_s)
roc_auc_s = metrics.auc(fpr_s, tpr_s)
plt.plot(fpr_s, tpr_s, lw = 2, color = "#b05ecc", label = "B. Sentiment HQ model; area = %0.3f" % roc_auc_s)
# Compute and draw curve for the custom collection using general text features
probas = []
y_true = []
y_pred_labels = []
y_pred = []
lab2bin = {'pos': 1,
'neg': 0}
for i, pred in enumerate(predictions):
y_true.append(lab2bin[pred['label']])
probas.append(pred['indico_result']['pos'])
pred_label = max(pred['indico_result'].keys(), key = (lambda x: pred['indico_result'][x])) # pick the most likely class by predicted proba
y_pred_labels.append(pred_label)
y_pred.append(lab2bin[pred_label])
fpr, tpr, thresholds = metrics.roc_curve(y_true, probas)
roc_auc = metrics.auc(fpr, tpr)
plt.plot(fpr, tpr, lw = 2, color = "#ffbb3b", label = "C. Custom IMDB model using general text features; area = %0.3f" % (roc_auc))
# now compute and draw curve for the CC using sentiment text features
probas_d = []
y_true_d = []
y_pred_labels_d = []
y_pred_d = []
for i, pred in enumerate(predictions_domain):
y_true_d.append(lab2bin[pred['label']])
probas_d.append(pred['indico_result']['pos'])
pred_label = max(pred['indico_result'].keys(), key = (lambda x: pred['indico_result'][x]))
y_pred_labels_d.append(pred_label)
y_pred_d.append(lab2bin[pred_label])
fpr_d, tpr_d, thresholds_d = metrics.roc_curve(y_true_d, probas_d)
roc_auc_d = metrics.auc(fpr_d, tpr_d)
plt.plot(fpr_d, tpr_d, lw = 2, color = "#43b9af", label = "D. Custom IMDB model using sentiment text features; area = %0.3f" % roc_auc_d)
# Add other stuff to figure
plt.plot([0, 1], [0, 1], '--', color=(0.5, 0.5, 0.5), label='random')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
# plt.title("ROC: %d training examples" % len(examples))
plt.title("%d training examples" % len(examples))
plt.legend(loc="lower right")
plt.savefig(os.path.abspath(os.path.join(output_path, cname + "_comparison_ROC" + ".png")), dpi = 300)
plt.show()
acc_sk = metrics.accuracy_score(y_true_sk, y_pred_sk)
print("A. Sklearn model from scratch (sklearn) : %.4f" % (acc_sk))
acc_s = metrics.accuracy_score(y_true_s, y_pred_s)
print("B. Sentiment HQ : %.4f" % (acc_s))
acc = metrics.accuracy_score(y_true, y_pred)
print("C. Custom model using general text features : %.4f" % (acc))
acc_d = metrics.accuracy_score(y_true_d, y_pred_d)
print("D. Custom model using sentiment text features : %.4f" % (acc_d))
# print("Using (%d, %d, %d, %d) examples" % (len(y_pred), len(y_pred_d), len(y_pred_s), len(y_pred_sk)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: retrieving the available fields as a reference
Step2: build a field specific stop word list
Step3: plotting the wordcloud for abstracts and titles from various years
Step4: plotting the number of publications per year
Step5: this shows that the database is missing entries and judging from the total number of downloaded items, there must be around 30000 papers missing
Step6: plotting the world map of author affiliations
Step7: using LDA to map out the topics
|
<ASSISTANT_TASK:>
Python Code:
client = MongoClient('localhost:27017')
db = client.arXivDB
db.arXivfeeds.count()
print(db.arXivfeeds.find_one().keys())
for item in db.arXivfeeds.find({'published_parsed': 2016}).sort('_id', pymongo.DESCENDING).limit(5):
print(item['title'])
#db.arXivfeeds.delete_many({})
def cleaner(doc, stem=False):
'''Function to clean the text data and prep for further analysis'''
doc = doc.lower() # turn text to lowercase
stops = set(stopwords.words("english")) # Creating a set of Stopwords
p_stemmer = PorterStemmer() # Creating the stemmer model
doc = re.sub(r"quantum", '', doc) # removing the word quantum (duh)
doc = re.sub(r"physics", '', doc) # removing the word physics (duh)
doc = re.sub(r"state", '', doc) # removing the word state (duh)
doc = re.sub(r'\$.*?\$', 'latexinlineformula', doc) # replacing latex inline formula
doc = re.sub(r'\\n', ' ', doc) # removing new line character
doc = re.sub(r'\\\\\"', '', doc) # removing german double dotted letters
doc = re.sub(r"</?\w+[^>]*>", '', doc) # removing html tags
doc = re.sub("[^a-zA-Z]", ' ', doc) # removing anythin other alpha-numerical char's and @ and !
doc = doc.split() # Splits the data into individual words
doc = [w for w in doc if not w in stops and len(w) > 3] # Removes stopwords and short length words
if stem:
doc = [p_stemmer.stem(i) for i in doc] # Stemming (reducing words to their root)
if not len(doc): # dealing with comments that are all emojis, stop words or other languages
doc = ['emptystring']
# print('text cleaning done!')
return ' '.join(doc)
cleaner(db.arXivfeeds.find_one({'published_parsed': 2016})['summary'])
def plot_abstract_and_title_wordcloud(arXivfeed_query_result):
arXivfeed_2015_text = cleaner(' '.join([' '.join(list(d.values())) for d in arXivfeed_query_result]))
# Generate a word cloud image
wordcloud_arXivfeed_2015 = WordCloud().generate(arXivfeed_2015_text)
# Display the generated image:
plt.imshow(wordcloud_arXivfeed_2015)
plt.axis("off")
plot_abstract_and_title_wordcloud(list(db.arXivfeeds.find({'published_parsed': 1995}, {'_id':0,'title':1})))
plot_abstract_and_title_wordcloud(list(db.arXivfeeds.find({'published_parsed': 2002}, {'_id':0,'title':1})))
plot_abstract_and_title_wordcloud(list(db.arXivfeeds.find({'published_parsed': 2015}, {'_id':0,'title':1})))
years = range(1994,2016,1)
num_publications_per_year = [db.arXivfeeds.find({'published_parsed': y}).count() for y in years]
plt.plot(years, num_publications_per_year)
pattern1 = r'[Pp]hoton\w*'
pattern2 = r'[Oo]ptic\w*'
set(re.findall(pattern2,
' '.join([' '.join(list(d.values())) for d in db.arXivfeeds.find({}, {'_id':0,'summary':1})])))
num_ph_papers = np.zeros(len(years))
for i, y in enumerate(years):
num_ph_papers[i] = db.arXivfeeds.find({'$and':[{'published_parsed': y},
{'$or':[
{'summary': {'$regex': pattern1}},
{'title': {'$regex': pattern1}},
{'summary': {'$regex': pattern2}},
{'title': {'$regex': pattern2}}
]}
]}).count()
plt.plot(years, num_ph_papers/num_publications_per_year)
list(db.arXivfeeds.find({'published_parsed': 2016}).limit(1))[0]
import nltk
import gensim
import pyLDAvis
import pyLDAvis.gensim
documents = [cleaner(d['summary']) for d in db.arXivfeeds.find({'published_parsed': 2010}, {'_id':0, 'summary':1})]
# documents = [cleaner(d['summary']) for d in db.arXivfeeds.find({}, {'_id':0, 'summary':1})]
train_set = []
for j in range(len(documents)):
train_set.append(nltk.word_tokenize(documents[j]))
dic = gensim.corpora.Dictionary(train_set)
print(len(dic))
dic.filter_extremes(no_below=20, no_above=0.1)
print(len(dic))
corpus = [dic.doc2bow(text) for text in train_set] # transform every token into BOW
tfidf = gensim.models.TfidfModel(corpus)
corpus_tfidf = tfidf[corpus]
lda = gensim.models.LdaModel(corpus_tfidf, id2word = dic, num_topics = 10, iterations=20, passes = 10)
corpus_lda = lda[corpus_tfidf]
vis_data = pyLDAvis.gensim.prepare(lda, corpus, dic)
pyLDAvis.display(vis_data)
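# Added (sketch): a plain-text view of the topics; the exact return format of
# show_topics varies between gensim versions.
print(lda.show_topics(num_topics=10, num_words=5))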
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example with more savings, but slower to optimize
Step2: Look at the recommendations
|
<ASSISTANT_TASK:>
Python Code:
def func(a, b, c):
res = tf.einsum('ijk,ja,kb->iab', a, b, c) + 1
res = tf.einsum('iab,kb->iak', res, c)
return res
a = tf.random_normal((10, 11, 12))
b = tf.random_normal((11, 13))
c = tf.random_normal((12, 14))
# res = func(a, b, c)
orders, optimized_func = tf_einsum_opt.optimizer(func, sess, a, b, c)
res1 = func(a, b, c)
%timeit sess.run(res1)
res2 = optimized_func(a, b, c)
%timeit sess.run(res2)
# Check that the results of optimized and the original function are the same.
np.testing.assert_allclose(*sess.run([res1, res2]), rtol=1e-5, atol=1e-5)
def func(a, b, c, d):
res = tf.einsum('si,sj,sk,ij->s', a, b, d, c)
res += tf.einsum('s,si->s', res, a)
return res
a = tf.random_normal((100, 101))
b = tf.random_normal((100, 102))
c = tf.random_normal((101, 102))
d = tf.random_normal((100, 30))
orders, optimized_func = tf_einsum_opt.optimizer(func, sess, a, b, c, d)
res1 = func(a, b, c, d)
%timeit sess.run(res1)
res2 = optimized_func(a, b, c, d)
%timeit sess.run(res2)
orders
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's start by creating some data that we can store. 2000 images with shape (254, 254, 3) along with 2000 targets are generated.
Step2: The tenth image is set to be a completely red image.
Step3: In the next step we instantiate a pyxis writer to write the data we created above to a directory called data.
Step4: The map_size_limit is the size of the LMDB in MB. For file systems running ext4, there is no big cost associated with making this big. ram_gb_limit is the limit on how many GB of data we can push per write operation.
Step5: Data can be input in two different ways
Step6: Reading data
Step7: To read data from a LMDB we instantiate a pyxis reader using the directory of the database.
Step8: All addressable keys can be found by calling the pyxis.get_data_keys function
Step9: Python list indexing syntax is used to retrieve a whole sample from the LMDB. The pyxis.get_sample function can be used as well. The returned value is a Python dictionary, addressable by the keys used when writing the database.
Step10: The code snippet above retrieves the tenth sample, i.e. the one with the red image.
Step11: Slice objects are supported by the indexing method outlined above. For example
Step12: Python 2.7 tends to favour __getslice__ over __getitem__ when slicing; however, __getslice__ has been deprecated since Python 2.0. The functionality may vary when compared with Python 3. See the Python 2 data model documentation for more information.
Step13: Use len() to return the number of samples in the LMDB.
Step14: Just like when writing, we should make sure to close the LMDB environment after we are done reading.
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import time
import numpy as np
import pyxis as px
np.random.seed(1234)
nb_samples = 2000
X = np.zeros((nb_samples, 254, 254, 3), dtype=np.uint8)
y = np.arange(nb_samples, dtype=np.uint8)
X[10, :, :, 0] = 255
db = px.Writer(dirpath='data', map_size_limit=500, ram_gb_limit=2)
start = time.time()
db.put_samples('X', X, 'y', y)
print('Average time per image = {:.4f}s'.format((time.time() - start) / nb_samples))
db.close()
try:
%matplotlib inline
import matplotlib.pyplot as plt
except ImportError:
raise ImportError('Could not import the matplotlib library required to '
'plot images. Please refer to http://matplotlib.org/ '
'for installation instructions.')
db = px.Reader('data')
db.get_data_keys()
sample = db[10]
print('X: ', sample['X'].shape, sample['X'].dtype)
print('y: ', sample['y'].shape, sample['y'].dtype)
plt.figure()
plt.imshow(sample['X'])
plt.axis('off')
plt.show()
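# Added check (sketch): the get_sample call mentioned in the text should return
# the same record as the indexing syntax used above.
sample_alt = db.get_sample(10)
print(np.array_equal(sample_alt['X'], sample['X']))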
samples = db[0:8:2]
for sample in samples:
print(sample['y'])
db.get_data_value(100, 'y')
len(db)
db.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For information on how to configure and tune the solver, please see the documentation for the optlang project and note that model.solver is simply an optlang object of class Model.
|
<ASSISTANT_TASK:>
Python Code:
import cobra.test
model = cobra.test.create_test_model('textbook')
model.solver = 'glpk'
# or if you have cplex installed
model.solver = 'cplex'
type(model.solver)
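# Hedged sketch (added): optlang exposes solver options through a `configuration`
# object; the attribute names below (verbosity, timeout) are assumptions based on
# optlang's interface and may differ between solvers/versions.
model.solver.configuration.verbosity = 1
model.solver.configuration.timeout = 60  # seconds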
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we'll load the text file and convert it into integers for our network to use.
Step3: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Step4: I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size x num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window to the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
Step5: Hyperparameters
Step6: Write out the graph for TensorBoard
Step7: Training
Step8: Sampling
|
<ASSISTANT_TASK:>
Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
chars[:100]
def split_data(chars, batch_size, num_steps, split_frac=0.9):
    """Split character data into training and validation sets, inputs and targets for each set.
    Arguments
    ---------
    chars: character array
    batch_size: Number of sequences in each batch
    num_steps: Number of sequence steps to keep in the input and pass to the network
    split_frac: Fraction of batches to keep in the training set
    Returns train_x, train_y, val_x, val_y
    """
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
# Split into training and validation sets, keep the virst split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
train_x.shape
train_x[:,:10]
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
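# Added illustration: peek at the first batch produced by get_batch; with the
# split above (batch_size=10, num_steps=200) both arrays have shape (10, 200).
first_x, first_y = next(get_batch([train_x, train_y], 200))
print(first_x.shape, first_y.shape)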
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
with tf.name_scope('inputs'):
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
with tf.name_scope('targets'):
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Build the RNN layers
with tf.name_scope("RNN_layers"):
#lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
#drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
#cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
lstm = tf.nn.rnn_cell.BasicLSTMCell(lstm_size)
drop = tf.nn.rnn_cell.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.nn.rnn_cell.MultiRNNCell([drop] * num_layers)
with tf.name_scope("RNN_init_state"):
initial_state = cell.zero_state(batch_size, tf.float32)
# Run the data through the RNN layers
with tf.name_scope("RNN_forward"):
rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)]
#outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state)
outputs, state = tf.nn.rnn(cell, rnn_inputs, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one row for each cell output
with tf.name_scope('sequence_reshape'):
#seq_output = tf.concat(outputs, axis=1,name='seq_output')
seq_output = tf.concat(1,outputs,name='seq_output')
output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
# Now connect the RNN putputs to a softmax layer and calculate the cost
with tf.name_scope('logits'):
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
name='softmax_w')
softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
logits = tf.matmul(output, softmax_w) + softmax_b
with tf.name_scope('predictions'):
preds = tf.nn.softmax(logits, name='predictions')
with tf.name_scope('cost'):
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
cost = tf.reduce_mean(loss, name='cost')
# Optimizer for training, using gradient clipping to control exploding gradients
with tf.name_scope('train'):
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
file_writer = tf.summary.FileWriter('./logs/3', sess.graph)
!mkdir -p checkpoints/anna
epochs = 10
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/anna20.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 0.5,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
tf.train.get_checkpoint_state('checkpoints/anna')
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
prime = "Far"
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Input DataLoader setup
Step2: 2. Preliminary setup (model, loss, optimizer)
Step3: 3. Training loop
Step4: 4. Predict & Evaluate
Step5: 5. Save model parameters
Step6: 6. Plot images that the model failed to predict correctly
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
from torchvision import datasets, transforms
from torch.autograd import Variable
import matplotlib.pyplot as plt
import numpy as np
is_cuda = torch.cuda.is_available() # True if CUDA is available
batch_size = 50
train_loader = torch.utils.data.DataLoader(
datasets.MNIST('data', train=True, download=True, transform=transforms.ToTensor()),
batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('data', train=False, transform=transforms.ToTensor()),
batch_size=1000)
class MnistModel(nn.Module):
def __init__(self):
super(MnistModel, self).__init__()
# input is 28x28
# padding=2 for same padding
self.conv1 = nn.Conv2d(1, 32, 5, padding=2)
# feature map size is 14*14 by pooling
# padding=2 for same padding
self.conv2 = nn.Conv2d(32, 64, 5, padding=2)
# feature map size is 7*7 by pooling
self.fc1 = nn.Linear(64*7*7, 1024)
self.fc2 = nn.Linear(1024, 10)
def forward(self, x):
x = F.max_pool2d(F.relu(self.conv1(x)), 2)
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = x.view(-1, 64*7*7) # reshape Variable
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.log_softmax(x)
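# Illustrative sanity check (not part of the original notebook): a dummy batch of
# two 1x28x28 inputs should yield log-probabilities of shape (2, 10).
_check_out = MnistModel()(Variable(torch.zeros(2, 1, 28, 28)))
print(_check_out.size())  # expected: torch.Size([2, 10])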
model = MnistModel()
if is_cuda : model.cuda()
loss_fn = nn.NLLLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)
# training
model.train()
train_loss = []
train_accu = []
for epoch in range(3):
for i, (image, target) in enumerate(train_loader):
if is_cuda : image, target = image.cuda(), target.cuda()
        image, target = Variable(image), Variable(target) # wrap the input image and target as Variables
        output = model(image) # forward pass through the model
        loss = loss_fn(output, target) # compute the loss
        optimizer.zero_grad() # reset accumulated gradients
        loss.backward() # backpropagate to compute gradients
        optimizer.step() # update the parameters
pred = output.data.max(1)[1]
accuracy = pred.eq(target.data).sum()/batch_size
train_loss.append(loss.data[0])
train_accu.append(accuracy)
if i % 300 == 0:
print(i, loss.data[0])
plt.plot(train_accu)
plt.plot(train_loss)
model.eval()
correct = 0
for image, target in test_loader:
if is_cuda : image, target = image.cuda(), target.cuda()
image, target = Variable(image, volatile=True), Variable(target)
output = model(image)
prediction = output.data.max(1)[1]
correct += prediction.eq(target.data).sum()
print('\nTest set: Accuracy: {:.2f}%'.format(100. * correct / len(test_loader.dataset)))
checkpoint_filename = 'mnist.ckpt'
torch.save(model.state_dict(), checkpoint_filename)
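# Illustrative counterpart to the save above (not an original cell): the saved
# state_dict can be loaded back into a freshly constructed model.
restored_model = MnistModel()
restored_model.load_state_dict(torch.load(checkpoint_filename))
restored_model.eval()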
model.eval()
image, target = next(iter(test_loader)) # fetch a single batch from test_loader
if is_cuda : image, target = image.cuda(), target.cuda()
image, target = Variable(image, volatile=True), Variable(target)
output = model(image)
## convert the images, ground-truth labels, and predictions to numpy arrays
images = image.data.cpu().numpy()
cls_true = target.data.cpu().numpy().squeeze()
prediction = output.data.max(1)[1].cpu().numpy().squeeze()
# find where the prediction differs from the ground truth
incorrect = (prediction != cls_true)
# keep only the misclassified samples
images = images[incorrect]
cls_true = cls_true[incorrect]
prediction = prediction[incorrect]
# print the error rate
print('error : {:.1%}, number ={:}'.format(incorrect.sum()/len(incorrect), incorrect.sum()))
# display the misclassified images
tensorImg = torch.Tensor(images)
plt.imshow(torchvision.utils.make_grid(tensorImg).numpy().transpose((1,2,0)))
plt.show()
# print the predicted labels for the misclassified images
print('prediction :')
pred_resized = np.pad(prediction, (0, 8 - len(prediction)%8), 'constant', constant_values=(0, 0))
print(pred_resized.reshape(-1,8))
print('\n')
# print the ground-truth labels for the misclassified images
print('True :')
true_resized = np.pad(cls_true, (0, 8 - len(cls_true)%8), 'constant', constant_values=(0, 0))
print(true_resized.reshape(-1,8))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can now import the deepchem package to play with.
Step2: Anatomy of a Dataset
Step3: We now have three Dataset objects
Step4: There's a lot of information there, so let's start at the beginning. It begins with the label "DiskDataset". Dataset is an abstract class. It has a few subclasses that correspond to different ways of storing data.
Step5: This is a very easy way to access data, but you should be very careful about using it. This requires the data for all samples to be loaded into memory at once. That's fine for small datasets like this one, but for large datasets it could easily take more memory than you have.
Step6: Most deep learning models can process a batch of multiple samples all at once. You can use iterbatches() to iterate over batches of samples.
Step7: iterbatches() has other features that are useful when training models. For example, iterbatches(batch_size=100, epochs=10, deterministic=False) will iterate over the complete dataset ten times, each time with the samples in a different random order.
Step8: Creating Datasets
Step9: Notice that we did not specify weights or IDs. These are optional, as is y for that matter. Only X is required. Since we left them out, it automatically built w and ids arrays for us, setting all weights to 1 and setting the IDs to integer indices.
Step10: What about creating a DiskDataset? If you have the data in NumPy arrays, you can call DiskDataset.from_numpy() to save it to disk. Since this is just a tutorial, we will save it to a temporary directory.
|
<ASSISTANT_TASK:>
Python Code:
!pip install --pre deepchem
import deepchem as dc
dc.__version__
tasks, datasets, transformers = dc.molnet.load_delaney(featurizer='GraphConv')
train_dataset, valid_dataset, test_dataset = datasets
print(test_dataset)
test_dataset.y
for X, y, w, id in test_dataset.itersamples():
print(y, id)
for X, y, w, ids in test_dataset.iterbatches(batch_size=50):
print(y.shape)
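# Illustrative variant based on the description above (not an original cell):
# iterbatches() can also loop over the whole dataset for several epochs in a
# shuffled (non-deterministic) order.
for X, y, w, ids in test_dataset.iterbatches(batch_size=50, epochs=2, deterministic=False):
    print(y.shape)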
test_dataset.to_dataframe()
import numpy as np
X = np.random.random((10, 5))
y = np.random.random((10, 2))
dataset = dc.data.NumpyDataset(X=X, y=y)
print(dataset)
dataset.to_dataframe()
import tempfile
with tempfile.TemporaryDirectory() as data_dir:
disk_dataset = dc.data.DiskDataset.from_numpy(X=X, y=y, data_dir=data_dir)
print(disk_dataset)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Type
Step7: 1.4. Elemental Stoichiometry
Step8: 1.5. Elemental Stoichiometry Details
Step9: 1.6. Prognostic Variables
Step10: 1.7. Diagnostic Variables
Step11: 1.8. Damping
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Step13: 2.2. Timestep If Not From Ocean
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Step15: 3.2. Timestep If Not From Ocean
Step16: 4. Key Properties --> Transport Scheme
Step17: 4.2. Scheme
Step18: 4.3. Use Different Scheme
Step19: 5. Key Properties --> Boundary Forcing
Step20: 5.2. River Input
Step21: 5.3. Sediments From Boundary Conditions
Step22: 5.4. Sediments From Explicit Model
Step23: 6. Key Properties --> Gas Exchange
Step24: 6.2. CO2 Exchange Type
Step25: 6.3. O2 Exchange Present
Step26: 6.4. O2 Exchange Type
Step27: 6.5. DMS Exchange Present
Step28: 6.6. DMS Exchange Type
Step29: 6.7. N2 Exchange Present
Step30: 6.8. N2 Exchange Type
Step31: 6.9. N2O Exchange Present
Step32: 6.10. N2O Exchange Type
Step33: 6.11. CFC11 Exchange Present
Step34: 6.12. CFC11 Exchange Type
Step35: 6.13. CFC12 Exchange Present
Step36: 6.14. CFC12 Exchange Type
Step37: 6.15. SF6 Exchange Present
Step38: 6.16. SF6 Exchange Type
Step39: 6.17. 13CO2 Exchange Present
Step40: 6.18. 13CO2 Exchange Type
Step41: 6.19. 14CO2 Exchange Present
Step42: 6.20. 14CO2 Exchange Type
Step43: 6.21. Other Gases
Step44: 7. Key Properties --> Carbon Chemistry
Step45: 7.2. PH Scale
Step46: 7.3. Constants If Not OMIP
Step47: 8. Tracers
Step48: 8.2. Sulfur Cycle Present
Step49: 8.3. Nutrients Present
Step50: 8.4. Nitrous Species If N
Step51: 8.5. Nitrous Processes If N
Step52: 9. Tracers --> Ecosystem
Step53: 9.2. Upper Trophic Levels Treatment
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Step55: 10.2. Pft
Step56: 10.3. Size Classes
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Step58: 11.2. Size Classes
Step59: 12. Tracers --> Disolved Organic Matter
Step60: 12.2. Lability
Step61: 13. Tracers --> Particules
Step62: 13.2. Types If Prognostic
Step63: 13.3. Size If Prognostic
Step64: 13.4. Size If Discrete
Step65: 13.5. Sinking Speed If Prognostic
Step66: 14. Tracers --> Dic Alkalinity
Step67: 14.2. Abiotic Carbon
Step68: 14.3. Alkalinity
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bcc', 'sandbox-2', 'ocnbgchem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the weight matrices from the training
Step2: Visualize the digit from one hot representation through the activity weight matrix to the image representation
Step3: Visualize the scaling of the image using the weight matrices from activity to activity
|
<ASSISTANT_TASK:>
Python Code:
import nengo
import numpy as np
import cPickle
import matplotlib.pyplot as plt
from matplotlib import pylab
import matplotlib.animation as animation
#Weight matrices generated by the neural network after training
#Maps the label vectors to the neuron activity of the ensemble
label_weights = cPickle.load(open("label_weights1000.p", "rb"))
#Maps the activity of the neurons to the visual representation of the image
activity_to_img_weights = cPickle.load(open("activity_to_img_weights_scale1000.p", "rb"))
#Maps the activity of the neurons of an image with the activity of the neurons of an image scaled
scale_up_weights = cPickle.load(open("scale_up_weights1000.p", "rb"))
scale_down_weights = cPickle.load(open("scale_down_weights1000.p", "rb"))
#Create the pointers for the numbers
temp = np.diag([1]*10)
ZERO = temp[0]
ONE = temp[1]
TWO = temp[2]
THREE= temp[3]
FOUR = temp[4]
FIVE = temp[5]
SIX = temp[6]
SEVEN =temp[7]
EIGHT= temp[8]
NINE = temp[9]
labels =[ZERO,ONE,TWO,THREE,FOUR,FIVE,SIX,SEVEN,EIGHT,NINE]
#Visualize the one hot representation
print(ZERO)
print(ONE)
#Change this to imagine different digits
imagine = ZERO
#Can also imagine combinations of numbers (ZERO + ONE)
#Label to activity
test_activity = np.dot(imagine,label_weights)
#Image decoded
test_output_img = np.dot(test_activity, activity_to_img_weights)
plt.imshow(test_output_img.reshape(28,28),cmap='gray')
plt.show()
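# Illustrative follow-up to the comment above (not an original cell): decoding a
# blended one-hot vector such as ZERO + ONE gives a superposition of both digits.
blend_activity = np.dot(ZERO + ONE, label_weights)
blend_img = np.dot(blend_activity, activity_to_img_weights)
plt.imshow(blend_img.reshape(28,28), cmap='gray')
plt.show()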
#Change this to visualize different digits
imagine = ZERO
#How long the animation should go for
frames=5
#Make a list of the activities of the scaled images and add the first frame
rot_seq = []
rot_seq.append(np.dot(imagine,label_weights)) #Map the label vector to the activity vector
test_output_img = np.dot(rot_seq[0], activity_to_img_weights) #Map the activity to the visual representation
#add the rest of the frames, using the previous frame to calculate the current frame
for i in range(1,frames):
rot_seq.append(np.dot(rot_seq[i-1],scale_down_weights)) #add the activity of the current image to the list
test_output_img = np.dot(rot_seq[i], activity_to_img_weights) #map the new activity to the visual image
for i in range(1,frames*2):
rot_seq.append(np.dot(rot_seq[frames+i-2],scale_up_weights)) #add the activity of the current image to the list
test_output_img = np.dot(rot_seq[i], activity_to_img_weights) #map the new activity to the visual image
#Animation of the scaling sequence
fig = plt.figure()
def updatefig(i):
image_vector = np.dot(rot_seq[i], activity_to_img_weights) #map the activity to the image representation
im = pylab.imshow(np.reshape(image_vector,(28,28), 'F').T, cmap=plt.get_cmap('Greys_r'),animated=True)
return im,
ani = animation.FuncAnimation(fig, updatefig, frames=len(rot_seq), interval=100, blit=True)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Notice that we have a lot of speeches that are relatively short. They probably aren't the best for clustering because of their brevity
|
<ASSISTANT_TASK:>
Python Code:
!curl -O http://www.cs.cornell.edu/home/llee/data/convote/convote_v1.1.tar.gz
!tar -zxvf convote_v1.1.tar.gz
import glob
import re
import pandas as pd

paths = glob.glob("convote_v1.1/data_stage_one/development_set/*")
speeches = []
for path in paths:
speech = {}
filename = path[-26:]
speech['filename'] = filename
speech['bill_no'] = filename[:3]
speech['speaker_no'] = filename[4:10]
speech['bill_vote'] = filename[-5]
speech['party'] = filename[-7]
# Open the file
speech_file = open(path, 'r')
# Read the stuff out of it
speech['contents'] = speech_file.read()
cleaned_contents = re.sub(r"[^ \w]",'', speech['contents'])
cleaned_contents = re.sub(r" +",' ', cleaned_contents)
cleaned_contents = cleaned_contents.strip()
words = cleaned_contents.split(' ')
speech['word_count'] = len(words)
speeches.append(speech)
speeches[:5]
speeches_df = pd.DataFrame(speeches)
speeches_df.head()
speeches_df["word_count"].describe()
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(max_features=10000, stop_words='english')
longer_speeches = speeches_df[speeches_df["word_count"] > 92]
#filtering for word counts greater than 92 (our median length)
X = vectorizer.fit_transform(longer_speeches['contents'])
from sklearn.cluster import KMeans
number_of_clusters = 7
km = KMeans(n_clusters=number_of_clusters)
km.fit(X)
print("Top terms per cluster:")
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = vectorizer.get_feature_names()
for i in range(number_of_clusters):
print("Cluster %d:" % i),
for ind in order_centroids[i, :15]:
print(' %s' % terms[ind]),
print ''
additional_stopwords = ['mr','congress','chairman','madam','amendment','legislation','speaker']
import nltk
english_stopwords = nltk.corpus.stopwords.words('english')
new_stopwords = additional_stopwords + english_stopwords
vectorizer = TfidfVectorizer(max_features=10000, stop_words=new_stopwords)
longer_speeches = speeches_df[speeches_df["word_count"] > 92]
X = vectorizer.fit_transform(longer_speeches['contents'])
number_of_clusters = 7
km = KMeans(n_clusters=number_of_clusters)
km.fit(X)
print("Top terms per cluster:")
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = vectorizer.get_feature_names()
for i in range(number_of_clusters):
print("Cluster %d:" % i),
for ind in order_centroids[i, :15]:
print(' %s' % terms[ind]),
print ''
longer_speeches["k-means label"] = km.labels_
longer_speeches.head()
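# Quick illustrative check (not in the original notebook): how many of the longer
# speeches landed in each k-means cluster.
longer_speeches["k-means label"].value_counts()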
china_speeches = longer_speeches[longer_speeches["k-means label"] == 1]
china_speeches.head()
vectorizer = TfidfVectorizer(max_features=10000, stop_words=new_stopwords)
X = vectorizer.fit_transform(china_speeches['contents'])
number_of_clusters = 5
km = KMeans(n_clusters=number_of_clusters)
km.fit(X)
print("Top terms per cluster:")
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = vectorizer.get_feature_names()
for i in range(number_of_clusters):
print("Cluster %d:" % i),
for ind in order_centroids[i, :10]:
print(' %s' % terms[ind]),
print ''
km.get_params()
km.score(X)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's load the actual data. We're going to use astropy to do that
Step2: Since this is Chandra data, we know where all the relevant information is
Step3: Let's plot the raw spectrum
Step4: Ok, great! Now we can load the ARF and RMF
Step5: There's also an XSPECTRUM class that makes all of this much easier
Step6: spec now also contains the ARF and RMF and can be used for various things.
Step7: Spectral Fitting
Step8: Awesome! Let's make a LogPosterior object that we can now play around with
Step9: Because this is a simulated spectrum, we know the "ground truth" values. The photon index is $\Gamma = 2$, and the normalization is $N = 1$.
Step10: Ok, cool. Fitting can be performed using scipy.optimize.
Step11: Awesome, that works, too. Let's now stick this into scipy.optimize and look for a solution
Step12: Hooray, we can fit! That's the minimum result we were looking for!
Step13: That looks vaguely right!
Step14: Let's plot the spectra, so that we know what we're looking at
Step15: Ok, cool. We'll need to define a new posterior class. We still only have two parameters, but now we have four spectra to fit at the same time!
Step16: Awesome, now we can optimize this as well
Step17: ... and let's do MCMC!
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
try:
import seaborn as sns
except ImportError:
print("No seaborn installed. Oh well.")
import numpy as np
from scipy.special import gammaln as scipy_gammaln
import scipy.stats
import astropy.io.fits as fits
import astropy.modeling.models as models
from astropy.modeling.fitting import _fitter_to_model_params
from clarsach import respond
from clarsach.spectrum import XSpectrum
from clarsach.models.powerlaw import Powerlaw
phafile = "./fake_acis.pha"
pha_list = fits.open(phafile)
bin_lo = pha_list[1].data.field("BIN_LO")
bin_hi = pha_list[1].data.field("BIN_HI")
bin_mid = bin_lo + (bin_hi - bin_lo)/2.
channels = pha_list[1].data.field("CHANNEL")
counts = pha_list[1].data.field("COUNTS")
respfile = pha_list[1].header["RESPFILE"]
arffile = pha_list[1].header["ANCRFILE"]
plt.figure()
plt.figure(figsize=(10,7))
plt.plot(bin_mid, counts)
arf = respond.ARF(arffile)
rmf = respond.RMF(respfile)
spec = XSpectrum(phafile)
pl_model = Powerlaw(norm=1., phoindex=2.0)
y_truth = pl_model.calculate(rmf.energ_lo, rmf.energ_hi)
y_arf = y_truth * arf.specresp
y_rmf = rmf.apply_rmf(y_arf)
plt.figure()
plt.loglog(bin_mid, y_truth)
plt.figure()
plt.figure(figsize=(10,7))
plt.plot(bin_mid, counts)
plt.plot(bin_mid, y_rmf*1e5)
class PoissonPosterior(object):
def __init__(self, spec, model):
self.x_low = spec.rmf.energ_lo
self.x_high = spec.rmf.energ_hi
self.y = spec.counts
self.model = model
self.arf = spec.arf
self.rmf = spec.rmf
self.exposure = spec.exposure
def logprior(self, pars):
phoind = pars[0]
norm = pars[1]
p_ph = scipy.stats.uniform(1.0, 4.0).logpdf(phoind)
p_norm = scipy.stats.uniform(0, 10).logpdf(norm)
return p_ph + p_norm
def _calculate_model(self, pars):
self.model.phoindex = pars[0]
self.model.norm = pars[1]
# evaluate the model at the positions x
mean_model = self.model.calculate(self.x_low, self.x_high)
# run the ARF and RMF calculations
if self.arf is not None:
m_arf = self.arf.apply_arf(mean_model)
else:
m_arf = mean_model
if self.rmf is not None:
ymodel = self.rmf.apply_rmf(m_arf)
else:
ymodel = mean_model
ymodel *= self.exposure
ymodel += 1e-20
return ymodel
def loglikelihood(self, pars):
ymodel = self._calculate_model(pars)
# compute the log-likelihood
loglike = np.sum(-ymodel + self.y*np.log(ymodel) \
- scipy_gammaln(self.y + 1.))
if np.isfinite(loglike):
return loglike
else:
return -np.inf
def logposterior(self, pars):
return self.logprior(pars) + self.loglikelihood(pars)
def __call__(self, pars):
return self.logposterior(pars)
lpost = PoissonPosterior(spec, pl_model)
lpost([2,1])
lpost_neg = lambda pars : -lpost(pars)
lpost_neg([2, 1])
lpost_neg([1.5, 3])
res = scipy.optimize.minimize(lpost_neg, [2, 1], method="L-BFGS-B")
res.x
res
y_truth = lpost._calculate_model([2,1])
y_fit = lpost._calculate_model(res.x)
plt.figure()
plt.plot(spec.bin_lo, spec.counts, label="Data")
#plt.plot(spec.bin_lo, y_truth, label="ground truth")
plt.plot(spec.bin_lo, y_fit, label="best-fit model")
plt.legend()
import emcee
res.x
ndim = 2
nwalker = 500
burnin = 200
niter = 200
p0 = np.array([np.random.multivariate_normal(res.x,
np.diag([0.05, 0.05])) for i in range(nwalker)])
sampler = emcee.EnsembleSampler(nwalker, ndim, lpost, threads=4)
pos, prob, stat = sampler.run_mcmc(p0, burnin)
sampler.reset()
_, _, _ = sampler.run_mcmc(pos, niter, rstate0=stat)
fl = sampler.flatchain
for i in range(ndim):
plt.figure()
plt.plot(fl[:,i])
import corner
_ = corner.corner(fl, truths=[2,1])
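# Illustrative posterior summary (not an original cell): median and a rough 68%
# credible interval for each parameter from the flattened chain.
for name, samples in zip(["phoindex", "norm"], fl.T):
    lo, med, hi = np.percentile(samples, [16, 50, 84])
    print("%s: %.3f (+%.3f / -%.3f)" % (name, med, hi - med, med - lo))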
spec_heg_p1 = XSpectrum("./fake_heg_p1.pha")
spec_heg_m1 = XSpectrum("./fake_heg_m1.pha")
spec_meg_p1 = XSpectrum("./fake_meg_p1.pha")
spec_meg_m1 = XSpectrum("./fake_meg_m1.pha")
spec_meg_m1.rmf
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(10,10))
ax1.plot(spec_heg_p1.bin_lo, spec_heg_p1.counts)
ax1.set_title("HEG P1")
ax2.plot(spec_heg_m1.bin_lo, spec_heg_m1.counts)
ax2.set_title("HEG M1")
ax3.plot(spec_meg_p1.bin_lo, spec_meg_p1.counts)
ax3.set_title("MEG P1")
ax4.plot(spec_meg_m1.bin_lo, spec_meg_m1.counts)
ax4.set_title("MEG M1")
class PoissonPosterior(object):
def __init__(self, spec_hp1, spec_hm1, spec_mp1, spec_mm1, model):
self.spec_hp1 = spec_hp1
self.spec_hm1 = spec_hm1
self.spec_mp1 = spec_mp1
self.spec_mm1 = spec_mm1
self.model = model
def logprior(self, pars):
phoind = pars[0]
norm = pars[1]
inst_fac1 = pars[2]
inst_fac2 = pars[3]
inst_fac3 = pars[4]
p_ph = scipy.stats.uniform(1.0, 4.0).logpdf(phoind)
p_norm = scipy.stats.uniform(0, 10).logpdf(norm)
p_inst_fac1 = scipy.stats.norm(1, 0.5).logpdf(inst_fac1)
p_inst_fac2 = scipy.stats.norm(1, 0.5).logpdf(inst_fac2)
p_inst_fac3 = scipy.stats.norm(1, 0.5).logpdf(inst_fac3)
return p_ph + p_norm + p_inst_fac1 + p_inst_fac2 + p_inst_fac3
def _calculate_model(self, pars):
self.model.phoindex = pars[0]
self.model.norm = pars[1]
inst_fac1 = pars[2]
inst_fac2 = pars[3]
inst_fac3 = pars[4]
mean_model_hp1 = self.model.calculate(self.spec_hp1.rmf.energ_lo,
self.spec_hp1.rmf.energ_hi)
mean_model_hm1 = self.model.calculate(self.spec_hm1.rmf.energ_lo,
self.spec_hm1.rmf.energ_hi)
mean_model_mp1 = self.model.calculate(self.spec_mp1.rmf.energ_lo,
self.spec_mp1.rmf.energ_hi)
mean_model_mm1 = self.model.calculate(self.spec_mm1.rmf.energ_lo,
self.spec_mm1.rmf.energ_hi)
# run the ARF and RMF calculations
if self.spec_hp1.arf is not None:
m_arf_hp1 = self.spec_hp1.arf.apply_arf(mean_model_hp1)
m_arf_hm1 = self.spec_hm1.arf.apply_arf(mean_model_hm1 * inst_fac1)
m_arf_mp1 = self.spec_mp1.arf.apply_arf(mean_model_mp1 * inst_fac2)
m_arf_mm1 = self.spec_mm1.arf.apply_arf(mean_model_mm1 * inst_fac3)
else:
m_arf_hp1 = mean_model_hp1
m_arf_hm1 = mean_model_hm1
m_arf_mp1 = mean_model_mp1
m_arf_mm1 = mean_model_hp1
if self.spec_hp1.rmf is not None:
ymodel_hp1 = self.spec_hp1.rmf.apply_rmf(m_arf_hp1)
ymodel_hm1 = self.spec_hm1.rmf.apply_rmf(m_arf_hm1)
ymodel_mp1 = self.spec_mp1.rmf.apply_rmf(m_arf_mp1)
ymodel_mm1 = self.spec_mm1.rmf.apply_rmf(m_arf_mm1)
else:
ymodel_hp1 = mean_model_hp1
ymodel_hm1 = mean_model_hm1
ymodel_mp1 = mean_model_mp1
ymodel_mm1 = mean_model_mm1
ymodel_hp1 *= self.spec_hp1.exposure
ymodel_hm1 *= self.spec_hm1.exposure
ymodel_mp1 *= self.spec_mp1.exposure
ymodel_mm1 *= self.spec_mm1.exposure
ymodel_hp1 += 1e-20
ymodel_hm1 += 1e-20
ymodel_mp1 += 1e-20
ymodel_mm1 += 1e-20
return ymodel_hp1, ymodel_hm1, ymodel_mp1, ymodel_mm1
def loglikelihood(self, pars):
ymodel_hp1, ymodel_hm1, ymodel_mp1, ymodel_mm1 = self._calculate_model(pars)
#ymodel_hp1, ymodel_hm1 = self._calculate_model(pars)
# compute the log-likelihood
loglike_hp1 = np.sum(-ymodel_hp1 + self.spec_hp1.counts*np.log(ymodel_hp1) \
- scipy_gammaln(self.spec_hp1.counts + 1.))
loglike_hm1 = np.sum(-ymodel_hm1 + self.spec_hm1.counts*np.log(ymodel_hm1) \
- scipy_gammaln(self.spec_hm1.counts + 1.))
loglike_mp1 = np.sum(-ymodel_mp1 + self.spec_mp1.counts*np.log(ymodel_mp1) \
- scipy_gammaln(self.spec_mp1.counts + 1.))
loglike_mm1 = np.sum(-ymodel_mm1 + self.spec_mm1.counts*np.log(ymodel_mm1) \
- scipy_gammaln(self.spec_mm1.counts + 1.))
if np.isfinite(loglike_hp1) and np.isfinite(loglike_hm1) and \
np.isfinite(loglike_mp1) and np.isfinite(loglike_mm1):
return loglike_hp1 + loglike_hm1 #+ loglike_mp1 + loglike_mm1
else:
return -np.inf
def logposterior(self, pars):
return self.logprior(pars) + self.loglikelihood(pars)
def __call__(self, pars):
return self.logposterior(pars)
lpost = PoissonPosterior(spec_heg_p1, spec_heg_m1, spec_meg_p1, spec_meg_m1, pl_model)
lpost([2,1, 1, 1, 1])
lpost_neg = lambda pars : -lpost(pars)
%timeit lpost([2, 1, 1, 1, 1])
res = scipy.optimize.minimize(lpost_neg, [2, 1, 1, 1, 1], method="L-BFGS-B")
res
sampler = emcee.EnsembleSampler(200, 5, lpost, threads=4)
p0 = np.array([np.random.multivariate_normal([2, 1, 1, 1, 1],
np.diag([0.1, 0.05, 0.05, 0.05, 0.05])) for i in range(200)])
pos, prob, stat = sampler.run_mcmc(p0, 200)
sampler.reset()
_, _, _ = sampler.run_mcmc(pos, 100, rstate0=stat)
fl[:,0]
fl = sampler.flatchain
for i in range(ndim):
plt.figure()
plt.plot(fl[fl[:,i] > 0,i])
a = np.random.uniform(size=6)
b = np.random.uniform(size=6)
np.cov(a, b)
_ = corner.corner(fl, truths=[2, 1, 1, 1, 1])
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(10,10))
ax1.plot(spec_heg_p1.bin_lo, spec_heg_p1.counts)
ax1.set_title("HEG P1")
ax2.plot(spec_heg_m1.bin_lo, spec_heg_m1.counts)
ax2.set_title("HEG M1")
#ax3.plot(spec_meg_p1.bin_lo, spec_meg_p1.counts)
#ax3.set_title("MEG P1")
#ax4.plot(spec_meg_m1.bin_lo, spec_meg_m1.counts)
#ax4.set_title("MEG M1")
idx = np.random.choice(np.arange(len(fl)), size=100)
for i in idx:
p = fl[i]
yfit_hp1, yfit_hm1 = lpost._calculate_model(p)
ax1.plot(spec_heg_p1.bin_lo, yfit_hp1, color=sns.color_palette()[1], alpha=0.4)
ax2.plot(spec_heg_m1.bin_lo, yfit_hm1, color=sns.color_palette()[1], alpha=0.4)
#ax3.plot(spec_meg_p1.bin_lo, yfit_mp1, color=sns.color_palette()[1], alpha=0.4)
#ax4.plot(spec_meg_m1.bin_lo, yfit_mm1, color=sns.color_palette()[1], alpha=0.4)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exploring the dataset
Step2: Based on the above exploratory commands, I believe that the following questions can be answered using the dataset
Step3: Now that we have a data frame of information about each genre and the corresponding mean rating, we will visualize the data using matplotlib
Step4: Reporting findings/analyses
Step5: Now that we have a movie year column, let us list the data types of the columns in the movies data frame.
Step6: movie_year is of float64 data type. We must convert the data type of the movie_year column to int64. Before we go ahead and do that, we must replace all NULL and infinite entries in the column with zero. If we do not perform this step, we will get the following error message.
Step7: In the above array, we see some incorrect year values. And the year values need to be sorted as well.
|
<ASSISTANT_TASK:>
Python Code:
# The first step is to import the dataset into a pandas dataframe.
import pandas as pd
#path = 'C:/Users/hrao/Documents/Personal/HK/Python/ml-20m/ml-20m/'
path = '/Users/Harish/Documents/HK_Work/Python/ml-20m/'
movies = pd.read_csv(path+'movies.csv')
movies.shape
tags = pd.read_csv(path+'tags.csv')
tags.shape
ratings = pd.read_csv(path+'ratings.csv')
ratings.shape
links = pd.read_csv(path+'links.csv')
links.shape
movies.head()
tags.head()
ratings.head()
links.head()
# List of genres as a Python list
genres = ['Action','Adventure','Animation','Children','Comedy','Crime','Documentary','Drama','Fantasy','Film-Noir','Horror','Musical','Mystery','Romance','Sci-Fi','Thriller','War','Western']
genres_rating_list = []
# The loop reads each element of the above list
# For each iteration, one genre is selected from the movies data frame
# This selection of the data frame is then merged with the rating data frame to get the rating for that genre
# Once the new merged data frame is created, we use the mean function to get the mean rating for the genre
# The genre and the corresponding mean rating are then appended to the genres_rating Data Frame
# The entire looping takes long - can certainly be optimized for performance
for i in range(len(genres)):
fil = genres[i]+'_filter'
mov = genres[i]+'_movies'
rat = genres[i]+'_ratings'
rat_mean = rat+'_mean'
fil = movies['genres'].str.contains(genres[i])
mov = movies[fil]
rat = mov.merge(ratings, on='movieId', how='inner')
rat_mean = round(rat['rating'].mean(), 2)
#print(genres[i], round(rat_mean,2))
genres_rating_list.append(rat_mean)
df = {'Genre':genres, 'Genres Mean Rating':genres_rating_list}
genres_rating = pd.DataFrame(df)
genres_rating
genres_rating['Genres Standard Deviation'] = genres_rating['Genres Mean Rating'].std()
genres_rating['Mean'] = genres_rating['Genres Mean Rating'].mean()
genres_rating['Zero'] = 0
genres_rating
overall_mean = round(genres_rating['Genres Mean Rating'].mean(), 2)
overall_std = round(genres_rating['Genres Mean Rating'].std(),2)
scifi_rating = genres_rating[genres_rating['Genre'] == 'Sci-Fi']['Genres Mean Rating']
print(overall_mean)
print(overall_std)
print(scifi_rating)
genres_rating['Diff from Mean'] = genres_rating['Genres Mean Rating'] - overall_mean
genres_rating
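# Illustrative check (not an original cell): genres ordered from highest to lowest
# mean rating.
genres_rating.sort_values('Genres Mean Rating', ascending=False)[['Genre', 'Genres Mean Rating']]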
genre_list = list(genres_rating['Genre'])
genres_rating_list = list(genres_rating['Genres Mean Rating'])
genres_diff_list = list(genres_rating['Diff from Mean'])
%matplotlib inline
import matplotlib.pyplot as plt
plt.figure(figsize=(20, 10))
ax1 = plt.subplot(2,1,1)
x = [x for x in range(0, 18)]
xticks_genre_list = genre_list
y = genres_rating_list
plt.xticks(range(len(x)), xticks_genre_list)
plt.scatter(x,y, color='g')
plt.plot(x, genres_rating['Mean'], color="red")
plt.autoscale(tight=True)
#plt.rcParams["figure.figsize"] = (10,2)
plt.title('Movie ratings by genre')
plt.xlabel('Genre')
plt.ylabel('Rating')
plt.ylim(ymax = 4, ymin = 3)
plt.grid(True)
plt.savefig(r'movie-ratings-by-genre.png')
plt.annotate("Sci-Fi Rating",
xy=(14.25,3.5), xycoords='data',
xytext=(14.20, 3.7), textcoords='data',
arrowprops=dict(arrowstyle="->",
connectionstyle="arc3"),
)
for i,j in enumerate( y ):
ax1.annotate( j, ( x[i] + 0.03, y[i] + 0.02))
ax2 = plt.subplot(2,1,2)
x = [x for x in range(0, 18)]
xticks_genre_list = genre_list
y = genres_rating['Diff from Mean']
plt.xticks(range(len(x)), xticks_genre_list)
plt.plot(x,y)
plt.plot(x, genres_rating['Zero'])
plt.autoscale(tight=True)
#plt.rcParams["figure.figsize"] = (10,2)
plt.title('Deviation of each genre\'s rating from the overall mean rating')
plt.xlabel('Genre')
plt.ylabel('Deviation from mean rating')
plt.grid(True)
plt.savefig(r'deviation-from-mean-rating.png')
plt.annotate("Sci-Fi Rating",
xy=(14,-0.13), xycoords='data',
xytext=(14.00, 0.0), textcoords='data',
arrowprops=dict(arrowstyle="->",
connectionstyle="arc3"),
)
plt.show()
# extract year of release of each movie from the title column
# convert the data type of the movie_year column to numeric (from str)
import numpy as np
import re
#movies['rel_year'] = movies.title.str[-5:-1]
#movies['rel_year'] = movies.title.apply(lambda x: x[-5:-1])
movies['movie_year'] = movies['title']
#movies['movie_year'] = movies['movie_year'].apply(lambda x: re.findall('\((.*?)\)',x))
movies['movie_year'] = movies['movie_year'].str.extract(r"\(([0-9]+)\)", expand=False)
# creating a new column with just the movie titles
movies['title_only'] = movies['title']
movies['title_only'] = movies['title_only'].str.extract('(.*?)\s*\(', expand=False)
#Drop all rows containing incorrect year values - such as 0, 6, 69, 500 and -2147483648
movies.drop(movies[movies.movie_year == '0'].index, inplace=True)
movies.drop(movies[movies.movie_year == '6'].index, inplace=True)
movies.drop(movies[movies.movie_year == '06'].index, inplace=True)
movies.drop(movies[movies.movie_year == '69'].index, inplace=True)
movies.drop(movies[movies.movie_year == '500'].index, inplace=True)
movies.drop(movies[movies.movie_year == '-2147483648'].index, inplace=True)
movies['movie_year'].fillna(0, inplace=True)
#convert the string values to numeric
movies['movie_year'] = pd.to_datetime(movies['movie_year'], format='%Y')
movies.dtypes
#movies['movie_year'].astype(np.int64)
#movies['movie_year'] = movies['movie_year'].astype(np.int64, inplace=True)
movies.dtypes
movies.describe()
movies.head()
movies_and_years = movies[['movie_year', 'title_only']]
movies_and_years.tail()
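# Illustrative, working version of the grouping sketched in the commented-out cells
# below (an assumption about the intended analysis): count of movies per release year.
movies_per_year = movies.groupby(movies['movie_year'].dt.year)['title_only'].count()
movies_per_year.tail()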
#a = pd.DataFrame(movies['title_only'].groupby(movies['movie_year']).count(), movies['movie_year'], movies['title_only'])
#plt.plot(a['title_only'], a['movie_year'])
#plt.show()
#movies_and_years
#movies_and_years['title_only'].groupby[movies_and_years['movie_year']]
#create a unique array of years from the newly created data frame column
#years_array_raw = movies['movie_year'].unique()
#years_array_raw = years_array_raw.astype(int)
#years_array_raw
# We see some elements of the array that are not year values - such as -2147483648 and 6
#Let us create a new numpy array to clean up the data and sort the list of unique years
# Numpy arrays are immutable. Delete operations on the array will create a new copy of the same array. So,
#years_array_clean = np.array([])
#for i in range(len(years_array_raw)):
# if years_array_raw[i] <= 2015 and years_array_raw[i] >= 1800:
# years_array_clean = np.append(years_array_clean, years_array_raw[i])
# else:
# print(years_array_raw[i], ' - Incorrect year value. Will not be appended to the new array')
#the incorrect year values are not present in the array anymore
#years_array_clean
#sorting the years in ascending order
#years_array_clean = np.sort(years_array_clean)
#Now, we have a unique, clean and sorted year list
#years_array_clean
# years and their respective movie counts
#movies_and_years = pd.DataFrame(movies['movie_year'],(movies['title_only'].groupby(movies['movie_year']).agg('count')))
#movies_and_years1 = pd.DataFrame(movies['movie_year'],(movies['title_only'].groupby(years_array_clean)))
# Movies and their corresponding genre combinations
# This is not the ideal grouping I want to arrive at - I want to analyze movies by individual genres, not groups of genres such as Comedy|Drama|Romance, for example.
# That will be the next step, possibly by creating new columns with flags for each genre
#movies_and_genres = pd.DataFrame(movies['genres'], (movies['title_only'].groupby(movies['genres']).agg('count')))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Webdriver
Step2: Download and set up the Webdriver
Step3: Visiting a page
Step4: Finding elements
Step5: Here we locate the corresponding element in three different ways: the first by id, the second with a CSS selector, and the third with an XPath selector; all three give the same result.
Step6: Finding multiple elements
Step7: Of course, the approaches above can also be written by importing the By module via from selenium.webdriver.common.by import By
Step8: From the run you can see that the program automatically opens the Chrome browser, opens Taobao, types "ipad", deletes it, types "MacBook pro" instead, and clicks search
Step9: An example
|
<ASSISTANT_TASK:>
Python Code:
!pip install selenium
from selenium import webdriver
help(webdriver)
#browser = webdriver.Firefox() # open the Firefox browser
browser = webdriver.Chrome() # open the Chrome browser
from selenium import webdriver
browser = webdriver.Chrome()
browser.get("http://music.163.com")
print(browser.page_source)
#browser.close()
from selenium import webdriver
browser = webdriver.Chrome()
browser.get("http://music.163.com")
input_first = browser.find_element_by_id("g_search")
input_second = browser.find_element_by_css_selector("#g_search")
input_third = browser.find_element_by_xpath('//*[@id="g_search"]')
print(input_first)
print(input_second)
print(input_third)
# The approach below is more general: remember to import the By module first
from selenium.webdriver.common.by import By
browser = webdriver.Chrome()
browser.get("http://music.163.com")
input_first = browser.find_element(By.ID,"g_search")
print(input_first)
browser.close()
browser = webdriver.Chrome()
browser.get("http://music.163.com")
lis = browser.find_elements_by_css_selector('body')
print(lis)
browser.close()
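# Illustrative equivalent (not an original cell): the plural find_elements can also
# be used with the By module imported above.
browser = webdriver.Chrome()
browser.get("http://music.163.com")
lis = browser.find_elements(By.CSS_SELECTOR, 'body')
print(lis)
browser.close()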
from selenium import webdriver
import time
browser = webdriver.Chrome()
browser.get("https://music.163.com/")
input_str = browser.find_element_by_id('srch')
input_str.send_keys("周杰伦")
time.sleep(3) # pause to mimic a human typing a search
input_str.clear()
input_str.send_keys("林俊杰")
from selenium import webdriver
browser = webdriver.Chrome()
browser.get("https://www.zhihu.com/explore/")
browser.execute_script('window.scrollTo(0, document.body.scrollHeight)')
browser.execute_script('alert("To Bottom")')
# url = "https://www.privco.com/private-company/329463"
def download_excel(url):
browser.get(url)
name = url.split('/')[-1]
title = browser.title
source = browser.page_source
with open(name+'.html', 'w') as f:
f.write(source)
try:
soup = BeautifulSoup(source, 'html.parser')
        url_new = soup.find('span', {'class': 'profile-name'}).a['href']
url_excel = url_new + '/export'
browser.get(url_excel)
except Exception as e:
print(url, 'no excel')
pass
urls = [ 'https://www.privco.com/private-company/1135789',
'https://www.privco.com/private-company/542756',
'https://www.privco.com/private-company/137908',
'https://www.privco.com/private-company/137138']
for k, url in enumerate(urls):
print(k)
try:
download_excel(url)
except Exception as e:
print(url, e)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Flower power
Step2: ConvNet Codes
Step3: Below I'm running images through the VGG network in batches.
Step4: Building the Classifier
Step5: Data prep
Step6: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
Step7: If you did it right, you should see these sizes for the training sets
Step9: Batches!
Step10: Training
Step11: Testing
Step12: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
|
<ASSISTANT_TASK:>
Python Code:
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
# Set the batch size higher if you can fit it in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
# TODO: Build the vgg network here
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
# Image batch to pass to VGG network
images = np.concatenate(batch)
# TODO: Get the values from the relu6 layer of the VGG network
feed_dict = {input_: images}
codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'w') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
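# Aside (not part of the original notebook): np.fromfile() above loses the array's shape and dtype,
# which is why the manual reshape is needed. A minimal alternative sketch that keeps both is the
# .npy format; the file name 'codes.npy' here is just an illustrative choice.
np.save('codes.npy', codes)
codes_check = np.load('codes.npy')
assert codes_check.shape == codes.shape and codes_check.dtype == codes.dtype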
from sklearn.preprocessing import LabelBinarizer
# Your one-hot encoded labels array here
lb = LabelBinarizer()
labels_vecs = lb.fit_transform(labels)  # keep the binarizer so lb.classes_ can label the prediction plot later
from sklearn.model_selection import StratifiedShuffleSplit
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
train_idx, val_idx = next(ss.split(codes, labels))
half_val_len = int(len(val_idx) / 2)
val_idx, test_idx = val_idx[:half_val_len], val_idx[half_val_len:]
train_x, train_y = codes[train_idx], labels_vecs[train_idx]
val_x, val_y = codes[val_idx], labels_vecs[val_idx]
test_x, test_y = codes[test_idx], labels_vecs[test_idx]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
keep_prob = tf.placeholder(tf.float32)
# TODO: Classifier layers and operations
relu7 = tf.layers.dense(inputs_, codes.shape[1], activation=tf.nn.relu)
drop7 = tf.nn.dropout(relu7, keep_prob)
logits = tf.layers.dense(drop7, labels_vecs.shape[1]) # output layer logits
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels_)) # cross entropy loss
optimizer = tf.train.AdamOptimizer().minimize(cost) # training optimizer
# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
def get_batches(x, y, n_batches=10):
"""Return a generator that yields batches from arrays x and y."""
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
epochs = 10
iteration = 0
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# TODO: Your training code here
for epoch in range(epochs):
for x, y in get_batches(train_x, train_y):
train_dict = {inputs_: x, labels_: y, keep_prob: 0.5}
train_loss, train_acc, _ = sess.run([cost, accuracy, optimizer], feed_dict=train_dict)
print('Epoch {:2d}/{:2d} |'.format(epoch, epochs),
'Train loss {:.3f} |'.format(train_loss),
'Train acc {:.3f}'.format(train_acc)
)
if iteration % 5 == 0:
val_dict = {inputs_: val_x, labels_: val_y, keep_prob: 1.0}
val_loss, val_acc = sess.run([cost, accuracy], feed_dict=val_dict)
print('*** Epoch {:2d}/{:2d} |'.format(epoch, epochs),
'Val loss {:.3f} |'.format(val_loss),
'Val acc {:.3f} ***'.format(val_acc)
)
iteration += 1
saver.save(sess, "checkpoints/flowers.ckpt")
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
print('"vgg" object already exists. Will not create again.')
else:
#create vgg
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create sample data set
Step2: Winsorize
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd # pandas for handling mixed data sets
import numpy as np # numpy for basic math and matrix operations
from scipy.stats.mstats import winsorize # scipy for stats and more advanced calculations
scratch_df = pd.DataFrame({'x1': pd.Series(np.random.choice(1000, 20))})
scratch_df
scratch_df['x1_winsor'] = winsorize(scratch_df['x1'], limits=[0.1, 0.1])
scratch_df
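# Aside (not part of the original notebook): a minimal sketch of the same 10%/10% winsorization done
# by hand with percentiles and clipping, as a cross-check of scipy's winsorize. Results can differ
# slightly at the exact cut points because the two routines treat the limits a little differently.
lower, upper = np.percentile(scratch_df['x1'], [10, 90])
scratch_df['x1_winsor_manual'] = scratch_df['x1'].clip(lower=lower, upper=upper)
scratch_df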
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load raw data
Step2: Train and Evaluate input Functions
Step3: Feature columns for Wide and Deep model
Step4: We also add our engineered features that we used previously.
Step5: Gather list of feature columns
Step6: Serving Input Receiver function
Step7: Train and Evaluate (500 train steps)
Step8: Results
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import numpy as np
import shutil
print(tf.__version__)
!gsutil cp gs://cloud-training-demos/taxifare/small/*.csv .
!ls -l *.csv
CSV_COLUMN_NAMES = ["fare_amount","dayofweek","hourofday","pickuplon","pickuplat","dropofflon","dropofflat"]
CSV_DEFAULTS = [[0.0],[1],[0],[-74.0],[40.0],[-74.0],[40.7]]
def read_dataset(csv_path):
def _parse_row(row):
# Decode the CSV row into list of TF tensors
fields = tf.decode_csv(records = row, record_defaults = CSV_DEFAULTS)
# Pack the result into a dictionary
features = dict(zip(CSV_COLUMN_NAMES, fields))
# NEW: Add engineered features
features = add_engineered_features(features)
# Separate the label from the features
label = features.pop("fare_amount") # remove label from features and store
return features, label
# Create a dataset containing the text lines.
dataset = tf.data.Dataset.list_files(file_pattern = csv_path) # (i.e. data_file_*.csv)
dataset = dataset.flat_map(map_func = lambda filename:tf.data.TextLineDataset(filenames = filename).skip(count = 1))
# Parse each CSV row into correct (features,label) format for Estimator API
dataset = dataset.map(map_func = _parse_row)
return dataset
def train_input_fn(csv_path, batch_size = 128):
#1. Convert CSV into tf.data.Dataset with (features,label) format
dataset = read_dataset(csv_path)
#2. Shuffle, repeat, and batch the examples.
dataset = dataset.shuffle(buffer_size = 1000).repeat(count = None).batch(batch_size = batch_size)
return dataset
def eval_input_fn(csv_path, batch_size = 128):
#1. Convert CSV into tf.data.Dataset with (features,label) format
dataset = read_dataset(csv_path)
#2.Batch the examples.
dataset = dataset.batch(batch_size = batch_size)
return dataset
# 1. One hot encode dayofweek and hourofday
fc_dayofweek = tf.feature_column.categorical_column_with_identity(key = "dayofweek", num_buckets = 7)
fc_hourofday = tf.feature_column.categorical_column_with_identity(key = "hourofday", num_buckets = 24)
# 2. Bucketize latitudes and longitudes
NBUCKETS = 16
latbuckets = np.linspace(start = 38.0, stop = 42.0, num = NBUCKETS).tolist()
lonbuckets = np.linspace(start = -76.0, stop = -72.0, num = NBUCKETS).tolist()
fc_bucketized_plat = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "pickuplon"), boundaries = lonbuckets)
fc_bucketized_plon = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "pickuplat"), boundaries = latbuckets)
fc_bucketized_dlat = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "dropofflon"), boundaries = lonbuckets)
fc_bucketized_dlon = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "dropofflat"), boundaries = latbuckets)
# 3. Cross features to get combination of day and hour
fc_crossed_day_hr = tf.feature_column.crossed_column(keys = [fc_dayofweek, fc_hourofday], hash_bucket_size = 24 * 7)
fc_crossed_dloc = tf.feature_column.crossed_column(keys = [fc_bucketized_dlat, fc_bucketized_dlon], hash_bucket_size = NBUCKETS * NBUCKETS)
fc_crossed_ploc = tf.feature_column.crossed_column(keys = [fc_bucketized_plat, fc_bucketized_plon], hash_bucket_size = NBUCKETS * NBUCKETS)
fc_crossed_pd_pair = tf.feature_column.crossed_column(keys = [fc_crossed_dloc, fc_crossed_ploc], hash_bucket_size = NBUCKETS**4)
def add_engineered_features(features):
features["dayofweek"] = features["dayofweek"] - 1 # subtract one since our days of week are 1-7 instead of 0-6
features["latdiff"] = features["pickuplat"] - features["dropofflat"] # East/West
features["londiff"] = features["pickuplon"] - features["dropofflon"] # North/South
features["euclidean_dist"] = tf.sqrt(x = features["latdiff"]**2 + features["londiff"]**2)
return features
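# Aside (not part of the original notebook): a small NumPy sketch of the same engineered features for
# one hypothetical trip, just to make latdiff/londiff/euclidean_dist concrete. The coordinates below
# are made-up values, not taken from the dataset.
_plat, _plon, _dlat, _dlon = 40.70, -74.00, 40.80, -73.90
_latdiff, _londiff = _plat - _dlat, _plon - _dlon
print("latdiff=%.2f londiff=%.2f euclidean_dist=%.4f (degrees)" % (_latdiff, _londiff, np.sqrt(_latdiff**2 + _londiff**2)))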
def get_wide_deep():
# Wide columns are sparse, have linear relationship with the output
wide_columns = [
# Feature crosses
fc_crossed_day_hr, fc_crossed_dloc,
fc_crossed_ploc, fc_crossed_pd_pair,
# Sparse columns
fc_dayofweek, fc_hourofday
]
# Continuous columns are deep, have a complex relationship with the output
deep_columns = [
# Embedding_column to "group" together ...
tf.feature_column.embedding_column(categorical_column = fc_crossed_pd_pair, dimension = 10),
tf.feature_column.embedding_column(categorical_column = fc_crossed_day_hr, dimension = 10),
# Numeric columns
tf.feature_column.numeric_column(key = "pickuplat"),
tf.feature_column.numeric_column(key = "pickuplon"),
tf.feature_column.numeric_column(key = "dropofflon"),
tf.feature_column.numeric_column(key = "dropofflat"),
tf.feature_column.numeric_column(key = "latdiff"),
tf.feature_column.numeric_column(key = "londiff"),
tf.feature_column.numeric_column(key = "euclidean_dist"),
tf.feature_column.indicator_column(categorical_column = fc_crossed_day_hr),
]
return wide_columns, deep_columns
def serving_input_receiver_fn():
receiver_tensors = {
'dayofweek' : tf.placeholder(dtype = tf.int32, shape = [None]), # shape is vector to allow batch of requests
'hourofday' : tf.placeholder(dtype = tf.int32, shape = [None]),
'pickuplon' : tf.placeholder(dtype = tf.float32, shape = [None]),
'pickuplat' : tf.placeholder(dtype = tf.float32, shape = [None]),
'dropofflat' : tf.placeholder(dtype = tf.float32, shape = [None]),
'dropofflon' : tf.placeholder(dtype = tf.float32, shape = [None]),
}
features = add_engineered_features(receiver_tensors) # 'features' is what is passed on to the model
return tf.estimator.export.ServingInputReceiver(features = features, receiver_tensors = receiver_tensors)
%%time
OUTDIR = "taxi_trained_wd/500"
shutil.rmtree(path = OUTDIR, ignore_errors = True) # start fresh each time
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
tf.logging.set_verbosity(v = tf.logging.INFO) # so loss is printed during training
# Collect the wide and deep columns from above
wide_columns, deep_columns = get_wide_deep()
model = tf.estimator.DNNLinearCombinedRegressor(
model_dir = OUTDIR,
linear_feature_columns = wide_columns,
dnn_feature_columns = deep_columns,
dnn_hidden_units = [10,10], # specify neural architecture
config = tf.estimator.RunConfig(
tf_random_seed = 1, # for reproducibility
save_checkpoints_steps = 100 # checkpoint every N steps
)
)
# Add custom evaluation metric
def my_rmse(labels, predictions):
pred_values = tf.squeeze(input = predictions["predictions"], axis = -1)
return {"rmse": tf.metrics.root_mean_squared_error(labels = labels, predictions = pred_values)}
model = tf.contrib.estimator.add_metrics(estimator = model, metric_fn = my_rmse)
train_spec = tf.estimator.TrainSpec(
input_fn = lambda: train_input_fn("./taxi-train.csv"),
max_steps = 500)
exporter = tf.estimator.FinalExporter(name = "exporter", serving_input_receiver_fn = serving_input_receiver_fn) # export SavedModel once at the end of training
# Note: alternatively use tf.estimator.BestExporter to export at every checkpoint that has lower loss than the previous checkpoint
eval_spec = tf.estimator.EvalSpec(
input_fn = lambda: eval_input_fn("./taxi-valid.csv"),
steps = None,
start_delay_secs = 1, # wait at least N seconds before first evaluation (default 120)
throttle_secs = 1, # wait at least N seconds before each subsequent evaluation (default 600)
exporters = exporter) # export SavedModel once at the end of training
tf.estimator.train_and_evaluate(estimator = model, train_spec = train_spec, eval_spec = eval_spec)
%%time
OUTDIR = "taxi_trained_wd/5000"
shutil.rmtree(path = OUTDIR, ignore_errors = True) # start fresh each time
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
tf.logging.set_verbosity(v = tf.logging.INFO) # so loss is printed during training
# Collect the wide and deep columns from above
wide_columns, deep_columns = get_wide_deep()
model = tf.estimator.DNNLinearCombinedRegressor(
model_dir = OUTDIR,
linear_feature_columns = wide_columns,
dnn_feature_columns = deep_columns,
dnn_hidden_units = [10,10], # specify neural architecture
config = tf.estimator.RunConfig(
tf_random_seed = 1, # for reproducibility
save_checkpoints_steps = 100 # checkpoint every N steps
)
)
# Add custom evaluation metric
def my_rmse(labels, predictions):
pred_values = tf.squeeze(input = predictions["predictions"], axis = -1)
return {"rmse": tf.metrics.root_mean_squared_error(labels = labels, predictions = pred_values)}
model = tf.contrib.estimator.add_metrics(estimator = model, metric_fn = my_rmse)
train_spec = tf.estimator.TrainSpec(
input_fn = lambda: train_input_fn("./taxi-train.csv"),
max_steps = 5000)
exporter = tf.estimator.FinalExporter(name = "exporter", serving_input_receiver_fn = serving_input_receiver_fn) # export SavedModel once at the end of training
# Note: alternatively use tf.estimator.BestExporter to export at every checkpoint that has lower loss than the previous checkpoint
eval_spec = tf.estimator.EvalSpec(
input_fn = lambda: eval_input_fn("./taxi-valid.csv"),
steps = None,
start_delay_secs = 1, # wait at least N seconds before first evaluation (default 120)
throttle_secs = 1, # wait at least N seconds before each subsequent evaluation (default 600)
exporters = exporter) # export SavedModel once at the end of training
tf.estimator.train_and_evaluate(estimator = model, train_spec = train_spec, eval_spec = eval_spec)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Cf. jp-quadratic.html
Step2: Simplify
Step3: Then quadratic becomes
Step4: Define a "native" pm function.
Step5: The resulting trace is short enough to fit on a page.
|
<ASSISTANT_TASK:>
Python Code:
from notebook_preamble import J, V, define
define('quadratic == over [[[neg] dupdip sqr 4] dipd * * - sqrt [+] [-] cleave] dip 2 * [truediv] cons app2 roll< pop')
J('3 1 1 quadratic')
define('pm == [+] [-] cleave popdd')
define('quadratic == over [[[neg] dupdip sqr 4] dipd * * - sqrt pm] dip 2 * [truediv] cons app2')
J('3 1 1 quadratic')
from joy.library import SimpleFunctionWrapper
from notebook_preamble import D
@SimpleFunctionWrapper
def pm(stack):
a, (b, stack) = stack
p, m, = b + a, b - a
return m, (p, stack)
D['pm'] = pm
V('3 1 1 quadratic')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set some colors and linestyles.
Step2: Hardcode the loop length. In this paper, we only use a single loop half-length of $L=40$ Mm.
Step3: First, we'll load in the NEI results.
Step4: Now, build the temperature profiles.
Step5: Now, plot the emission measure distributions with their NEI counterparts.
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
import pickle
import numpy as np
import seaborn.apionly as sns
import matplotlib.pyplot as plt
from matplotlib import ticker
sys.path.append(os.path.join(os.environ['EXP_DIR'],'EBTEL_analysis/src'))
import em_binner as emb
%matplotlib inline
plt.rcParams.update({'figure.figsize' : [16,5]})
colors = {'20':sns.color_palette('deep')[0],'500':sns.color_palette('deep')[3]}
linestyles = {'single':'solid','electron':'dashed','ion':'-.'}
loop_length = 40.e+8
nei_results = {'single':{},'electron':{},'ion':{}}
file_template = "../results/tau%d.%s.sol.txt"
species = ['single','electron','ion']
tau = [20,500]
for s in species:
for t in tau:
_tmp_ = np.loadtxt(file_template%(t,s))
nei_results[s]['tau%d'%t] = {'t':_tmp_[:,0],'T':_tmp_[:,1],'Teff':_tmp_[:,2],'n':_tmp_[:,3]}
fig,ax = plt.subplots(1,2,sharey=True)
plt.subplots_adjust(hspace=0.0,wspace=0.0)
for t in tau:
for s in species:
ax[tau.index(t)].plot(nei_results[s]['tau%d'%t]['t'],nei_results[s]['tau%d'%t]['T']/1.e+6,
color=colors[str(t)],linestyle=linestyles[s],label=r'$\mathrm{%s}$'%s)
ax[tau.index(t)].plot(nei_results[s]['tau%d'%t]['t'],nei_results[s]['tau%d'%t]['Teff']/1.e+6,
color=sns.color_palette('bright')[2],linestyle=linestyles[s])
#scale
ax[0].set_xscale('log')
ax[1].set_xscale('log')
#limits
ax[0].set_xlim([0.5,5000])
ax[1].set_xlim([0.5,5000])
ax[0].set_ylim([0,25])
ax[1].set_ylim([0,25])
#tick labels
ax[0].yaxis.set_major_locator(ticker.MaxNLocator(nbins=6,prune='lower'))
ax[1].yaxis.set_major_locator(ticker.MaxNLocator(nbins=6))
#axes labels
ax[0].set_ylabel(r'$T$ $\mathrm{(MK)}$')
global_xlab = fig.text(0.5, 0.015, r'$t$ $\mathrm{(s)}$', ha='center', va='center',fontsize=22)
#legend
ax[0].legend(loc='best')
plt.savefig(__dest__[0],bbox_extra_artists=[global_xlab], bbox_inches='tight')
plt.show()
fig,ax = plt.subplots(1,2,sharey=True)
plt.subplots_adjust(wspace=0.0)
for t in tau:
for s in species:
#IEQ
binner = emb.EM_Binner(2.*loop_length,time=nei_results[s]['tau%d'%t]['t'],
temp=nei_results[s]['tau%d'%t]['T'],density=nei_results[s]['tau%d'%t]['n'])
binner.build_em_dist()
hist,bin_edges = np.histogram(binner.T_em_flat,bins=binner.T_em_histo_bins,weights=np.array(binner.em_flat))
ax[tau.index(t)].plot((bin_edges[:-1]+bin_edges[1:])/2,hist/10,
color=colors[str(t)],linestyle=linestyles[s],label=r'$\mathrm{%s}$'%s)
#NEI
binner = emb.EM_Binner(2.*loop_length,time=nei_results[s]['tau%d'%t]['t'],
temp=nei_results[s]['tau%d'%t]['Teff'],density=nei_results[s]['tau%d'%t]['n'])
binner.build_em_dist()
hist,bin_edges = np.histogram(binner.T_em_flat,bins=binner.T_em_histo_bins,weights=np.array(binner.em_flat))
ax[tau.index(t)].plot((bin_edges[:-1]+bin_edges[1:])/2,hist/10,
color=sns.color_palette('bright')[2],linestyle=linestyles[s])
#scale
ax[0].set_yscale('log')
ax[1].set_yscale('log')
ax[0].set_xscale('log')
ax[1].set_xscale('log')
#limits
ax[0].set_xlim([10**6.5,10**7.5])
ax[0].set_ylim([1e+23,1e+28])
ax[1].set_xlim([10**6.5,10**7.5])
ax[1].set_ylim([1e+23,1e+28])
#labels
global_xlab = fig.text(0.5, 0.015, r'${T}\,\,\mathrm{(K)}$', ha='center', va='center',fontsize=22)
ax[0].set_ylabel(r'$\mathrm{EM}\,\,(\mathrm{cm}^{-5})$')
#legend
ax[0].legend(loc='best')
#save
plt.savefig(__dest__[1],bbox_extra_artists=[global_xlab], bbox_inches='tight')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Training
Step2: Model 2
Step3: Model 3
Step4: 3. Evaluate
Step5: 4. Final Result(Ensemble)
|
<ASSISTANT_TASK:>
Python Code:
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
# test data
test_images = mnist.test.images.reshape(10000, 28, 28, 1)
test_labels = mnist.test.labels[:]
augmentation_size = 440000
images = np.concatenate((mnist.train.images.reshape(55000, 28, 28, 1), mnist.validation.images.reshape(5000, 28, 28, 1)), axis=0)
labels = np.concatenate((mnist.train.labels, mnist.validation.labels), axis=0)
datagen_list = [
ImageDataGenerator(rotation_range=10),
ImageDataGenerator(rotation_range=20),
ImageDataGenerator(rotation_range=30),
ImageDataGenerator(width_shift_range=0.1),
ImageDataGenerator(width_shift_range=0.2),
ImageDataGenerator(width_shift_range=0.3),
]
for datagen in datagen_list:
datagen.fit(images)
for image, label in datagen.flow(images, labels, batch_size=augmentation_size, shuffle=True, seed=2017):
images = np.concatenate((images, image), axis=0)
labels = np.concatenate((labels, label), axis=0)
break
print('Train Data Set :', images.shape)
print('Test Data Set :', test_images.shape)
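# Aside (not part of the original notebook): the concatenation above keeps every augmented copy in
# memory at once. A lighter-weight alternative sketch is to augment on the fly with flow(); here we
# only build such an iterator and pull a single batch to show its shape, without training on it.
# (The full pattern would pass a generator like this to Keras' fit_generator, assuming the Keras
# version in use still provides it.)
_gen = ImageDataGenerator(rotation_range=10, width_shift_range=0.1)
_x_batch, _y_batch = next(_gen.flow(images, labels, batch_size=256, shuffle=True, seed=2017))
print('On-the-fly augmented batch :', _x_batch.shape, _y_batch.shape)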
model1 = Sequential([Convolution2D(filters=64, kernel_size=(3, 3), padding='same', activation='elu', input_shape=(28, 28, 1)),
Convolution2D(filters=128, kernel_size=(3, 3), padding='same', activation='elu'),
MaxPooling2D(pool_size=(2, 2)),
Dropout(0.5),
Convolution2D(filters=128, kernel_size=(3, 3), padding='same', activation='elu'),
Convolution2D(filters=128, kernel_size=(3, 3), padding='same', activation='elu'),
MaxPooling2D(pool_size=(2, 2)),
Dropout(0.5),
Convolution2D(filters=128, kernel_size=(3, 3), padding='same', activation='elu'),
MaxPooling2D(pool_size=(2, 2)),
Dropout(0.5),
Flatten(),
Dense(1024, activation='elu'),
Dropout(0.5),
Dense(1024, activation='elu'),
Dropout(0.5),
Dense(10, activation='softmax'),
])
model1.compile(optimizer=Adam(lr=0.0001), loss='categorical_crossentropy', metrics=['accuracy'])
model1.fit(images, labels, batch_size=256, epochs=20, shuffle=True, verbose=1, validation_data=(test_images, test_labels))
model_json = model1.to_json()
with open("model1.json", "w") as json_file:
json_file.write(model_json)
model1.save_weights("model1.h5")
print("Saved model to disk")
model2 = Sequential([Convolution2D(filters=64, kernel_size=(5, 5), padding='same', activation='elu', input_shape=(28, 28, 1)),
Convolution2D(filters=128, kernel_size=(5, 5), padding='same', activation='elu'),
MaxPooling2D(pool_size=(2, 2)),
Dropout(0.5),
Convolution2D(filters=128, kernel_size=(5, 5), padding='same', activation='elu'),
Convolution2D(filters=128, kernel_size=(5, 5), padding='same', activation='elu'),
MaxPooling2D(pool_size=(2, 2)),
Dropout(0.5),
Convolution2D(filters=128, kernel_size=(5, 5), padding='same', activation='elu'),
MaxPooling2D(pool_size=(2, 2)),
Dropout(0.5),
Flatten(),
Dense(1024, activation='elu'),
Dropout(0.5),
Dense(1024, activation='elu'),
Dropout(0.5),
Dense(10, activation='softmax'),
])
model2.compile(optimizer=Adam(lr=0.0001), loss='categorical_crossentropy', metrics=['accuracy'])
model2.fit(images, labels, batch_size=256, epochs=20, shuffle=True, verbose=1, validation_data=(test_images, test_labels))
model_json = model2.to_json()
with open("model2.json", "w") as json_file:
json_file.write(model_json)
model2.save_weights("model2.h5")
print("Saved model to disk")
model3 = Sequential([Convolution2D(filters=64, kernel_size=(7, 7), padding='same', activation='elu', input_shape=(28, 28, 1)),
Convolution2D(filters=128, kernel_size=(7, 7), padding='same', activation='elu'),
MaxPooling2D(pool_size=(2, 2)),
Dropout(0.5),
Convolution2D(filters=128, kernel_size=(7, 7), padding='same', activation='elu'),
Convolution2D(filters=128, kernel_size=(7, 7), padding='same', activation='elu'),
MaxPooling2D(pool_size=(2, 2)),
Dropout(0.5),
Convolution2D(filters=128, kernel_size=(7, 7), padding='same', activation='elu'),
MaxPooling2D(pool_size=(2, 2)),
Dropout(0.5),
Flatten(),
Dense(1024, activation='elu'),
Dropout(0.5),
Dense(1024, activation='elu'),
Dropout(0.5),
Dense(10, activation='softmax'),
])
model3.compile(optimizer=Adam(lr=0.0001), loss='categorical_crossentropy', metrics=['accuracy'])
model3.fit(images, labels, batch_size=256, epochs=20, shuffle=True, verbose=1, validation_data=(test_images, test_labels))
model_json = model3.to_json()
with open("model3.json", "w") as json_file:
json_file.write(model_json)
model3.save_weights("model3.h5")
print("Saved model to disk")
from keras.models import model_from_json
from tensorflow.examples.tutorials.mnist import input_data
from keras.optimizers import Adam
import numpy as np
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
# test data
test_images = mnist.test.images.reshape(10000, 28, 28, 1)
test_labels = mnist.test.labels[:]
# load json and create model
def model_open(name, test_images, test_labels):
json_file = open(name + '.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load weights into new model
loaded_model.load_weights(name + '.h5')
print("Loaded model from disk")
loaded_model.compile(optimizer=Adam(lr=0.0001), loss='categorical_crossentropy', metrics=['acc'])
prob = loaded_model.predict_proba(test_images)
acc = np.mean(np.equal(np.argmax(prob, axis=1), np.argmax(test_labels, axis=1)))
print('\nmodel : %s, test accuracy : %.4f\n' % (name, acc))
return prob, acc
model_1_prob, model_1_acc = model_open('model1', test_images, test_labels)
model_2_prob, model_2_acc = model_open('model2', test_images, test_labels)
model_3_prob, model_3_acc = model_open('model3', test_images, test_labels)
def model_ensemble(prob1, acc1, prob2, acc2, prob3, acc3):
prob_list = [prob1, prob2, prob3]
acc_list = [acc1, acc2, acc3]
idx_acc_list = {idx: acc for idx, acc in enumerate(acc_list)}
sorted_acc_list = [idx for idx, _ in sorted(idx_acc_list.items(), key=lambda value: (value[1], value[0]), reverse=True)]
final_prob = 0
for i in sorted_acc_list:
final_prob += prob_list[i] * (i+1)
final_score = np.mean(np.equal(np.argmax(final_prob, axis=1), np.argmax(test_labels, axis=1))) # Test
print('Final test accuracy : %.4f' % final_score)
model_ensemble(model_1_prob, model_1_acc, model_2_prob, model_2_acc, model_3_prob, model_3_acc)
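# Aside (not part of the original notebook): for comparison with the weighted ensemble above, a
# minimal unweighted average of the same three probability arrays, using only variables already
# defined in this notebook.
avg_prob = (model_1_prob + model_2_prob + model_3_prob) / 3.0
avg_acc = np.mean(np.equal(np.argmax(avg_prob, axis=1), np.argmax(test_labels, axis=1)))
print('Unweighted average ensemble accuracy : %.4f' % avg_acc)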
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Audio Data Preparation and Augmentation
Step2: Usage
Step3: In the above example, the Flac file brooklyn.flac is from a publicly accessible audio clip in google cloud.
Step4: The audio can be played through
Step5: It is more convenient to convert the tensor into float numbers and show the audio clip in a graph
Step6: Trim the noise
Step7: Fade In and Fade Out
Step8: Spectrogram
Step9: Additional transformations to different scales are also possible
Step10: SpecAugment
Step11: Time Masking
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install tensorflow-io
import tensorflow as tf
import tensorflow_io as tfio
audio = tfio.audio.AudioIOTensor('gs://cloud-samples-tests/speech/brooklyn.flac')
print(audio)
audio_slice = audio[100:]
# remove last dimension
audio_tensor = tf.squeeze(audio_slice, axis=[-1])
print(audio_tensor)
from IPython.display import Audio
Audio(audio_tensor.numpy(), rate=audio.rate.numpy())
import matplotlib.pyplot as plt
tensor = tf.cast(audio_tensor, tf.float32) / 32768.0
plt.figure()
plt.plot(tensor.numpy())
position = tfio.audio.trim(tensor, axis=0, epsilon=0.1)
print(position)
start = position[0]
stop = position[1]
print(start, stop)
processed = tensor[start:stop]
plt.figure()
plt.plot(processed.numpy())
fade = tfio.audio.fade(
processed, fade_in=1000, fade_out=2000, mode="logarithmic")
plt.figure()
plt.plot(fade.numpy())
# Convert to spectrogram
spectrogram = tfio.audio.spectrogram(
fade, nfft=512, window=512, stride=256)
plt.figure()
plt.imshow(tf.math.log(spectrogram).numpy())
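# Aside (not part of the original notebook): a minimal cross-check of the spectrogram using core
# TensorFlow's tf.signal.stft with the same window and stride. The tfio magnitude spectrogram and
# |STFT| should have the same number of frequency bins (nfft//2 + 1); frame counts may differ
# slightly depending on padding.
stft = tf.signal.stft(fade, frame_length=512, frame_step=256, fft_length=512)
print(spectrogram.shape, tf.abs(stft).shape)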
# Convert to mel-spectrogram
mel_spectrogram = tfio.audio.melscale(
spectrogram, rate=16000, mels=128, fmin=0, fmax=8000)
plt.figure()
plt.imshow(tf.math.log(mel_spectrogram).numpy())
# Convert to db scale mel-spectrogram
dbscale_mel_spectrogram = tfio.audio.dbscale(
mel_spectrogram, top_db=80)
plt.figure()
plt.imshow(dbscale_mel_spectrogram.numpy())
# Freq masking
freq_mask = tfio.audio.freq_mask(dbscale_mel_spectrogram, param=10)
plt.figure()
plt.imshow(freq_mask.numpy())
# Time masking
time_mask = tfio.audio.time_mask(dbscale_mel_spectrogram, param=10)
plt.figure()
plt.imshow(time_mask.numpy())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: 1. kNN
Step3: 1. kNN
Step4: 2. Application to the Iris Dataset
Step5: 2. Application to the Iris Dataset
Step6: 2. Application to the Iris Dataset
Step7: 2. Application to the Iris Dataset
Step8: 2. Application to the Iris Dataset
Step9: 2. Application to the Iris Dataset
Step10: 2. Application to the Iris Dataset
|
<ASSISTANT_TASK:>
Python Code:
def hamming(s1, s2):
# Not comparable: the strings have different lengths
if len(s1)!=len(s2):
print("Not comparable")
return None
h = 0
# Comparable: count the positions where the characters differ
for ch1, ch2 in zip(s1,s2):
if ch1!=ch2:
h+= 1
return h
print hamming("cara", "c")
print hamming("cara", "casa")
print hamming("cera", "cese")
import numpy as np
def knn_search(X, k, x):
"""find K nearest neighbours of the query point x among the data X"""
# Euclidean distance to every point
d = np.sqrt(((X - x[:,:k])**2).sum(axis=0))
# Sort by closeness
idx = np.argsort(d)
# Return the k closest
return idx[:k]
def knn(X,Y,k,x):
# Get the k nearest neighbours
k_closest = knn_search(X, k, x)
# Get their labels
Y_closest = Y[k_closest]
# Find the most frequent label
counts = np.bincount(Y_closest)
print counts
# Return the most frequent one (any of them, in case of a tie)
return np.argmax(counts)
import numpy as np
from matplotlib import pyplot as plt
X = np.random.rand(2,100) # random dataset
Y = np.array(np.random.rand(100)<0.2, dtype=int) # random dataset
x = np.random.rand(2,1) # query point
# performing the search
k = 20
neig_idx = knn_search(X, k, x)
y = knn(X, Y, k, x)
print "etiqueta=", y
# plotting the data and the input point
fig = plt.figure(figsize=(16,8))
plt.plot(X[0,:][Y==0],X[1,:][Y==0],'ob', ms=8)
plt.plot(X[0,:][Y==1],X[1,:][Y==1],'sr', ms=8)
plt.plot(x[0,0],x[1,0],'ok', ms=16)
# highlighting the neighbours
plt.plot(X[0,neig_idx], X[1,neig_idx], 'o', markerfacecolor='None', markersize=24, markeredgewidth=1)
plt.show()
import numpy as np
from sklearn import datasets
# Loading the data
iris = datasets.load_iris()
X = iris.data
Y = iris.target
print iris.target_names
print X.shape[0]
# Print data and labels
for x, y in zip(X,Y):
print x, y
import numpy as np
from sklearn import datasets
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix
# Meta parameter
k = 150
# Loading the data
iris = datasets.load_iris()
names = iris.target_names
X = iris.data
Y = iris.target
# Fitting the model
kNN = KNeighborsClassifier(k)
kNN.fit(X,Y)
# No coefficients to print!
# Predicting values
Y_pred = kNN.predict(X)
# Count the errors
template = "{0} errores de clasificación de un total de {1}"
print template.format(sum(Y!=Y_pred), len(Y))
# Matriz de confusion
print confusion_matrix(Y, Y_pred)
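# Aside (not part of the original notebook): the overall accuracy can also be read directly off the
# confusion matrix, since the correct predictions sit on its diagonal.
cm = confusion_matrix(Y, Y_pred)
print "Accuracy from the confusion matrix: %.4f" % (np.trace(cm) / float(cm.sum()))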
import numpy as np
from sklearn import datasets
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cross_validation import train_test_split
from sklearn.metrics import confusion_matrix
# Meta parameter
k = 5
# Loading the data
iris = datasets.load_iris()
names = iris.target_names
X = iris.data
Y = np.array(iris.target, int)
# Holdout Set
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, train_size=0.6)
print X_train.shape, X_test.shape
# Fitting the model
kNN = KNeighborsClassifier(n_neighbors=k)
kNN.fit(X_train, Y_train)
# No coefficients to print!
# Predicting values
Y_test_pred = kNN.predict(X_test)
# Count the errors
n_errors = sum(Y_test!=Y_test_pred)
template = "{0} errores de clasificación de un total de {1}"
print template.format(n_errors, len(Y_test))
# Matriz de confusion
print confusion_matrix(Y_test, Y_test_pred)
from sklearn.cross_validation import train_test_split
from sklearn import datasets
import numpy as np
# Loading the data
iris = datasets.load_iris()
names = iris.target_names
X = iris.data
Y = np.array(iris.target, int)
# Splitting the data
X_train, X_aux, Y_train, Y_aux = train_test_split(X, Y, train_size=0.6)
X_valid, X_test, Y_valid, Y_test = train_test_split(X_aux, Y_aux, test_size=0.5)
print X_train.shape
print X_valid.shape
print X_test.shape
import numpy as np
from sklearn import datasets
from sklearn.neighbors import KNeighborsClassifier
# Loading the data
iris = datasets.load_iris()
names = iris.target_names
X = iris.data
Y = np.array(iris.target, int)
# Holdout Set
X_train, X_aux, Y_train, Y_aux = train_test_split(X, Y, train_size=0.6)
X_valid, X_test, Y_valid, Y_test = train_test_split(X_aux, Y_aux, test_size=0.5)
template = "k={0}: {1} errores de clasificación de un total de {2}"
# Fitting the model
for k in range(1,21):
kNN = KNeighborsClassifier(n_neighbors=k)
kNN.fit(X_train, Y_train)
# Predicting values
Y_test_pred = kNN.predict(X_test)
# Count the errors
n_errors = sum(Y_test!=Y_test_pred)
print template.format(k, n_errors, len(Y_test))
import numpy as np
from sklearn import datasets
from sklearn.neighbors import KNeighborsClassifier
# Loading the data
iris = datasets.load_iris()
names = iris.target_names
X = iris.data
Y = np.array(iris.target, int)
# Holdout Set
X_tv, X_test, Y_tv, Y_test = train_test_split(X, Y, train_size=0.8)
template = "k={0}: {1} errores de clasificación de un total de {2}"
# Fitting the model
mean_error_for_k = []
for k in range(1,21):
errors_k = []
for i in range(1000):
kNN = KNeighborsClassifier(n_neighbors=k)
X_train, X_valid, Y_train, Y_valid = train_test_split(X_tv, Y_tv, train_size=0.75)
kNN.fit(X_train, Y_train)
# Predicting values
Y_valid_pred = kNN.predict(X_valid)
# Count the errors
n_errors = sum(Y_valid!=Y_valid_pred)
# Add them to vector
errors_k.append(n_errors)
errors = np.array(errors_k).mean()
print template.format(k, errors, len(Y_valid))
mean_error_for_k.append(errors)
from matplotlib import pyplot as plt
plt.figure(figsize=(16,8))
plt.plot(range(1,21), mean_error_for_k, '-ok')
plt.xlabel("k")
plt.ylabel("Errores de clasificacion")
plt.show()
import numpy as np
from sklearn import datasets
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix
# Meta parameter
k = 5
# Loading the data
iris = datasets.load_iris()
names = iris.target_names
X = iris.data
Y = np.array(iris.target, int)
# Holdout Set
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, train_size=0.6)
print X_train.shape, X_test.shape
# Fitting the model
kNN = KNeighborsClassifier(n_neighbors=k)
kNN.fit(X_train, Y_train)
# No coefficients to print!
# Predicting values
Y_test_pred = kNN.predict(X_test)
# Count the errors
n_errors = sum(Y_test!=Y_test_pred)
print "{0} errores de clasificación de un total de {1}".format(n_errors, len(Y_test))
print n_errors/float(len(Y_test))
# Matriz de confusion
print confusion_matrix(Y_test, Y_test_pred)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implement a basic sub-string counter
Step2: Check if a string is ascii encoded
Step3: Solution
|
<ASSISTANT_TASK:>
Python Code:
s = 'ABCD'
for i in range(0, len(s)):
print (s[i])
string = 'ABCABDEABCF'
sub_string = 'ABC'
string[5:7]
def output_substring(string, sub_string):
for i in range(0, len(string)-len(sub_string)+1):
n = i
print (string[n:(n+len(sub_string))])
output_substring(string, sub_string)
def count_substring(string, sub_string):
count = 0
for i in range(0, len(string)-len(sub_string)+1):
if (string[i:(i+len(sub_string))]) == sub_string:
count+=1
return count
count_substring(string, sub_string)
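# Aside (not part of the original solution): a compact cross-check using a regular expression with a
# zero-width lookahead, which also counts overlapping occurrences just like the loop above
# (str.count() would miss overlaps such as 'AA' inside 'AAA').
import re
print (len(re.findall('(?={})'.format(re.escape(sub_string)), string)))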
try:
string.encode('ascii')
print ('This string is ascii ...')
except:
print ('This string is not ascii encoded ...')
def count_substring(string,
sub_string,
verbose = False,
min_string_len =1,
max_string_len=200):
ascii_encoding = False
try:
string.encode('ascii')
sub_string.encode('ascii')
ascii_encoding = True
if verbose:
print ('This string is ascii ...')
except:
if verbose:
print ('This string is not ascii encoded ...')
if len(string)>= min_string_len and len(string)<=max_string_len and ascii_encoding:
count = 0
for i in range(0, len(string)-len(sub_string)+1):
if (string[i:(i+len(sub_string))]) == sub_string:
count+=1
return count
count_substring(string, sub_string)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Write a Jython function for IDV and export as IDV Formula in GUI
Step2: The function above was created for use in this session only; it will not be available to IDV in the next session, so let us save it to the IDV Jython library.
Step3: Create an IDV formula; once created, it will appear in IDV's list of formulas. The arguments to saveFormula are (formulaid, description, functionastring, formula categories). The formula categories can be a list of categories or a single category given as a string.
Step4: Check that the formula was created in the IDV GUI. At any time, to show an IDV window from the notebook, use the function showIdv(). Currently some displays cannot be made when using the GUI from the notebook; this will be implemented in the future.
|
<ASSISTANT_TASK:>
Python Code:
def moistStaticEnergy(T,Q,GZ):
"""Calculates Moist Static Energy with Temperature, Specific Humidity and Geopotential Height."""
from ucar.visad.quantities import SpecificHeatCapacityOfDryAirAtConstantPressure,LatentHeatOfEvaporation
cp=SpecificHeatCapacityOfDryAirAtConstantPressure.newReal()
L=LatentHeatOfEvaporation.newReal()
return cp*T+L*Q+GZ  # use the GZ argument (geopotential height term), not an undefined Z
saveJython(moistStaticEnergy)
saveFormula("Moist Static Energy","Moist Static Energy from T, Q, GZ","moistStaticEnergy(T,Q,GZ)",["Grids","Grids-MapesCollection"])
showIdv()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Convert the monthly average flow rates to volumes using
Step2: Convert monthly runoff to annual runoff
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import calendar
from charistools.hypsometry import Hypsometry
from charistools.timeSeries import TimeSeries
import datetime as dt
import pandas as pd
import numpy as np
import os
import re
from time import strptime
%cd /Users/brodzik/projects/CHARIS/streamflow/From_Alice_Naryn
%ls
file = "Kyrygzstan_Naryn_flow_monthly_average_1933-2015.xlsx"
# Use sheetname if pandas < 0.21.0
# use sheet_name if later
# Returns dict of sheets in file
sheets = pd.read_excel(io=file, sheetname=None)
for key in sheets:
print("key=%s" % key)
# Set the source name from the original file (controls which sheet is extracted)
# and the CHARIS-format drainageID
sheetName = "Naryn town gauge (Naryn Darya)"
drainageID = "SY_Naryn_at_NarynTown"
df = sheets[sheetName]
df
description = df.columns[0]
description
geolocation = df.iloc[0][0]
geolocation
units = df.iloc[1][0]
units
description, geolocation, units
def dmsds_to_decdeg(deg, min, sec, decsec):
return float(deg) + float(min)/60. + (float(sec) + float(decsec)/100.)/60./60.
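# Aside (not part of the original notebook): quick check of the conversion with a made-up position,
# 41 deg 25 min 59.0 sec, which should come out near 41.433 decimal degrees.
print(dmsds_to_decdeg(41, 25, 59, 0))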
# Convert lat/lon positions to decimal degrees
p = re.compile(r'latitude:(.*)longitude:(.*)\(')
m = p.search(geolocation)
if m :
print(m.group(1))
print(m.group(2))
(deg, min, sec, decsec) = m.group(1).split()
latitude = dmsds_to_decdeg(deg, min, sec, decsec)
(deg, min, sec, decsec) = m.group(2).split()
longitude = dmsds_to_decdeg(deg, min, sec, decsec)
print("lat/lon: %s/%s" % (latitude, longitude))
else:
print("no match")
# Drop the first 2 data rows, they are just metadata we have already scraped
df = df.drop([0,1])
#sheets['Naryn town gauge (Naryn Darya)']
df.loc[2].values
# Move the contents of data row 2 into the column headers
df.columns = df.loc[2].values
df
# Now drop the data row 2
df = df.drop([2])
df
# Move the df index to the Year column
df = df.set_index('Year')
df
df['Jan'][1933]
# Write the result as a hypsometry for ti-melt modeling calibration steps, this needs to look like this:
# # Station ID: 35749301
# # Basin: Hunza at Dainyor Bridge
# # Latitude: 35.92778
# # Longitude: 74.37639
# # Elevation: 1370 m
# # Area: 13157 km**2
# # Units: km**3
# # COLUMNS: year month runoff
# 1966 1 0.15
# 1966 2 0.12
# 1966 3 0.10
outfile = "%s.month_runoff.dat" % drainageID
outfile
%pwd
comments=["Description: Derived runoff volume from monthly flow rates",
"Flow rate file: %s" % os.path.basename(file),
"Basin: %s" % drainageID,
"Latitude: %.6f" % latitude,
"Longitude: %.6f" % longitude,
"Elevation: unknown",
"Area: unknown",
"Units: km**3",
"COLUMNS: year month runoff"]
f = open(outfile, "w")
for line in comments:
f.write("# %s\n" % line)
m_per_km = 1000.
s_per_day = 60. * 60. * 24.
conversion_factor = s_per_day / (m_per_km * m_per_km * m_per_km)
for year in df.index:
for mmm in df.columns[1:13]:
month = strptime(mmm,'%b').tm_mon
days_in_month = calendar.monthrange(year, int(month))[1]
volume_km3 = df[mmm][year] * conversion_factor * days_in_month
print("%4d %d %d %f %f" % (year, month, days_in_month, df[mmm][year], volume_km3))
print("%4d %2d %.6f" % (year, month, volume_km3),
file=f)
f.close()
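# Aside (not part of the original notebook): sanity check of the unit conversion used above.
# conversion_factor = 86400 s/day / (1000 m/km)**3 = 8.64e-5, i.e. km**3 per day for each m**3/s of
# flow, so a constant 100 m**3/s over a 30-day month gives 100 * 8.64e-5 * 30 = 0.2592 km**3.
print("conversion_factor = %.2e ; 100 m3/s for 30 days -> %.4f km3" % (conversion_factor, 100 * conversion_factor * 30))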
%ls *month_runoff*
%pwd
ts = TimeSeries("../SY_Naryn_at_NarynTown.month_runoff.dat")
ts.comments
def convert_monthly_to_annual_runoff(monthFile):
monthTS = TimeSeries(monthFile)
# Add new description to beginning
monthTS.comments.insert(0, "# Monthly file: %s" % monthFile)
monthTS.comments.insert(0, "# Description: Annual runoff volume, created %s" % str(dt.datetime.now()))
# And replace Column descriptor line:
monthTS.comments = monthTS.comments[:-1]
monthTS.comments.extend(["# COLUMNS: year runoff"])
#print(monthTS.data)
monthTS.data["year"] = monthTS.data.index.year
annual = monthTS.data.groupby(["year"])["runoff"].sum()
# Make the new outfile by replacing "monthly" in monthFile
p = re.compile("month")
annualFile = p.sub("annual", monthFile)
f = open(annualFile, "w")
for line in monthTS.comments:
f.write("%s\n" % line)
for key, value in annual.iteritems():
print("%4d %.6f" % (key, value), file=f)
f.close()
%cd /Users/brodzik/projects/CHARIS/streamflow/From_Alice_Naryn
%ls
monthFile = "SY_Naryn_at_NarynTown.month_runoff.dat"
convert_monthly_to_annual_runoff(monthFile)
%pwd
annualFile = "SY_Naryn_at_NarynTown.annual_runoff.dat"
new = TimeSeries(annualFile)
print(new.data)
print(new.comments)
print(new.units)
%cat $annualFile
line = "# Units: km**3"
line.strip().lstrip('# Units: ')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'sandbox-3', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Given the RFAM id of a family, we retrieve it from the RFAM online database
Step2: Prepare a function that composes all desired pre-processing steps over the original data
Step3: Employ the vectorizer to transform the graph instances into sparse vectors and collect all the vectors in a data matrix
Step4: Set up the Annotator
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from eden.util import configure_logging
import logging
configure_logging(logging.getLogger(),verbosity=2)
def rfam_uri(family_id):
return '%s.fa'%(family_id)
def rfam_uri(family_id):
return 'http://rfam.xfam.org/family/%s/alignment?acc=%s&format=fastau&download=0'%(family_id,family_id)
rfam_id = 'RF00162' #SAM riboswitch (S box leader)
rfam_id = 'RF01725' #SAM-I/IV variant riboswitch
rfam_id = 'RF00005' #tRNA
rfam_id = 'RF02276' #hammerhead
def pre_process( data ):
from eden.converter.rna.rnafold import rnafold_to_eden
graphs = rnafold_to_eden( data )
return graphs
def model(uri, pre_process, size=None):
from eden.graph import Vectorizer
vectorizer = Vectorizer( complexity=2 )
from eden.converter.fasta import fasta_to_sequence
seqs = fasta_to_sequence(uri)
from itertools import tee, islice
if size is not None:
seqs = islice(seqs,size)
seqs, seqs_ = tee(seqs)
pos_graphs = pre_process( seqs )
from eden.modifier.seq import seq_to_seq, shuffle_modifier
seqs_neg = seq_to_seq( seqs_, modifier=shuffle_modifier, times=4, order=2 )
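# negative examples: shuffled copies of the positive sequences (times=4 gives four shuffles per input), folded into graphs below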
neg_graphs = pre_process( seqs_neg )
from eden.util import fit
estimator = fit(pos_graphs, neg_graphs, vectorizer, n_jobs=-1, cv=10, n_iter_search=20)
return estimator, vectorizer
%%time
estimator, vectorizer = model(rfam_uri( rfam_id ), pre_process, size=50)
from eden.util.display import draw_graph
import itertools
from eden.converter.fasta import fasta_to_sequence
seqs = fasta_to_sequence(rfam_uri( rfam_id ))
graphs = itertools.islice(pre_process(seqs),7,8)
graphs = vectorizer.annotate( graphs, estimator = estimator )
#parameters for visualization
opts={'size':9, 'node_border':False, 'node_size':200, 'font_size':9, 'vertex_alpha':0.6, 'title_key':'id'}
for i, graph in enumerate(graphs):
print i
#draw_graph(graph, vertex_color='importance', colormap='YlOrRd', **opts)
draw_graph(graph, vertex_color='importance', colormap='YlOrRd', file_name='rna_%d.pdf'%(i), **opts)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Types Matter
Step2: Switching Types
Step3: Inputs type str
Step4: We can use a built-in Python function to convert the type from str to our desired type
Step5: We typically combine the first two lines into one expression like this
Step6: 1.1 You Code
Step7: Format Codes
Step8: Formatting with F-Strings
Step9: 1.2 You Code
Step10: 1.3 You Code
Step11: Putting it all together
Step12: Metacognition
|
<ASSISTANT_TASK:>
Python Code:
a = "4"
type(a) # should be str
a = 4
type(a) # should be int
a = 4
b = 5
a + b # the plus + in this case means add, so 9
a = "4"
b = "5"
a + b # the plus + in this case means concatenation, so '45'
x = "45" # x is a str
y = int(x) # y is now an int
z = float(x) # z is a float
print(x,y,z)
age = input("Enter your age: ")
type(age)
age = input("Enter your age: ")
age = int(age)
type(age)
age = int(input("Enter your age: "))
type(age)
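# The same one-line pattern works for other types, for example:
gpa = float(input("Enter your GPA: "))
type(gpa)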
# TODO: Debug this code
age = input("Enter your age: ")
nextage = age + 1
print("Today you are age next year you will be {nextage}")
name = "Mike"
age = 45
gpa = 3.4
print("%s is %d years old. His gpa is %.3f" % (name, age,gpa))
name ="Mike"
wage = 15
print(f"{name} makes ${wage:.2f} per hour")
# TODO: Write code here
#TODO: Write Code Here
# TODO: Write your code here
# run this code to turn in your work!
from coursetools.submission import Submission
Submission().submit()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run code with other kernels
Step2: Load data
Step3: Plots
Step4: Widgets
Step5: For more information about ipywidgets, see the user manual [6].
|
<ASSISTANT_TASK:>
Python Code:
a = 1
b = 2.2
c = 3
d = 'a'
%who
def f1(n):
for x in range(n):
pass
%%time
f1(100)
%%timeit
f1(100)
%%bash
ls -lah
import pandas as pd
df = pd.read_csv('data/kaggle-titanic.csv')
df.head()
df.info()
df.describe()
from matplotlib import pyplot as plt
df.Survived.value_counts().plot(kind='bar')
plt.show()
import pixiedust
display(df)
import numpy as np
π = np.pi
def show_wave(A, f, φ):
ω = 2*π*f
t = np.linspace(0, 1, 10000)
f = A*np.sin(ω*t+φ)
plt.grid(True)
plt.plot(t, f)
plt.show()
show_wave(A=5, f=5, φ=2)
import ipywidgets as widgets
from IPython.display import display
params = dict(value=1, min=1, max=100, step=1, continuous_update=False)
wA = widgets.IntSlider(**params)
wf = widgets.IntSlider(**params)
wφ = widgets.IntSlider(value=0, min=0, max=10, step=1, continuous_update=False)
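# interact ties each slider to the matching show_wave argument and redraws the wave whenever a slider changes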
widgets.interact(show_wave, A=wA, f=wf, φ=wφ);
?str.replace()
help(str.replace)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exploring Your Data
Step3: Two Dimensions
Step5: Many Dimensions
Step9: Cleaning And Munging
Step10: Manipulating Data
Step11: Rescaling
Step12: If we use inches, B's closest neighbor is A
Step13: But if we use centimeters, the closest neighbor is C
Step16: When results are impacted by units of measure like in the above example, scaling can help. Rescaling projects values into a dimension with mean 0 and standard deviation 1. This effectively converts data into a common unit of "standard deviations from the mean."
Step17: Rescale our weights so that they do not differ by units
Step18: Now when we calculate distances, you can see that we end up with almost equivalent calculations for inches as we do centimeters
Step20: Dimensionality Reduction
Step21: With the de-meaned matrix, we can figure out which direction captures the most variance in the data
Step25: Now we can compute the variance of the data in the direction, w
Step26: The first principal component is the direction that maximizes the directional variance above.
Step29: We can use stochastic gradient descent here
Step30: Given a three-dimensional data set, it becomes effectively two-dimensional when we remove the first component.
Step31: For higher-dimensional data sets, iterate to find as many components as you like
Step32: Then transform the data into a lower-dimensional space
|
<ASSISTANT_TASK:>
Python Code:
def bucketize(point, bucket_size):
floor the point to the next lower multiple of bucket size
return bucket_size * math.floor(point / bucket_size)
def make_histogram(points, bucket_size):
return Counter(bucketize(point, bucket_size) for point in points)
def plot_histogram(points, bucket_size, title=''):
histogram = make_histogram(points, bucket_size)
plt.bar(list(histogram.keys()), histogram.values(), width=bucket_size, edgecolor='white')
plt.title(title)
plt.show()
import random
random.seed(0)
# uniform #'s will be between -100 and 100
uniform = [200 * random.random() - 100 for _ in range(10000)]
# normal #'s will be from the normal distribution with mean of 0 and standard deviation of 57
normal = [57 * inverse_normal_cdf(random.random()) for _ in range(10000)]
plot_histogram(uniform, 10, 'Uniform Histogram')
plot_histogram(normal, 10, 'Normal Histogram')
def random_normal():
returns random #'s from the standard normal distribution
return inverse_normal_cdf(random.random())
xs = [random_normal() for _ in range(1000)]
ys1 = [x + random_normal() / 2 for x in xs]
ys2 = [-x + random_normal() / 2 for x in xs]
ys3 = [0 if x < 0 else 6 for x in xs]
plt.scatter(xs, ys1, marker='.', color='black', label='ys1');
plt.scatter(xs, ys2, marker='.', color='gray', label='ys2');
plt.xlabel('xs');
plt.ylabel('ys');
plt.legend(loc=5);
plt.title('Very Different Joint Distributions');
correlation(xs, ys1), correlation(xs, ys2)
def correlation_matrix(data):
returns a matrix where x_ij is the correlation between col i and col j
_, num_cols = shape(data)
def matrix_entry(i, j):
return correlation(get_column(data, i), get_column(data, j))
return make_matrix(num_cols, num_cols, matrix_entry)
data = [[x, y2, y1, ys3] for (x, y1, y2, ys3) in zip(xs, ys1, ys2, ys3)]
correlation_matrix(data)
_, num_columns = shape(data)
fig, ax = plt.subplots(num_columns, num_columns)
for i in range(num_columns):
for j in range(num_columns):
# scatter column_j on the x-axis vs column_i on the y-axis
if i != j: ax[i][j].scatter(get_column(data, j), get_column(data, i))
# unless i == j, in which case show the series name
else: ax[i][j].annotate("series " + str(i), (0.5, 0.5), xycoords='axes fraction', ha="center", va="center")
# then hide axis labels except left and bottom charts
if i < num_columns - 1: ax[i][j].xaxis.set_visible(False)
if j > 0: ax[i][j].yaxis.set_visible(False)
# fix the bottom right and top left axis labels, which are wrong because
# their charts only have text in them
ax[-1][-1].set_xlim(ax[0][-1].get_xlim())
ax[0][0].set_ylim(ax[0][1].get_ylim())
def parse_row(input_row, parsers):
given a list of parsers (some of which may be None)
apply the appropriate one to each element of the input_row
return [parser(value) if parser is not None else value for value, parser in zip(input_row, parsers)]
def parse_rows_with(reader, parsers):
wrap a reader to apply the parsers to each of its rows
for row in reader:
yield parse_row(row, parsers)
def try_or_none(f):
wraps f to return None if f raises an exception
assumes f takes only one input
def f_or_none(x):
try: return f(x)
except: return None
return f_or_none
def parse_row(input_row, parsers):
return [try_or_none(parser)(value) if parser is not None else value for value, parser in zip(input_row, parsers)]
import dateutil.parser
import csv
data = []
with open("stocks.csv", "r") as f:
reader = csv.reader(f)
for line in parse_rows_with(reader, [dateutil.parser.parse, None, float]):
data.append(line)
data
max_aapl_price = max(row[2]
for row in data
if row[1] == "AAPL")
max_aapl_price
import pandas as pd
pd.DataFrame({'Height (inches)': [63, 67, 70], 'Height (centimeters)': [160, 170.2, 177.8], 'Weight': [150, 160, 171]}, index=['A', 'B', 'C'])
a_to_b = distance([63, 150], [67, 160])
a_to_c = distance([63, 150], [70, 171])
b_to_c = distance([67, 160], [70, 171])
a_to_b, a_to_c, b_to_c
a_to_b = distance([160, 150], [170.2, 160])
a_to_c = distance([160, 150], [177.8, 171])
b_to_c = distance([170.2, 160], [177.8, 171])
a_to_b, a_to_c, b_to_c
def scale(data_matrix):
returns the means and standard deviations of each column
num_rows, num_cols = shape(data_matrix)
means = [mean(get_column(data_matrix,j)) for j in range(num_cols)]
stdevs = [standard_deviation(get_column(data_matrix,j)) for j in range(num_cols)]
return means, stdevs
def rescale(data_matrix):
rescales the input data so that each column
has mean 0 and standard deviation 1
leaves alone columns with no deviation
means, stdevs = scale(data_matrix)
def rescaled(i, j):
if stdevs[j] > 0:
return (data_matrix[i][j] - means[j]) / stdevs[j]
else:
return data_matrix[i][j]
num_rows, num_cols = shape(data_matrix)
return make_matrix(num_rows, num_cols, rescaled)
hts_in = [[63], [67], [70]]
hts_cm = [[160], [170.2], [177.8]]
wts = [[150], [160], [171]]
hts_in_rescaled = rescale(hts_in)
hts_cm_rescaled = rescale(hts_cm)
wts_rescaled = rescale(wts)
hts_in_rescaled, hts_cm_rescaled, wts_rescaled
a_to_b_rescaled = distance([hts_in_rescaled[0][0], wts_rescaled[0][0]], [hts_in_rescaled[1][0], wts_rescaled[1][0]])
a_to_c_rescaled = distance([hts_in_rescaled[0][0], wts_rescaled[0][0]], [hts_in_rescaled[2][0], wts_rescaled[2][0]])
b_to_c_rescaled = distance([hts_in_rescaled[1][0], wts_rescaled[1][0]], [hts_in_rescaled[2][0], wts_rescaled[2][0]])
a_to_b_rescaled, a_to_c_rescaled, b_to_c_rescaled
a_to_b_rescaled = distance([hts_cm_rescaled[0][0], wts_rescaled[0][0]], [hts_cm_rescaled[1][0], wts_rescaled[1][0]])
a_to_c_rescaled = distance([hts_cm_rescaled[0][0], wts_rescaled[0][0]], [hts_cm_rescaled[2][0], wts_rescaled[2][0]])
b_to_c_rescaled = distance([hts_cm_rescaled[1][0], wts_rescaled[1][0]], [hts_cm_rescaled[2][0], wts_rescaled[2][0]])
a_to_b_rescaled, a_to_c_rescaled, b_to_c_rescaled
pca_data = [[7, 4, 3],
[4, 1, 8],
[6, 3, 5],
[8, 6, 1],
[8, 5, 7],
[7, 2, 9],
[5, 3, 3],
[9, 5, 8],
[7, 4, 5],
[8, 2, 2]]
pca_xs = [random_normal() for _ in range(100)]
pca_ys1 = [x + random_normal() / 2 for x in pca_xs]
pca_ys2 = [-x + random_normal() / 2 for x in pca_xs]
pca_ys3 = [0 if x < 0 else 6 for x in pca_xs]
pca_data = [[x, y1, y2, y3] for (x, y1, y2, y3) in zip(pca_xs, pca_ys1, pca_ys2, pca_ys3)]
plt.scatter(pca_xs, pca_ys1);
plt.scatter(pca_xs, pca_ys2);
plt.scatter(pca_xs, pca_ys3);
def de_mean_matrix(A):
Returns each value in columns of A minus the mean of
that column resulting in mean of 0 for all columns
nr, nc = shape(A)
column_means, _ = scale(A)
return make_matrix(nr, nc, lambda i, j: A[i][j] - column_means[j])
pca_data_de_mean = de_mean_matrix(pca_data)
pca_data_de_mean[:5]
plt.scatter([x[0] for x in pca_data_de_mean], [0 for _ in pca_data_de_mean]);
plt.scatter([x[1] for x in pca_data_de_mean], [0.1 for _ in pca_data_de_mean]);
plt.scatter([x[2] for x in pca_data_de_mean], [0.2 for _ in pca_data_de_mean]);
plt.scatter([x[3] for x in pca_data_de_mean], [0.3 for _ in pca_data_de_mean]);
frame = plt.gca()
frame.axes.yaxis.set_visible(False);
plt.ylim((-0.1, 1));
plt.scatter([x[0] for x in pca_data_de_mean], [x[1] for x in pca_data_de_mean]);
plt.scatter([x[0] for x in pca_data_de_mean], [x[2] for x in pca_data_de_mean]);
plt.scatter([x[0] for x in pca_data_de_mean], [x[3] for x in pca_data_de_mean]);
def direction(w):
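# rescale w to unit length, i.e. direction(w) = w / ||w||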
mag = magnitude(w)
return [w_i / mag for w_i in w]
pca_data_direction = [direction(w) for w in pca_data_de_mean]
pca_data_direction[:5]
plt.scatter([x[0] for x in pca_data_direction], [x[1] for x in pca_data_direction]);
plt.scatter([x[0] for x in pca_data_direction], [x[2] for x in pca_data_direction]);
plt.scatter([x[0] for x in pca_data_direction], [x[3] for x in pca_data_direction]);
def directional_variance_i(x_i, w):
the variance of the row x_i in the direction determined by w
return dot(x_i, direction(w)) ** 2
def directional_variance(X, w):
the variance of the data in the direction determined by w
return sum(directional_variance_i(x_i, w) for x_i in X)
def directional_variance_gradient_i(x_i, w):
the contribution of row x_i to the gradient of
the direction-w variance
projection_length = dot(x_i, direction(w))
return [2 * projection_length * x_ij for x_ij in x_i]
def directional_variance_gradient(X, w):
return vector_sum(directional_variance_gradient_i(x_i, w) for x_i in X)
pca_data_variance = [directional_variance(pca_data_direction, w) for w in pca_data_direction]
pca_data_variance[:5]
def first_principal_component(X):
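# maximize the directional variance over w with maximize_batch, then normalize the maximizer to unit length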
guess = [1 for _ in X[0]]
unscaled_maximizer = maximize_batch(
partial(directional_variance, X), # is now a function of w
partial(directional_variance_gradient, X), # is now a function of w
guess)
return direction(unscaled_maximizer)
first_principal_component(pca_data)
def first_principal_component_sgd(X):
guess = [1 for _ in X[0]]
unscaled_maximizer = maximize_stochastic(
lambda x, _, w: directional_variance_i(x, w),
lambda x, _, w: directional_variance_gradient_i(x, w),
X,
[None for _ in X], # the fake "y"
guess)
return direction(unscaled_maximizer)
first_principal_component_sgd(pca_data)
def project(v, w):
returns the projection of v onto direction w
projection_length = dot(v, w)
return scalar_multiply(projection_length, w)
def remove_projection_from_vector(v, w):
projects v onto w and subtracts the result from v
return vector_subtract(v, project(v, w))
def remove_projection(X, w):
return [remove_projection_from_vector(x_i, w) for x_i in X]
pca_data_fc_removed = remove_projection(pca_data, first_principal_component_sgd(pca_data))
pca_data_fc_removed[:5]
plt.scatter([x[0] for x in pca_data_fc_removed], [x[1] for x in pca_data_fc_removed]);
plt.scatter([x[0] for x in pca_data_fc_removed], [x[2] for x in pca_data_fc_removed]);
plt.scatter([x[0] for x in pca_data_fc_removed], [x[3] for x in pca_data_fc_removed]);
def principal_component_analysis(X, num_components):
components = []
for _ in range(num_components):
component = first_principal_component(X)
components.append(component)
X = remove_projection(X, component)
return components
def transform_vector(v, components):
return [dot(v, w) for w in components]
def transform(X, components):
return [transform_vector(x_i, components) for x_i in X]
pca_components = principal_component_analysis(pca_data, 3)
pca_transformed = transform(pca_data, pca_components)
pca_transformed[:5]
plt.scatter([x[0] for x in pca_transformed], [x[1] for x in pca_transformed]);
plt.scatter([x[0] for x in pca_transformed], [x[2] for x in pca_transformed]);
plt.title('3-dimensions n=' + str(len(pca_transformed)));
pca_components = principal_component_analysis(pca_data, 2)
pca_transformed = transform(pca_data, pca_components)
pca_transformed[:5]
plt.scatter([x[0] for x in pca_transformed], [x[1] for x in pca_transformed]);
plt.title('2-dimensions n=' + str(len(pca_transformed)));
pca_components = principal_component_analysis(pca_data, 1)
pca_transformed = transform(pca_data, pca_components)
pca_transformed[:5]
plt.scatter([x[0] for x in pca_transformed], [0 for _ in pca_transformed]);
plt.title('1-dimension n=' + str(len(pca_transformed)));
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Make population dynamic model
Step2: Population of haplotypes maps to counts and fitnesses
Step3: Map haplotype string to fitness float.
Step4: Add mutation
Step5: Mutations have fitness effects
Step6: If a mutation event creates a new haplotype, assign it a random fitness.
Step7: Genetic drift and fitness affect which haplotypes make it to the next generation
Step8: Combine and iterate
Step9: Record
Step10: Analyze trajectories
Step11: Plot diversity
Step12: Analyze and plot divergence
Step13: Plot haplotype trajectories
Step14: Plot SNP trajectories
Step15: Find all variable sites.
Step16: Scale up
Step17: In this case there are $\mu$ = 0.01 mutations entering the population every generation.
Step18: And the population genetic parameter $\theta$, which equals $2N\mu$, is 1.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import itertools
pop_size = 100
seq_length = 10
alphabet = ['A', 'T']
base_haplotype = "AAAAAAAAAA"
fitness_effect = 1.1 # fitness effect if a functional mutation occurs
fitness_chance = 0.1 # chance that a mutation has a fitness effect
pop = {}
pop["AAAAAAAAAA"] = 40
pop["AAATAAAAAA"] = 30
pop["AATTTAAAAA"] = 30
fitness = {}
fitness["AAAAAAAAAA"] = 1.0
fitness["AAATAAAAAA"] = 1.05
fitness["AATTTAAAAA"] = 1.10
pop["AAATAAAAAA"]
fitness["AAATAAAAAA"]
mutation_rate = 0.005 # per gen per individual per site
def get_mutation_count():
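# expected number of new mutations per generation = mutation_rate * pop_size * seq_length = 0.005 * 100 * 10 = 5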
mean = mutation_rate * pop_size * seq_length
return np.random.poisson(mean)
def get_random_haplotype():
haplotypes = pop.keys()
frequencies = [x/float(pop_size) for x in pop.values()]
total = sum(frequencies)
frequencies = [x / total for x in frequencies]
return np.random.choice(haplotypes, p=frequencies)
def get_mutant(haplotype):
site = np.random.randint(seq_length)
possible_mutations = list(alphabet)
possible_mutations.remove(haplotype[site])
mutation = np.random.choice(possible_mutations)
new_haplotype = haplotype[:site] + mutation + haplotype[site+1:]
return new_haplotype
def get_fitness(haplotype):
old_fitness = fitness[haplotype]
if (np.random.random() < fitness_chance):
return old_fitness * fitness_effect
else:
return old_fitness
get_fitness("AAAAAAAAAA")
def mutation_event():
haplotype = get_random_haplotype()
if pop[haplotype] > 1:
pop[haplotype] -= 1
new_haplotype = get_mutant(haplotype)
if new_haplotype in pop:
pop[new_haplotype] += 1
else:
pop[new_haplotype] = 1
if new_haplotype not in fitness:
fitness[new_haplotype] = get_fitness(haplotype)
mutation_event()
pop
fitness
def mutation_step():
mutation_count = get_mutation_count()
for i in range(mutation_count):
mutation_event()
def get_offspring_counts():
haplotypes = pop.keys()
frequencies = [pop[haplotype]/float(pop_size) for haplotype in haplotypes]
fitnesses = [fitness[haplotype] for haplotype in haplotypes]
weights = [x * y for x,y in zip(frequencies, fitnesses)]
total = sum(weights)
weights = [x / total for x in weights]
return list(np.random.multinomial(pop_size, weights))
get_offspring_counts()
def offspring_step():
counts = get_offspring_counts()
for (haplotype, count) in zip(pop.keys(), counts):
if (count > 0):
pop[haplotype] = count
else:
del pop[haplotype]
def time_step():
mutation_step()
offspring_step()
generations = 5
def simulate():
for i in range(generations):
time_step()
history = []
def simulate():
clone_pop = dict(pop)
history.append(clone_pop)
for i in range(generations):
time_step()
clone_pop = dict(pop)
history.append(clone_pop)
simulate()
def get_distance(seq_a, seq_b):
diffs = 0
length = len(seq_a)
assert len(seq_a) == len(seq_b)
for chr_a, chr_b in zip(seq_a, seq_b):
if chr_a != chr_b:
diffs += 1
return diffs / float(length)
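# Quick check: two 10-base haplotypes that differ at a single site are at distance 1/10 = 0.1
get_distance("AAAAAAAAAA", "AAATAAAAAA")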
def get_diversity(population):
haplotypes = population.keys()
haplotype_count = len(haplotypes)
diversity = 0
for i in range(haplotype_count):
for j in range(haplotype_count):
haplotype_a = haplotypes[i]
haplotype_b = haplotypes[j]
frequency_a = population[haplotype_a] / float(pop_size)
frequency_b = population[haplotype_b] / float(pop_size)
frequency_pair = frequency_a * frequency_b
diversity += frequency_pair * get_distance(haplotype_a, haplotype_b)
return diversity
def get_diversity_trajectory():
trajectory = [get_diversity(generation) for generation in history]
return trajectory
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib as mpl
def diversity_plot():
mpl.rcParams['font.size']=14
trajectory = get_diversity_trajectory()
plt.plot(trajectory, "#447CCD")
plt.ylabel("diversity")
plt.xlabel("generation")
def get_divergence(population):
haplotypes = population.keys()
divergence = 0
for haplotype in haplotypes:
frequency = population[haplotype] / float(pop_size)
divergence += frequency * get_distance(base_haplotype, haplotype)
return divergence
def get_divergence_trajectory():
trajectory = [get_divergence(generation) for generation in history]
return trajectory
def divergence_plot():
mpl.rcParams['font.size']=14
trajectory = get_divergence_trajectory()
plt.plot(trajectory, "#447CCD")
plt.ylabel("divergence")
plt.xlabel("generation")
def get_frequency(haplotype, generation):
pop_at_generation = history[generation]
if haplotype in pop_at_generation:
return pop_at_generation[haplotype]/float(pop_size)
else:
return 0
def get_trajectory(haplotype):
trajectory = [get_frequency(haplotype, gen) for gen in range(generations)]
return trajectory
def get_all_haplotypes():
haplotypes = set()
for generation in history:
for haplotype in generation:
haplotypes.add(haplotype)
return haplotypes
colors = ["#781C86", "#571EA2", "#462EB9", "#3F47C9", "#3F63CF", "#447CCD", "#4C90C0", "#56A0AE", "#63AC9A", "#72B485", "#83BA70", "#96BD60", "#AABD52", "#BDBB48", "#CEB541", "#DCAB3C", "#E49938", "#E68133", "#E4632E", "#DF4327", "#DB2122"]
colors_lighter = ["#A567AF", "#8F69C1", "#8474D1", "#7F85DB", "#7F97DF", "#82A8DD", "#88B5D5", "#8FC0C9", "#97C8BC", "#A1CDAD", "#ACD1A0", "#B9D395", "#C6D38C", "#D3D285", "#DECE81", "#E8C77D", "#EDBB7A", "#EEAB77", "#ED9773", "#EA816F", "#E76B6B"]
def stacked_trajectory_plot(xlabel="generation"):
mpl.rcParams['font.size']=18
haplotypes = get_all_haplotypes()
trajectories = [get_trajectory(haplotype) for haplotype in haplotypes]
plt.stackplot(range(generations), trajectories, colors=colors_lighter)
plt.ylim(0, 1)
plt.ylabel("frequency")
plt.xlabel(xlabel)
def get_snp_frequency(site, generation):
minor_allele_frequency = 0.0
pop_at_generation = history[generation]
for haplotype in pop_at_generation.keys():
allele = haplotype[site]
frequency = pop_at_generation[haplotype] / float(pop_size)
if allele != "A":
minor_allele_frequency += frequency
return minor_allele_frequency
def get_snp_trajectory(site):
trajectory = [get_snp_frequency(site, gen) for gen in range(generations)]
return trajectory
def get_all_snps():
snps = set()
for generation in history:
for haplotype in generation:
for site in range(seq_length):
if haplotype[site] != "A":
snps.add(site)
return snps
def snp_trajectory_plot(xlabel="generation"):
mpl.rcParams['font.size']=18
snps = get_all_snps()
trajectories = [get_snp_trajectory(snp) for snp in snps]
data = []
for trajectory, color in itertools.izip(trajectories, itertools.cycle(colors)):
data.append(range(generations))
data.append(trajectory)
data.append(color)
fig = plt.plot(*data)
plt.ylim(0, 1)
plt.ylabel("frequency")
plt.xlabel(xlabel)
pop_size = 50
seq_length = 100
generations = 500
mutation_rate = 0.0001 # per gen per individual per site
fitness_effect = 1.1 # fitness effect if a functional mutation occurs
fitness_chance = 0.1 # chance that a mutation has a fitness effect
seq_length * mutation_rate
2 * pop_size * seq_length * mutation_rate
base_haplotype = ''.join(["A" for i in range(seq_length)])
pop.clear()
fitness.clear()
del history[:]
pop[base_haplotype] = pop_size
fitness[base_haplotype] = 1.0
simulate()
plt.figure(num=None, figsize=(14, 14), dpi=80, facecolor='w', edgecolor='k')
plt.subplot2grid((3,2), (0,0), colspan=2)
stacked_trajectory_plot()
plt.subplot2grid((3,2), (1,0), colspan=2)
snp_trajectory_plot()
plt.subplot2grid((3,2), (2,0))
diversity_plot()
plt.subplot2grid((3,2), (2,1))
divergence_plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simulation
Step2: Game histories
Step3: Results
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from random import random, choice
# Balls (the values don't matter for now)
black, red, green, white = 0, 1, 2, 7
# Urns
kugeln_urne1 = [white, white, white, black, black]
kugeln_urne2 = [white, green, green, red, red]
# Probabilities
p_urne1 = 0.7
p_urne2 = 0.3 # 1 - p_urne1
def color(s, c): return '\033[1;3{}m{}\033[0m'.format(c, s)
N = 100000 # number of games
spiele = [] # game histories
h_N = [0] * 11 # absolute frequencies of the events Z_k
for _ in range(N):
urne1, urne2 = list(kugeln_urne1), list(kugeln_urne2) # copy
spiel, kugel = [], None
while kugel != red:
# choose an urn (urn 1 may be empty, then use urn 2)
urne = urne1 if random() < p_urne1 and urne1 else urne2
# draw a random ball from the chosen urn and remember it
kugel = choice(urne)
del urne[urne.index(kugel)]
spiel.append(color('●' + '₁₂'[urne == urne2], kugel))
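# a game ends once a red ball is drawn; tally how many rounds it took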
h_N[len(spiel)] += 1
spiele.append(spiel)
for n, spiel in zip(range(20), spiele):
print("{:2} {}".format(n + 1, ' '.join(spiel)))
plt.stem(h_N)
plt.axvspan(0, 3.5, 0, 1, alpha=0.2)
plt.xlabel('number of rounds $k$')
plt.ylabel('absolute frequency $h_N(Z_k)$');
print("P(Z₁ + Z₂ + Z₃) ≅ {:.4}".format(sum(h_N[:4]) / N))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: matplotlib figure
Step2: matplotlib axes
Step3: matplotlib artists
|
<ASSISTANT_TASK:>
Python Code:
import autofig
import numpy as np
import matplotlib.pyplot as plt
#autofig.inline()
x = np.linspace(0,0.1,11)
y = x**2
mplfig = autofig.plot(x=x, y=y, show=True)
print dir(mplfig)
mplfig.set_facecolor('blue')
mplfig
mplfig.axes
mplax = mplfig.axes[0]
print dir(mplax)
mplax.set_xticks([0, 0.05, 0.1])
mplfig
mplax.get_lines()
line = mplax.get_lines()[0]
print dir(line)
line.set_color('red')
mplfig
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This histogram shows that values between -0.143394 and 0.437156 account for about 24% of the total. But what if we want information about the interval from -0.01 to 0.01? To get information about finer intervals, we have to split the histogram into smaller bins.
Step2: Increasing the number of bins for a more precise description introduces a few problems.
|
<ASSISTANT_TASK:>
Python Code:
sp.random.seed(0)
x = sp.random.normal(size=1000)
ns, bins, ps = plt.hist(x, bins=10)
pd.DataFrame([bins, ns/1000])
ns, bins, ps = plt.hist(x, bins=100)
pd.DataFrame([bins, ns/1000])
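# With only 1000 samples, pushing the bin count much higher (e.g. bins=1000) leaves many bins
# almost empty, so the per-bin fractions get noisy; this is one of the problems mentioned above.
ns_fine, bins_fine, _ = plt.hist(x, bins=1000)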
x = np.linspace(-3, 3, 100)
y = sp.stats.norm.pdf(x)
plt.plot(x, y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this example notebook, we will walk through the creation of logsums from an existing tour mode choice model.
Step2: We'll also load the saved model from the mode choice estimation.
Step3: We'll replicate the pre-processing used in the mode choice estimation, starting from the same raw data tables.
Step4: Then we bundle the raw data into a larch DataTree structure, linking tours, households, persons, and skims.
Step5: The logsums from a Model can be computed using the Model.logsums method.
Step6: Then we can persist the logsums DataArray to disk, for use in the next example.
Step7: To recover the DataArray later, we can read it using the DataArray.from_zarr method.
|
<ASSISTANT_TASK:>
Python Code:
# TEST
import larch.numba as lx
from pytest import approx
import os
import numpy as np
import xarray as xr
from addicty import Dict
from larch import P, X
import larch.numba as lx
hh, pp, tour, skims = lx.example(200, ['hh', 'pp', 'tour', 'skims'])
exampville_mode_choice_file = lx.example(201, output_file='exampville_mode_choice.html', estimate=True)
m = lx.read_metadata(exampville_mode_choice_file)
Mode = Dict(
DA = 1,
SR = 2,
Walk = 3,
Bike = 4,
Transit = 5,
).freeze()
tour_dataset = lx.Dataset.construct.from_idco(tour.set_index('TOURID'), alts=Mode)
od_skims = lx.Dataset.construct.from_omx(skims)
dt = lx.DataTree(
tour=tour_dataset.dc.query_cases("TOURPURP == 1"),
hh=hh.set_index('HHID'),
person=pp.set_index('PERSONID'),
od=od_skims,
do=od_skims,
relationships=(
"tours.HHID @ hh.HHID",
"tours.PERSONID @ person.PERSONID",
"hh.HOMETAZ @ od.otaz",
"tours.DTAZ @ od.dtaz",
"hh.HOMETAZ @ do.dtaz",
"tours.DTAZ @ do.otaz",
),
)
logsums = lx.DataArray.zeros(
dt.caseids(),
skims.TAZ_ID,
name='logsums',
)
for i,dtaz in enumerate(logsums.TAZ_ID):
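# point every tour at candidate destination dtaz, then store that zone's mode choice logsums for all cases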
m.datatree = dt.replace_datasets(
tour=dt.root_dataset.assign(
DTAZ=xr.full_like(dt['DTAZ'], dtaz)
),
)
logsums.loc[dict(TAZ_ID=dtaz)] = m.logsums()
# TEST
assert logsums[:4, :5].values == approx(np.array([
[-2.935642, -2.660468, -1.745647, -2.229293, -2.403132],
[-2.935642, -2.660468, -1.745647, -2.229293, -2.403132],
[-2.935642, -2.660468, -1.745647, -2.229293, -2.403132],
[-3.046127, -2.770006, -1.823385, -2.320373, -2.4941 ],
]), rel=1e-5)
assert logsums.shape == (7564, 40)
assert logsums[-4:, -5:].values == approx(np.array([
[-0.353981, -0.729011, -1.163964, -0.421501, -0.912948],
[-0.353981, -0.729011, -1.163964, -0.421501, -0.912948],
[-0.246663, -0.721656, -1.156654, -0.354325, -0.89051 ],
[-0.391211, -0.731383, -1.166312, -0.444022, -0.920203],
]), rel=1e-5)
logsums.to_zarr('logsums.zarr.zip')
lx.DataArray.from_zarr('logsums.zarr.zip')
# TEST
reloaded = _
assert reloaded[:4, :5].values == approx(np.array([
[-2.935642, -2.660468, -1.745647, -2.229293, -2.403132],
[-2.935642, -2.660468, -1.745647, -2.229293, -2.403132],
[-2.935642, -2.660468, -1.745647, -2.229293, -2.403132],
[-3.046127, -2.770006, -1.823385, -2.320373, -2.4941 ],
]), rel=1e-5)
assert reloaded.shape == (7564, 40)
assert reloaded[-4:, -5:].values == approx(np.array([
[-0.353981, -0.729011, -1.163964, -0.421501, -0.912948],
[-0.353981, -0.729011, -1.163964, -0.421501, -0.912948],
[-0.246663, -0.721656, -1.156654, -0.354325, -0.89051 ],
[-0.391211, -0.731383, -1.166312, -0.444022, -0.920203],
]), rel=1e-5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Data
Step6: We make a vocabulary, replacing any word that occurs less than 10 times with unk.
Step8: Mikolov suggested keeping word $w$ with probability
Step9: We compare the frequency of certain common and rare words in the original and subsampled data below.
Step10: Let's tokenize the subsampled data.
Step11: Extracting central target words and their contexts
Step12: Example. Suppose we have a corpus with 2 sentences of length 7 and 3, and we use a max context of size 2. Here are the centers and contexts.
Step13: Extract context for the full dataset.
Step15: Negative sampling
Step16: Example.
Step17: Now we generate $K$ negatives for each context. These are drawn from $p(w) \propto \text{freq}(w)^{0.75}$.
Step18: Minibatching
Step19: Example. We make a ragged minibatch with 2 examples, and then pad them to a standard size.
Step20: Dataloader
Step21: Let's print the first minibatch.
Step22: Model
Step23: Example. Assume the vocab size is 20 and we use $E=4$ embedding dimensions.
Step25: Loss
Step26: Different masks can lead to different results.
Step32: Training
Step33: Test
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import math
import os
import random
random.seed(0)
import jax
import jax.numpy as jnp
try:
from flax import linen as nn
except ModuleNotFoundError:
%pip install -qq flax
from flax import linen as nn
from flax.training import train_state
try:
import optax
except ModuleNotFoundError:
%pip install -qq optax
import optax
try:
import torch
except ModuleNotFoundError:
%pip install -qq torch
import torch
import requests
import zipfile
import tarfile
import hashlib
import collections
from IPython import display
import time
!mkdir figures # for saving plots
# Required functions for downloading data
def download(name, cache_dir=os.path.join("..", "data")):
Download a file inserted into DATA_HUB, return the local filename.
assert name in DATA_HUB, f"{name} does not exist in {DATA_HUB}."
url, sha1_hash = DATA_HUB[name]
os.makedirs(cache_dir, exist_ok=True)
fname = os.path.join(cache_dir, url.split("/")[-1])
if os.path.exists(fname):
sha1 = hashlib.sha1()
with open(fname, "rb") as f:
while True:
data = f.read(1048576)
if not data:
break
sha1.update(data)
if sha1.hexdigest() == sha1_hash:
return fname # Hit cache
print(f"Downloading {fname} from {url}...")
r = requests.get(url, stream=True, verify=True)
with open(fname, "wb") as f:
f.write(r.content)
return fname
def download_extract(name, folder=None):
Download and extract a zip/tar file.
fname = download(name)
base_dir = os.path.dirname(fname)
data_dir, ext = os.path.splitext(fname)
if ext == ".zip":
fp = zipfile.ZipFile(fname, "r")
elif ext in (".tar", ".gz"):
fp = tarfile.open(fname, "r")
else:
assert False, "Only zip/tar files can be extracted."
fp.extractall(base_dir)
return os.path.join(base_dir, folder) if folder else data_dir
DATA_HUB = dict()
DATA_URL = "http://d2l-data.s3-accelerate.amazonaws.com/"
DATA_HUB["ptb"] = (DATA_URL + "ptb.zip", "319d85e578af0cdc590547f26231e4e31cdf1e42")
def read_ptb():
data_dir = download_extract("ptb")
with open(os.path.join(data_dir, "ptb.train.txt")) as f:
raw_text = f.read()
return [line.split() for line in raw_text.split("\n")]
sentences = read_ptb()
f"# sentences: {len(sentences)}"
class Vocab:
Vocabulary for text.
def __init__(self, tokens=None, min_freq=0, reserved_tokens=None):
if tokens is None:
tokens = []
if reserved_tokens is None:
reserved_tokens = []
# Sort according to frequencies
counter = count_corpus(tokens)
self.token_freqs = sorted(counter.items(), key=lambda x: x[1], reverse=True)
# The index for the unknown token is 0
self.unk, uniq_tokens = 0, ["<unk>"] + reserved_tokens
uniq_tokens += [token for token, freq in self.token_freqs if freq >= min_freq and token not in uniq_tokens]
self.idx_to_token, self.token_to_idx = [], dict()
for token in uniq_tokens:
self.idx_to_token.append(token)
self.token_to_idx[token] = len(self.idx_to_token) - 1
def __len__(self):
return len(self.idx_to_token)
def __getitem__(self, tokens):
if not isinstance(tokens, (list, tuple)):
return self.token_to_idx.get(tokens, self.unk)
return [self.__getitem__(token) for token in tokens]
def to_tokens(self, indices):
if not isinstance(indices, (list, tuple)):
return self.idx_to_token[indices]
return [self.idx_to_token[index] for index in indices]
def count_corpus(tokens):
Count token frequencies.
# Here `tokens` is a 1D list or 2D list
if len(tokens) == 0 or isinstance(tokens[0], list):
# Flatten a list of token lists into a list of tokens
tokens = [token for line in tokens for token in line]
return collections.Counter(tokens)
vocab = Vocab(sentences, min_freq=10)
f"vocab size: {len(vocab)}"
def count_corpus(tokens):
Count token frequencies.
# Here `tokens` is a 1D list or 2D list
if len(tokens) == 0 or isinstance(tokens[0], list):
# Flatten a list of token lists into a list of tokens
tokens = [token for line in tokens for token in line]
return collections.Counter(tokens)
def subsampling(sentences, vocab):
# Map low frequency words into <unk>
sentences = [[vocab.idx_to_token[vocab[tk]] for tk in line] for line in sentences]
# Count the frequency for each word
counter = count_corpus(sentences)
num_tokens = sum(counter.values())
# Return True if we should keep this token during subsampling
def keep(token):
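# keep w with probability sqrt(t / f(w)), where f(w) = counter[w] / num_tokens and t = 1e-4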
return random.uniform(0, 1) < math.sqrt(1e-4 / counter[token] * num_tokens)
# Now do the subsampling
return [[tk for tk in line if keep(tk)] for line in sentences]
subsampled = subsampling(sentences, vocab)
def compare_counts(token):
return (
f'# of "{token}": '
f"before={sum([line.count(token) for line in sentences])}, "
f"after={sum([line.count(token) for line in subsampled])}"
)
print(compare_counts("the"))
print(compare_counts("join"))
corpus = [vocab[line] for line in subsampled]
print(corpus[0:3])
def get_centers_and_contexts(corpus, max_window_size):
centers, contexts = [], []
for line in corpus:
# Each sentence needs at least 2 words to form a "central target word
# - context word" pair
if len(line) < 2:
continue
centers += line
for i in range(len(line)): # Context window centered at i
window_size = random.randint(1, max_window_size)
indices = list(range(max(0, i - window_size), min(len(line), i + 1 + window_size)))
# Exclude the central target word from the context words
indices.remove(i)
contexts.append([line[idx] for idx in indices])
return centers, contexts
tiny_dataset = [list(range(7)), list(range(7, 10))]
print("dataset", tiny_dataset)
for center, context in zip(*get_centers_and_contexts(tiny_dataset, 2)):
print("center", center, "has contexts", context)
all_centers, all_contexts = get_centers_and_contexts(corpus, 5)
f"# center-context pairs: {len(all_centers)}"
class RandomGenerator:
Draw a random int in [0, n] according to n sampling weights.
def __init__(self, sampling_weights):
self.population = list(range(len(sampling_weights)))
self.sampling_weights = sampling_weights
self.candidates = []
self.i = 0
def draw(self):
if self.i == len(self.candidates):
self.candidates = random.choices(self.population, self.sampling_weights, k=10000)
self.i = 0
self.i += 1
return self.candidates[self.i - 1]
generator = RandomGenerator([2, 3, 4])
[generator.draw() for _ in range(10)]
def get_negatives(all_contexts, corpus, K):
counter = count_corpus(corpus)
sampling_weights = [counter[i] ** 0.75 for i in range(len(counter))]
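# noise-word distribution: p(w) proportional to freq(w)^0.75, as described above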
all_negatives, generator = [], RandomGenerator(sampling_weights)
for contexts in all_contexts:
negatives = []
while len(negatives) < len(contexts) * K:
neg = generator.draw()
# Noise words cannot be context words
if neg not in contexts:
negatives.append(neg)
all_negatives.append(negatives)
return all_negatives
all_negatives = get_negatives(all_contexts, corpus, 5)
def batchify(data):
max_len = max(len(c) + len(n) for _, c, n in data)
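# pad each (contexts + negatives) list to max_len; masks mark the real entries, labels mark the true context words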
centers, contexts_negatives, masks, labels = [], [], [], []
for center, context, negative in data:
cur_len = len(context) + len(negative)
centers += [center]
contexts_negatives += [context + negative + [0] * (max_len - cur_len)]
masks += [[1] * cur_len + [0] * (max_len - cur_len)]
labels += [[1] * len(context) + [0] * (max_len - len(context))]
return (np.array(centers).reshape((-1, 1)), np.array(contexts_negatives), np.array(masks), np.array(labels))
x_1 = (1, [2, 2], [3, 3, 3, 3])
x_2 = (1, [2, 2, 2], [3, 3])
batch = batchify((x_1, x_2))
names = ["centers", "contexts_negatives", "masks", "labels"]
for name, data in zip(names, batch):
print(name, "=", data)
def load_data_ptb(batch_size, max_window_size, num_noise_words):
num_workers = 2
sentences = read_ptb()
vocab = Vocab(sentences, min_freq=10)
subsampled = subsampling(sentences, vocab)
corpus = [vocab[line] for line in subsampled]
all_centers, all_contexts = get_centers_and_contexts(corpus, max_window_size)
all_negatives = get_negatives(all_contexts, corpus, num_noise_words)
class PTBDataset(torch.utils.data.Dataset):
def __init__(self, centers, contexts, negatives):
assert len(centers) == len(contexts) == len(negatives)
self.centers = centers
self.contexts = contexts
self.negatives = negatives
def __getitem__(self, index):
return (self.centers[index], self.contexts[index], self.negatives[index])
def __len__(self):
return len(self.centers)
dataset = PTBDataset(all_centers, all_contexts, all_negatives)
data_iter = torch.utils.data.DataLoader(
dataset, batch_size, shuffle=True, collate_fn=batchify, num_workers=num_workers
)
return data_iter, vocab
data_iter, vocab = load_data_ptb(512, 5, 5)
for batch in data_iter:
for name, data in zip(names, batch):
print(name, "shape:", data.shape)
break
batch_size, max_window_size, num_noise_words = 512, 5, 5
data_iter, vocab = load_data_ptb(batch_size, max_window_size, num_noise_words)
class SkipGram(nn.Module):
vocab_size: int
embed_size: int
@nn.compact
def __call__(self, center, contexts_and_negatives):
v = nn.Embed(self.vocab_size, self.embed_size)(center)
u = nn.Embed(self.vocab_size, self.embed_size)(contexts_and_negatives)
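# v: (batch, 1, embed), u: (batch, max_len, embed) -> pred: (batch, 1, max_len) of center-context dot products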
pred = jax.vmap(jnp.matmul)(v, u.transpose(0, 2, 1))
return pred
center = jnp.ones((2, 1), dtype=jnp.int32)
contexts_and_negatives = jnp.ones((2, 4), dtype=jnp.int32)
skip_gram = SkipGram(20, 4)
variables = skip_gram.init(jax.random.PRNGKey(0), center, contexts_and_negatives)
print(
f"Parameter embedding_weight ({variables['params']['Embed_0']['embedding'].shape}, "
f"dtype={variables['params']['Embed_0']['embedding'].dtype})"
)
skip_gram.apply(variables, center, contexts_and_negatives).shape
def sigmoid_bce_loss(inputs, target, mask=None):
BCE with logit loss, based on https://github.com/pytorch/pytorch/blob/1522912602bc4cc5f7adbce66cad00ebb436f195/aten/src/ATen/native/Loss.cpp#L317
max_val = jnp.clip(-inputs, 0, None)
loss = (1 - target) * inputs + max_val + jnp.log(jnp.exp(-max_val) + jnp.exp((-inputs - max_val)))
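# numerically stable form of -[target*log(sigmoid(inputs)) + (1-target)*log(1-sigmoid(inputs))]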
if mask is not None:
loss = loss * mask
return loss.mean(axis=1)
pred = jnp.array([[0.5] * 4] * 2)
label = jnp.array([[1.0, 0.0, 1.0, 0.0]] * 2)
mask = jnp.array([[1, 1, 1, 1], [1, 1, 0, 0]])
sigmoid_bce_loss(pred, label, mask)
sigmoid_bce_loss(pred, label, mask) / mask.sum(axis=1) * mask.shape[1]
# Functions for plotting and accumulating sum
def set_axes(axes, xlabel, ylabel, xlim, ylim, xscale, yscale, legend):
Set the axes for matplotlib.
axes.set_xlabel(xlabel)
axes.set_ylabel(ylabel)
axes.set_xscale(xscale)
axes.set_yscale(yscale)
axes.set_xlim(xlim)
axes.set_ylim(ylim)
if legend:
axes.legend(legend)
axes.grid()
class Animator:
For plotting data in animation.
def __init__(
self,
xlabel=None,
ylabel=None,
legend=None,
xlim=None,
ylim=None,
xscale="linear",
yscale="linear",
fmts=("-", "m--", "g-.", "r:"),
nrows=1,
ncols=1,
figsize=(3.5, 2.5),
):
# Incrementally plot multiple lines
if legend is None:
legend = []
display.set_matplotlib_formats("svg")
self.fig, self.axes = plt.subplots(nrows, ncols, figsize=figsize)
if nrows * ncols == 1:
self.axes = [
self.axes,
]
# Use a lambda function to capture arguments
self.config_axes = lambda: set_axes(self.axes[0], xlabel, ylabel, xlim, ylim, xscale, yscale, legend)
self.X, self.Y, self.fmts = None, None, fmts
def add(self, x, y):
# Add multiple data points into the figure
if not hasattr(y, "__len__"):
y = [y]
n = len(y)
if not hasattr(x, "__len__"):
x = [x] * n
if not self.X:
self.X = [[] for _ in range(n)]
if not self.Y:
self.Y = [[] for _ in range(n)]
for i, (a, b) in enumerate(zip(x, y)):
if a is not None and b is not None:
self.X[i].append(a)
self.Y[i].append(b)
self.axes[0].cla()
for x, y, fmt in zip(self.X, self.Y, self.fmts):
self.axes[0].plot(x, y, fmt)
self.config_axes()
display.display(self.fig)
display.clear_output(wait=True)
class Accumulator:
For accumulating sums over `n` variables.
def __init__(self, n):
self.data = [0.0] * n
def add(self, *args):
self.data = [a + float(b) for a, b in zip(self.data, args)]
def reset(self):
self.data = [0.0] * len(self.data)
def __getitem__(self, idx):
return self.data[idx]
def create_train_state(rng, learning_rate, embed_size):
Creates initial `TrainState`.
net = SkipGram(len(vocab), embed_size)
params = net.init(
rng,
jnp.ones([1, 1], dtype=jnp.int32),
jnp.ones([1, max_window_size * 2 * (num_noise_words + 1)], dtype=jnp.int32),
)["params"]
tx = optax.adam(learning_rate)
return train_state.TrainState.create(apply_fn=net.apply, params=params, tx=tx)
@jax.jit
def train_step(state, batch):
Train for a single step.
center, context_negative, mask, label = batch
def loss_fn(params):
pred = state.apply_fn({"params": params}, center, context_negative)
loss = sigmoid_bce_loss(pred.reshape(label.shape), label, mask) / mask.sum(axis=1) * mask.shape[1]
return jnp.sum(loss)
grad_fn = jax.value_and_grad(loss_fn)
loss, grads = grad_fn(state.params)
state = state.apply_gradients(grads=grads)
return state, loss
def train(data_iter, lr, num_epochs, embed_size):
rng = jax.random.PRNGKey(1)
state = create_train_state(rng, lr, embed_size)
animator = Animator(xlabel="epoch", ylabel="loss", xlim=[1, num_epochs])
metric = Accumulator(2) # Sum of losses, no. of tokens
for epoch in range(num_epochs):
seconds, num_batches = time.time(), len(data_iter)
for i, batch in enumerate(data_iter):
state, loss = train_step(state, list(map(jnp.array, batch)))
metric.add(loss, batch[0].shape[0])
if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1:
animator.add(epoch + (i + 1) / num_batches, (metric[0] / metric[1],))
device = jax.default_backend()
print(
f"loss {metric[0] / metric[1]:.3f}, " f"{metric[1] / (time.time() - seconds):.1f} tokens/sec on {str(device)}"
)
return state
embed_size = 100
lr, num_epochs = 0.01, 5
state = train(data_iter, lr, num_epochs, embed_size)
def get_similar_tokens(query_token, k, state):
W = state.params["Embed_0"]["embedding"]
x = W[vocab[query_token]]
# Compute the cosine similarity. Add 1e-9 for numerical stability
cos = jnp.dot(W, x) / jnp.sqrt(jnp.sum(W * W, axis=1) * jnp.sum(x * x) + 1e-9)
topk = jax.lax.top_k(cos, k + 1)[1]
for i in topk[1:]: # Remove the input words
print(f"cosine sim={float(cos[i]):.3f}: {vocab.idx_to_token[i]}")
get_similar_tokens("chip", 3, state)
get_similar_tokens("president", 3, state)
get_similar_tokens("dog", 3, state)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First here's a very compact set of statements to get and plot the data for a station. No exploratory side trips.
Step2: Now the same thing, but with lots of exploration in between
Step3: May 10
|
<ASSISTANT_TASK:>
Python Code:
from datetime import datetime, timedelta
import pandas as pd
from pyoos.collectors.nerrs.nerrs_soap import NerrsSoap
# FROM pyoos SOS handling
# Convenience function to build record style time series representation
def flatten_element(p):
rd = {'time':p.time}
for m in p.members:
rd[m['standard']] = m['value']
return rd
# sta.get_unique_members() serves the same function as the pyoos SOS get_unique_members method
# Convenience function to extract a dict of unique members (observed properties) by standard name
obsprops_bystdname = lambda sta: {m['standard']:m for m in sta.get_unique_members()}
# NERRS/CDMO access token.
accesstoken = 'TOKEN STRING'
# Initialize pyoos NERRS collector object
nerrsData = NerrsSoap()
# Access pdbpfmet station, for the last 7 days (roughly)
nerrsData.filter(features=['pdbpfmet'],
start=datetime.utcnow() - timedelta(days=7),
end=datetime.utcnow() - timedelta(hours=12))
response = nerrsData.collect(accesstoken)
sta = response.elements[0]
obsprops_bystdname_dict = obsprops_bystdname(sta)
# FROM pyoos SOS handling
# For first (and only) station
flattenedsta_0 = map(flatten_element, sta.elements)
sta_0df = pd.DataFrame.from_records(flattenedsta_0, index=['time'])
sta_0df.head()
# Time series plot.
# "wind_speed" is currently mispelled; that's in pyoos, and can be fixed easily
obsprop_name = 'wind_sped'
obsprop = obsprops_bystdname_dict[obsprop_name]
sta_0df[obsprop_name].plot()
ylabel(obsprop_name + ' ('+obsprop['unit']+')');
# pyoos NERRS collector
nerrsData = NerrsSoap()
# Get all Padilla Bay stations (pdb)
[featureid for featureid in nerrsData.list_features() if featureid.startswith('pdb')]
# Access pdbpfmet station, for the last 7 days (roughly)
nerrsData.filter(features=['pdbpfmet'],
start=datetime.utcnow() - timedelta(days=7),
end=datetime.utcnow() - timedelta(hours=12))
#nerrsData.filter(variables=["ATemp"])
response = nerrsData.collect()
# The raw response (a string) is not used outside this cell. The collect method response is what's used
# I'm showing the raw response here, just for reference
raw = nerrsData.raw()
type(raw), raw.keys()
# response.elements is a one-element array with a paegan.cdm.dsg.features.station.Station element
response.elements
# Looks like the station in the response doesn't include any info about the Reserve it belongs to. Too bad.
# Or is one of the pieces of information below the NERRS site?
sta = response.elements[0]
sta.__dict__.keys()
sta.uid, sta.name, sta.description, sta.type, sta.location, sta.properties
# 'siteid' and 'location_description' seem to refer to the NERRS reserve/site
sta.get_property('siteid'), sta._properties
staloc = sta.get_location()
print staloc, '||', staloc.type, '||', staloc.xy
obsprops_bystdname_dict = obsprops_bystdname(sta)
obsprops_bystdname_dict['wind_sped']
# The individual observations are returned in the station "elements"
stael = sta.elements
type(stael), len(stael)
stael0 = stael[0]
stael0.time
# See sta.get_unique_members(), above
# stael0.get_member_names() returns a list of member names for this station 'element'
stael0.members
# From paegan: flatten Returns a Generator of Points that are part of this collection
# Just exploring what this does...
response.flatten
# FROM pyoos SOS handling
# For first (and only) station
flattenedsta_0 = map(flatten_element, sta.elements)
sta_0df = pd.DataFrame.from_records(flattenedsta_0, index=['time'])
sta_0df.head()
# Time series plot.
# "wind_speed" is currently mispelled; that's in pyoos, and can be fixed easily
obsprop_name = 'wind_sped'
obsprop = obsprops_bystdname_dict[obsprop_name]
sta_0df[obsprop_name].plot()
ylabel(obsprop_name + ' ('+obsprop['unit']+')');
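# Illustrative sketch (an addition, not part of the original notebook): a quick
# numerical summary of the retrieved record, using the same standard-name column
# plotted above, to sanity-check the magnitude and spread of the observations.
sta_0df[obsprop_name].describe()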
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: 1 - The problem of very deep neural networks
Step4: Expected Output
Step6: Expected Output
Step7: Run the following code to build the model's graph. If your implementation is not correct you will know it by checking your accuracy when running model.fit(...) below.
Step8: As seen in the Keras Tutorial Notebook, prior training a model, you need to configure the learning process by compiling the model.
Step9: The model is now ready to be trained. The only thing you need is a dataset.
Step10: Run the following cell to train your model on 2 epochs with a batch size of 32. On a CPU it should take you around 5min per epoch.
Step11: Expected Output
Step12: Expected Output
Step13: ResNet50 is a powerful model for image classification when it is trained for an adequate number of iterations. We hope you can use what you've learnt and apply it to your own classification problem to achieve state-of-the-art accuracy.
Step14: You can also print a summary of your model by running the following code.
Step15: Finally, run the code below to visualize your ResNet50. You can also download a .png picture of your model by going to "File -> Open...-> model.png".
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from keras import layers
from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D
from keras.models import Model, load_model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from resnets_utils import *
from keras.initializers import glorot_uniform
import scipy.misc
from matplotlib.pyplot import imshow
%matplotlib inline
import keras.backend as K
K.set_image_data_format('channels_last')
K.set_learning_phase(1)
# GRADED FUNCTION: identity_block
def identity_block(X, f, filters, stage, block):
Implementation of the identity block as defined in Figure 3
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
stage -- integer, used to name the layers, depending on their position in the network
block -- string/character, used to name the layers, depending on their position in the network
Returns:
X -- output of the identity block, tensor of shape (n_H, n_W, n_C)
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value. You'll need this later to add back to the main path.
X_shortcut = X
# First component of main path
X = Conv2D(filters = F1, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
X = Activation('relu')(X)
### START CODE HERE ###
# Second component of main path (≈3 lines)
X = None
X = None
X = None
# Third component of main path (≈2 lines)
X = None
X = None
# Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
X = None
X = None
### END CODE HERE ###
return X
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(1)
A_prev = tf.placeholder("float", [3, 4, 4, 6])
X = np.random.randn(3, 4, 4, 6)
A = identity_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')
test.run(tf.global_variables_initializer())
out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
print("out = " + str(out[0][1][1][0]))
# GRADED FUNCTION: convolutional_block
def convolutional_block(X, f, filters, stage, block, s = 2):
Implementation of the convolutional block as defined in Figure 4
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
stage -- integer, used to name the layers, depending on their position in the network
block -- string/character, used to name the layers, depending on their position in the network
s -- Integer, specifying the stride to be used
Returns:
X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C)
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value
X_shortcut = X
##### MAIN PATH #####
# First component of main path
X = Conv2D(F1, (1, 1), strides = (s,s), name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
X = Activation('relu')(X)
### START CODE HERE ###
# Second component of main path (≈3 lines)
X = None
X = None
X = None
# Third component of main path (≈2 lines)
X = None
X = None
##### SHORTCUT PATH #### (≈2 lines)
X_shortcut = None
X_shortcut = None
# Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
X = None
X = None
### END CODE HERE ###
return X
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(1)
A_prev = tf.placeholder("float", [3, 4, 4, 6])
X = np.random.randn(3, 4, 4, 6)
A = convolutional_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')
test.run(tf.global_variables_initializer())
out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
print("out = " + str(out[0][1][1][0]))
# GRADED FUNCTION: ResNet50
def ResNet50(input_shape = (64, 64, 3), classes = 6):
Implementation of the popular ResNet50 the following architecture:
CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
-> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER
Arguments:
input_shape -- shape of the images of the dataset
classes -- integer, number of classes
Returns:
model -- a Model() instance in Keras
# Define the input as a tensor with shape input_shape
X_input = Input(input_shape)
# Zero-Padding
X = ZeroPadding2D((3, 3))(X_input)
# Stage 1
X = Conv2D(64, (7, 7), strides = (2, 2), name = 'conv1', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = 'bn_conv1')(X)
X = Activation('relu')(X)
X = MaxPooling2D((3, 3), strides=(2, 2))(X)
# Stage 2
X = convolutional_block(X, f = 3, filters = [64, 64, 256], stage = 2, block='a', s = 1)
X = identity_block(X, 3, [64, 64, 256], stage=2, block='b')
X = identity_block(X, 3, [64, 64, 256], stage=2, block='c')
### START CODE HERE ###
# Stage 3 (≈4 lines)
X = None
X = None
X = None
X = None
# Stage 4 (≈6 lines)
X = None
X = None
X = None
X = None
X = None
X = None
# Stage 5 (≈3 lines)
X = None
X = None
X = None
# AVGPOOL (≈1 line). Use "X = AveragePooling2D(...)(X)"
X = None
### END CODE HERE ###
# output layer
X = Flatten()(X)
X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer = glorot_uniform(seed=0))(X)
# Create model
model = Model(inputs = X_input, outputs = X, name='ResNet50')
return model
model = ResNet50(input_shape = (64, 64, 3), classes = 6)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
model.fit(X_train, Y_train, epochs = 2, batch_size = 32)
preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
model = load_model('ResNet50.h5')
preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
img_path = 'images/my_image.jpg'
img = image.load_img(img_path, target_size=(64, 64))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print('Input image shape:', x.shape)
my_image = scipy.misc.imread(img_path)
imshow(my_image)
print("class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] = ")
print(model.predict(x))
model.summary()
plot_model(model, to_file='model.png')
SVG(model_to_dot(model).create(prog='dot', format='svg'))
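# Illustrative sketch (an addition, not part of the original assignment): the full
# model (architecture + weights + optimizer state) can be written to disk with
# Keras' built-in save() and restored with load_model(), as was done above for
# 'ResNet50.h5'. The filename used here is only an example.
model.save('my_resnet50.h5')
restored = load_model('my_resnet50.h5')
print("Restored model has " + str(len(restored.layers)) + " layers")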
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example.
Step2: Writing and running programs in TensorFlow has the following steps
Step3: As expected, you will not see 20! You got a tensor saying that the result is a tensor that does not have the shape attribute, and is of type "int32". All you did was put in the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it.
Step4: Great! To summarize, remember to initialize your variables, create a session and run the operations inside the session.
Step6: When you first defined x you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you feed data to these placeholders when running the session.
Step8: Expected Output
Step10: Expected Output
Step12: Expected Output
Step14: Expected Output
Step15: Expected Output
Step16: Change the index below and run the cell to visualize some examples in the dataset.
Step17: As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so.
Step19: Note that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing.
Step21: Expected Output
Step23: Expected Output
Step25: Expected Output
Step27: Expected Output
Step28: Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes!
Step29: Expected Output
|
<ASSISTANT_TASK:>
Python Code:
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict
%matplotlib inline
np.random.seed(1)
y_hat = tf.constant(36, name='y_hat') # Define y_hat constant. Set to 36.
y = tf.constant(39, name='y') # Define y. Set to 39
loss = tf.Variable((y - y_hat)**2, name='loss') # Create a variable for the loss
init = tf.global_variables_initializer() # When init is run later (session.run(init)),
# the loss variable will be initialized and ready to be computed
with tf.Session() as session: # Create a session and print the output
session.run(init) # Initializes the variables
print(session.run(loss))
# Prints the loss
a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a,b)
print(c)
sess = tf.Session()
print(sess.run(c))
# Change the value of x in the feed_dict
x = tf.placeholder(tf.int64, name = 'x')
print(sess.run(2 * x, feed_dict = {x: 3}))
sess.close()
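# Illustrative sketch (an addition, not part of the original assignment): the same
# placeholder/feed_dict pattern works for vectors, and a `with` block closes the
# session automatically instead of calling sess.close() by hand. The name 'v' is
# only illustrative.
v = tf.placeholder(tf.float32, shape=[3], name='v')
with tf.Session() as sess:
    print(sess.run(2 * v, feed_dict={v: np.array([1., 2., 3.])}))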
# GRADED FUNCTION: linear_function
def linear_function():
Implements a linear function:
Initializes W to be a random tensor of shape (4,3)
Initializes X to be a random tensor of shape (3,1)
Initializes b to be a random tensor of shape (4,1)
Returns:
result -- runs the session for Y = WX + b
np.random.seed(1)
### START CODE HERE ### (4 lines of code)
X = tf.constant(np.random.randn(3,1), name = "X")
W = tf.constant(np.random.randn(4,3), name = "W")
b = tf.constant(np.random.randn(4,1), name = "b")
Y = tf.add(tf.matmul(W,X),b)
### END CODE HERE ###
# Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate
### START CODE HERE ###
sess = tf.Session()
result = sess.run(Y)
### END CODE HERE ###
# close the session
sess.close()
return result
print( "result = " + str(linear_function()))
# GRADED FUNCTION: sigmoid
def sigmoid(z):
Computes the sigmoid of z
Arguments:
z -- input value, scalar or vector
Returns:
results -- the sigmoid of z
### START CODE HERE ### ( approx. 4 lines of code)
# Create a placeholder for x. Name it 'x'.
x = tf.placeholder(tf.float32, name = "x")
# compute sigmoid(x)
sigmoid = tf.sigmoid(x)
# Create a session, and run it. Please use the method 2 explained above.
# You should use a feed_dict to pass z's value to x.
with tf.Session() as sess:
# Run session and call the output "result"
result = sess.run(sigmoid, feed_dict = {x:z})
### END CODE HERE ###
return result
print ("sigmoid(0) = " + str(sigmoid(0)))
print ("sigmoid(12) = " + str(sigmoid(12)))
# GRADED FUNCTION: cost
def cost(logits, labels):
Computes the cost using the sigmoid cross entropy
Arguments:
logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)
labels -- vector of labels y (1 or 0)
Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels"
in the TensorFlow documentation. So logits will feed into z, and labels into y.
Returns:
cost -- runs the session of the cost (formula (2))
### START CODE HERE ###
# Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines)
z = tf.placeholder(tf.float32, name = "z")
y = tf.placeholder(tf.float32, name = "y")
# Use the loss function (approx. 1 line)
cost = tf.nn.sigmoid_cross_entropy_with_logits(logits = z, labels = y)
# Create a session (approx. 1 line). See method 1 above.
sess = tf.Session()
# Run the session (approx. 1 line).
cost = sess.run(cost, feed_dict = {z:logits, y:labels})
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return cost
logits = sigmoid(np.array([0.2,0.4,0.7,0.9]))
cost = cost(logits, np.array([0,0,1,1]))
print ("cost = " + str(cost))
# GRADED FUNCTION: one_hot_matrix
def one_hot_matrix(labels, C):
Creates a matrix where the i-th row corresponds to the ith class number and the jth column
corresponds to the jth training example. So if example j had a label i. Then entry (i,j)
will be 1.
Arguments:
labels -- vector containing the labels
C -- number of classes, the depth of the one hot dimension
Returns:
one_hot -- one hot matrix
### START CODE HERE ###
# Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line)
C = tf.constant(C)
# Use tf.one_hot, be careful with the axis (approx. 1 line)
one_hot_matrix = tf.one_hot(labels, C, axis=0)
# Create the session (approx. 1 line)
sess = tf.Session()
# Run the session (approx. 1 line)
one_hot = sess.run(one_hot_matrix, feed_dict={})
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return one_hot
labels = np.array([1,2,3,0,2,1])
one_hot = one_hot_matrix(labels, C = 4)
print ("one_hot = " + str(one_hot))
# GRADED FUNCTION: ones
def ones(shape):
Creates an array of ones of dimension shape
Arguments:
shape -- shape of the array you want to create
Returns:
ones -- array containing only ones
### START CODE HERE ###
# Create "ones" tensor using tf.ones(...). (approx. 1 line)
ones = tf.ones(shape)
# Create the session (approx. 1 line)
sess = tf.Session()
# Run the session to compute 'ones' (approx. 1 line)
ones = sess.run(ones)
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return ones
print ("ones = " + str(ones([3])))
# Loading the dataset
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Example of a picture
index = 0
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
# Flatten the training and test images
X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T
# Normalize image vectors
X_train = X_train_flatten/255.
X_test = X_test_flatten/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6)
Y_test = convert_to_one_hot(Y_test_orig, 6)
print ("number of training examples = " + str(X_train.shape[1]))
print ("number of test examples = " + str(X_test.shape[1]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_x, n_y):
Creates the placeholders for the tensorflow session.
Arguments:
n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288)
n_y -- scalar, number of classes (from 0 to 5, so -> 6)
Returns:
X -- placeholder for the data input, of shape [n_x, None] and dtype "float"
Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float"
Tips:
- You will use None because it let's us be flexible on the number of examples you will for the placeholders.
In fact, the number of examples during test/train is different.
### START CODE HERE ### (approx. 2 lines)
X = tf.placeholder(tf.float32, [n_x, None], name='X')
Y = tf.placeholder(tf.float32, [n_y,None], name='Y')
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(12288, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
Initializes parameters to build a neural network with tensorflow. The shapes are:
W1 : [25, 12288]
b1 : [25, 1]
W2 : [12, 25]
b2 : [12, 1]
W3 : [6, 12]
b3 : [6, 1]
Returns:
parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 6 lines of code)
W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())
W2 = tf.get_variable("W2", [12,25], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b2 = tf.get_variable("b2", [12,1], initializer = tf.zeros_initializer())
W3 = tf.get_variable("W3", [6,12], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b3 = tf.get_variable("b3", [6,1], initializer = tf.zeros_initializer())
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2,
"W3": W3,
"b3": b3}
return parameters
tf.reset_default_graph()
with tf.Session() as sess:
parameters = initialize_parameters()
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
W3 = parameters['W3']
b3 = parameters['b3']
### START CODE HERE ### (approx. 5 lines) # Numpy Equivalents:
Z1 = tf.add(tf.matmul(W1,X),b1) # Z1 = np.dot(W1, X) + b1
A1 = tf.nn.relu(Z1) # A1 = relu(Z1)
Z2 = tf.add(tf.matmul(W2,A1),b2) # Z2 = np.dot(W2, a1) + b2
A2 = tf.nn.relu(Z2) # A2 = relu(Z2)
Z3 = tf.add(tf.matmul(W3,A2),b3) # Z3 = np.dot(W3,Z2) + b3
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
print("Z3 = " + str(Z3))
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
cost - Tensor of the cost function
# to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)
logits = tf.transpose(Z3)
labels = tf.transpose(Y)
### START CODE HERE ### (1 line of code)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = labels))
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
print("cost = " + str(cost))
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,
num_epochs = 1500, minibatch_size = 32, print_cost = True):
Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.
Arguments:
X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
Y_train -- test set, of shape (output size = 6, number of training examples = 1080)
X_test -- training set, of shape (input size = 12288, number of training examples = 120)
Y_test -- test set, of shape (output size = 6, number of test examples = 120)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep consistent results
seed = 3 # to keep consistent results
(n_x, m) = X_train.shape # (n_x: input size, m : number of examples in the train set)
n_y = Y_train.shape[0] # n_y : output size
costs = [] # To keep track of the cost
# Create Placeholders of shape (n_x, n_y)
### START CODE HERE ### (1 line)
X, Y = create_placeholders(n_x, n_y)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.
### START CODE HERE ### (1 line)
optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost)
### END CODE HERE ###
# Initialize all the variables
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
epoch_cost = 0. # Defines a cost related to an epoch
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the "optimizer" and the "cost", the feedict should contain a minibatch for (X,Y).
### START CODE HERE ### (1 line)
_ , minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
### END CODE HERE ###
epoch_cost += minibatch_cost / num_minibatches
# Print the cost every epoch
if print_cost == True and epoch % 100 == 0:
print ("Cost after epoch %i: %f" % (epoch, epoch_cost))
if print_cost == True and epoch % 5 == 0:
costs.append(epoch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# lets save the parameters in a variable
parameters = sess.run(parameters)
print ("Parameters have been trained!")
# Calculate the correct predictions
correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))
return parameters
parameters = model(X_train, Y_train, X_test, Y_test)
import scipy
from PIL import Image
from scipy import ndimage
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "thumbs_up.jpg"
## END CODE HERE ##
# We preprocess your image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T
my_image_prediction = predict(my_image, parameters)
plt.imshow(image)
print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read history file from Virtual Explorer
Step2: Visualise calculated geophysical fields
Step3: We now get two files for the calculated fields
Step4: Change history and compare gravity
Step5: As a simple test, we are changing the fault slip for all the faults, simply adding 1000 m to each defined slip. In order not to mess up the original model, we first create a copy of the history object
Step6: We now write the adjusted history back to a new history file and then calculate the updated gravity field
Step7: Figure with all results
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import sys, os
import matplotlib.pyplot as plt
# adjust some settings for matplotlib
from matplotlib import rcParams
# print rcParams
rcParams['font.size'] = 15
# determine path of repository to set paths corretly below
repo_path = os.path.realpath('../..')
import pynoddy
import matplotlib.pyplot as plt
import numpy as np
from IPython.core.display import HTML
css_file = 'pynoddy.css'
HTML(open(css_file, "r").read())
import pynoddy.history
reload(pynoddy.history)
# read model directly from Atlas of Virtual Geophysics
#his = pynoddy.history.NoddyHistory(url = "http://virtualexplorer.com.au/special/noddyatlas/ch3/ch3_5/his/nfold_thrust.his")
# his = pynoddy.history.NoddyHistory(url = "http://virtualexplorer.com.au/special/noddyatlas/ch3/ch3_6/his/ndome_basin.his")
his = pynoddy.history.NoddyHistory(url = "http://tectonique.net/asg/ch3/ch3_5/his/fold_thrust.his")
his.determine_model_stratigraphy()
his.change_cube_size(50)
# Save to (local) file to compute and visualise model
history_name = "fold_thrust.his"
his.write_history(history_name)
# his = pynoddy.history.NoddyHistory(history_name)
output = "fold_thrust_out"
pynoddy.compute_model(history_name, output)
import pynoddy.output
reload(pynoddy.output)
# load and visualise model
h_out = pynoddy.output.NoddyOutput(output)
# his.determine_model_stratigraphy()
h_out.plot_section('x',
layer_labels = his.model_stratigraphy,
colorbar_orientation = 'horizontal',
colorbar=False,
title = '',
# savefig=True, fig_filename = 'fold_thrust_NS_section.eps',
cmap = 'YlOrRd')
h_out.plot_section('y', layer_labels = his.model_stratigraphy,
colorbar_orientation = 'horizontal', title = '', cmap = 'YlOrRd',
# savefig=True, fig_filename = 'fold_thrust_EW_section.eps',
ve=1.5)
h_out.export_to_vtk(vtk_filename = "fold_thrust")
pynoddy.compute_model(history_name, output, sim_type = 'GEOPHYSICS')
reload(pynoddy.output)
geophys = pynoddy.output.NoddyGeophysics(output)
fig = plt.figure(figsize = (8,8))
ax = fig.add_subplot(111)
# imshow(geophys.grv_data, cmap = 'jet')
# define contour levels
levels = np.arange(322,344,1)
cf = ax.contourf(geophys.grv_data, levels, cmap = 'gray', vmin = 324, vmax = 342)
cbar = plt.colorbar(cf, orientation = 'horizontal')
# print levels
for i in range(4):
print("\nEvent %d" % (i+2))
print "Event type:\t" + his.events[i+2].event_type
print "Fault slip:\t%.1f" % his.events[i+2].properties['Slip']
print "Fault dip:\t%.1f" % his.events[i+2].properties['Dip']
print "Dip direction:\t%.1f" % his.events[i+2].properties['Dip Direction']
reload(pynoddy.history)
reload(pynoddy.events)
his2 = pynoddy.history.NoddyHistory("fold_thrust.his")
print his2.events[6].properties
import copy
his = pynoddy.history.NoddyHistory(history_name)
his.all_events_end += 1
his_changed = copy.deepcopy(his)
# change parameters of kinematic events
slip_change = 2000.
wavelength_change = 2000.
# his_changed.events[3].properties['Slip'] += slip_change
# his_changed.events[5].properties['Slip'] += slip_change
# change fold wavelength
his_changed.events[6].properties['Wavelength'] += wavelength_change
his_changed.events[6].properties['X'] += wavelength_change/2.
his_changed.write_history('fold_thrust_changed.his')
reload(pynoddy)
# %%timeit
# recompute block model
pynoddy.compute_model('fold_thrust_changed.his', 'fold_thrust_changed_out')
# %%timeit
# recompute geophysical response
pynoddy.compute_model('fold_thrust_changed.his', 'fold_thrust_changed_out',
sim_type = 'GEOPHYSICS')
# load changed block model
geo_changed = pynoddy.output.NoddyOutput('fold_thrust_changed_out')
# load output and visualise geophysical field
geophys_changed = pynoddy.output.NoddyGeophysics('fold_thrust_changed_out')
fig = plt.figure(figsize = (8,8))
ax = fig.add_subplot(111)
# imshow(geophys_changed.grv_data, cmap = 'jet')
cf = ax.contourf(geophys_changed.grv_data, levels, cmap = 'gray', vmin = 324, vmax = 342)
cbar = plt.colorbar(cf, orientation = 'horizontal')
fig = plt.figure(figsize = (8,8))
ax = fig.add_subplot(111)
# imshow(geophys.grv_data - geophys_changed.grv_data, cmap = 'jet')
maxval = np.ceil(np.max(np.abs(geophys.grv_data - geophys_changed.grv_data)))
# comp_levels = np.arange(-maxval,1.01 * maxval, 0.05 * maxval)
cf = ax.contourf(geophys.grv_data - geophys_changed.grv_data, 20, cmap = 'spectral') #, comp_levels, cmap = 'RdBu_r')
cbar = plt.colorbar(cf, orientation = 'horizontal')
# compare sections through model
geo_changed.plot_section('y', colorbar = False)
h_out.plot_section('y', colorbar = False)
his.events
for i in range(4):
print("Event %d" % (i+2))
print his.events[i+2].properties['Slip']
print his.events[i+2].properties['Dip']
print his.events[i+2].properties['Dip Direction']
# recompute the geology blocks for comparison:
pynoddy.compute_model('fold_thrust_changed.his', 'fold_thrust_changed_out')
geology_changed = pynoddy.output.NoddyOutput('fold_thrust_changed_out')
geology_changed.plot_section('x',
# layer_labels = his.model_stratigraphy,
colorbar_orientation = 'horizontal',
colorbar=False,
title = '',
# savefig=True, fig_filename = 'fold_thrust_NS_section.eps',
cmap = 'YlOrRd')
geology_changed.plot_section('y',
# layer_labels = his.model_stratigraphy,
colorbar_orientation = 'horizontal', title = '', cmap = 'YlOrRd',
# savefig=True, fig_filename = 'fold_thrust_EW_section.eps',
ve=1.5)
# Calculate block difference and export as VTK for 3-D visualisation:
import copy
diff_model = copy.deepcopy(geology_changed)
diff_model.block -= h_out.block
diff_model.export_to_vtk(vtk_filename = "diff_model_fold_thrust_belt")
fig = plt.figure(figsize=(20,8))
ax1 = fig.add_subplot(131)
# original plot
levels = np.arange(322,344,1)
cf1 = ax1.contourf(geophys.grv_data, levels, cmap = 'gray', vmin = 324, vmax = 342)
# cbar1 = ax1.colorbar(cf1, orientation = 'horizontal')
fig.colorbar(cf1, orientation='horizontal')
ax1.set_title('Gravity of original model')
ax2 = fig.add_subplot(132)
cf2 = ax2.contourf(geophys_changed.grv_data, levels, cmap = 'gray', vmin = 324, vmax = 342)
ax2.set_title('Gravity of changed model')
fig.colorbar(cf2, orientation='horizontal')
ax3 = fig.add_subplot(133)
comp_levels = np.arange(-10.,10.1,0.25)
cf3 = ax3.contourf(geophys.grv_data - geophys_changed.grv_data, comp_levels, cmap = 'RdBu_r')
ax3.set_title('Gravity difference')
fig.colorbar(cf3, orientation='horizontal')
plt.savefig("grav_ori_changed_compared.eps")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example - Tunnel Traffic
Step2: Let's take a look at seasonal plots over a week and over a year.
Step3: Now let's look at the periodogram
Step4: The periodogram agrees with the seasonal plots above
Step5: With our feature set created, we're ready to fit the model and make predictions. We'll add a 90-day forecast to see how our model extrapolates beyond the training data. The code here is the same as that in earlier lessons.
|
<ASSISTANT_TASK:>
Python Code:
#$HIDE_INPUT$
import numpy as np
def fourier_features(index, freq, order):
time = np.arange(len(index), dtype=np.float32)
k = 2 * np.pi * (1 / freq) * time
features = {}
for i in range(1, order + 1):
features.update({
f"sin_{freq}_{i}": np.sin(i * k),
f"cos_{freq}_{i}": np.cos(i * k),
})
return pd.DataFrame(features, index=index)
# Compute Fourier features to the 4th order (8 new features) for a
# series y with daily observations and annual seasonality:
#
# fourier_features(y, freq=365.25, order=4)
#$HIDE_INPUT$
from pathlib import Path
from warnings import simplefilter
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from sklearn.linear_model import LinearRegression
from statsmodels.tsa.deterministic import CalendarFourier, DeterministicProcess
simplefilter("ignore")
# Set Matplotlib defaults
plt.style.use("seaborn-whitegrid")
plt.rc("figure", autolayout=True, figsize=(11, 5))
plt.rc(
"axes",
labelweight="bold",
labelsize="large",
titleweight="bold",
titlesize=16,
titlepad=10,
)
plot_params = dict(
color="0.75",
style=".-",
markeredgecolor="0.25",
markerfacecolor="0.25",
legend=False,
)
%config InlineBackend.figure_format = 'retina'
# annotations: https://stackoverflow.com/a/49238256/5769929
def seasonal_plot(X, y, period, freq, ax=None):
if ax is None:
_, ax = plt.subplots()
palette = sns.color_palette("husl", n_colors=X[period].nunique(),)
ax = sns.lineplot(
x=freq,
y=y,
hue=period,
data=X,
ci=False,
ax=ax,
palette=palette,
legend=False,
)
ax.set_title(f"Seasonal Plot ({period}/{freq})")
for line, name in zip(ax.lines, X[period].unique()):
y_ = line.get_ydata()[-1]
ax.annotate(
name,
xy=(1, y_),
xytext=(6, 0),
color=line.get_color(),
xycoords=ax.get_yaxis_transform(),
textcoords="offset points",
size=14,
va="center",
)
return ax
def plot_periodogram(ts, detrend='linear', ax=None):
from scipy.signal import periodogram
fs = pd.Timedelta("1Y") / pd.Timedelta("1D")
freqencies, spectrum = periodogram(
ts,
fs=fs,
detrend=detrend,
window="boxcar",
scaling='spectrum',
)
if ax is None:
_, ax = plt.subplots()
ax.step(freqencies, spectrum, color="purple")
ax.set_xscale("log")
ax.set_xticks([1, 2, 4, 6, 12, 26, 52, 104])
ax.set_xticklabels(
[
"Annual (1)",
"Semiannual (2)",
"Quarterly (4)",
"Bimonthly (6)",
"Monthly (12)",
"Biweekly (26)",
"Weekly (52)",
"Semiweekly (104)",
],
rotation=30,
)
ax.ticklabel_format(axis="y", style="sci", scilimits=(0, 0))
ax.set_ylabel("Variance")
ax.set_title("Periodogram")
return ax
data_dir = Path("../input/ts-course-data")
tunnel = pd.read_csv(data_dir / "tunnel.csv", parse_dates=["Day"])
tunnel = tunnel.set_index("Day").to_period("D")
X = tunnel.copy()
# days within a week
X["day"] = X.index.dayofweek # the x-axis (freq)
X["week"] = X.index.week # the seasonal period (period)
# days within a year
X["dayofyear"] = X.index.dayofyear
X["year"] = X.index.year
fig, (ax0, ax1) = plt.subplots(2, 1, figsize=(11, 6))
seasonal_plot(X, y="NumVehicles", period="week", freq="day", ax=ax0)
seasonal_plot(X, y="NumVehicles", period="year", freq="dayofyear", ax=ax1);
plot_periodogram(tunnel.NumVehicles);
from statsmodels.tsa.deterministic import CalendarFourier, DeterministicProcess
fourier = CalendarFourier(freq="A", order=10) # 10 sin/cos pairs for "A"nnual seasonality
dp = DeterministicProcess(
index=tunnel.index,
constant=True, # dummy feature for bias (y-intercept)
order=1, # trend (order 1 means linear)
seasonal=True, # weekly seasonality (indicators)
additional_terms=[fourier], # annual seasonality (fourier)
drop=True, # drop terms to avoid collinearity
)
X = dp.in_sample() # create features for dates in tunnel.index
#$HIDE_INPUT$
y = tunnel["NumVehicles"]
model = LinearRegression(fit_intercept=False)
_ = model.fit(X, y)
y_pred = pd.Series(model.predict(X), index=y.index)
X_fore = dp.out_of_sample(steps=90)
y_fore = pd.Series(model.predict(X_fore), index=X_fore.index)
ax = y.plot(color='0.25', style='.', title="Tunnel Traffic - Seasonal Forecast")
ax = y_pred.plot(ax=ax, label="Seasonal")
ax = y_fore.plot(ax=ax, label="Seasonal Forecast", color='C3')
_ = ax.legend()
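# Illustrative sketch (an addition, not part of the original lesson): subtracting
# the seasonal fit from the series leaves a deseasonalized residual, which is what
# a subsequent forecasting step would typically model.
y_deseason = y - y_pred
ax = y_deseason.plot(title="Tunnel Traffic - Deseasonalized Residual")
_ = ax.set_ylabel("NumVehicles residual")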
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: 1. Utility functions
Step3: 2. Open the data file
Step4: We can open the file, as a normal HDF5 file
Step5: The object h5file is an h5py File object. Groups and datasets are accessed with dictionary-style indexing, e.g. h5file['photon_data'].
Step6: We see the typical Photon-HDF5 structure. In particular the field description provides a short description of the measurement and acquisition_duration tells that the acquisition lasted 600 seconds.
Step7: Finally, we define a shortcut to the photon_data group to save some typing later
Step8: 4. Reading the data
Step9: Ok, that's what we expect.
Step10: We may want to check the excitation wavelengths used in the measurement. This information is found in the setup group
Step11: Now, let's load the definitions of donor/acceptor channel and excitation periods
Step12: These numbers define the donor and acceptor alternation periods as shown below
Step13: Now that the data has been loaded we can plot an alternation histogram using matplotlib
Step14: 6. Timestamps in different excitation periods
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division, print_function # only needed on py2
%matplotlib inline
import numpy as np
import h5py
import matplotlib.pyplot as plt
def print_children(group):
Print all the sub-groups in `group` and leaf-nodes children of `group`.
Parameters:
data_file (h5py HDF5 file object): the data file to print
for name, value in group.items():
if isinstance(value, h5py.Group):
content = '(Group)'
else:
content = value[()]
print(name)
print(' Content: %s' % content)
print(' Description: %s\n' % value.attrs['TITLE'].decode())
filename = '../data/0023uLRpitc_NTP_20dT_0.5GndCl.hdf5'
h5file = h5py.File(filename)
print_children(h5file)
print_children(h5file['sample'])
photon_data = h5file['photon_data']
photon_data['measurement_specs']['measurement_type'][()].decode()
timestamps = photon_data['timestamps'][:]
timestamps_unit = photon_data['timestamps_specs']['timestamps_unit'][()]
detectors = photon_data['detectors'][:]
print('Number of photons: %d' % timestamps.size)
print('Timestamps unit: %.2e seconds' % timestamps_unit)
print('Detectors: %s' % np.unique(detectors))
h5file['setup']['excitation_wavelengths'][:]
donor_ch = photon_data['measurement_specs']['detectors_specs']['spectral_ch1'][()]
acceptor_ch = photon_data['measurement_specs']['detectors_specs']['spectral_ch2'][()]
print('Donor CH: %d Acceptor CH: %d' % (donor_ch, acceptor_ch))
alex_period = photon_data['measurement_specs']['alex_period'][()]
donor_period = photon_data['measurement_specs']['alex_excitation_period1'][()]
offset = photon_data['measurement_specs']['alex_offset'][()]
acceptor_period = photon_data['measurement_specs']['alex_excitation_period2'][()]
print('ALEX period: %d \nOffset: %4d \nDonor period: %s \nAcceptor period: %s' % \
(alex_period, offset, donor_period, acceptor_period))
timestamps_donor = timestamps[detectors == donor_ch]
timestamps_acceptor = timestamps[detectors == acceptor_ch]
fig, ax = plt.subplots()
ax.hist((timestamps_acceptor - offset) % alex_period, bins=100, alpha=0.8, color='red', label='donor')
ax.hist((timestamps_donor - offset) % alex_period, bins=100, alpha=0.8, color='green', label='acceptor')
ax.axvspan(donor_period[0], donor_period[1], alpha=0.3, color='green')
ax.axvspan(acceptor_period[0], acceptor_period[1], alpha=0.3, color='red')
ax.set_xlabel('(timestamps - offset) MOD alex_period')
ax.set_title('ALEX histogram')
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5), frameon=False);
timestamps_mod = (timestamps - offset) % alex_period
donor_excitation = (timestamps_mod < donor_period[1])*(timestamps_mod > donor_period[0])
acceptor_excitation = (timestamps_mod < acceptor_period[1])*(timestamps_mod > acceptor_period[0])
timestamps_Dex = timestamps[donor_excitation]
timestamps_Aex = timestamps[acceptor_excitation]
fig, ax = plt.subplots()
ax.hist((timestamps_Dex - offset) % alex_period, bins=np.arange(0, alex_period, 40), alpha=0.8, color='green', label='D_ex')
ax.hist((timestamps_Aex - offset) % alex_period, bins=np.arange(0, alex_period, 40), alpha=0.8, color='red', label='A_ex')
ax.set_xlabel('(timestamps - offset) MOD alex_period')
ax.set_title('ALEX histogram (selected periods only)')
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5), frameon=False);
#plt.close('all')
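# Illustrative sketch (an addition, not part of the original notebook): count how
# many photons fall inside each excitation period, as a sanity check that the
# period definitions above capture most of the recorded photons.
n_total = timestamps.size
n_dex = timestamps_Dex.size
n_aex = timestamps_Aex.size
print('Photons in D_ex: %d (%.1f%%)' % (n_dex, 100. * n_dex / n_total))
print('Photons in A_ex: %d (%.1f%%)' % (n_aex, 100. * n_aex / n_total))
print('Photons outside both periods: %d' % (n_total - n_dex - n_aex))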
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Examine a single patient
Step4: Here we can see that these drugs were documented 2153 minutes (1.5 days) after ICU admission, but administered 87132 minutes (60 days) before ICU admission (thus, the negative offset). Since it's reasonable to assume the patient is still taking the drug (as this is the admissiondrug table), drugoffset can likely be treated as a start time for a prescription of the drug.
Step6: Instead of using the drug name, we could try to use the HICL code.
Step7: As we can see, using the HICL returned many more observations. Let's take a look at a few
Step9: All the rows use the drug name "Metolazone". Metolazone is the generic name for the brand Zaroxolyn. This demonstrates the utility of using HICL codes to identify drugs - synonyms like these are very common and can be tedious to find.
|
<ASSISTANT_TASK:>
Python Code:
# Import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import psycopg2
import getpass
import pdvega
# for configuring connection
from configobj import ConfigObj
import os
%matplotlib inline
# Create a database connection using settings from config file
config='../db/config.ini'
# connection info
conn_info = dict()
if os.path.isfile(config):
config = ConfigObj(config)
conn_info["sqluser"] = config['username']
conn_info["sqlpass"] = config['password']
conn_info["sqlhost"] = config['host']
conn_info["sqlport"] = config['port']
conn_info["dbname"] = config['dbname']
conn_info["schema_name"] = config['schema_name']
else:
conn_info["sqluser"] = 'postgres'
conn_info["sqlpass"] = ''
conn_info["sqlhost"] = 'localhost'
conn_info["sqlport"] = 5432
conn_info["dbname"] = 'eicu'
conn_info["schema_name"] = 'public,eicu_crd'
# Connect to the eICU database
print('Database: {}'.format(conn_info['dbname']))
print('Username: {}'.format(conn_info["sqluser"]))
if conn_info["sqlpass"] == '':
# try connecting without password, i.e. peer or OS authentication
try:
if (conn_info["sqlhost"] == 'localhost') & (conn_info["sqlport"]=='5432'):
con = psycopg2.connect(dbname=conn_info["dbname"],
user=conn_info["sqluser"])
else:
con = psycopg2.connect(dbname=conn_info["dbname"],
host=conn_info["sqlhost"],
port=conn_info["sqlport"],
user=conn_info["sqluser"])
except:
conn_info["sqlpass"] = getpass.getpass('Password: ')
con = psycopg2.connect(dbname=conn_info["dbname"],
host=conn_info["sqlhost"],
port=conn_info["sqlport"],
user=conn_info["sqluser"],
password=conn_info["sqlpass"])
query_schema = 'set search_path to ' + conn_info['schema_name'] + ';'
patientunitstayid = 2704494
query = query_schema + """
select *
from admissiondrug
where patientunitstayid = {}
order by drugoffset
""".format(patientunitstayid)
df = pd.read_sql_query(query, con)
df.head()
# Look at a subset of columns
cols = ['admissiondrugid','patientunitstayid','drugoffset','drugenteredoffset','drugname','drughiclseqno']
df[cols].head()
drug = 'ZAROXOLYN'
query = query_schema + """
select
  admissiondrugid, patientunitstayid
  , drugoffset, drugenteredoffset
  , drugname, drughiclseqno
from admissiondrug
where drugname = '{}'
""".format(drug)
df_drug = pd.read_sql_query(query, con)
df_drug.set_index('admissiondrugid',inplace=True)
print('{} unit stays with {}.'.format(df_drug['patientunitstayid'].nunique(), drug))
hicl = 3663
query = query_schema + """
select
  admissiondrugid, patientunitstayid
  , drugoffset, drugenteredoffset
  , drugname, drughiclseqno
from admissiondrug
where drughiclseqno = {}
""".format(hicl)
df_hicl = pd.read_sql_query(query, con)
df_hicl.set_index('admissiondrugid',inplace=True)
print('{} unit stays with HICL = {}.'.format(df_hicl['patientunitstayid'].nunique(), hicl))
# rows in HICL which are *not* in the drug dataframe
idx = ~df_hicl.index.isin(df_drug.index)
# count the drug names
df_hicl.loc[idx, 'drugname'].value_counts()
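# Illustrative sketch (an addition, not part of the original notebook): drugoffset
# is recorded in minutes relative to ICU admission, so converting it to days makes
# the "documented long before admission" pattern easier to read. The new column
# name is only for illustration.
df_hicl['drugoffset_days'] = df_hicl['drugoffset'] / 60.0 / 24.0
df_hicl['drugoffset_days'].describe()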
query = query_schema + """
select
  pt.hospitalid
  , count(pt.patientunitstayid) as number_of_patients
  , count(ad.patientunitstayid) as number_of_patients_with_admdrug
from patient pt
left join admissiondrug ad
  on pt.patientunitstayid = ad.patientunitstayid
group by pt.hospitalid
""".format(patientunitstayid)
df = pd.read_sql_query(query, con)
df['data completion'] = df['number_of_patients_with_admdrug'] / df['number_of_patients'] * 100.0
df.sort_values('number_of_patients_with_admdrug', ascending=False, inplace=True)
df.head(n=10)
df[['data completion']].vgplot.hist(bins=10,
var_name='Number of hospitals',
value_name='Percent of patients with data')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Read in the hanford.csv file in the data/ folder
Step2: <img src="../../images/hanford_variables.png"></img>
Step3: 4. Find a reasonable threshold to say exposure is high and recode the data
Step4: 6. Predict whether the mortality rate (Cancer per 100,000 man years) will be high at an exposure level of 50
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
%matplotlib inline
import numpy as np
from sklearn.linear_model import LogisticRegression
df = pd.read_csv("hanford.csv")
df.head()
df.describe()
df.median()
rang= df['Mortality'].max() - df['Mortality'].min()
rang
iqr_m = df['Mortality'].quantile(q=0.75)- df['Mortality'].quantile(q=0.25)
iqr_m
iqr_e = df['Exposure'].quantile(q=0.75)- df['Exposure'].quantile(q=0.25)
iqr_e
UAL_m= (iqr_m*1.5) + df['Mortality'].quantile(q=0.75)
UAL_m
UAL_e= (iqr_e*1.5) + df['Exposure'].quantile(q=0.75)
UAL_e
LAL_m= df['Mortality'].quantile(q=0.25) - (iqr_m*1.5)
LAL_m
LAL_e= df['Exposure'].quantile(q=0.25) - (iqr_e*1.5)
LAL_e
len(df[df['Mortality']> UAL_m])
len(df[df['Exposure']> UAL_e])
len(df[df['Mortality']< LAL_m])
len(df[df['Mortality'] > UAL_m])
lm = LogisticRegression()
data = np.asarray(df[['Mortality','Exposure']])
x = data[:,1:]
y = data[:,0]
data
x
y
lm.fit(x,y)
lm.coef_
lm.score(x,y)
slope = lm.coef_[0]
intercept = lm.intercept_
lm.predict(50)
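# Illustrative sketch (an addition, not part of the original notebook): logistic
# regression expects a categorical target, so one reasonable recoding -- the
# threshold is only an assumption -- is to label mortality as "high" when it
# exceeds the median, then refit and predict the probability of a high rate at an
# exposure of 50.
df['Mortality_high'] = (df['Mortality'] > df['Mortality'].median()).astype(int)
lm_bin = LogisticRegression()
lm_bin.fit(df[['Exposure']], df['Mortality_high'])
lm_bin.predict_proba([[50]])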
<END_TASK>
|