Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
450
|
2
| null |
412
|
5
| null |
I'm not quite sure, but maybe locality-sensitive hashing is a good solution. It hashes the input data (in your case, names), so the original strings would be preserved. On the other hand, the main idea of LSH is to maximize the likelihood that similar items get similar hashes. There are a lot of different LSH implementations. I tried [Nilsimsa-hash](http://spdp.di.unimi.it/papers/pdcs04.pdf) for comparing tweet texts, and it worked quite well. But I'm not sure how well it will work for short strings (names) - this issue requires testing. I tried your examples, and here is the result (name A, name B, "distance" - maximum is 120):
```
1. AMELIA BEDELIA - CHRISTOPH BAUER - 107
2. AMELIA BEDELIA - C J BAUER - 82
3. AMELIA BEDELIA - FRANZ HELLER - 91
4. CHRISTOPH BAUER - C J BAUER - 81
5. CHRISTOPH BAUER - FRANZ HELLER - 98
6. C J BAUER - FRANZ HELLER - 83
```
As you see, CHRISTOPH BAUER and C J BAUER turned out to be the closest pair. But the difference is not significant.
And just for example - hash representation of these names:
```
AMELIA BEDELIA 6b208299602b5000c3005a048122a43a828020889042240005011c1880864502
CHRISTOPH BAUER 22226448000ab10102e2860b52062487ff0000928e0822ee106028016cc01237
C J BAUER 2282204100961060048050004400240006032400148000802000a80130402002
FRANZ HELLER 58002002400880080b49172044020008030002442631e004009195020ad01158
```
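For readers who want to experiment without Nilsimsa, here is a minimal sketch of the same idea using character-bigram MinHash (the hash construction, signature length, and names below are illustrative only, not the setup used above):
```
import hashlib

def shingles(name, k=2):
    # Character bigrams of the normalized name.
    name = name.upper().replace(" ", "")
    return {name[i:i + k] for i in range(len(name) - k + 1)}

def minhash(shingle_set, num_perm=64):
    # One signature slot per "permutation", simulated by salting a hash.
    return [
        min(int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16)
            for s in shingle_set)
        for seed in range(num_perm)
    ]

def similarity(a, b):
    sa, sb = minhash(shingles(a)), minhash(shingles(b))
    return sum(x == y for x, y in zip(sa, sb)) / len(sa)

print(similarity("CHRISTOPH BAUER", "C J BAUER"))    # relatively high
print(similarity("AMELIA BEDELIA", "FRANZ HELLER"))  # relatively low
```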
| null |
CC BY-SA 3.0
| null |
2014-06-18T14:15:37.957
|
2014-06-18T14:15:37.957
| null | null |
941
| null |
451
|
1
|
457
| null |
5
|
910
|
I'm developing a distributed application, and as it has been designed, there will be a great deal of communication during processing. Since the communication is already spread across the entire process as much as possible, I'm wondering if there are any standard solutions to improve the performance of the message-passing layer of my application.
What changes/improvements could I apply to my code to reduce the time spent sending messages? For what it's worth, I'm communicating up to 10GB between 9 computing nodes, and the framework I'm using is implemented with OpenMPI.
|
How to speedup message passing between computing nodes
|
CC BY-SA 3.0
| null |
2014-06-18T14:36:07.980
|
2014-06-19T01:35:24.700
| null | null |
84
|
[
"efficiency",
"distributed",
"performance"
] |
452
|
2
| null |
406
|
3
| null |
As @Christopher Lauden mentioned above, time-series analysis is most appropriate for this sort of thing. If, however, you wish to take a more traditional "machine learning approach", something that I have done in the past is to block up your data into overlapping windows of time as features, then use them to predict the next day's (or week's) traffic.
Your feature matrix would be something like:
```
t1 | t2 | ... | tN
t2 | t3 | ... | tN+1
t3 | t4 | ... | tN+2
...
tW | tW+1 | ... |tN+W
```
where `tI` is the traffic on day `I`. The target you'll be predicting is the traffic on the day after the last column. In essence, you use a window of traffic to predict the next day's traffic.
Any sort of ML model would work for this.
Edit
In response to the question, "can you elaborate on how you use this feature matrix":
The feature matrix has values indicating past traffic over a period of time (for instance, hourly traffic over one week), and we use this to predict traffic for some specified time period in the future. We take our historic data, build a feature matrix of historic traffic, and label it with the traffic at some period in the future (e.g. 2 days after the window in the features). Using some sort of regression model, we can then try to build a model that predicts how traffic moved in our historic data set. The presumption is that future traffic will resemble past traffic.
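A minimal sketch of this windowing approach (the synthetic data, window size `N`, and choice of regressor are placeholders, not part of the original answer):
```
import numpy as np
from sklearn.ensemble import RandomForestRegressor

traffic = np.random.poisson(100, size=365).astype(float)  # placeholder daily counts
N = 14  # use the previous two weeks to predict the next day

# Rows are overlapping windows t_i ... t_{i+N-1}; the label is day t_{i+N}.
X = np.array([traffic[i:i + N] for i in range(len(traffic) - N)])
y = traffic[N:]

model = RandomForestRegressor(n_estimators=200).fit(X[:-30], y[:-30])
print(model.score(X[-30:], y[-30:]))  # R^2 on the last 30 days
```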
| null |
CC BY-SA 3.0
| null |
2014-06-18T15:10:22.637
|
2014-06-19T14:32:43.503
|
2014-06-19T14:32:43.503
|
403
|
403
| null |
453
|
1
| null | null |
13
|
530
|
I understand that compression methods may be split into two main sets:
- global
- local
The first set works regardless of the data being processed, i.e., it does not rely on any characteristic of the data, and thus need not perform any preprocessing on any part of the dataset (before the compression itself). On the other hand, local methods analyze the data, extracting information that usually improves the compression rate.
While reading about some of these methods, I noticed that [the unary method is not universal](http://en.wikipedia.org/wiki/Universal_code_%28data_compression%29#Universal_and_non-universal_codes), which surprised me since I thought "globality" and "universality" referred to the same thing. The unary method does not rely on characteristics of the data to yield its encoding (i.e., it is a global method), and therefore it should be global/universal, shouldn't it?
My primary questions:
- What is the difference between universal and global methods?
- Aren't these classifications synonyms?
|
What is the difference between global and universal compression methods?
|
CC BY-SA 3.0
| null |
2014-06-18T15:27:23.313
|
2018-07-27T00:11:06.840
|
2016-12-02T17:49:10.983
|
26596
|
84
|
[
"classification",
"algorithms",
"encoding"
] |
454
|
1
|
620
| null |
16
|
2453
|
I have a highly biased binary dataset - I have 1000x more examples of the negative class than the positive class. I would like to train a Tree Ensemble (like Extra Random Trees or a Random Forest) on this data but it's difficult to create training datasets that contain enough examples of the positive class.
What would be the implications of doing a stratified sampling approach to normalize the number of positive and negative examples? In other words, is it a bad idea to, for instance, artificially inflate (by resampling) the number of positive class examples in the training set?
|
What are the implications for training a Tree Ensemble with highly biased datasets?
|
CC BY-SA 3.0
| null |
2014-06-18T15:48:19.497
|
2016-07-22T20:19:02.920
|
2015-11-22T16:25:06.530
|
13727
|
403
|
[
"machine-learning",
"feature-selection",
"class-imbalance"
] |
455
|
1
|
464
| null |
9
|
2914
|
Which freely available datasets can I use to train a text classifier?
We are trying to enhance our users' engagement by recommending the most relevant content to them, so we thought that if we classified our content based on a predefined bag of words, we could recommend engaging content by getting their feedback on a random number of posts that were already classified.
We could use this info to recommend pulses labeled with those classes. But we found that if we used a predefined bag of words not related to our content, the feature vector would be full of zeros, and the categories might not be relevant to our content. For those reasons we tried another solution, which is clustering our content rather than classifying it.
Thanks :)
|
Suggest text classifier training datasets
|
CC BY-SA 3.0
| null |
2014-06-18T16:21:12.203
|
2016-07-05T08:40:00.757
|
2015-05-29T08:59:43.343
|
553
|
960
|
[
"machine-learning",
"classification",
"dataset",
"clustering",
"text-mining"
] |
456
|
2
| null |
451
|
4
| null |
If you expect (or find) that nodes are requesting the same data more than once, perhaps you could benefit from a caching strategy? Especially where some data is used much more often than others, so you can target only the most frequently-used information.
If the data is mutable, you also need a way to confirm that it hasn't changed since the last request that's less expensive than repeating the request.
This is further complicated if each node has its own separate cache. Depending on the nature of your system and task(s), you could consider adding a node dedicated to serving information between the processing nodes, and building a single cache on that node.
For an example of when that might be a good idea, let's suppose I retrieve some data from a remote data store over a low-bandwidth connection, and I have some task(s) requiring that data, which are distributed exclusively among local nodes. I definitely wouldn't want each node requesting information separately over that low-bandwidth connection, which another node might have previously requested. Since my local I/O is much less expensive than my I/O over the low-bandwidth connection, I might add a node between the processing nodes and the remote source that acts as an intermediate server. This node would take requests from the processing nodes, communicate with the remote data store, and cache frequently-requested data to minimize the use of that low-bandwidth connection.
The core concepts here that may be applicable to your specific case are:
- Eliminate or reduce redundant I/O;
- Take advantage of trade-offs between memory use and computation time;
- Not all I/O is created equal.
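As a minimal illustration of the first point, a per-node in-memory cache for an expensive remote fetch might look like the sketch below; `fetch_remote` is a hypothetical stand-in for the real request over the slow connection, and a shared cache node would wrap the same idea in a small service:
```
from functools import lru_cache

def fetch_remote(key):
    # Placeholder for an expensive request over the low-bandwidth connection.
    ...

@lru_cache(maxsize=10_000)
def fetch(key):
    # Repeated requests for the same (immutable) key are served from local memory.
    return fetch_remote(key)
```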
| null |
CC BY-SA 3.0
| null |
2014-06-18T17:22:46.207
|
2014-06-18T17:22:46.207
| null | null |
322
| null |
457
|
2
| null |
451
|
7
| null |
Firstly, I would generally agree with everything that AirThomas suggested. Caching things is generally good if you can, but I find it slightly brittle since that's very dependent on exactly what your application is. Data compression is another very solid suggestion, but my impression on both of these is that the speedups you're looking at are going to be relatively marginal. Maybe as high as 2-5x, but I would be very surprised if they were any faster than that.
Under the assumption that pure I/O (writing to/reading from memory) is not your limiting factor (if it is, you're probably not going to get a lot faster), I would make a strong plug for [zeromq](http://zeromq.org/). In the words of the creators:
> We took a normal TCP socket, injected it with a mix of radioactive
> isotopes stolen from a secret Soviet atomic research project,
> bombarded it with 1950-era cosmic rays, and put it into the hands of a
> drug-addled comic book author with a badly-disguised fetish for
> bulging muscles clad in spandex. Yes, ØMQ sockets are the world-saving
> superheroes of the networking world.
While that may be a little dramatic, `zeromq` sockets are, in my opinion, one of the most amazing pieces of software that the world of computer networking has put together in several years. I'm not sure what you're using for your message-passing layer right now, but if you're using something traditional like `rabbitmq`, you're liable to see speedups of multiple orders of magnitude (I personally noticed about 500x, but it depends a lot on the architecture).
Check out some basic benchmarks [here.](http://blog.x-aeon.com/2013/04/10/a-quick-message-queue-benchmark-activemq-rabbitmq-hornetq-qpid-apollo/)
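For reference, a minimal pyzmq PUSH/PULL sketch looks like the following; the address is a placeholder, and in a real deployment the two sockets live on different nodes:
```
import zmq

ctx = zmq.Context()

# Sender side: stream chunks of data out of a PUSH socket.
push = ctx.socket(zmq.PUSH)
push.bind("tcp://127.0.0.1:5557")

# Receiver side: on another node this would connect to the sender's address.
pull = ctx.socket(zmq.PULL)
pull.connect("tcp://127.0.0.1:5557")

push.send(b"chunk-of-data")
print(pull.recv())
```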
| null |
CC BY-SA 3.0
| null |
2014-06-18T18:22:11.680
|
2014-06-18T18:22:11.680
| null | null |
548
| null |
458
|
1
|
459
| null |
17
|
10358
|
[K-means](http://en.wikipedia.org/wiki/K-means_clustering) is a well-known algorithm for clustering, but there is also an online variation of this algorithm (online K-means). What are the pros and cons of these approaches, and when should each be preferred?
|
K-means vs. online K-means
|
CC BY-SA 3.0
| null |
2014-06-18T19:48:54.883
|
2017-04-26T16:24:21.560
|
2017-04-26T16:24:21.560
|
31513
|
84
|
[
"clustering",
"algorithms",
"k-means"
] |
459
|
2
| null |
458
|
13
| null |
Online k-means (more commonly known as [sequential k-means](https://stackoverflow.com/questions/3698532/online-k-means-clustering)) and traditional k-means are very similar. The difference is that online k-means allows you to update the model as new data is received.
Online k-means should be used when you expect the data to arrive one point at a time (or maybe in chunks). This allows you to update your model as you get more information. The drawback of this method is that it is dependent on the order in which the data is received ([ref](http://www.cs.princeton.edu/courses/archive/fall08/cos436/Duda/C/sk_means.htm)).
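A minimal sketch of that sequential update rule (each incoming point nudges its nearest centroid toward itself by a step that shrinks as the centroid accumulates points; the data and `k` below are placeholders):
```
import numpy as np

def online_kmeans(stream, k, dim):
    centroids = np.random.randn(k, dim)
    counts = np.zeros(k)
    for x in stream:  # points arrive one by one
        j = np.argmin(((centroids - x) ** 2).sum(axis=1))
        counts[j] += 1
        centroids[j] += (x - centroids[j]) / counts[j]  # running-mean update
    return centroids

centroids = online_kmeans(np.random.randn(1000, 2), k=3, dim=2)
```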
| null |
CC BY-SA 3.0
| null |
2014-06-18T20:07:05.017
|
2014-06-18T20:07:05.017
|
2017-05-23T12:38:53.587
|
-1
|
178
| null |
460
|
2
| null |
454
|
5
| null |
A fast, easy, and often effective way to approach this imbalance is to randomly subsample the bigger class (which in your case is the negative class), run the classification N times with members from the two classes (one full and the other subsampled), and report the average metric values, the average being computed over the N (say 1000) iterations.
A more methodical approach would be to execute the Mapping Convergence (MC) algorithm, which involves identifying a subset of strong negative samples with the help of a one-class classifier, such as OSVM or SVDD, and then iteratively execute binary classification on the set of strong negative and positive samples. More details of the MC algorithm can be found in this [paper](http://link.springer.com/article/10.1007/s10994-005-1122-7#page-1).
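A minimal sketch of the first (repeated subsampling) suggestion with scikit-learn; the data, classifier, and metric below are placeholders for illustration:
```
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X = np.random.rand(5000, 10)                   # placeholder features
y = (np.random.rand(5000) < 0.02).astype(int)  # ~2% positives (placeholder)

def balanced_runs(X, y, n_runs=100):
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    scores = []
    for _ in range(n_runs):
        sampled_neg = np.random.choice(neg, size=len(pos), replace=False)
        idx = np.concatenate([pos, sampled_neg])  # one full class, one subsampled
        X_tr, X_te, y_tr, y_te = train_test_split(
            X[idx], y[idx], test_size=0.3, stratify=y[idx])
        clf = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
        scores.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
    return np.mean(scores)

print(balanced_runs(X, y, n_runs=10))
```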
| null |
CC BY-SA 3.0
| null |
2014-06-18T21:56:29.147
|
2014-06-18T21:56:29.147
| null | null |
984
| null |
461
|
1
| null | null |
12
|
1925
|
There's this side project I'm working on where I need to structure a solution to the following problem.
I have two groups of people (clients). Group `A` intends to buy and group `B` intends to sell a certain product `X`. The product has a series of attributes `x_i`, and my objective is to facilitate the transaction between `A` and `B` by matching their preferences. The main idea is to point out to each member of `A` a counterpart in `B` whose product best suits their needs, and vice versa.
Some complicating aspects of the problem:
- The list of attributes is not finite. The buyer might be interested in a very particular characteristic or some kind of design, which is rare among the population and I can't predict. Can't previously list all the attributes;
- Attributes might be continuous, binary, or non-quantifiable (ex: price, functionality, design);
Any suggestion on how to approach this problem and solve it in an automated way?
I would also appreciate some references to other similar problems if possible.
---
Great suggestions! There are many similarities to the way I'm thinking of approaching the problem.
The main issue with mapping the attributes is that the level of detail to which the product should be described depends on each buyer. Let's take the example of a car. The product "car" has lots and lots of attributes, ranging from its performance and mechanical structure to its price.
Suppose I just want a cheap car, or an electric car. OK, that's easy to map because these represent main features of the product. But let's say, for instance, that I want a car with a dual-clutch transmission or xenon headlights. There might be many cars in the database with these attributes, but I wouldn't ask the seller to fill in this level of detail for their product before knowing that someone is looking for them. Such a procedure would require every seller to fill in a complex, very detailed form just to try to sell their car on the platform. It just wouldn't work.
But still, my challenge is to be as detailed as necessary in the search to make a good match. So the way I'm thinking is to map the main aspects of the product, those that are probably relevant to everyone, to narrow down the group of potential sellers.
The next step would be a "refined search". In order to avoid creating an overly detailed form, I could ask buyers and sellers to write a free-text description of their specification, and then use some word-matching algorithm to find possible matches. I understand that this is not a proper solution to the problem, because the seller cannot "guess" what the buyer needs, but it might get me close.
The weighting criteria suggested are great. They allow me to quantify the level to which the seller matches the buyer's needs. The scaling part might be a problem though, because the importance of each attribute varies from client to client. I'm thinking of using some kind of pattern recognition, or just asking the buyer to input the level of importance of each attribute.
|
Preference Matching Algorithm
|
CC BY-SA 3.0
| null |
2014-06-18T22:10:58.497
|
2017-11-06T08:07:09.260
|
2017-11-06T08:07:09.260
|
29575
|
986
|
[
"bigdata",
"text-mining",
"recommender-system"
] |
462
|
2
| null |
454
|
12
| null |
I would recommend training on more balanced subsets of your data: train the random forest on sets of randomly selected positive examples together with a similar number of negative samples. In particular, if the discriminative features exhibit a lot of variance, this will be fairly effective and avoid over-fitting. However, in stratification it is important to find a balance, as over-fitting can become a problem regardless. I would suggest seeing how the model does with the whole data set, then progressively increasing the ratio of positive to negative samples towards an even ratio, and selecting the one that maximizes your performance metric on some representative hold-out data.
This paper seems fairly relevant: [http://statistics.berkeley.edu/sites/default/files/tech-reports/666.pdf](http://statistics.berkeley.edu/sites/default/files/tech-reports/666.pdf). It discusses a `weighted Random Forest`, which more heavily penalizes misclassification of the minority class.
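As a hedged aside, scikit-learn's `class_weight` option plays a role similar to the weighted random forest in that paper, penalizing mistakes on the minority class more heavily; a minimal sketch (training data omitted):
```
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(
    n_estimators=500,
    class_weight="balanced_subsample",  # reweight classes within each bootstrap sample
)
# clf.fit(X_train, y_train)  # X_train, y_train: your imbalanced training data
```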
| null |
CC BY-SA 3.0
| null |
2014-06-18T22:27:06.503
|
2014-06-18T22:27:06.503
| null | null |
548
| null |
463
|
2
| null |
461
|
9
| null |
My first suggestion would be to somehow map the non-quantifiable attributes to quantities with the help of suitable mapping functions. Otherwise, simply leave them out.
Secondly, I don't think that you need to assume that the list of attributes is not finite. A standard and intuitive approach is to represent each attribute as an individual dimension in a vector space. Each product is then simply a point in this space. In that case, if you want to dynamically add more attributes you simply have to remap the product vectors into the new feature space (with additional dimensions).
With this representation, a seller is a point in the feature space with product attributes and a buyer is a point in the same feature space with the preference attributes. The task is then to find out the most similar buyer point for a given seller point.
If your dataset (i.e. the number of buyers/sellers) is not very large, you can solve this with a nearest neighbour approach implemented with the help of k-d trees.
For very large sized data, you can take an IR approach. Index the set of sellers (i.e. the product attributes) by treating each attribute as a separate term with the term-weight being set to the attribute value. A query in this case is a buyer which is also encoded in the term space as a query vector with appropriate term weights. The retrieval step would return you a list of top K most similar matches.
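A minimal sketch of the nearest-neighbour step with a k-d tree, assuming sellers and buyers have already been mapped into the same numeric feature space (the arrays below are placeholders):
```
import numpy as np
from sklearn.neighbors import NearestNeighbors

sellers = np.random.rand(1000, 10)  # one row per product vector (placeholder)
buyers = np.random.rand(200, 10)    # one row per preference vector (placeholder)

nn = NearestNeighbors(n_neighbors=5, algorithm="kd_tree").fit(buyers)
dist, idx = nn.kneighbors(sellers)  # idx[i]: the 5 most similar buyers for seller i
```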
| null |
CC BY-SA 3.0
| null |
2014-06-18T22:45:25.677
|
2014-06-18T22:45:25.677
| null | null |
984
| null |
464
|
2
| null |
455
|
14
| null |
Some standard datasets for text classification are 20 Newsgroups, Reuters (with 8 and 52 classes), and WebKB. You can find all of them [here](http://web.ist.utl.pt/~acardoso/datasets/).
| null |
CC BY-SA 3.0
| null |
2014-06-18T22:48:53.350
|
2014-06-18T22:48:53.350
| null | null |
984
| null |
465
|
2
| null |
369
|
9
| null |
nDCG is used to evaluate a gold-standard ranked list (typically human-judged) against your output ranked list. The more the two ranked lists agree, i.e. the more similar the ranks of the relevant items are in the two lists, the closer the value of nDCG is to 1.
RMSE (Root Mean Squared Error) is typically used to evaluate regression problems where the output (a predicted scalar value) is compared with the true scalar value output for a given data point.
So, if you are simply recommending a score (such as recommending a movie rating), then use RMSE. Whereas, if you are recommending a list of items (such as a list of related movies), then use nDCG.
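For reference, here are the two metrics in one commonly used form (DCG definitions vary slightly between sources): $rel_i$ is the graded relevance of the item at rank $i$, IDCG is the DCG of the ideal ordering, and $\hat{y}_i$ is the predicted score for item $i$.
$$\mathrm{DCG@k} = \sum_{i=1}^{k} \frac{2^{rel_i} - 1}{\log_2(i + 1)}, \qquad \mathrm{nDCG@k} = \frac{\mathrm{DCG@k}}{\mathrm{IDCG@k}}$$
$$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2}$$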
| null |
CC BY-SA 3.0
| null |
2014-06-18T22:58:35.260
|
2014-06-18T22:58:35.260
| null | null |
984
| null |
466
|
1
| null | null |
10
|
789
|
I tried to detect outliers in the energy gas consumption of some Dutch buildings by building a neural network model. I got very bad results, but I can't find the reason.
I am not an expert so I would like to ask you what I can improve and what I'm doing wrong. This is the complete description: [https://github.com/denadai2/Gas-consumption-outliers](https://github.com/denadai2/Gas-consumption-outliers).
The neural network is a feedforward network with backpropagation. As described [here](http://nbviewer.ipython.org/github/denadai2/Gas-consumption-outliers/blob/master/3-%20Regression_NN.ipynb), I split the dataset into a "small" dataset of 41,000 rows and 9 features, and I tried to add more features.
I trained the networks, but the results have an RMSE of 14.14, so the model can't predict the gas consumption well, and consequently I can't run a good outlier detection mechanism. I see in some papers that even when predicting daily or hourly electric power consumption, they get errors like MSE = 0.01.
What can I improve? What am I doing wrong? Can you have a look of my description?
|
Gas consumption outliers detection - Neural network project. Bad results
|
CC BY-SA 3.0
| null |
2014-06-18T23:02:54.200
|
2017-05-04T19:57:55.813
|
2017-05-04T19:57:55.813
|
31513
|
989
|
[
"neural-network",
"outlier"
] |
468
|
1
| null | null |
11
|
1527
|
I'm working on a project and need resources to get me up to speed.
The dataset is around 35000 observations on 30 or so variables. About half the variables are categorical, with some having many different possible values; i.e., if you split the categorical variables into dummy variables, you would have a lot more than 30 variables, but still probably a couple of hundred at most (n > p).
The response we want to predict is ordinal with 5 levels (1,2,3,4,5). Predictors are a mix of continuous and categorical, about half of each. These are my thoughts/plans so far:
1. Treat the response as continuous and run vanilla linear regression.
2. Run nominal and ordinal logistic and probit regression
3. Use MARS and/or another flavor of non-linear regression
I'm familiar with linear regression. MARS is well enough described by Hastie and Tibshirani. But I'm at a loss when it comes to ordinal logit/probit, especially with so many variables and a big data set.
The R package [glmnetcr](http://cran.r-project.org/web/packages/glmnetcr/index.html) seems to be my best bet so far, but the documentation hardly suffices to get me where I need to be.
Where can I go to learn more?
|
Learning ordinal regression in R?
|
CC BY-SA 3.0
| null |
2014-06-19T03:43:23.853
|
2014-06-27T13:22:22.577
|
2014-06-26T01:50:53.147
|
906
|
994
|
[
"r",
"logistic-regression"
] |
469
|
2
| null |
468
|
6
| null |
I suggest this tutorial on ordered logit: [http://www.ats.ucla.edu/stat/r/dae/ologit.htm](http://www.ats.ucla.edu/stat/r/dae/ologit.htm)
It showcases the use of `polr` in the `MASS` package, and also explains the assumptions and how to interpret the results.
| null |
CC BY-SA 3.0
| null |
2014-06-19T05:52:57.527
|
2014-06-19T05:52:57.527
| null | null |
906
| null |
470
|
2
| null |
466
|
8
| null |
Just an idea: your data is highly seasonal - daily and weekly cycles are quite perceptible. So first of all, try to decompose your variables (gas and electricity consumption, temperature, and solar radiation). [Here is](http://www.r-bloggers.com/time-series-decomposition/) a nice tutorial on time series decomposition in R.
After obtaining the trend and seasonal components, the most interesting part begins. It's just an assumption, but I think the gas and electricity consumption variables would be quite predictable by means of time series analysis (e.g., an [ARIMA model](http://statsmodels.sourceforge.net/devel/examples/notebooks/generated/tsa_arma_0.html)). From my point of view, the most exciting part here is to try to predict the residuals after decomposition, using the available data (temperature anomalies, solar radiation, wind speed). I suppose these residuals would contain the outliers you are looking for. Hope you find this useful.
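A minimal sketch of the decomposition-plus-residuals idea in Python with statsmodels (the linked tutorial is for R); the synthetic hourly series and the 3-sigma threshold are placeholders:
```
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Placeholder hourly series with a daily cycle; replace with the real gas data.
idx = pd.date_range("2014-01-01", periods=24 * 60, freq="H")
gas = pd.Series(10 + 3 * np.sin(2 * np.pi * idx.hour / 24)
                + np.random.normal(0, 0.5, len(idx)), index=idx)

result = seasonal_decompose(gas, model="additive", period=24)  # daily seasonality
residuals = result.resid.dropna()

# Points with unusually large residuals are the outlier candidates.
outliers = residuals[residuals.abs() > 3 * residuals.std()]
```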
| null |
CC BY-SA 3.0
| null |
2014-06-19T06:09:43.963
|
2014-06-19T06:09:43.963
| null | null |
941
| null |
471
|
2
| null |
455
|
7
| null |
This is one of the most widely used test collections for text categorization research (links below). I've used it many times. Enjoy your exploration :)
[http://www.daviddlewis.com/resources/testcollections/reuters21578/](http://www.daviddlewis.com/resources/testcollections/reuters21578/)
or
[http://archive.ics.uci.edu/ml/datasets/Reuters-21578+Text+Categorization+Collection](http://archive.ics.uci.edu/ml/datasets/Reuters-21578+Text+Categorization+Collection)
| null |
CC BY-SA 3.0
| null |
2014-06-19T07:22:38.987
|
2014-06-19T07:22:38.987
| null | null |
944
| null |
473
|
1
|
484
| null |
23
|
12524
|
The usual definition of regression (as far as I am aware) is predicting a continuous output variable from a given set of input variables.
Logistic regression is a binary classification algorithm, so it produces a categorical output.
Is it really a regression algorithm? If so, why?
|
Is logistic regression actually a regression algorithm?
|
CC BY-SA 3.0
| null |
2014-06-19T08:56:46.847
|
2021-03-11T20:07:53.137
|
2014-06-19T10:55:38.920
|
922
|
922
|
[
"algorithms",
"logistic-regression"
] |
474
|
1
|
481
| null |
4
|
684
|
In network structure, what is the difference between k-cliques and p-cliques? Can anyone give a brief explanation with examples? Thanks in advance!
============================
EDIT:
I found an online [ppt](http://open.umich.edu/sites/default/files/SI508-F08-Week7-Lab6.ppt) while googling; please take a look at p.37 and p.39 - can you comment on them?
|
Network structure: k-cliques vs. p-cliques
|
CC BY-SA 3.0
| null |
2014-06-19T09:42:28.160
|
2014-06-20T02:27:28.790
|
2014-06-20T02:27:28.790
|
957
|
957
|
[
"definitions",
"social-network-analysis",
"graphs"
] |
475
|
2
| null |
473
|
0
| null |
As you discuss, the definition of regression is predicting a continuous variable. [Logistic regression](http://en.wikipedia.org/wiki/Logistic_regression) is a binary classifier. Logistic regression applies the logistic (inverse logit) function to the output of a usual regression approach; this function maps $(-\infty, +\infty)$ to $(0, 1)$. I think it is just for historical reasons that it keeps that name.
Saying something like "I did some regression to classify images. In particular I used logistic regression." is wrong.
| null |
CC BY-SA 4.0
| null |
2014-06-19T09:50:53.657
|
2021-03-11T20:07:53.137
|
2021-03-11T20:07:53.137
|
29169
|
418
| null |
477
|
1
| null | null |
2
|
731
|
There's this side project I'm working on where I need to structure a solution to the following problem.
I have two groups of people (clients). Group "A" intends to buy and group "B" intends to sell a certain product "X".
The product has a series of attributes x_i, and my objective is to facilitate the transaction between "A" and "B" by matching their preferences. The main idea is to point out to each member of "A" a counterpart in "B" whose product best suits their needs, and vice versa.
Some complicating aspects of the problem:
- The list of attributes is not finite. The buyer might be interested in a very particular characteristic or some kind of design which is rare among the population and I can’t predict. Can’t previously list all the attributes;
- Attributes might be continuous, binary or non-quantifiable (ex: price, functionality, design).
Any suggestion on how to approach this problem and solve it in an automated way?
The idea is to really think out of the box here so feel free to "go wild" on your suggestions.
I would also appreciate some references to other similar problems if possible.
|
Preference Matching Algorithm
|
CC BY-SA 3.0
| null |
2014-06-18T22:15:43.820
|
2014-06-19T10:30:43.993
| null | null |
986
|
[
"algorithms"
] |
478
|
2
| null |
477
|
1
| null |
This can be a cross between machine learning and a simple matching exercise.
I think the X_i tend to be rather well defined and finite, while the A_i can be vague and not finite. From a pure algorithm perspective, I would search for instances where X_i = A_i and store the results in a container of sorts. The more hits for a certain X where X_i_n = A_i_k, the more points that X scores. The X's are then presented to A in order of points, from best match to lowest match (see the sketch below).
On to the machine learning mechanism: as the algorithm serves a lot of A's (by that I mean thousands and thousands, even millions), patterns will start to develop, and certain combinations of A_i's will be more prevalent - or, in other words, worth more to other A_i's for a certain category of A. Using these patterns, the weighting of points can be re-balanced for a higher chance of presenting the correct offers.
Kind of like how a search engine works.
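A minimal sketch of that point-counting idea (the attribute names and products below are made up purely for illustration):
```
def score(product_attrs, buyer_prefs):
    # Count how many of the buyer's desired attributes the product satisfies.
    return sum(1 for k, v in buyer_prefs.items() if product_attrs.get(k) == v)

products = {
    "car1": {"fuel": "electric", "transmission": "dual-clutch"},
    "car2": {"fuel": "petrol", "transmission": "manual"},
}
prefs = {"fuel": "electric"}

# Present products to the buyer ordered from best match to lowest match.
ranked = sorted(products, key=lambda p: score(products[p], prefs), reverse=True)
print(ranked)
```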
| null |
CC BY-SA 3.0
| null |
2014-06-19T03:20:00.847
|
2014-06-19T03:58:35.730
| null | null | null | null |
480
|
2
| null |
468
|
6
| null |
One fairly powerful R package for regression with an ordinal categorical response is VGAM, available on CRAN. The vignette contains some examples of ordinal regression, but admittedly I have never tried it on such a large dataset, so I cannot estimate how long it may take. You may find some additional material about VGAM on the author's [page](https://www.stat.auckland.ac.nz/~yee/VGAM/). Alternatively, you could take a look at Laura Thompson's [companion](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=3&cad=rja&uact=8&ved=0CDUQFjAC&url=http://home.comcast.net/~lthompson221/Splusdiscrete2.pdf&ei=8bmiU4HSFcjA7Abk4YHgDg&usg=AFQjCNHuBp2_nRpaPOFgdkcQWJGuSO9V6A&sig2=hAy4d3mu9WCJZulqxCzraw) to Agresti's book "Categorical Data Analysis". Chapter 7 of Thompson's book describes cumulative logit models, which are frequently used with ordinal responses.
Hope this helps!
| null |
CC BY-SA 3.0
| null |
2014-06-19T10:35:37.190
|
2014-06-19T10:35:37.190
| null | null |
1004
| null |
481
|
2
| null |
474
|
2
| null |
In graph theory a clique indicates a fully connected set of nodes: [as noted here](http://books.google.com/books?id=E3-OSVSPbU0C&pg=PA40&lpg=PA40&dq=%22graph%20theory%22,%20%22p-clique%22&source=bl&ots=smbhcK-9AC&sig=X0v_EbqSqB4WBbudgPbqo_j1pvk&hl=en&sa=X&ei=VNWiU8KaA9WxsQSQhYGIAg&ved=0CDEQ6AEwBA#v=onepage&q=%22graph%20theory%22,%20%22p-clique%22&f=false), a p-clique simply indicates a clique comprised of p nodes. The k-clique problem takes an undirected graph and a number k as input, and outputs a clique of size k if one exists.
[Clique Problem](http://en.wikipedia.org/wiki/Clique_problem)
| null |
CC BY-SA 3.0
| null |
2014-06-19T12:22:42.017
|
2014-06-19T12:22:42.017
| null | null |
59
| null |
482
|
2
| null |
473
|
11
| null |
Short Answer
Yes, logistic regression is a regression algorithm and it does predict a continuous outcome: the probability of an event. That we use it as a binary classifier is due to the interpretation of the outcome.
Detail
Logistic regression is a type of generalized linear model.
In an ordinary linear regression model, a continuous outcome, `y`, is modeled as the sum of the product of predictors and their effect:
```
y = b_0 + b_1 * x_1 + b_2 * x_2 + ... b_n * x_n + e
```
where `e` is the error.
Generalized linear models do not model `y` directly. Instead, they use transformations to expand the domain of `y` to all real numbers. This transformation is called the link function. For logistic regression the link function is the logit function (usually, see note below).
The logit function is defined as
```
ln(y / (1 - y))
```
Thus the form of logistic regression is:
```
ln(y / (1 - y)) = b_0 + b_1 * x_1 + b_2 * x_2 + ... b_n * x_n + e
```
where `y` is the probability of an event.
The fact that we use it as a binary classifier is due to the interpretation of the outcome.
Note: probit is another link function that can be used for binary regression models, but the logit is the most widely used.
| null |
CC BY-SA 3.0
| null |
2014-06-19T13:23:53.387
|
2014-06-19T13:47:55.017
|
2014-06-19T13:47:55.017
|
178
|
178
| null |
483
|
2
| null |
406
|
3
| null |
You could try Neural Network. You can find 2 great explanations on how to apply NN on time series [here](https://stats.stackexchange.com/questions/10162/how-to-apply-neural-network-to-time-series-forecasting) and [here](https://stackoverflow.com/questions/18670558/prediction-using-recurrent-neural-network-on-time-series-dataset).
Note that it is best practice to :
- Deseasonalize/detrend the input data (so that the NN will not learn the seasonality).
- Rescale/Normalize the input data.
Because what you are looking at is a regression problem, the output activation function should be `linear` rather than `sigmoid` or `tanh`, and you aim to minimize the `sum-of-squares error` (as opposed to minimizing the `negative log-likelihood` in a classification problem).
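A minimal sketch of those points with scikit-learn's `MLPRegressor`, which uses an identity (linear) output activation and a squared-error objective by default; the data below is synthetic:
```
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.random.rand(500, 8)  # e.g. lagged, deseasonalized features (placeholder)
y = X @ np.random.rand(8) + 0.1 * np.random.randn(500)

model = make_pipeline(
    StandardScaler(),        # rescale/normalize the inputs
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000),
)
model.fit(X, y)
```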
| null |
CC BY-SA 3.0
| null |
2014-06-19T14:15:07.433
|
2014-06-19T14:15:07.433
|
2017-05-23T12:38:53.150
|
-1
|
968
| null |
484
|
2
| null |
473
|
36
| null |
Logistic regression is regression, first and foremost. It becomes a classifier by adding a decision rule. I will give an example that goes backwards. That is, instead of taking data and fitting a model, I'm going to start with the model in order to show how this is truly a regression problem.
In logistic regression, we are modeling the log odds, or logit, that an event occurs, which is a continuous quantity. If the probability that event $A$ occurs is $P(A)$, the odds are:
$$\frac{P(A)}{1 - P(A)}$$
The log odds, then, are:
$$\log \left( \frac{P(A)}{1 - P(A)}\right)$$
As in linear regression, we model this with a linear combination of coefficients and predictors:
$$\operatorname{logit} = b_0 + b_1x_1 + b_2x_2 + \cdots$$
Imagine we are given a model of whether a person has gray hair. Our model uses age as the only predictor. Here, our event A = a person has gray hair:
log odds of gray hair = -10 + 0.25 * age
...Regression! Here is some Python code and a plot:
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
x = np.linspace(0, 100, 100)
def log_odds(x):
    return -10 + .25 * x
plt.plot(x, log_odds(x))
plt.xlabel("age")
plt.ylabel("log odds of gray hair")
```

Now, let's make it a classifier. First, we need to transform the log odds to get out our probability $P(A)$. We can use the sigmoid function:
$$P(A) = \frac{1}{1 + \exp(-\text{log odds})}$$
Here's the code:
```
plt.plot(x, 1 / (1 + np.exp(-log_odds(x))))
plt.xlabel("age")
plt.ylabel("probability of gray hair")
```

The last thing we need to make this a classifier is to add a decision rule. One very common rule is to classify a success whenever $P(A) > 0.5$. We will adopt that rule, which implies that our classifier will predict gray hair whenever a person is older than 40 and will predict non-gray hair whenever a person is under 40.
Logistic regression works great as a classifier in more realistic examples too, but before it can be a classifier, it must be a regression technique!
| null |
CC BY-SA 3.0
| null |
2014-06-19T14:52:59.877
|
2016-05-02T13:15:30.030
|
2016-05-02T13:15:30.030
|
18364
|
1011
| null |
485
|
2
| null |
14
|
8
| null |
My answer would be no. I consider data mining to be one of the miscellaneous fields within data science. Data mining is mostly concerned with yielding questions rather than answering them. It is often described as "detecting something new", compared to data science, where data scientists try to solve complex problems to reach their end results. However, both terms have many commonalities.
For example, if you have agricultural land where you aim to find the affected plants, spatial data mining plays a key role in doing this job. There is a good chance that you will end up not only finding the affected plants in the land but also the extent to which they are affected - something that is not possible with data science.
| null |
CC BY-SA 3.0
| null |
2014-06-19T16:07:29.907
|
2014-06-20T17:36:05.023
|
2014-06-20T17:36:05.023
|
1015
|
1015
| null |
487
|
2
| null |
422
|
6
| null |
A good list of publicly available social network datasets can be found on the Stanford Network Analysis Project website:
[SNAP datasets](https://snap.stanford.edu/data/)
The site contains internet social network data (Facebook, Twitter, Google Plus), citation networks for academic journals, co-purchasing networks from Amazon, and several other kinds of networks. They have directed, undirected, and bipartite graphs, and all datasets are snapshots that can be downloaded in compressed form.
| null |
CC BY-SA 3.0
| null |
2014-06-19T18:01:36.120
|
2014-06-19T18:01:36.120
| null | null |
1011
| null |
488
|
1
|
489
| null |
12
|
15256
|
I thought that a generalized linear model (GLM) would be considered a statistical model, but a friend told me that some papers classify it as a machine learning technique. Which one is true (or more precise)? Any explanation would be appreciated.
|
Is GLM a statistical or machine learning model?
|
CC BY-SA 3.0
| null |
2014-06-19T18:02:24.650
|
2016-12-05T11:43:00.267
|
2015-07-08T11:37:50.907
|
21
|
1021
|
[
"machine-learning",
"statistics",
"glm"
] |
489
|
2
| null |
488
|
22
| null |
A GLM is absolutely a statistical model, but statistical models and machine learning techniques are not mutually exclusive. In general, statistics is more concerned with inferring parameters, whereas in machine learning, prediction is the ultimate goal.
| null |
CC BY-SA 3.0
| null |
2014-06-19T18:05:51.070
|
2014-06-19T18:05:51.070
| null | null |
1011
| null |
491
|
2
| null |
461
|
7
| null |
As suggested, going wild. First of all, correct me if I’m wrong:
- Just a few features exist for each unique product;
- There is no ultimate features list, and clients are able to add new features to their products.
If so, constructing a full product-feature table could be computationally expensive, and the final data table would be extremely sparse.
The first step is narrowing the list of customers (products) to match. Let's build a bipartite graph, where sellers are type-1 nodes and buyers are type-2 nodes. Create an edge between a seller and a buyer every time they reference a similar product feature, as in the following sketch:

Using the above graph, for every unique seller's product you can select only buyers who are interested in features that match the product (it's possible to require at least one common feature, match the full set of features, or set a threshold level). But certainly, that's not enough. The next step is to compare feature values, as described by the seller and buyer. There are a lot of variants (e.g., k-Nearest-Neighbors). But why not try to solve this question using the existing graph? Let's add weights to the edges:
- for continuous features (e.g., price):
- for binary and non-quantifiable features - just logical biconditional:
The main idea here is to scale every feature to the interval `[0, 1]`. Additionally, we can use feature coefficients to determine the most important features, e.g., assuming price is twice as important as the availability of some rare function:


One of the final steps is simplifying the graph structure by reducing the many edges between a pair to one edge, with a weight equal to the sum of the previously calculated per-feature weights. With such a reduced structure every pair of customers/products has only one edge (no parallel edges). So, to find the best deal for a given seller, you just need to select the connected buyers with the maximum-weight edges.
Future challenge: introduce a cheap method for weighting edges in the first step :)
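A minimal sketch of the weighted bipartite graph with networkx; the price-scaling formula below is only one plausible choice, since the original formulas were shown in images that are not reproduced here:
```
import networkx as nx

G = nx.Graph()
G.add_node("seller_1", bipartite=0)
G.add_node("buyer_1", bipartite=1)

# Continuous feature (price): closeness scaled to [0, 1].
price_seller, price_buyer = 100.0, 90.0
w_price = 1 - abs(price_seller - price_buyer) / max(price_seller, price_buyer)

# Binary feature: logical biconditional (1 if seller and buyer agree, else 0).
seller_has_rare_function, buyer_wants_rare_function = True, True
w_rare = 1.0 if seller_has_rare_function == buyer_wants_rare_function else 0.0

# One reduced edge per pair: weighted sum, with price counted twice as important.
G.add_edge("seller_1", "buyer_1", weight=2 * w_price + 1 * w_rare)

# Best deal for a given seller = connected buyer with the maximum edge weight.
best_buyer = max(G["seller_1"], key=lambda b: G["seller_1"][b]["weight"])
```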
| null |
CC BY-SA 3.0
| null |
2014-06-19T18:24:32.023
|
2014-06-20T03:28:55.463
|
2014-06-20T03:28:55.463
|
322
|
941
| null |
492
|
1
|
2367
| null |
27
|
13852
|
I'm looking to use Google's word2vec implementation to build a named entity recognition system. I've heard that recursive neural nets with backpropagation through structure are well suited to named entity recognition tasks, but I've been unable to find a decent implementation or a decent tutorial for that type of model. Because I'm working with an atypical corpus, standard NER tools in NLTK and similar have performed very poorly, and it looks like I'll have to train my own system.
In short, what resources are available for this kind of problem? Is there a standard recursive neural net implementation available?
|
Word2Vec for Named Entity Recognition
|
CC BY-SA 3.0
| null |
2014-06-19T19:29:57.797
|
2020-08-05T08:41:02.810
|
2017-05-19T16:11:58.100
|
21
|
684
|
[
"machine-learning",
"python",
"neural-network",
"nlp"
] |
493
|
2
| null |
406
|
2
| null |
Well, firstly, I would not even use machine learning here without having in-depth knowledge of the data. The simple things I would do with this time series are:
- Write SQL queries to understand at which times you have the busiest, average, and lowest foot traffic.
- Then try to visualize the whole time series; you could use basic pattern-matching algorithms to pick up patterns.
These two things will help you understand what your data set is telling you. Then, with that in hand, you will probably be in a better position to use machine learning algorithms.
Also, I'm currently building something on time series, and using time series analysis will help you much more than machine learning. For example, there are pattern recognition algorithms you can use that work on daily data to show patterns, and others that use as much as 3 to 6 months of data to catch a pattern.
| null |
CC BY-SA 3.0
| null |
2014-06-19T19:33:12.067
|
2014-06-20T00:34:48.087
|
2014-06-20T00:34:48.087
|
84
|
1029
| null |
494
|
1
| null | null |
5
|
234
|
My data set is formatted like this:
```
User-id | Threat_score
aaa 45
bbb 32
ccc 20
```
The list contains the top 100 users with the highest threat scores. I generate such a list monthly and store each month's list in its own file.
There are three things I would like to get from this data:
1. Users who are consistently showing up in this list
2. Users who are consistently showing up in this list with high threat scores
3. Users whose threat scores are increasing very quickly
I am thinking a visual summary would be nice; each month (somehow) decide which users I want to plot on a graph of historic threat scores.
Are there any known visualization techniques that deal with similar requirements?
How should I be transforming my current data to achieve what I am looking for?
|
Techniques for trend extraction from unbalanced panel data
|
CC BY-SA 3.0
| null |
2014-06-19T19:33:40.620
|
2014-06-26T01:51:28.873
|
2014-06-26T01:51:28.873
|
322
|
1028
|
[
"visualization",
"dataset",
"graphs"
] |
495
|
2
| null |
494
|
6
| null |
I would add a third column called `month` and then concatenate each list. So if you have a top 100 list for 5 months you will create one big table with 500 entries:
```
User-id | Threat_score | month
aaa 45 1
bbb 32 1
ccc 20 1
... ... ...
bbb 64 2
ccc 29 2
... ... ...
```
Then, to answer your first question, you could simply count the occurrences of each user-id. For example, if user `bbb` is in your concatenated table five times, then you know that person made your list all five months.
To answer your second question, you could do a `group by` operation to compute some aggregate function over the users. A `group by` operation with an average function is a little crude and sensitive to outliers, but it would probably get you close to what you are looking for.
One possibility for the third question is to compute the difference in threat score between month `n-1` and month `n`. That is, for each month (not including the first month) you subtract the user's previous threat score from the current threat score. You can make this a new column so your table would now look like:
```
User-id | Threat_score | month | difference
aaa 45 1 null
bbb 32 1 null
ccc 20 1 null
... ... ... ...
bbb 64 2 32
ccc 29 2 9
... ... ... ...
```
With this table, you could again do a `group by` operation to find people who consistently have a higher threat score than the previous month or you could simply find people with a large difference between the current month and the previous month.
As you suggest, visualizing this data is a really good idea. If you care about these threat scores over time (which I think you do), I strongly recommend a simple line chart, with month on the x-axis and threat score on the y-axis. It's not fancy, but it's extremely easy to interpret and should give you useful information about the trends.
Most of this stuff (not the visualization) can be done in SQL and all of it can be done in R or Python (and many other languages). Good luck!
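A minimal pandas sketch of those three steps (the monthly tables below are placeholders; in practice you would read each month's file instead):
```
import pandas as pd

months = {
    1: pd.DataFrame({"User-id": ["aaa", "bbb", "ccc"], "Threat_score": [45, 32, 20]}),
    2: pd.DataFrame({"User-id": ["bbb", "ccc", "ddd"], "Threat_score": [64, 29, 51]}),
}
data = pd.concat([df.assign(month=m) for m, df in months.items()], ignore_index=True)

appearances = data.groupby("User-id").size()                         # question 1
avg_score = data.groupby("User-id")["Threat_score"].mean()           # question 2 (crude)
data = data.sort_values(["User-id", "month"])
data["difference"] = data.groupby("User-id")["Threat_score"].diff()  # question 3
```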
| null |
CC BY-SA 3.0
| null |
2014-06-19T22:02:44.050
|
2014-06-19T22:02:44.050
| null | null |
1011
| null |
496
|
2
| null |
494
|
3
| null |
You have three questions to answer and 100 records per month to analyze.
Based on this size, I'd recommend doing analysis in a simple SQL database or a spreadsheet to start off with. The first two questions are fairly easy to figure out. The third is a little more difficult.
I'd definitely add a column for month and group all of that data together into a spreadsheet or database table given the questions you want to answer.
question 1. Users who are consistently showing up in this list
In excel, this answer should help you out: [https://superuser.com/questions/442653/ms-excel-how-count-occurence-of-item-in-a-list](https://superuser.com/questions/442653/ms-excel-how-count-occurence-of-item-in-a-list)
For a SQL database: [https://stackoverflow.com/questions/2516546/select-count-duplicates](https://stackoverflow.com/questions/2516546/select-count-duplicates)
question 2. Users who are consistently showing up in this list with high risk score
This is just adding a little complexity to the above. For SQL, you would further qualify your query based on a minimum risk score value.
In Excel, a straight pivot isn't going to work; you'll have to copy the unique values from one column to another, then drag a CountIf function adjacent to each unique value, qualifying the CountIf with a minimum risk score.
question 3. Users who are reaching a high risk level very fast.
A fast rise in risk level could be defined as the difference between two months being larger than a given value.
For each user record you want to know the previous month's threat value, or assume zero as the previous threat value.
If that difference is greater than your risk threshold, you want to include it in your report. If not, they can be filtered from the list.
If I had to do this month after month, I would spend the two hours it might take to automate a report after the first couple of months. I'd throw all the data in a SQL database and write a quick script in perl or java to iterate through the 100 records, do the calculation, and output the users who crossed the threshold.
If I needed it to look pretty, I'd use a reporting tool. I'm not particularly partial to any of them.
If I needed to trend threshold values over time, I'd output the results for all people into a second table and add records to that table each month.
If I just needed to do it once or twice, figuring out how to do it in Excel by adding a new column using VLookUp, some basic math, and a filter would probably be the fastest and easiest way to get it done. I tend to avoid using Excel for things I'll need to use consistently because there are limits that you run into when your data gets sizeable.
| null |
CC BY-SA 3.0
| null |
2014-06-20T03:18:07.653
|
2014-06-20T03:18:07.653
|
2017-05-23T12:38:53.587
|
-1
|
434
| null |
497
|
1
|
506
| null |
23
|
712
|
I am trying to find a formula, method, or model to use to analyze the likelihood that a specific event influenced some longitudinal data. I am having difficulty figuring out what to search for on Google.
Here is an example scenario:
Imagine you own a business that has an average of 100 walk-in customers every day. One day, you decide you want to increase the number of walk-in customers arriving at your store each day, so you pull a crazy stunt outside your store to get attention. Over the next week, you see on average 125 customers a day.
Over the next few months, you again decide that you want to get some more business, and perhaps sustain it a bit longer, so you try some other random things to get more customers in your store. Unfortunately, you are not the best marketer, and some of your tactics have little or no effect, and others even have a negative impact.
What methodology could I use to determine the probability that any one individual event positively or negatively impacted the number of walk-in customers? I am fully aware that correlation does not necessarily equal causation, but what methods could I use to determine the likely increase or decrease in the business's daily walk-in clients following a specific event?
I am not interested in analyzing whether or not there is a correlation between your attempts to increase the number of walk-in customers, but rather whether or not any one single event, independent of all others, was impactful.
I realize that this example is rather contrived and simplistic, so I will also give you a brief description of the actual data that I am using:
I am attempting to determine the impact that a particular marketing agency has on their client's website when they publish new content, perform social media campaigns, etc. For any one specific agency, they may have anywhere from 1 to 500 clients. Each client has websites ranging in size from 5 pages to well over 1 million. Over the course of the past 5 year, each agency has annotated all of their work for each client, including the type of work that was done, the number of webpages on a website that were influenced, the number of hours spent, etc.
Using the above data, which I have assembled into a data warehouse (placed into a bunch of star/snowflake schemas), I need to determine how likely it was that any one piece of work (any one event in time) had an impact on the traffic hitting any/all pages influenced by a specific piece of work. I have created models for 40 different types of content that are found on a website that describes the typical traffic pattern a page with said content type might experience from launch date until present. Normalized relative to the appropriate model, I need to determine the highest and lowest number of increased or decreased visitors a specific page received as the result of a specific piece of work.
While I have experience with basic data analysis (linear and multiple regression, correlation, etc.), I am at a loss for how to approach solving this problem. Whereas in the past I have typically analyzed data with multiple measurements for a given axis (for example, temperature vs. thirst vs. animal, determining the impact that increased temperature has on thirst across animals), I feel that here I am attempting to analyze the impact of a single event at some point in time on a non-linear but predictable (or at least model-able) longitudinal dataset. I am stumped :(
Any help, tips, pointers, recommendations, or directions would be extremely helpful and I would be eternally grateful!
|
What statistical model should I use to analyze the likelihood that a single event influenced longitudinal data
|
CC BY-SA 3.0
| null |
2014-06-20T03:18:59.477
|
2019-02-15T11:30:40.717
|
2014-10-22T12:07:33.977
|
134
|
1047
|
[
"machine-learning",
"data-mining",
"statistics"
] |
498
|
2
| null |
424
|
5
| null |
Have you considered a frequency-based approach exploiting simple word co-occurrence in corpora? At least, that's what I've seen most folks use for this. I think it might be covered briefly in Manning and Schütze's book, and I seem to remember something like this as a homework assignment back in grad school...
More background [here](http://nlp.stanford.edu/IR-book/html/htmledition/automatic-thesaurus-generation-1.html).
For this step:
>
Rank other concepts based on their "distance" to the original keywords;
There are several semantic similarity metrics you could look into. [Here](http://www.eecis.udel.edu/%7Etrnka/CISC889-11S/lectures/greenbacker-WordNet-Similarity.pdf)'s a link to some slides I put together for a class project using a few of these similarity metrics in WordNet.
| null |
CC BY-SA 4.0
| null |
2014-06-20T04:09:04.230
|
2020-08-06T16:18:05.960
|
2020-08-06T16:18:05.960
|
98307
|
819
| null |
499
|
2
| null |
128
|
30
| null |
Anecdotally, I've never been impressed with the output from hierarchical LDA. It just doesn't seem to find an optimal level of granularity for choosing the number of topics. I've gotten much better results by running a few iterations of regular LDA, manually inspecting the topics it produced, deciding whether to increase or decrease the number of topics, and continue iterating until I get the granularity I'm looking for.
Remember: hierarchical LDA can't read your mind... it doesn't know what you actually intend to use the topic modeling for. Just like with k-means clustering, you should choose the k that makes the most sense for your use case.
| null |
CC BY-SA 3.0
| null |
2014-06-20T04:27:02.467
|
2014-06-20T04:27:02.467
| null | null |
819
| null |
500
|
2
| null |
488
|
6
| null |
In addition to Ben's answer, the subtle distinction between statistical models and machine learning models is that, in statistical models, you explicitly decide the output equation structure prior to building the model. The model is built to compute the parameters/coefficients.
Take linear model or GLM for example,
```
y = a1x1 + a2x2 + a3x3
```
Your independent variables are x1, x2, x3 and the coefficients to be determined are a1, a2, a3. You define your equation structure this way prior to building the model and then compute a1, a2, a3. If you believe that y is somehow correlated with x2 in a non-linear way, you could try something like this:
```
y = a1x1 + a2(x2)^2 + a3x3.
```
Thus, you put a restriction in terms of the output structure. Inherently statistical models are linear models unless you explicitly apply transformations like sigmoid or kernel to make them nonlinear (GLM and SVM).
In case of machine learning models, you rarely specify output structure and algorithms like decision trees are inherently non-linear and work efficiently.
Contrary to what Ben pointed out, machine learning models aren't just about prediction; they perform classification, regression, etc., which can in turn be used to make predictions - tasks that are also handled by various statistical models.
| null |
CC BY-SA 3.0
| null |
2014-06-20T06:49:25.237
|
2014-06-20T06:49:25.237
| null | null |
514
| null |
501
|
2
| null |
411
|
5
| null |
The best language depends on what you want to do. First remark: don't limit yourself to one language. Learning a new language is always a good thing, but at some point you will need to choose. Facilities offered by the language itself are an obvious thing to take into account, but in my opinion the following are more important:
- available libraries: do you have to implement everything from scratch or can you reuse existing stuff? Note that these libraries need not be in whatever language you are considering, as long as you can interface easily. Working in a language without library access won't help you get things done.
- number of experts: if you want external developers or to start working in a team, you have to consider how many people actually know the language. As an extreme example: if you decide to work in Brainfuck because you happen to like it, know that you will likely work alone. Many surveys exist that can help assess the popularity of languages, including the number of questions per language on SO.
- toolchain: do you have access to good debuggers, profilers, documentation tools and (if you're into that) IDEs?
I am aware that most of my points favor established languages. This is from a 'get-things-done' perspective.
That said, I personally believe it is far better to become proficient in a low level language and a high level language:
- low level: C++, C, Fortran, ... in which you can re-implement performance hot spots only when you need to, because developing in these languages is typically slower (though this is subject to debate). These languages remain king of the hill in terms of critical performance and are likely to stay on top for a long time.
- high level: Python, R, Clojure, ... to 'glue' stuff together and do non-performance critical stuff (preprocessing, data handling, ...). I find this to be important simply because it is much easier to do rapid development and prototyping in these languages.
| null |
CC BY-SA 3.0
| null |
2014-06-20T07:11:40.053
|
2014-06-20T07:11:40.053
| null | null |
119
| null |
502
|
2
| null |
488
|
16
| null |
Regarding prediction, statistics and machine learning sciences started to solve mostly the same problem from different perspectives.
Basically statistics assumes that the data were produced by a given stochastic model. So, from a statistical perspective, a model is assumed and given various assumptions the errors are treated and the model parameters and other questions are inferred.
Machine learning comes from a computer science perspective. The models are algorithmic and usually very few assumptions are required regarding the data. We work with hypothesis space and learning bias. The best exposition of machine learning I found is contained in Tom Mitchell's book called [Machine Learning](http://www.cs.cmu.edu/afs/cs.cmu.edu/user/mitchell/ftp/mlbook.html).
For a more exhaustive and complete idea regarding the two cultures you can read the Leo Breiman paper called [Statistical Modeling: The Two Cultures](http://www.google.ie/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0CDIQFjAA&url=http://strimmerlab.org/courses/ss09/current-topics/download/breiman2001.pdf&ei=RQGkU_DgH8Le7AaRmID4DA&usg=AFQjCNGNnrlqadmBT2fZMT_NfoUQ1rEuow&sig2=g5qovUTsuuJUGGE64g0nrQ)
However what must be added is that even if the two sciences started with different perspectives, both of them now share a fair amount of common knowledge and techniques. Why? Because the problems were the same, but the tools were different. So now machine learning is mostly treated from a statistical perspective (check the Hastie, Tibshirani, Friedman book [The Elements of Statistical Learning](http://statweb.stanford.edu/~tibs/ElemStatLearn/) from a machine learning point of view with a statistical treatment, and perhaps Kevin P. Murphy's book [Machine Learning: A probabilistic perspective](http://mitpress.mit.edu/books/machine-learning-2), to name just a few of the best books available today).
Even the history of the development of this field show the benefits of this merge of perspectives. I will describe two events.
The first is the creation of CART trees by Breiman, who had a solid statistical background. At approximately the same time, Quinlan developed the ID3, C4.5, See5, and so on, decision tree suite with a more computer science background. Now both these families of trees and the ensemble methods like bagging and forests have become quite similar.
The second story is about boosting, which was initially developed by Freund and Schapire when they discovered AdaBoost. The design choices for AdaBoost were made mostly from a computational perspective; even the authors did not understand well why it works. Only 5 years later Breiman (again!) described the AdaBoost model from a statistical perspective and gave an explanation for why it works. Since then, various eminent scientists, with both types of backgrounds, developed those ideas further, leading to a plethora of boosting algorithms, like logistic boosting, gradient boosting, gentle boosting and so on. It is hard now to think about boosting without a solid statistical background.
Generalized Linear Models are a statistical development. However, new Bayesian treatments put this algorithm in the machine learning playground as well. So I believe both claims could be right, since the interpretation and treatment of how it works could be different.
| null |
CC BY-SA 3.0
| null |
2014-06-20T10:01:03.363
|
2016-12-05T08:33:23.513
|
2016-12-05T08:33:23.513
|
26766
|
108
| null |
503
|
2
| null |
497
|
2
| null |
Edit: Warning, I leave my message but my answer seems wrong, please check out the comment below!
I'm not an expert but I guess the main problem is to answer this question:
Has an/any event affected the number of hits on a certain day?
But I don't know how to treat multiple events, so I would try to answer this question:
- Did event X affect the number of hits on a certain day?
Which can be answered using hypothesis testing with p-values (what scientist do to evaluate for instance if a medicine affects a disease or not).
By using p-values, you could determine whether the number of hits on a certain day was merely random and acceptable under normal circumstances, or whether it must correspond to a change in your model.
You can read more about p-values in the [Open Intro to Statistics Book](http://www.openintro.org/stat/); I actually learned about them from there.
Then, the other parts of the problem are how to identify your events and calculate the necessary parameters to answer your question (average/median, variance, etc.) and also how to keep that up-to-date and working.
| null |
CC BY-SA 3.0
| null |
2014-06-20T14:17:27.970
|
2014-06-20T19:39:41.533
|
2014-06-20T19:39:41.533
|
1057
|
1057
| null |
504
|
1
| null | null |
7
|
245
|
I'm currently working with a dataset with a wide range of document lengths -- anywhere from a single word to a full page of text. In addition, the grammatical structure and use of punctuation varies wildly from document to document. The goal is to classify those documents into one of about 10-15 categories. I'm currently using ridge regression and logistic regression for the task, and CV for the alpha values of ridge. The feature vectors are tf-idf ngrams.
Recently I've noticed that longer documents are much less likely to be categorized. Why might this be the case, and how can one "normalize" for this kind of variation? As a more general question, how does one typically deal with diverse data sets? Should documents be grouped based on metrics like document length, use of punctuation, grammatical rigor, etc. and then fed through different classifiers?
|
Dealing with diverse text data
|
CC BY-SA 3.0
| null |
2014-06-20T14:58:09.320
|
2014-06-20T16:56:41.160
| null | null |
684
|
[
"classification",
"nlp"
] |
506
|
2
| null |
497
|
13
| null |
For the record, I think this is the type of question that's perfect for the data science Stack Exchange. I hope we get a bunch of real world examples of data problems and several perspectives on how best to solve them.
I would encourage you not to use p-values as they can be pretty misleading ([1](http://andrewgelman.com/2013/03/12/misunderstanding-the-p-value/), [2](http://occamstypewriter.org/boboh/2008/08/19/why_p_values_are_evil/)). My approach hinges on you being able to summarize traffic on a given page before and after some intervention. What you care about is the difference in the rate before and after the intervention. That is, how does the number of hits per day change? Below, I explain a first stab approach with some simulated example data. I will then explain one potential pitfall (and what I would do about it).
First, let's think about one page before and after an intervention. Pretend the intervention increases hits per day by roughly 15%:
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
def simulate_data(true_diff=0):
#First choose a number of days between [1, 1000] before the intervention
num_before = np.random.randint(1, 1001)
#Next choose a number of days between [1, 1000] after the intervention
num_after = np.random.randint(1, 1001)
#Next choose a rate for before the intervention. How many views per day on average?
rate_before = np.random.randint(50, 151)
#The intervention causes a `true_diff` increase on average (but is also random)
rate_after = np.random.normal(1 + true_diff, .1) * rate_before
#Simulate viewers per day:
vpd_before = np.random.poisson(rate_before, size=num_before)
vpd_after = np.random.poisson(rate_after, size=num_after)
return vpd_before, vpd_after
vpd_before, vpd_after = simulate_data(.15)
plt.hist(vpd_before, histtype="step", bins=20, normed=True, lw=2)
plt.hist(vpd_after, histtype="step", bins=20, normed=True, lw=2)
plt.legend(("before", "after"))
plt.title("Views per day before and after intervention")
plt.xlabel("Views per day")
plt.ylabel("Frequency")
plt.show()
```

We can clearly see that the intervention increased the number of hits per day, on average. But in order to quantify the difference in rates, we should use one company's intervention for multiple pages. Since the underlying rate will be different for each page, we should compute the percent change in rate (again, the rate here is hits per day).
Now, let's pretend we have data for `n = 100` pages, each of which received an intervention from the same company. To get the percent difference we take (mean(hits per day after) - mean(hits per day before)) / mean(hits per day before):
```
n = 100
pct_diff = np.zeros(n)
for i in xrange(n):
vpd_before, vpd_after = simulate_data(.15)
# % difference. Note: this is the thing we want to infer
pct_diff[i] = (vpd_after.mean() - vpd_before.mean()) / vpd_before.mean()
plt.hist(pct_diff)
plt.title("Distribution of percent change")
plt.xlabel("Percent change")
plt.ylabel("Frequency")
plt.show()
```

Now we have the distribution of our parameter of interest! We can query this result in different ways. For example, we might want to know the mode, or (approximation of) the most likely value for this percent change:
```
def mode_continuous(x, num_bins=None):
if num_bins is None:
counts, bins = np.histogram(x)
else:
counts, bins = np.histogram(x, bins=num_bins)
ndx = np.argmax(counts)
return bins[ndx:(ndx+1)].mean()
mode_continuous(pct_diff, 20)
```
When I ran this I got 0.126, which is not bad, considering our true percent change is 0.15 (i.e. 15%). We can also see the number of positive changes, which approximates the probability that a given company's intervention improves hits per day:
```
(pct_diff > 0).mean()
```
Here, my result is 0.93, so we could say there's a pretty good chance that this company is effective.
Finally, a potential pitfall: Each page probably has some underlying trend that you should probably account for. That is, even without the intervention, hits per day may increase. To account for this, I would estimate a simple linear regression where the outcome variable is hits per day and the independent variable is day (start at day=0 and simply increment for all the days in your sample). Then subtract the estimate, y_hat, from each number of hits per day to de-trend your data. Then you can do the above procedure and be confident that a positive percent difference is not due to the underlying trend. Of course, the trend may not be linear, so use discretion! Good luck!
| null |
CC BY-SA 3.0
| null |
2014-06-20T16:09:41.637
|
2014-06-20T16:09:41.637
| null | null |
1011
| null |
507
|
2
| null |
497
|
10
| null |
Back in my data analyst days this type of problem was pretty typical. Basically, everyone in marketing would come up with a crazy idea that they sold to higher-ups as the single event that would boost KPIs by 2000%. The higher-ups would approve them and then they would begin their "test". Results would come back, and management would dump it on the data analysts to determine what worked and who did it.
The short answer is you can't really know if it wasn't run as a random A/B-style test on comparable time periods. But I am very aware of how deficient that answer is, especially since the fact that a pure answer doesn't exist is irrelevant to the urgency of future business decisions. Here are some of the techniques I would use to salvage the analysis in this situation; bear in mind this is more of an art than a science.
Handles
A handle is something that exists in the data that you can hold onto. From what you are telling me about your situation, you have a lot of info on who the marketing agency is, when they tried a tactic, and which site they applied it to. These are your starting point, and information like this is going to be the cornerstone of your analysis.
Methodology
The methodology is probably going to have the strongest impact on which agencies are given credit for any and all gains, so you are going to need to make sure that it is clearly outlined and all stakeholders agree that it makes sense. If you can't do that, it is going to be difficult for people to trust your analysis.
An example of this is conversions. Say the marketing department purchases some leads and they arrive at our landing page; we would track them for 3 days, and if they made a purchase within that time we would count them as having been converted. Why 3 days, why not 5 or 1? That's not important as long as everyone agrees; you now have a definition you can build off of.
Comparisons
In an ideal world you would have a nice A/B test to prove a definitive relationship. I am going to assume that you are running short on those; still, you can learn something from a simple comparison of like data. When companies are trying to determine the efficacy of radio advertising they will often run ads on offset months in the same market, or for several months in one market and compare that with the results in a separate but similar market. It doesn't pass for science, but even with all that noise a strong result will almost always be noticeable.
I would combine these in your case to determine how long an event is given to register an effect. Once you have the data from that time period, run it against your modeled-out traffic prediction, week-over-week growth, month-over-month growth, etc., which can then allow a meaningful comparison between agencies and across time periods.
Pragmatism
The aspiration is to be able to provide a deep understanding of cause and effect, but it is probably not realistic. Because of how messy outside factors make your analysis, you are constantly going to run up against the same question over and over again: did this event raise volume/sales/click-throughs, or would doing anything at all have had the same effect? The best advice I can give for this is to set very realistic goals for what you are looking to measure. A good starting point is, within the methodology you have, determining which event had the largest impact. Once you have that, open your aperture from there.
Summary
Once you have reasoned out all of these aspects you can go about building a general solution which can then be automated. The advantage to designing your solution in this manner is that the business logic is already built in. This will make your results much more approachable and intuitive to non-technical business leaders.
| null |
CC BY-SA 3.0
| null |
2014-06-20T16:29:27.673
|
2014-06-20T16:29:27.673
| null | null |
780
| null |
508
|
2
| null |
504
|
5
| null |
I'm not sure how you are applying a regression framework for document classification. The way I'd approach the problem is to apply a standard discriminative classification approach such as SVM.
In a discriminative classification approach the notion of similarity or inverse distance between data points (documents in this case) is pivotal. Fortunately for documents, there is a standard way of defining pairwise similarity. This is the standard [cosine similarity](http://en.wikipedia.org/wiki/Cosine_similarity) measure which makes use of document length normalization to take different document lengths into account.
Thus, practically speaking, in cosine similarity you would work with relative term weights normalized by document lengths and hence document length diversity should not be a major issue in the similarity computation.
One also has to be careful when applying idf in term weights. If the number of documents is not significantly large, the idf measure may be statistically imprecise, thus adding noise to the term weights. It is also standard practice to ignore stopwords and punctuation.
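As a small, hedged illustration (scikit-learn, with made-up toy documents), the default l2 normalization in a tf-idf vectorizer is exactly the document-length normalization described above:
```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["a very short document about topic classification",
        "a much longer document " * 50 + "about topic classification"]

# l2 (length) normalization is applied by default, so documents of very
# different lengths end up on a comparable scale
vec = TfidfVectorizer(stop_words="english", sublinear_tf=True)
X = vec.fit_transform(docs)
print(cosine_similarity(X[0], X[1]))
```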
| null |
CC BY-SA 3.0
| null |
2014-06-20T16:56:41.160
|
2014-06-20T16:56:41.160
| null | null |
984
| null |
509
|
2
| null |
130
|
-3
| null |
For example, if you have agricultural land, then selecting one particular area of that land would be feature selection. If you aim to find the affected plants in that area, then you need to observe each plant based on a particular feature that is common to every plant, so as to find the abnormalities; for this you would be considering feature extraction. In this example, the original agricultural land corresponds to dimensionality reduction.
| null |
CC BY-SA 3.0
| null |
2014-06-20T17:30:39.097
|
2014-07-09T17:18:44.043
|
2014-07-09T17:18:44.043
|
1015
|
1015
| null |
510
|
2
| null |
422
|
22
| null |
A couple of words about social networks APIs. About a year ago I wrote a review of popular social networks’ APIs for researchers. Unfortunately, it is in Russian. Here is a summary:
Twitter ([https://dev.twitter.com/docs/api/1.1](https://dev.twitter.com/docs/api/1.1))
- almost all data about tweets/texts and users is available;
- lack of sociodemographic data;
- great streaming API: useful for real time text processing;
- a lot of wrappers for programing languages;
- getting network structure (connections) is possible, but time-expensive (1 request per 1 minute).
Facebook ([https://developers.facebook.com/docs/reference/api/](https://developers.facebook.com/docs/reference/api/))
- rate limits: about 1 request per second;
- well documented, sandbox present;
- FQL (SQL-like) and «regular Rest» Graph API;
- friendship data and sociodemographic features present;
- a lot of data is beyond the event horizon: only friends' and friends-of-friends' data is more or less complete; almost nothing can be investigated about a random user;
- some strange API bugs, and looks like nobody cares about it (e.g., some features available through FQL, but not through Graph API synonym).
Instagram ([http://instagram.com/developer/](http://instagram.com/developer/))
- rate limits: 5000 requests per hour;
- real-time API (like Streaming API for Twitter, but with photos) - connection to it is a little bit tricky: callbacks are used;
- lack of sociodemographic data;
- photos, filters data available;
- unexpected imperfections (e.g., it’s possible to collect only 150 comments to post/photo).
Foursquare ([https://developer.foursquare.com/overview/](https://developer.foursquare.com/overview/))
- rate limits: 5000 requests per hour;
- kingdom of geosocial data :)
- quite closed to researchers because of privacy issues. To collect check-in data one needs to build a composite parser working with the 4sq, bit.ly, and twitter APIs at once;
- again: lack of sociodemographic data.
Google+ ([https://developers.google.com/+/api/latest/](https://developers.google.com/+/api/latest/))
- about 5 requests per second (try to verify);
- main methods: activities and people;
- like on Facebook, a lot of personal data for random user is hidden;
- lack of user connections data.
And out-of-competition: I reviewed social networks for Russian readers, and #1 network here is [vk.com](http://en.wikipedia.org/wiki/VK_%28social_network%29). It’s translated to many languages, but popular only in Russia and other CIS countries. API docs link: [http://vk.com/dev/](http://vk.com/dev/). And from my point of view, it’s the best choice for homebrew social media research. At least, in Russia. That’s why:
- rate limits: 3 requests per second;
- public text and media data available;
- sociodemographic data available: for a random user the availability level is about 60-70%;
- connections between users are also available: almost all friendships data for random user is available;
- some special methods: e.g., there is a method to get the online/offline status of an exact user in real time, so one could build a schedule for their audience.
| null |
CC BY-SA 3.0
| null |
2014-06-20T17:34:51.633
|
2014-06-20T17:34:51.633
| null | null |
941
| null |
511
|
1
|
513
| null |
13
|
11080
|
I wonder which type of model cross-validation to choose for classification problem: K-fold or random sub-sampling (bootstrap sampling)?
My best guess is to use 2/3 of the data set (which is ~1000 items) for training and 1/3 for validation.
In this case K-fold gives only three iterations (folds), which is not enough to see a stable average error.
On the other hand, I don't like the feature of random sub-sampling that some items won't ever be selected for training/validation, while some will be used more than once.
Classification algorithms used: random forest & logistic regression.
|
Cross-validation: K-fold vs Repeated random sub-sampling
|
CC BY-SA 3.0
| null |
2014-06-20T17:57:46.363
|
2014-07-01T08:04:30.127
|
2014-06-27T17:36:39.530
|
97
|
97
|
[
"cross-validation",
"sampling"
] |
512
|
2
| null |
412
|
3
| null |
Here's an approach I didn't see mentioned: separate the process into two steps: the first step focused on encoding names so that alternative versions of the same name are encoded the same (or nearly the same), and the second step focused on making them anonymous.
For the first step, you could use one of the [Phonetic Algorithms (Soundex and variants)](https://en.wikipedia.org/wiki/Phonetic_encoding), applied to first name, last name, and initials in various orders. (See [this article](http://www.stevemorse.org/phonetics/bmpm2.htm), also). It is in this step that you resolve similarities vs. differences in names, balancing false positives against false negatives.
For the second step, you can pick any hashing or cryptographic method you like, without concern for how that method affects name matching. This gives you freedom to use a method that has the best characteristics for both performance, robustness, and anonymity.
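Here is a minimal sketch of the two-step idea. The phonetic step uses a simplified Soundex written out by hand (so the details are an approximation of the published algorithm, not a reference implementation), and the second step is an ordinary salted hash:
```
import hashlib

# Simplified Soundex letter codes (approximation of the published algorithm)
CODES = {c: d for d, letters in enumerate(
    ("bfpv", "cgjkqsxz", "dt", "l", "mn", "r"), start=1) for c in letters}

def simple_soundex(name):
    name = "".join(ch for ch in name.lower() if ch.isalpha())
    if not name:
        return ""
    encoded = name[0].upper()
    prev = CODES.get(name[0], 0)
    for ch in name[1:]:
        code = CODES.get(ch, 0)
        if code and code != prev:
            encoded += str(code)
        prev = code
    return (encoded + "000")[:4]

def anonymize(first, last, salt="replace-with-a-secret-salt"):
    # Step 1: phonetic encoding, so spelling variants collapse together
    key = simple_soundex(first) + "-" + simple_soundex(last)
    # Step 2: hashing, chosen independently of how the matching was done
    return hashlib.sha256((salt + key).encode("utf-8")).hexdigest()

# Spelling variants that encode identically produce the same anonymous id
print(anonymize("Amelia", "Bedelia") == anonymize("Amelya", "Bedelia"))
```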
| null |
CC BY-SA 3.0
| null |
2014-06-20T18:56:52.110
|
2014-06-20T18:56:52.110
| null | null |
609
| null |
513
|
2
| null |
511
|
8
| null |
If you have an adequate number of samples and want to use all the data, then k-fold cross-validation is the way to go. Having ~1,500 seems like a lot but whether it is adequate for k-fold cross-validation also depends on the dimensionality of the data (number of attributes and number of attribute values). For example, if each observation has 100 attributes, then 1,500 observations is low.
Another potential downside to k-fold cross-validation is the possibility of a single, extreme outlier skewing the results. For example, if you have one extreme outlier that can heavily bias your classifier, then in a 10-fold cross-validation, 9 of the 10 partitions will be affected (though for random forests, I don't think you would have that problem).
Random subsampling (e.g., bootstrap sampling) is preferable when you are either undersampled or when you have the situation above, where you don't want each observation to appear in k-1 folds.
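For what it's worth, both schemes are easy to compare side by side in scikit-learn (the data shapes below are made up to roughly match the question):
```
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, ShuffleSplit, cross_val_score

X = np.random.rand(1500, 10)                 # stand-in for the real features
y = np.random.randint(0, 2, size=1500)       # stand-in for the real labels
clf = RandomForestClassifier(n_estimators=100)

# k-fold: every sample is used for validation exactly once
kfold = KFold(n_splits=10, shuffle=True, random_state=0)
print(cross_val_score(clf, X, y, cv=kfold).mean())

# repeated random sub-sampling: ten independent 2/3 - 1/3 splits
shuffle = ShuffleSplit(n_splits=10, test_size=1 / 3.0, random_state=0)
print(cross_val_score(clf, X, y, cv=shuffle).mean())
```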
| null |
CC BY-SA 3.0
| null |
2014-06-20T19:15:52.093
|
2014-06-20T19:15:52.093
| null | null |
964
| null |
514
|
2
| null |
313
|
7
| null |
I like Amir Ali Akbari's suggestions, and I'll add a few of my own, focusing on topics and skills that are not adequately covered in most machine learning and data analysis books that focus on math and/or programming.
Data Cleaning:
- Osborne 2012, Best Practices in Data Cleaning
- McCallom 2012, Bad Data Handbook: Cleaning Up The Data So You Can Get Back To Work
Bayesian Data Analysis (alternative to Fisher-style Null Hypothesis Significance Testing):
- Kruschke 2011, Doing Bayesian Data Analysis
Inference in the face of uncertainty, incompleteness, contradictions, ambiguity, imprecision, ignorance, etc.:
- Schum & Starace, 2001, The Evidential Foundations of Probabilistic Reasoning
- Bammer & Smithson 2008, Uncertainty and Risk: Multidisciplinary Perspectives
- Smithson 1989, Ignorance and Uncertainty
- CIA 2008, A Tradecraft Primer: Structured Analytic Techniques for Improving Intelligence Analysis (FREE! as PDF)
- Morgan & Winship 2007, Counterfactuals and Causal Inference: Methods and Principles for Social Research
Experiments:
- Glennerster & Takavarasha 2013, Running Randomized Evaluations: A Practical Guide
- Dunning 2012, Natural Experiments in the Social Sciences
Simulation:
- Epstein 2006, Generative Social Science: Studies in Agent-Based Computational Modeling
- Nelson 2010, Stochastic Modeling: Analysis and Simulation
Expert elicitation, probabilistic estimation:
- Hubbard 2014, How to Measure Anything: Finding the Value of Intangibles in Business
| null |
CC BY-SA 3.0
| null |
2014-06-20T20:03:51.247
|
2014-06-20T20:03:51.247
| null | null |
609
| null |
516
|
1
| null | null |
5
|
297
|
I'm working on multiclass logistic regression model with a large number of features (numFeatures > 100). Using a Maximum Likelihood Estimation based on the cost function and gradient, the fmincg algorithm solves the problem quickly. However, I'm also experimenting with a different cost function and do not have a gradient.
Is there a good way to speed up the calculation process? E.g., is there a different algorithm or fmincg setting that I can use?
|
Efficient solution of fmincg without providing gradient?
|
CC BY-SA 3.0
| null |
2014-06-21T04:59:06.620
|
2014-06-22T13:18:38.760
|
2014-06-21T06:38:58.260
|
84
|
985
|
[
"bigdata",
"data-mining"
] |
517
|
2
| null |
516
|
3
| null |
If you do not have a gradient available, but the problem is convex, you can use the [Nelder-Mead simplex method](http://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method). It is available in most optimization packages, for example in [scipy.optimize](http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html#scipy.optimize.minimize).
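A minimal sketch (the quadratic cost below is just a stand-in for a cost function whose gradient you cannot provide):
```
import numpy as np
from scipy.optimize import minimize

def cost(theta):
    # stand-in for a cost function with no available gradient
    return np.sum((theta - np.array([1.0, -2.0, 3.0])) ** 2)

theta0 = np.zeros(3)
result = minimize(cost, theta0, method="Nelder-Mead",
                  options={"maxiter": 5000})
print(result.x)   # should end up near [1, -2, 3]
```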
| null |
CC BY-SA 3.0
| null |
2014-06-21T07:02:10.687
|
2014-06-21T07:02:10.687
| null | null |
119
| null |
518
|
1
|
583
| null |
-6
|
233
|
Please, could someone recommend a paper or blog post that describes the online k-means algorithm?
|
Online k-means explanation
|
CC BY-SA 3.0
|
0
|
2014-06-21T10:55:41.700
|
2017-08-14T13:29:11.063
|
2017-08-14T13:29:11.063
|
8432
|
960
|
[
"machine-learning",
"clustering"
] |
519
|
2
| null |
466
|
2
| null |
The main problem here is that even before attempting to apply anomaly detection algorithms, you are not getting good enough predictions of gas consumption using neural networks.
If the main goal here is to reach the stage where anomaly detection algorithms can be used, and you state that you have access to examples of successful applications of linear regression for this problem, that approach could be more productive. One of the principles of successful machine learning application is that several different algorithms can be tried out before a final selection is made based on the results.
If you choose to tune your neural network's performance, a [learning curve](https://stackoverflow.com/questions/4617365/what-is-a-learning-curve-in-machine-learning) plotting the effect of changes in different hyperparameters on the error rate can be used. Hyperparameters that can be modified are:
- number of features
- order of the polynomial
- regularization parameter
- number of layers in the network
The best settings can be selected based on performance on a cross-validation set.
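As a rough sketch of what such a curve could look like in code (scikit-learn's `validation_curve` is used here with a ridge model standing in for the network, and random data standing in for the weather features), the idea is simply to sweep one hyperparameter and compare training versus validation error:
```
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import validation_curve

# random stand-ins: replace with your weather features and gas consumption
X = np.random.rand(500, 5)
y = 3 * X[:, 0] + np.random.randn(500) * 0.1

alphas = np.logspace(-3, 3, 13)   # candidate regularization strengths
train_scores, valid_scores = validation_curve(
    Ridge(), X, y, param_name="alpha", param_range=alphas, cv=5)

for a, tr, va in zip(alphas,
                     train_scores.mean(axis=1), valid_scores.mean(axis=1)):
    print("alpha=%8.3f  train R^2=%.3f  validation R^2=%.3f" % (a, tr, va))
```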
| null |
CC BY-SA 3.0
| null |
2014-06-21T12:59:15.373
|
2014-06-21T13:36:29.167
|
2017-05-23T12:38:53.587
|
-1
|
454
| null |
520
|
2
| null |
466
|
3
| null |
In your training notebook you present results for training with 20 epochs. Have you tried varying that parameter, to see if it affects your performance? This is an important parameter for back-propagation.
For estimating your model parameters, as user tomaskazemekas pointed out, plotting Learning Curves is a very good approach. In addition to that, you could also create a plot using a model parameter (e.g. training epochs or hidden layer size) vs. Training and Validation error. This will allow you to understand the bias/variance tradeoff, and help you pick a good value for your parameters. [Some info can be found here](http://www.astroml.org/sklearn_tutorial/practical.html#cross-validation-and-testing). Naturally, it is a good idea to keep a small percentage of your data for a (third) Test set.
As a side note, it seems that increasing the number of neurons in your model shows no significant improvement in your RMSE. This suggests that you could also try a simpler model, i.e. one with fewer neurons, and see how your model behaves.
In fact, I would suggest (if you haven't done so already) trying a simple model with few or no parameters first e.g. Linear Regression, and compare your results with the literature, just as a sanity check.
| null |
CC BY-SA 3.0
| null |
2014-06-21T19:13:47.757
|
2014-06-21T19:13:47.757
| null | null |
1085
| null |
521
|
2
| null |
516
|
2
| null |
Conjugate Gradient -- the cg in Function MINimization (nonlinear) Conjugate Gradient -- requires you to have a gradient function (or approximation) since that is a critical part of the algorithm itself: it needs to find the steepest descent direction quickly.
`fminsearch` implements Nelder-Mead, a nonlinear gradient-free method. Its convergence properties are not anywhere near as good.
What is your cost function? Are there approximations that are differentiable (pref. twice so you can use the very powerful quasi-Newton methods)?
| null |
CC BY-SA 3.0
| null |
2014-06-22T12:02:52.773
|
2014-06-22T12:02:52.773
| null | null |
1061
| null |
522
|
2
| null |
516
|
2
| null |
I have been able to optimize very strange functions with simulated annealing, and it does not require a gradient. Instead, it uses random numbers in a way very similar to Markov Chain Monte Carlo, which helps it avoid getting stuck in local optima. A decent explanation that gives the intuition behind it can be found in this lecture: [Simulated Annealing](http://iacs-courses.seas.harvard.edu/courses/am207/blog/lecture-13.html). scipy 0.14 includes this algorithm in its optimization module: [scipy.optimize.anneal](http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.anneal.html).
| null |
CC BY-SA 3.0
| null |
2014-06-22T13:18:38.760
|
2014-06-22T13:18:38.760
| null | null |
1011
| null |
523
|
2
| null |
466
|
2
| null |
In your notebooks, I did not see your neural network model; can you point out which library you are using, how many layers you have, and what type of neural network you are using?
In your notebooks, it seems you are using the noisy dataset containing outliers to train the neural network. I think you should train the neural network on a dataset that does not have any outliers, so that you could use an observation's distance from the neural network's prediction to label the observation as either an outlier or not.
I wrote a [couple](http://bugra.github.io/work/notes/2014-03-31/outlier-detection-in-time-series-signals-fft-median-filtering/) of [things](http://bugra.github.io/work/notes/2014-05-11/robust-regression-and-outlier-detection-via-gaussian-processes/) on outlier detection in time-series signals. Your data is highly seasonal, as sobach mentioned, and you could use FFT (first link above) to get the overall trend in the signal. After you get the frequency components of the gas consumption, you could look at the high-frequency components to get the outliers.
Also if you want to insist on using neural network for seasonal data, you may want to check recurrent neural networks out as they could incorporate the past observations better than a vanilla neural network, and supposedly may provide a better result for the data that you have.
| null |
CC BY-SA 3.0
| null |
2014-06-22T16:03:39.047
|
2014-06-22T16:03:39.047
| null | null |
1096
| null |
525
|
2
| null |
430
|
15
| null |
In order to understand the variety of ways machine learning can be integrated into production applications, I think it is useful to look at open source projects and papers/blog posts from companies describing their infrastructure.
The common theme that these systems have is the separation of model training from model application. In production systems, model application needs to be fast, on the order of 100s of ms, but there is more freedom in how frequently fitted model parameters (or equivalent) need to be updated.
People use a wide range of solutions for model training and deployment:
- Build a model, then export and deploy it with PMML
AirBnB describes their model training in R/Python and deployment of PMML models via OpenScoring.
Pattern is a project related to Cascading that can consume PMML and deploy predictive models.
- Build a model in MapReduce and access values in a custom system
Conjecture is an open source project from Etsy that allows for model training with Scalding, an easier to use scala wrapper around MapReduce, and deployment via Php.
Kiji is an open source project from WibiData that allows for real-time model scoring (application) as well as functionality for persisting user data and training models on that data via Scalding.
- Use an online system that allows for continuously updating model parameters.
Google released a great paper about an online collaborative filtering they implemented to deal with recommendations in Google News.
| null |
CC BY-SA 3.0
| null |
2014-06-22T18:35:22.207
|
2014-06-22T19:54:28.380
|
2014-06-22T19:54:28.380
|
406
|
406
| null |
526
|
2
| null |
430
|
2
| null |
Chapter 1 of Practical Data Science with R ([http://www.manning.com/zumel/](http://www.manning.com/zumel/)) has a great breakdown of the data science process, including team roles and how they relate to specific tasks. The book follows the models laid out in the chapter by referencing which stages/personnel this or that particular task would be performed by.
| null |
CC BY-SA 3.0
| null |
2014-06-22T18:50:07.193
|
2014-06-23T13:36:51.493
|
2014-06-23T13:36:51.493
|
1103
|
1103
| null |
527
|
1
|
535
| null |
9
|
2220
|
I have a load of documents, which have a load of key value pairs in them. The key might not be unique so there might be multiple keys of the same type with different values.
I want to compare the similarity of the keys between 2 documents. More specifically the string similarity of these values. I am thinking of using something like the [Smith-Waterman Algorithm](http://en.wikipedia.org/wiki/Smith%E2%80%93Waterman_algorithm) to compare the similarity.
So I've drawn a picture of how I'm thinking about representing the data -

The values in the cells are the result of the smith-waterman algorithm (or some other string similarity metric).
Imagine that this matrix represents a key type of "things"; I then need to add the "things" similarity score into a vector of 0 or 1. That's OK.
What I can't figure out is how I determine if the matrix is similar or not similar - ideally I want to convert the matrix to a number between 0 and 1 and then I'll just set a threshold to score it as either 0 or 1.
Any ideas how I can create a score for the matrix? Does anyone know any algorithms that do this type of thing (obviously something like how Smith-Waterman works is kind of applicable)?
|
Score matrix string similarity
|
CC BY-SA 3.0
| null |
2014-06-22T21:45:35.800
|
2017-01-18T00:53:31.453
|
2014-06-22T22:44:19.880
|
84
|
1107
|
[
"algorithms",
"similarity"
] |
529
|
2
| null |
430
|
4
| null |
[Airbnb](http://nerds.airbnb.com/architecting-machine-learning-system-risk/) and [Etsy](http://codeascraft.com/2014/06/18/conjecture-scalable-machine-learning-in-hadoop-with-scalding/) both recently posted detailed information about their workflows.
| null |
CC BY-SA 3.0
| null |
2014-06-23T03:44:57.643
|
2014-06-23T03:44:57.643
| null | null |
159
| null |
530
|
1
|
532
| null |
5
|
1222
|
There is a general recommendation that algorithms in ensemble learning combinations should be different in nature. Is there a classification table, a scale, or some rules that allow one to evaluate how far away the algorithms are from each other? What are the best combinations?
|
How to select algorithms for ensemble methods?
|
CC BY-SA 3.0
| null |
2014-06-23T04:39:26.623
|
2014-06-24T15:44:52.540
| null | null |
454
|
[
"machine-learning"
] |
531
|
1
|
533
| null |
16
|
6262
|
I have a dataset with the following specifications:
- Training dataset with 193,176 samples with 2,821 positives
- Test Dataset with 82,887 samples with 673 positives
- There are 10 features.
I want to perform a binary classification (0 or 1). The issue I am facing is that the data is very unbalanced. After normalization and scaling the data along with some feature engineering and using a couple of different algorithms, these are the best results I could achieve:
```
mean square error : 0.00804710026904
Confusion matrix : [[82214 667]
[ 0 6]]
```
i.e. only 6 correct positive hits. This is using logistic regression. Here are the various things I tried with this:
- Different algorithms like RandomForest, DecisionTree, SVM
- Changing parameters value to call the function
- Some intuition based feature engineering to include compounded features
Now, my questions are:
- What can I do to improve the number of positive hits ?
- How can one determine if there is an overfit in such a case ? ( I have tried plotting etc. )
- At what point could one conclude if maybe this is the best possible fit I could have? ( which seems sad considering only 6 hits out of 673 )
- Is there a way I could make the positive sample instances weigh more so the pattern recognition improves leading to more hits ?
- Which graphical plots could help detect outliers or some intuition about which pattern would fit the best?
I am using the scikit-learn library with Python and all implementations are library functions.
edit:
Here are the results with a few other algorithms:
Random Forest Classifier(n_estimators=100)
```
[[82211 667]
[ 3 6]]
```
Decision Trees:
```
[[78611 635]
[ 3603 38]]
```
|
Binary classification model for unbalanced data
|
CC BY-SA 4.0
| null |
2014-06-23T07:03:15.643
|
2019-06-28T10:01:52.693
|
2019-05-07T04:22:04.987
|
1330
|
793
|
[
"machine-learning",
"python",
"classification",
"logistic-regression"
] |
532
|
2
| null |
530
|
8
| null |
In general in an ensemble you try to combine the opinions of multiple classifiers. The idea is like asking a bunch of experts on the same thing. You get multiple opinions and you later have to combine their answers (e.g. by a voting scheme). For this trick to work you want the classifiers to be different from each other, that is you don't want to ask the same "expert" twice for the same thing.
In practice, the classifiers do not have to be different in the sense of a different algorithm. What you can do is train the same algorithm with different subsets of the data or different subsets of the features (or both). If you use different training sets you end up with different models and different "independent" classifiers.
There is no golden rule on what works best in general. You have to try to see if there is an improvement for your specific problem.
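As a small sketch of both flavours (scikit-learn, synthetic data): the same base algorithm trained on random subsets of rows and columns, and genuinely different algorithms combined by majority vote.
```
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# The same "expert" trained on different subsets of samples and features
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                            max_samples=0.7, max_features=0.7, random_state=0)

# Different "experts" combined by a simple voting scheme
voting = VotingClassifier([("lr", LogisticRegression()),
                           ("nb", GaussianNB()),
                           ("dt", DecisionTreeClassifier())], voting="hard")

for name, clf in [("bagging", bagging), ("voting", voting)]:
    print("%s: %.3f" % (name, cross_val_score(clf, X, y, cv=5).mean()))
```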
| null |
CC BY-SA 3.0
| null |
2014-06-23T08:03:50.327
|
2014-06-23T11:21:51.677
|
2014-06-23T11:21:51.677
|
1085
|
418
| null |
533
|
2
| null |
531
|
11
| null |
- Since you are doing binary classification, have you tried adjusting the classification threshold? Since your algorithm seems rather insensitive, I would try lowering it and check if there is an improvement (a small sketch of this follows the list).
- You can always use Learning Curves, or a plot of one model parameter vs. Training and Validation error to determine whether your model is overfitting. It seems it is underfitting in your case, but that's just intuition.
- Well, ultimately it depends on your dataset, and the different models you have tried. At this point, and without further testing, there can not be a definite answer.
- Without claiming to be an expert on the topic, there are a number of different techniques you may follow (hint: first link on google), but in my opinion you should first make sure you choose your cost function carefully, so that it represents what you are actually looking for.
- Not sure what you mean by pattern intuition, can you elaborate?
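Regarding the first point, here is a minimal sketch of lowering the decision threshold via `predict_proba`; the heavily imbalanced data below is synthetic and only stands in for yours:
```
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# synthetic, heavily imbalanced stand-in for the real data
X, y = make_classification(n_samples=20000, n_features=10,
                           weights=[0.985], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_tr, y_tr)
probs = clf.predict_proba(X_te)[:, 1]      # probability of the positive class

for threshold in (0.5, 0.2, 0.1, 0.05):    # the default is effectively 0.5
    preds = (probs >= threshold).astype(int)
    print("threshold = %.2f" % threshold)
    print(confusion_matrix(y_te, preds))
```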
By the way, what were your results with the different algorithms you tried? Were they any different?
| null |
CC BY-SA 3.0
| null |
2014-06-23T08:13:22.837
|
2014-06-23T08:13:22.837
|
2017-05-23T12:38:53.587
|
-1
|
1085
| null |
534
|
2
| null |
527
|
3
| null |
If your goal is to transform your matrix into a number (your similarity measure), you may want to use a [matrix norm](http://en.wikipedia.org/wiki/Matrix_norm).
For instance, using the [Frobenius norm](http://en.wikipedia.org/wiki/Matrix_norm#Frobenius_norm) on your example would return 1.488086.
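For example, with numpy (the matrix below is made up rather than the one in the question, and the rescaling assumes the similarity scores are bounded by 1):
```
import numpy as np

sim = np.array([[0.3, 0.1, 0.0],
                [0.1, 0.9, 0.2],
                [0.0, 0.2, 0.8]])   # pairwise string-similarity scores

frobenius = np.linalg.norm(sim, "fro")

# rescale to [0, 1] by the norm of an all-ones matrix of the same shape,
# i.e. the largest possible value if every score is at most 1
normalized = frobenius / np.linalg.norm(np.ones(sim.shape), "fro")
print("%.4f %.4f" % (frobenius, normalized))
```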
| null |
CC BY-SA 3.0
| null |
2014-06-23T09:23:27.617
|
2014-06-23T09:23:27.617
| null | null |
883
| null |
535
|
2
| null |
527
|
3
| null |
As I understand it, Document 1 and Document 2 may have different numbers of keys, and you want to get a final similarity evaluation between 0 and 1. If so, I would propose the following algorithm:
- Sum of max. vals is equal to 0.
- Select maximum value from doc-doc matrix and add it to Sum of max. vals.
- Remove row and column with maximum value from the matrix.
- Repeat steps 2-3 until the rows or columns are exhausted.
- Divide Sum of max. vals by the average number of key words in the two texts.
Final estimation would be equal to 1, if both documents have identical length, and every word from Doc 1 has equivalent in Doc 2.
You haven't mentioned the software you are using, but here is an R example of a function computing such similarity (it takes an object of class matrix as input):
```
eval.sim <- function(sim.matrix){
similarity <- 0
denominator <- sum(dim(sim.matrix)) / 2
for(i in 1:(min(c(nrow(sim.matrix), ncol(sim.matrix))) - 1)){
extract <- which(sim.matrix == max(sim.matrix), arr.ind=T)[1, ]
similarity <- similarity + sim.matrix[extract[1], extract[2]]
sim.matrix <- sim.matrix[-extract[1], -extract[2]]
}
similarity <- similarity + max(sim.matrix)  # add the last remaining maximum
similarity <- similarity / denominator
}
```
In python -
```
import numpy as np
def score_matrix(sim_matrix):
similarity = 0
denominator = sum(sim_matrix.shape) / 2
for i in range(min(sim_matrix.shape)):
x, y = np.where(sim_matrix == np.max(sim_matrix))[0][0], np.where(sim_matrix == np.max(sim_matrix))[1][0]
similarity += sim_matrix[x, y]
sim_matrix = np.delete(sim_matrix,(x),axis=0)
sim_matrix = np.delete(sim_matrix,(y),axis=1)
return similarity / denominator
```
| null |
CC BY-SA 3.0
| null |
2014-06-23T09:42:46.450
|
2014-06-30T05:35:26.340
|
2014-06-30T05:35:26.340
|
941
|
941
| null |
536
|
1
| null | null |
6
|
366
|
More often than not, data I am working with is not 100% clean. Even if it is reasonably clean, still there are portions that need to be fixed.
When a fraction of data needs it, I write a script and incorporate it in data processing.
But what to do if only a few entries needs to be fixed (e.g. misspelled city or zip code)? Let's focus on "small data", such as that in CSV files or a relational database.
The practical problems I encountered:
- Writing a general script trying to solve all similar errors may give unintended consequences (e.g. matching cities that are different but happen to have similar names).
- Copying and modifying data may make a mess, as:
Generating it again will destroy all fixes.
When there are more errors of different kinds, too many copies of the same file result, and it is hard to keep track of them all.
- Writing a script to modify particular entries seems the best, but there is overhead in comparison to opening CSV and fixing it (but still, seems to be the best); and either we need to create more copies of data (as in the previous point) or run the script every time we load data.
What are the best practices in such a case as this?
EDIT: The question is on the workflow, not whether to use it or not.
(In my particular case I don't want the end-user to see misspelled cities and, even worse, see two points of data, for the same city but with different spelling; the data is small, ~500 different cities, so manual corrections do make sense.)
|
Good practices for manual modifications of data
|
CC BY-SA 3.0
| null |
2014-06-23T11:03:09.767
|
2014-10-15T11:37:47.090
|
2014-07-02T08:48:41.517
|
24
|
289
|
[
"data-cleaning"
] |
538
|
1
| null | null |
5
|
366
|
Data Science and Machine Learning include a lot of different topics and it´s hard to stay up-to-date about all the news about papers, researches or new tutorials and tools.
What sources do you use to get all the information?
I use mostly Reddit as my first source and the subreddits [Machine Learning](http://www.reddit.com/r/MachineLearning) and [R](http://www.reddit.com/r/rstats).
But I would also recommend [Datatau](http://www.datatau.com) and of course the great [KDNuggets](http://www.kdnuggets.com).
|
What are your favorite sources for news about Machine Learning and Data Science?
|
CC BY-SA 4.0
| null |
2014-06-23T13:31:41.443
|
2021-04-01T08:22:59.627
|
2021-03-30T11:24:36.087
|
85045
|
1125
|
[
"machine-learning"
] |
539
|
2
| null |
538
|
5
| null |
I like following certain tags on [Cross Validated](http://stats.stackexchange.com), as well as the [r tag](https://stackoverflow.com/questions/tagged/r) StackOverflow.
[arXiv.org](http://www.arxiv.org) is also pretty cool, it has several subsections devoted to stats and ML.
| null |
CC BY-SA 4.0
| null |
2014-06-23T13:36:18.273
|
2021-04-01T08:22:59.627
|
2021-04-01T08:22:59.627
|
85045
|
1127
| null |
540
|
1
| null | null |
0
|
1007
|
What is the very first thing you do when you get your hands on a new data set (assuming it is cleaned and well structured)? Please share sample code snippets as I am sure this would be extremely helpful for both beginners and experienced.
|
First steps on a new cleaned dataset
|
CC BY-SA 3.0
| null |
2014-06-23T15:38:10.920
|
2021-03-30T11:52:05.423
| null | null |
1131
|
[
"knowledge-base",
"dataset"
] |
541
|
2
| null |
536
|
2
| null |
The #1 most important thing is to explicitly document your process.
Different people working on different data make different choices. Any scientists who claim that their work is entirely objective and rational are in denial about the nature of science; every single model we create is informed by our biases and perspectives. The key is that we each have different biases and perspectives, which is one reason that science is the pursuit of a society, not of individuals, and underscores the importance of peer review.
You have already identified some trade-offs between an algorithmic solution and individual/case-by-case corrections. If we have training as scientists, we may be used to thinking that everything has to be applied as an overall rule, so we may bend over backwards to find correlations and fit curves to data that just won't reasonably accept it. Applying an algorithm to the entire data set can be powerful when it works, but it's not the only tool at your disposal. Sometimes you need to use your judgment as the scientist.
But, if you use your individual judgment to make exceptions and to manually correct your data, be prepared to defend your judgment and argue your case. Be prepared to show that you considered all the data. If you're going to manually correct 8 observations out of a set of 100,000, you need to do more than justify those 8 manual corrections - you also need to justify not correcting the other 99,992 observations.
You can still take a methodical approach, perhaps by classifying your data in a systematic way, and choosing which subset of the data to apply your cleaning algorithm to based on that system of classification. And when you do this, you document it, you make your case, and you respect the judgment of your colleagues in the field.
On the other hand, why do all this extra work before you know it's necessary? Plenty of "dirty" data sets will still produce useful results. Perhaps 0.5% of your data is "dirty" in a way that you know will have a bad influence on your end product. Well, any analysis is subject to error, and the more you obsess over that 0.5% the more you will start to think of it and treat it like it was 5%, or 25% - much more significant than it truly is. Try to apply your analysis while admitting you know there is some error, and only do extra work to clean the data once you can show for certain that your analysis fails or is not useful otherwise.
## For example...
Say you have a set of test results from a survey of wells, showing the concentrations of certain dissolved substances, hourly over the course of a year. And you observe in this set certain spikes, of short duration, and orders of magnitude higher than the surrounding data. If they are few, and obvious, and you know that the sensors used to produce the data set occasionally malfunction, then there's no reason to apply an algorithmic solution to the entire data set. You have a choice between excluding some data or modifying it.
I recommend the exclusion route whenever possible, since you are making fewer assumptions when you don't have to additionally choose a "correct" value. But if your analysis will absolutely fail with a discontinuity, there are many options. You could choose to replace "bad" values via linear interpolation. You could hold constant the previous value. If there is some great previous literature to apply, and you really have a strong case that the "bad" data is purely due to equipment malfunction, perhaps there's an established model that you can apply to fill in those regions.
There are a great many approaches, which is why the most important thing is to document your process, explicitly and exhaustively. Arguing your case is important too, but not strictly necessary; the community is just more likely to ignore you if you don't make a full effort to engage the review process.
| null |
CC BY-SA 3.0
| null |
2014-06-23T15:48:10.033
|
2014-06-23T16:14:33.900
|
2014-06-23T16:14:33.900
|
322
|
322
| null |
543
|
2
| null |
540
|
1
| null |
Well, you mention in your question that the data is 'clean and well structured'. In practice, close to 70% of the time is spent in doing these two steps. Of course the first thing you do is to separate the training and test data. Considering the plethora of libraries and tools available, irrespective of which technology/language you prefer to use, the next step would be to understand the data via graph plots and to draw useful intuitions specific to your target goal. This would then be followed by various other problem-specific methods. As pointed out, the question is very broad and citing code snippets is simply not feasible.
| null |
CC BY-SA 3.0
| null |
2014-06-23T17:50:51.320
|
2014-06-23T17:50:51.320
| null | null |
793
| null |
544
|
2
| null |
538
|
5
| null |
You could also try:
- Quora
- Machine Learning Daily
| null |
CC BY-SA 3.0
| null |
2014-06-23T17:56:32.917
|
2014-06-23T17:56:32.917
| null | null |
793
| null |
545
|
2
| null |
540
|
3
| null |
I think this is a reasonable question. Here is what I do:
- Peek at the first few rows
- Visualize the distribution of the features I care about (histograms)
- Visualize the relationship between pairs of features (scatterplots)
I downloaded the abalone dataset from the UCI Machine Learning repository [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.data). Let's say I care about how height and diameter can be used to predict whole weight. For completeness, I've included the step of reading the data from file.
```
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
data = pd.read_csv("abalone.data", header=None)
data.columns = ["sex", "length", "diameter", "height",
"whole_weight", "shucked_weight",
"viscera_weight", "shell_weight", "rings"]
```
Now we can take a peek at the first few rows:
```
data.head()
```

Now, I know that the variables I care about are floating point values and they can be treated as continuous. I want to take a look to see how these three variables are distributed:
```
fig = plt.figure(figsize=(20,5))
plt.subplot(1, 3, 1)
plt.hist(data['diameter'], normed=True)
plt.title("Diameter")
plt.subplot(1, 3, 2)
plt.hist(data['height'], normed=True)
plt.title("Height")
plt.subplot(1, 3, 3)
plt.hist(data['whole_weight'], normed=True)
plt.title("Whole Weight")
plt.show()
```

Great! Now, I know that diameter and whole weight are skewed left and right (respectively). I also know that there are some outliers in terms of height (which is why matplotlib gives me extra room to the right of the distribution). Finally, I'd like to see if I can find any visual patterns between my predictors and outcome variable. I use a scatter plot for this:
```
plt.figure(figsize=(15,5))
plt.subplot(1, 2, 1)
plt.plot(data['diameter'], data['whole_weight'], 'o')
plt.title("Diameter vs. Whole Weight")
plt.ylabel("Whole Weight")
plt.xlabel("Diameter")
plt.subplot(1, 2, 2)
plt.plot(data['height'], data['whole_weight'], 'o')
plt.title("Height vs. Whole Weight")
plt.ylabel("Whole Weight")
plt.xlabel("Height")
plt.show()
```

Here, I see there is a non-linear relationship between diameter and whole weight and I'm going to have to deal with my height outliers. Now, I'm ready to do some analysis!
| null |
CC BY-SA 3.0
| null |
2014-06-23T18:38:34.260
|
2014-06-23T18:38:34.260
| null | null |
1011
| null |
546
|
2
| null |
536
|
12
| null |
Even if you are effectively modifying certain records by hand, as in the city name example you give, I would recommend doing it in code. The reason to strongly prefer code over hand-tweaking records is that the code makes a result reproducible. You want to make sure that you can always go from raw data to final result without any human intervention.
Here's a quick example. Let's say I have a list of city names in a pandas dataframe and I am certain they should all be "omaha" (you need to be absolutely certain, because changing values by hand is fraught with danger). But instead I have the following strings:
```
pd.unique(data.city)
array(['omaha', 'omahd', 'imaha', 'omaka'], dtype=object)
```
You could make the change like this:
```
data.city.values[data.city.values == 'omahd'] = 'omaha'
data.city.values[data.city.values == 'imaha'] = 'omaha'
data.city.values[data.city.values == 'omaka'] = 'omaha'
```
That code is ugly, but if you run it on the same raw dataset, you will always get the same result.
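If the list of corrections grows, a slightly tidier but equivalent way to keep them reproducible is a single mapping, which also doubles as documentation of every hand-made fix (a self-contained sketch):
```
import pandas as pd

data = pd.DataFrame({"city": ["omaha", "omahd", "imaha", "omaka"]})

# one explicit, version-controlled mapping of every hand correction
corrections = {"omahd": "omaha", "imaha": "omaha", "omaka": "omaha"}
data["city"] = data["city"].replace(corrections)
print(pd.unique(data.city))   # ['omaha']
```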
| null |
CC BY-SA 3.0
| null |
2014-06-23T18:56:11.577
|
2014-06-23T18:56:11.577
| null | null |
1011
| null |
548
|
1
| null | null |
11
|
3963
|
I often am building a model (classification or regression) where I have some predictor variables that are sequences and I have been trying to find technique recommendations for summarizing them in the best way possible for inclusion as predictors in the model.
As a concrete example, say a model is being built to predict if a customer will leave the company in the next 90 days (anytime between t and t+90; thus a binary outcome). One of the predictors available is the level of the customer's financial balance for periods t_0 to t-1. Maybe this represents monthly observations for the prior 12 months (i.e. 12 measurements).
I am looking for ways to construct features from this series. I use descriptive statistics of each customer's series such as the mean, high, low, and std. dev., and I fit an OLS regression to get the trend. Are there other methods of calculating features? Other measures of change or volatility?
ADD:
As mentioned in a response below, I also considered (but forgot to add here) using Dynamic Time Warping (DTW) and then hierarchical clustering on the resulting distance matrix - creating some number of clusters and then using the cluster membership as a feature. Scoring test data would likely have to follow a process where the DTW was done on new cases and the cluster centroids - matching the new data series to their closest centroids...
|
Feature Extraction Technique - Summarizing a Sequence of Data
|
CC BY-SA 3.0
| null |
2014-06-23T23:20:36.180
|
2016-03-26T14:34:31.200
|
2014-06-24T14:03:22.697
|
1138
|
1138
|
[
"machine-learning",
"feature-selection",
"time-series"
] |
549
|
2
| null |
548
|
2
| null |
What you're trying to do here is reduce the dimensionality of your features. You can search for dimensionality reduction to get several options, but one very popular technique is principal components analysis (PCA). Principal components are not interpretable like the options you've mentioned, but they do a good job of summarizing all of the information.
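A minimal sketch with scikit-learn, assuming the 12 monthly balance observations sit in the columns of a matrix (the data here is random):
```
import numpy as np
from sklearn.decomposition import PCA

# rows = customers, columns = 12 monthly balance observations (random stand-in)
balances = np.random.rand(1000, 12)

pca = PCA(n_components=3)
components = pca.fit_transform(balances)   # 3 summary features per customer
print(components.shape)                    # (1000, 3)
print(pca.explained_variance_ratio_)       # how much variance each one keeps
```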
| null |
CC BY-SA 3.0
| null |
2014-06-24T00:54:13.110
|
2014-06-24T00:54:13.110
| null | null |
1011
| null |
550
|
2
| null |
548
|
2
| null |
Feature extraction is always a challenge and a less addressed topic in the literature, since it is widely application-dependent.
Some ideas you can try (a small sketch combining a few of them follows the list):
- Raw data, measured day-by-day. That's kind of obvious with some implications and extra preprocessing (normalisation) in order to make timelines of different length comparable.
- Higher moments: skewness, kurtosis, etc
- Derivative(s): speed of evolution
- Time span is not that large but maybe it is worth trying some time series analysis features like for example autocorrelation.
- Some customised features like breaking timeline in weeks and measure the quantities you already measure in each week separately. Then a non-linear classifier would be able to combine e.g first-week features with last-week features in order to get insight of evolution in time.
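Here is the small sketch promised above, computing a few of these (higher moments, a crude first derivative, and lag-1 autocorrelation) for one made-up 12-month balance series:
```
import numpy as np
from scipy.stats import kurtosis, skew

# one customer's 12 monthly balance observations (made up)
balance = np.array([120, 135, 128, 150, 149, 160,
                    158, 170, 165, 180, 175, 190], dtype=float)

features = {
    "mean": balance.mean(),
    "std": balance.std(),
    "skewness": skew(balance),
    "kurtosis": kurtosis(balance),
    "mean_monthly_change": np.diff(balance).mean(),   # crude speed of evolution
    "lag1_autocorr": np.corrcoef(balance[:-1], balance[1:])[0, 1],
}
print(features)
```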
| null |
CC BY-SA 3.0
| null |
2014-06-24T03:50:41.410
|
2014-06-24T03:50:41.410
| null | null |
418
| null |
552
|
2
| null |
548
|
2
| null |
At first glance, you need to extract features from your time series over the window (x - 12) to x. One possible approach is to compute summary metrics: average, dispersion, etc. But doing so, you will lose all the time-related information. Data extracted from the curve shape, however, may be quite useful. I recommend you look through [this](http://cs.stanford.edu/people/jure/pubs/memeshapes-wsdm11.pdf) article, where the authors propose an algorithm for time-series clustering. Hope it will be useful. In addition to such clustering you can add summary statistics to your feature list.
| null |
CC BY-SA 3.0
| null |
2014-06-24T07:03:35.567
|
2014-06-24T07:03:35.567
| null | null |
941
| null |
554
|
1
| null | null |
13
|
805
|
I am a CS master's student in data mining. My supervisor once told me that before I run any classifier or do anything with a dataset I must fully understand the data and make sure that it is clean and correct.
My questions:
- What are the best practices to understand a dataset (high-dimensional with numerical and nominal attributes)?
- Practices to make sure the dataset is clean?
- Practices to make sure the dataset doesn't contain wrong or implausible values?
|
Datasets understanding best practices
|
CC BY-SA 3.0
| null |
2014-06-24T07:29:57.787
|
2021-03-13T23:23:44.427
| null | null |
728
|
[
"statistics",
"dataset"
] |
555
|
2
| null |
536
|
15
| null |
1. Do not modify the original data
Having the original data source intact is important. You may find that updates you make to the data are not valid. You may also find a more efficient way to make updates and you will want to regression test those updates.
Always work with a copy of the data, and add columns/properties/metadata that includes any processed corrections.
For example, if your data is a .csv file that includes a city name column containing several misspellings:
```
1. Copy the .csv to a new file
2. Add a column for containing "adjusted_city_name"
3. Copy data from the city_name column to the
adjusted_city_name column and make corrections in that column.
```
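A minimal pandas sketch of that pattern (the file names here are made up; the column names follow the example above):
```
import pandas as pd

# 1. never touch the original file: read it and write a working copy
raw = pd.read_csv('cities_raw.csv')
work = raw.copy()

# 2./3. keep the original column and put corrections in a new one,
#       e.g. enforcing the capitalisation rule documented below
work['adjusted_city_name'] = work['city_name'].str.strip().str.title()
work.to_csv('cities_working_copy.csv', index=False)
```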
2. Document proposed changes
Any changes you want to make to data should be documented so that they can be replicated moving forward.
Version control and timestamp the document every time you change it. That will help in troubleshooting at a later date.
Be explicit. Do not simply state "correct capitalization problems", state "ensure that the first letter of each city name begins with a capital letter and the remaining letters are lower-case."
Update the document with references to any automation routines that have been built to manage data cleansing.
3. Decide on a standard data cleansing technology
Whether you use perl, python, java, a particular utility, a manual process or something else is not the issue. The issue is that in the future you want to hand the data cleansing process to somebody else. If they have to know 12 different data cleansing technologies, delegating the cleansing procedure will be very difficult.
4. Standardize the workflow
There should be a standard way to handle new data. Ideally, it will be as simple as dropping a file in a specific location and a predictable automated process cleanses it and hands off a cleansed set of data to the next processing step.
5. Make as few changes as is absolutely necessary
It's always better to have a fault tolerant analysis than one that makes assumptions about the data.
6. Avoid manual updates
It's always tempting, but people are error-prone and again it makes delegation difficult.
Notes on manual processing
To more completely address the original question as to whether there's a "good" way to do manual processing, I would say no, there is not. My answer is based on experience and is not one that I make arbitrarily.
I have had more than one project lose days of time due to a client insisting that a manual data cleansing process was just fine and could be handled internally. You do not want your projects to be dependent on a single individual accomplishing a judgement-based task of varying scale.
It's much better to have that individual build and document a rule set based on what they would do than to have them manually cleanse data. (And then automating that rule set)
If automation fails you in the end or is simply not possible, the ability to delegate that rule set to others without domain specific knowledge is vital.
In the end, routines to do something like correct city names can be applied to other data sets.
| null |
CC BY-SA 3.0
| null |
2014-06-24T07:35:16.930
|
2014-06-24T07:35:16.930
| null | null |
434
| null |
556
|
2
| null |
554
|
8
| null |
There are basic things you can do with any set of data (a minimal sketch of a few of these checks follows the list):
- Validate values (String length tolerance, data type, formatting masks, required field presence, etc.)
- Range correctness (Does this seemingly correct data fall within expected ranges of values)
- Preliminary processing (If I attempt to analyze this data, can I perform the basics without running into errors)
- Preliminary reporting (run a report against a data set and ensure that it passes a sanity test)
- Defining null vs. empty vs. zero vs. False for any given column of data
- Identifying data that is out of place (numeric values dramatically different than other values in a data set, string values that look like they might be misspelled, etc.)
- Eliminating or correcting obviously errant data
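A small sketch of what a few of these checks might look like in pandas (the file name, column names and tolerances below are all made up):
```
import pandas as pd

df = pd.read_csv('records.csv')

issues = {
    # required field presence
    'missing_serial': df['serial_number'].isna(),
    # string length tolerance
    'bad_serial_length': ~df['serial_number'].astype(str).str.len().between(5, 255),
    # range correctness for a numeric column
    'price_out_of_range': ~df['price'].between(0.01, 10_000),
}

# a quick sanity report: how many rows fail each check
for name, mask in issues.items():
    print(name, int(mask.sum()), 'rows')
```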
Understanding data to identify errors is a whole different ball game, and it is very important.
For instance, you can have a rule that says a serial number must be present in a given data set and that serial number must be alphanumeric with a maximum string length of 255 and a minimum string length of 5.
Looking at the data, you may find one particular serial number value reads `"PLEASE ENTER SERIAL"`. It's perfectly valid by those rules, but wrong.
That's kind of an obvious one, but say you're processing stock data and you had a price range for 1000 stocks that was under a dollar. A lot of people would not know that a stock price so low is invalid on certain exchanges and perfectly valid on others. You need knowledge about your data to understand if what you are seeing is problematic or not.
In the real world, you don't always have the luxury of understanding your data intimately.
The way I avoid problems is by leveraging the people around me. For small data sets, I can ask someone to review the data in its entirety. For large ones, pulling a set of random samples and asking someone to do a sanity check on the data is more appropriate.
Further, questioning the source of the data and how well that data source can be trusted is imperative. I often have multiple conflicting sources of data and we create rules to determine the "source of truth". Sometimes one data set has great data in a given aspect, but other data sets are stronger in other areas.
Manually entered data is usually what I'm most skeptical about, but in some cases it is stronger than anything that can be acquired through automation.
| null |
CC BY-SA 3.0
| null |
2014-06-24T08:09:49.300
|
2014-06-24T08:09:49.300
| null | null |
434
| null |
559
|
1
|
561
| null |
17
|
3127
|
I have a hobby project which I am contemplating committing to as a way of increasing my so far limited experience of machine learning. I have taken and completed the Coursera MOOC on the topic. My question is with regards to the feasibility of the project.
The task is the following:
Neighboring cats are from time to time visiting my garden, which I dislike since they tend to defecate on my lawn. I would like to have a warning system that alerts me when there's a cat present so that I may go chase it off using my super soaker. For simplicity's sake, say that I only care about a cat with black and white coloring.
I have set up a Raspberry Pi with a camera module that can capture video and/or pictures of a part of the garden.
Sample image:

My first idea was to train a classifier to identify cats or cat-like objects, but after realizing that I would be unable to obtain a large enough number of positive samples, I have abandoned that in favor of anomaly detection.
I estimate that if I captured a photo every second of the day, I would end up with maybe five photos containing cats (out of about 60,000 with sunlight) per day.
Is this feasible using anomaly detection? If so, what features would you suggest? My ideas so far would be to simply count the number of pixels that have certain colors; or do some kind of blob detection/image segmentation (which I do not know how to do, and would thus like to avoid) and perform the same color analysis on the resulting blobs.
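The pixel-counting idea is easy to prototype; here is a rough sketch with numpy and Pillow (the file name and colour thresholds are placeholders that would need tuning for your lighting):
```
import numpy as np
from PIL import Image

img = np.asarray(Image.open('garden.jpg').convert('RGB')).astype(int)
r, g, b = img[..., 0], img[..., 1], img[..., 2]

# "black-ish" pixels: all channels dark; "white-ish": all channels bright
black = (r < 60) & (g < 60) & (b < 60)
white = (r > 200) & (g > 200) & (b > 200)

# fraction of the frame covered by each colour, as two candidate features
features = np.array([black.mean(), white.mean()])
print(features)
```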
|
Detecting cats visually by means of anomaly detection
|
CC BY-SA 3.0
| null |
2014-06-24T12:28:10.990
|
2014-06-25T03:30:19.787
| null | null |
1147
|
[
"machine-learning"
] |
560
|
2
| null |
559
|
4
| null |
This is an interesting and also quite ambitious project :)
I am not sure anomaly detection (at least in the sense described in the course you followed) would be a very fitting algorithm in this case.
I would consider a more viable approach to be what has been discussed at the end of that course where a Photo OCR workflow was demonstrated.
The approach would consist of segmenting your image into smaller "blocks" and going through them one by one with a supervised learning algorithm, trying to classify each block according to whether it contains a cat or not. If one block contains a cat, the alarm goes off. As a bonus, you get the position of the cat as well, so you might think about incorporating some "automatic" response as a future step of your project.
The benefit here is that you will not have to train your algorithm using a dataset specific to your garden (which, as you mention, is difficult to create); instead you can use images of cats taken off the net (e.g. perhaps you can search for "cat on grass" or something), and perhaps patches of photos from your (or other) gardens. Therefore you don't have to spend your time collecting photos from your camera, and you avoid the risk of having a very small (comparatively) sample of positives (i.e. cats).
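A bare-bones sketch of the sliding-window part (here `block_classifier` is a stand-in for whatever supervised model you would train on cat/no-cat patches):
```
import numpy as np

def sliding_blocks(image, block=64, step=32):
    """Yield (x, y, patch) for overlapping square blocks of the image."""
    h, w = image.shape[:2]
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            yield x, y, image[y:y + block, x:x + block]

def detect_cat(image, block_classifier):
    for x, y, patch in sliding_blocks(image):
        # flatten the patch into a feature vector; label 1 = "cat"
        if block_classifier.predict(patch.reshape(1, -1))[0] == 1:
            return (x, y)      # alarm: cat-like block found, with its position
    return None
```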
Now, of course how easy it is to build an accurate cat detector is another topic..
| null |
CC BY-SA 3.0
| null |
2014-06-24T12:55:26.457
|
2014-06-24T12:55:26.457
| null | null |
1085
| null |
561
|
2
| null |
559
|
8
| null |
You could simplify your problem significantly by using a motion/change detection approach. For example, you could compare each image/frame with one from an earlier time (e.g., a minute earlier), then only consider pixels that have changed since then. You could then extract the rectangular region of change and use that as the basis for your classification or anomaly detection.
Taking this type of approach can significantly simplify your classifier and reduce your false target rate because you can ignore anything that is not roughly the size of a cat (e.g., a person or bird). You would then use the extracted change regions that were not filtered out to form the training set for your classifier (or anomaly detector).
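A rough frame-differencing sketch with OpenCV (the blur size, threshold and minimum blob area are made-up values to tune):
```
import cv2

def changed_regions(frame, reference, min_area=500):
    """Return bounding boxes of regions that differ from the reference frame."""
    diff = cv2.absdiff(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY))
    blur = cv2.GaussianBlur(diff, (5, 5), 0)
    _, mask = cv2.threshold(blur, 25, 255, cv2.THRESH_BINARY)
    # OpenCV 4.x returns (contours, hierarchy); 3.x adds a leading value
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # keep only blobs large enough to plausibly be a cat
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```
Each returned (x, y, w, h) box can then be cropped out and fed to the classifier or anomaly detector.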
Just be sure to get your false target rate sufficiently low before mounting a laser turret to your feline intrusion detection system.
| null |
CC BY-SA 3.0
| null |
2014-06-24T13:34:12.107
|
2014-06-24T13:34:12.107
| null | null |
964
| null |
562
|
2
| null |
530
|
3
| null |
As a rule of thumb I always propose three different options:
- Use a bagging technique, similar to the one followed by Random Forest. This technique trains 'small' classifiers, each of which sees only a small portion of the whole data. Afterwards, a simple voting scheme (as in Random Forest) leads to a very interesting and robust classification (a minimal sketch of this follows below).
- Use any technique related to information fusion or probabilistic fusion. This is a very suitable solution for combining different likelihoods from different classifiers.
- My last suggestion is the use of fuzzy logic, a very suitable tool for combining information from a probabilistic (membership) perspective.
The selection of specific methods or strategies will depend enormously on the data.
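As a rough illustration of the first option, scikit-learn's `BaggingClassifier` trains each 'small' learner on a subsample and combines them by voting (here `X` and `y` stand for whatever feature matrix and labels you have):
```
from sklearn.ensemble import BaggingClassifier

# each of the 50 learners (decision trees by default) sees only 10% of the
# rows; their predictions are combined by voting, Random-Forest style
bagged = BaggingClassifier(n_estimators=50, max_samples=0.1)
bagged.fit(X, y)
print(bagged.predict(X[:5]))
```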
| null |
CC BY-SA 3.0
| null |
2014-06-24T15:44:52.540
|
2014-06-24T15:44:52.540
| null | null |
1155
| null |
563
|
2
| null |
559
|
3
| null |
The strategy of motion/change detection is certainly adequate, but I would add an extra step: detecting the regions that are most likely to change. For instance, the ladder seems a place where humans (and cats) can be, and the grass is where dogs, cats or humans can be.
I would build a map of object size and trajectory, and use it to create clusters, with the aim of detecting an object (of a specific size within the image, in pixels) that moves with a certain speed and trajectory.
You can achieve this using R or, as I would suggest, OpenCV, in order to detect movement and track different objects.
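A short OpenCV sketch of that idea, using a learned background model and per-frame blob centroids as the raw material for the size/trajectory map (the video source and area threshold are assumptions):
```
import cv2

cap = cv2.VideoCapture('garden.mp4')          # or the Pi camera stream
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

tracks = []                                   # (frame_idx, cx, cy, area) observations
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)            # foreground = things that moved
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        area = cv2.contourArea(c)
        if area < 800:                        # ignore blobs too small to be a cat
            continue
        x, y, w, h = cv2.boundingRect(c)
        tracks.append((frame_idx, x + w / 2, y + h / 2, area))
    frame_idx += 1
cap.release()
# `tracks` can now be clustered by size, speed and trajectory as described above
```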
| null |
CC BY-SA 3.0
| null |
2014-06-24T15:55:37.110
|
2014-06-24T15:55:37.110
| null | null |
1155
| null |
564
|
1
|
567
| null |
5
|
330
|
I've written a simple recommender which generates recommendations for users based on what they have clicked. The recommender generates a data file with the following format:
```
userid,userid,similarity (between 0 and 1 - closer to 0 the more similar the users)
a,b,.2
a,c,.3
a,d,.4
a,e,.1
e,b,.3
e,c,.5
e,d,.8
```
I've looked at [some graphs](https://github.com/mbostock/d3/wiki/Gallery), but I'm not sure which one to use, or if there are other ones that would better display the user similarities from the dataset above. Any suggestions?
I'm aiming this visualization at business users who are not at all technical. I would just like to show them an easy-to-understand visual that details how similar some users are, and so convince the business that for these users the recommendation system is useful.
@Steve Kallestad do you mean something like this :

|
What visualization technique to best describe a recommendation dataset?
|
CC BY-SA 3.0
| null |
2014-06-24T18:59:30.560
|
2014-06-25T07:56:29.733
|
2014-06-25T07:56:29.733
|
237
|
237
|
[
"visualization",
"javascript"
] |
565
|
1
| null | null |
11
|
18501
|
As I increase the number of trees in [scikit learn](http://scikit-learn.org/stable/)'s [GradientBoostingRegressor](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingRegressor.html), I get more negative predictions, even though there are no negative values in my training or testing set. I have about 10 features, most of which are binary.
Some of the parameters that I was tuning were:
- the number of trees/iterations;
- learning depth;
- and learning rate.
The percentage of negative values seemed to max out at ~2%. A learning depth of 1 (stumps) seemed to produce the largest percentage of negative values. This percentage also seemed to increase with more trees and a smaller learning rate. The dataset is from one of the Kaggle playground competitions.
My code is something like:
```
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split  # sklearn.cross_validation in older releases

X_train, X_test, y_train, y_test = train_test_split(X, y)
reg = GradientBoostingRegressor(n_estimators=8000, max_depth=1, loss = 'ls', learning_rate = .01)
reg.fit(X_train, y_train)
ypred = reg.predict(X_test)
```
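A quick way to quantify how often this happens for a given setting (just a sketch; `ypred` is the prediction array from above, and the exact figures will depend on the split):
```
import numpy as np

neg_share = np.mean(ypred < 0)   # fraction of test predictions below zero
print(f"{neg_share:.2%} negative, most extreme value: {ypred.min():.3f}")
```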
|
Why does Gradient Boosting regression predict negative values when there are no negative y-values in my training set?
|
CC BY-SA 3.0
| null |
2014-06-24T19:43:24.643
|
2020-10-04T14:08:47.880
|
2014-06-25T16:46:30.323
|
322
|
1162
|
[
"machine-learning",
"python",
"algorithms",
"scikit-learn",
"kaggle"
] |
566
|
1
|
2456
| null |
6
|
1940
|
Many times [Named Entity Recognition](http://en.wikipedia.org/wiki/Named-entity_recognition) (NER) doesn't tag consecutive NNPs as one NE. I think editing the NER to use a RegexpTagger can also improve it.
For example, consider the following input:
>
"Barack Obama is a great person."
And the output:
```
Tree('S', [Tree('PERSON', [('Barack', 'NNP')]), Tree('ORGANIZATION', [('Obama', 'NNP')]),
('is', 'VBZ'), ('a', 'DT'), ('great', 'JJ'), ('person', 'NN'), ('.', '.')])
```
where as for the input:
>
'Former Vice President Dick Cheney told conservative radio host Laura Ingraham that he "was honored" to be compared to Darth Vader while in office.'
the output is:
```
Tree('S', [('Former', 'JJ'), ('Vice', 'NNP'), ('President', 'NNP'),
Tree('NE', [('Dick', 'NNP'), ('Cheney', 'NNP')]), ('told', 'VBD'), ('conservative', 'JJ'),
('radio', 'NN'), ('host', 'NN'), Tree('NE', [('Laura', 'NNP'), ('Ingraham', 'NNP')]),
('that', 'IN'), ('he', 'PRP'), ('``', '``'), ('was', 'VBD'), ('honored', 'VBN'),
("''", "''"), ('to', 'TO'), ('be', 'VB'), ('compared', 'VBN'), ('to', 'TO'),
Tree('NE', [('Darth', 'NNP'), ('Vader', 'NNP')]), ('while', 'IN'), ('in', 'IN'),
('office', 'NN'), ('.', '.')])
```
Here `Vice/NNP, President/NNP, (Dick/NNP, Cheney/NNP)` is correctly extracted. So I think that if `nltk.ne_chunk` is used first, and two consecutive trees are NNP, there is a higher chance that both refer to one entity.
I have been playing with the NLTK toolkit and have come across this problem a lot, but couldn't find a satisfying answer. Any suggestions will be really appreciated. I'm looking for flaws in my approach.
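A sketch of that post-processing idea, merging any run of adjacent NE subtrees and stray NNP tokens coming out of `nltk.ne_chunk` into a single entity (illustrative only, not a drop-in NLTK feature):
```
import nltk
# requires the usual nltk downloads: punkt, averaged_perceptron_tagger,
# maxent_ne_chunker, words

def merged_entities(sentence):
    tree = nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(sentence)))
    entities, current = [], []
    for node in tree:
        if isinstance(node, nltk.Tree):            # an NE subtree (PERSON, NE, ...)
            current.extend(word for word, tag in node.leaves())
        elif node[1] == 'NNP':                     # a stray proper noun next to it
            current.append(node[0])
        else:                                      # anything else breaks the run
            if current:
                entities.append(' '.join(current))
                current = []
    if current:
        entities.append(' '.join(current))
    return entities

print(merged_entities('Former Vice President Dick Cheney told conservative '
                      'radio host Laura Ingraham that he was honored.'))
# e.g. ['Vice President Dick Cheney', 'Laura Ingraham']  (tagger-dependent)
```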
|
Named Entity Recognition: NLTK using Regular Expression
|
CC BY-SA 3.0
| null |
2014-06-24T17:06:10.310
|
2021-03-30T22:42:29.823
|
2021-03-30T22:42:29.823
|
29169
|
1165
|
[
"nlp",
"named-entity-recognition"
] |