Install mpi4py on Windows 10

Install Microsoft MPI

https://www.microsoft.com/en-us/download/details.aspx?id=54607

You need to run both files msmpisdk.msi and MSMpiSetup.exe

Add the Microsoft MPI installation directory to the PATH environment variable. In my case it was installed under:

C:\Program Files (x86)\Microsoft SDKs\MPI

Install package in Anaconda

conda install mpi4py

And finally, test the installation

from mpi4py import MPI
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
print("hello world from process ", rank)
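
Save the snippet above as, say, hello.py (the filename is just an example) and launch it on several processes with Microsoft MPI's mpiexec:

mpiexec -n 4 python hello.py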

 

DL libraries

1. Install Keras on Anaconda Windows

  • Install TDM-GCC x64 and add it to the PATH.
  • Install Anaconda x64.
  • Open the Anaconda prompt
  • Run conda update conda
  • Run conda update --all
  • Run conda install mingw libpython
  • Install the latest version of Theano, pip install git+git://github.com/Theano/Theano.git
  • Run pip install git+git://github.com/fchollet/keras.git
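
To confirm the install worked, a quick sanity check (assuming the Theano backend set up in the steps above) is to import the libraries and print the Keras version:

python -c "import theano; import keras; print(keras.__version__)"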

2. Install pyCUDA

  • Get the PyCUDA prebuilt binary from Christoph Gohlke (his page is an invaluable resource for installing Python packages on Windows).
  • Select the build that matches your OS, Python version and CUDA version (for me it was pycuda-2016.1.2+cuda8044-cp35-cp35m-win_amd64.whl)
  • Use pip to install the above package. Simply type this in your Anaconda prompt:
    pip install <directory>\pycuda-2016.1.2+cuda8044-cp35-cp35m-win_amd64.whl
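
To verify that PyCUDA can talk to your GPU, a quick check along these lines should work (assuming the NVIDIA driver and CUDA toolkit are installed):

python -c "import pycuda.autoinit, pycuda.driver; print(pycuda.driver.Device(0).name())"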

 

Latent Dirichlet Allocation (LDA) with Python

Source: https://rstudio-pubs-static.s3.amazonaws.com/79360_850b2a69980c4488b1db95987a24867a.html

What is LDA?

Latent Dirichlet allocation (LDA) is a topic model that generates topics based on word frequency from a set of documents. LDA is particularly useful for finding reasonably accurate mixtures of topics within a given document set.

LDA walkthrough

This walkthrough goes through the process of generating an LDA model with a highly simplified document set. It is not an exhaustive explanation of LDA. The goal is to guide users through the key steps of preparing their data and to provide example output.

Packages required

This walkthrough uses the following Python packages:

  • NLTK, a natural language toolkit for Python. A useful package for any natural language processing.
    • For Mac/Unix with pip: $ sudo pip install -U nltk.
  • stop_words, a Python package containing stop words.
    • For Mac/Unix with pip: $ sudo pip install stop-words.
  • gensim, a topic modeling package containing our LDA model.
    • For Mac/Unix with pip: $ sudo pip install gensim.

Importing your documents

Here are our sample documents:

doc_a = "Brocolli is good to eat. My brother likes to eat good brocolli, but not my mother."
doc_b = "My mother spends a lot of time driving my brother around to baseball practice."
doc_c = "Some health experts suggest that driving may cause increased tension and blood pressure."
doc_d = "I often feel pressure to perform well at school, but my mother never seems to drive my brother to do better."
doc_e = "Health professionals say that brocolli is good for your health."

# compile sample documents into a list
doc_set = [doc_a, doc_b, doc_c, doc_d, doc_e]

Cleaning your documents

Data cleaning is absolutely crucial for generating a useful topic model: as the saying goes, “garbage in, garbage out.” The steps below are common to most natural language processing methods:

  • Tokenizing: converting a document to its atomic elements.
  • Stopping: removing meaningless words.
  • Stemming: merging words that are equivalent in meaning.

Tokenization

Tokenization segments a document into its atomic elements. In this case, we are interested in tokenizing to words. Tokenization can be performed in many ways; here we use NLTK's tokenize.regexp module:

from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer(r'\w+')

The above code will match any word characters until it reaches a non-word character, like a space. This is a simple solution, but can cause problems for words like “don’t” which will be read as two tokens, “don” and “t.” NLTK provides a number of pre-constructed tokenizers like nltk.tokenize.simple. For unique use cases, it’s better to use regex and iterate until your document is accurately tokenized.
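
For instance, a slightly different pattern (illustrative only, not part of the original walkthrough) keeps contractions together:

from nltk.tokenize import RegexpTokenizer
contraction_tokenizer = RegexpTokenizer(r"[a-z]+(?:'[a-z]+)?")
print(contraction_tokenizer.tokenize("i don't like brocolli"))
# ['i', "don't", 'like', 'brocolli']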

Note: this example calls tokenize() on a single document. You’ll need to create a for loop to traverse all your documents. Check the script at the end of this page for an example.

raw = doc_a.lower()
tokens = tokenizer.tokenize(raw)

>>> print(tokens)
['brocolli', 'is', 'good', 'to', 'eat', 'my', 'brother', 'likes', 'to', 'eat', 'good', 'brocolli', 'but', 'not', 'my', 'mother']

Our document from doc_a is now a list of tokens.

Stop words

Certain parts of English speech, like conjunctions (“for”, “or”) or the word “the” are meaningless to a topic model. These terms are called stop words and need to be removed from our token list.

The definition of a stop word is flexible and the kind of documents may alter that definition. For example, if we’re topic modeling a collection of music reviews, then terms like “The Who” will have trouble being surfaced because “the” is a common stop word and is usually removed. You can always construct your own stop word list or seek out another package to fit your use case.

In our case, we are using the stop_words package from PyPI, a relatively conservative list. We can call get_stop_words() to create a list of stop words:

from stop_words import get_stop_words

# create English stop words list
en_stop = get_stop_words('en')

Removing stop words is now a matter of looping through our tokens and comparing each word to the en_stop list.

# remove stop words from tokens
stopped_tokens = [i for i in tokens if not i in en_stop]

>>> print(stopped_tokens)
['brocolli', 'good', 'eat', 'brother', 'likes', 'eat', 'good', 'brocolli', 'mother']

Stemming

Stemming words is another common NLP technique to reduce topically similar words to their root. For example, “stemming,” “stemmer,” “stemmed,” all have similar meanings; stemming reduces those terms to “stem.” This is important for topic modeling, which would otherwise view those terms as separate entities and reduce their importance in the model.

Like stopping, stemming is flexible and some methods are more aggressive. The Porter stemming algorithm is the most widely used method. To implement a Porter stemming algorithm, import the Porter Stemmer module from NLTK:

from nltk.stem.porter import PorterStemmer

# Create p_stemmer of class PorterStemmer
p_stemmer = PorterStemmer()

Note that p_stemmer requires all tokens to be type str. p_stemmer returns the string parameter in stemmed form, so we need to loop through our stopped_tokens:

# stem tokens
stemmed_tokens = [p_stemmer.stem(i) for i in stopped_tokens]

>>> print(stemmed_tokens)
['brocolli', 'good', 'eat', 'brother', 'like', 'eat', 'good', 'brocolli', 'mother']

In our example, not much happened: likes became like.

Constructing a document-term matrix

The result of our cleaning stage is stemmed_tokens, a tokenized, stopped and stemmed list of words from a single document. Let's fast forward and imagine that we looped through all our documents and appended each cleaned list to a new list called texts. So now texts is a list of lists, one list for each of our original documents.

To generate an LDA model, we need to understand how frequently each term occurs within each document. To do that, we need to construct a document-term matrix with a package called gensim:

from gensim import corpora, models
import gensim

dictionary = corpora.Dictionary(texts)

The Dictionary() function traverses texts, assigning a unique integer id to each unique token while also collecting word counts and relevant statistics. To see each token’s unique integer id, try print(dictionary.token2id).

Next, our dictionary must be converted into a bag-of-words:

corpus = [dictionary.doc2bow(text) for text in texts]

The doc2bow() method converts each tokenized document into a bag-of-words using the dictionary. The result, corpus, is a list of vectors, one for each document. Each document vector is a series of tuples. As an example, print(corpus[0]) results in the following:

>>> print(corpus[0])
[(0, 2), (1, 1), (2, 2), (3, 2), (4, 1), (5, 1)]

This list of tuples represents our first document, doc_a. The tuples are (term ID, term frequency) pairs, so if print(dictionary.token2id) says brocolli’s id is 0, then the first tuple indicates that brocolli appeared twice in doc_a. doc2bow() only includes terms that actually occur: terms that do not occur in a document will not appear in that document’s vector.

Applying the LDA model

corpus is a document-term matrix and now we’re ready to generate an LDA model:

ldamodel = gensim.models.ldamodel.LdaModel(corpus, num_topics=3, id2word = dictionary, passes=20)

The LdaModel class is described in detail in the gensim documentation. Parameters used in our example:

Parameters:

  • num_topics: required. An LDA model requires the user to determine how many topics should be generated. Our document set is small, so we’re only asking for three topics.
  • id2word: required. The LdaModel class requires our previous dictionary to map ids to strings.
  • passes: optional. The number of laps the model will take through corpus. The greater the number of passes, the more accurate the model will be. A lot of passes can be slow on a very large corpus.

Examining the results

Our LDA model is now stored as ldamodel. We can review our topics with the print_topic and print_topics methods:

>>> print(ldamodel.print_topics(num_topics=3, num_words=3))
['0.141*health + 0.080*brocolli + 0.080*good', '0.060*eat + 0.060*drive + 0.060*brother', '0.059*pressur + 0.059*mother + 0.059*brother']

What does this mean? Each generated topic is separated by a comma. Within each topic are the three most probable words to appear in that topic. Even though our document set is small, the model is reasonable. Some things to think about:

  • health, brocolli and good make sense together.
  • The second topic is confusing. If we revisit the original documents, we see that drive has multiple meanings: driving a car and driving oneself to improve. This is something to note in our results.
  • The third topic includes mother and brother, which is reasonable.

Adjusting the model’s number of topics and passes is important to getting a good result. Two topics seems like a better fit for our documents:

ldamodel = gensim.models.ldamodel.LdaModel(corpus, num_topics=2, id2word = dictionary, passes=20)

>>> print(ldamodel.print_topics(num_topics=2, num_words=4))
['0.054*pressur + 0.054*drive + 0.054*brother + 0.054*mother', '0.070*brocolli + 0.070*good + 0.070*health + 0.050*eat']
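
Once trained, the model can also report the topic mixture of any individual document. Reusing the cleaning objects defined above, a quick check looks like this (get_document_topics is gensim's method for querying a single bag-of-words vector):

new_tokens = [p_stemmer.stem(t) for t in tokenizer.tokenize(doc_e.lower()) if t not in en_stop]
new_bow = dictionary.doc2bow(new_tokens)
print(ldamodel.get_document_topics(new_bow))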

So what does LDA actually do?

This explanation is a little lengthy, but useful for understanding the model we worked so hard to generate.

LDA assumes documents are produced from a mixture of topics. Those topics then generate words based on their probability distribution, like the ones in our walkthrough model. In other words, LDA assumes a document is made from the following steps:

  1. Determine the number of words in a document. Let’s say our document has 6 words.
  2. Determine the mixture of topics in that document. For example, the document might contain 1/2 the topic “health” and 1/2 the topic “vegetables.”
  3. Using each topic’s multinomial distribution, output words to fill the document’s word slots. In our example, the “health” topic is 1/2 our document, or 3 words. The “health” topic might have the word “diet” at 20% probability or “exercise” at 15%, so it will fill the document word slots based on those probabilities.

Given this assumption of how documents are created, LDA backtracks and tries to figure out what topics would create those documents in the first place.
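
As an illustration only (the topics, vocabulary and probabilities below are invented for this sketch, not taken from the walkthrough), the generative story can be written out in a few lines of numpy:

import numpy as np

vocab = ["diet", "exercise", "brocolli", "baseball", "drive"]   # toy vocabulary
topic_word = np.array([[0.20, 0.15, 0.25, 0.20, 0.20],          # made-up "health" topic
                       [0.10, 0.05, 0.60, 0.05, 0.20]])         # made-up "vegetables" topic

np.random.seed(0)
n_words = 6                                         # step 1: document length
theta = np.random.dirichlet([1.0, 1.0])             # step 2: topic mixture for this document
doc = []
for _ in range(n_words):                            # step 3: fill each word slot
    z = np.random.choice(2, p=theta)                # pick a topic from the mixture
    w = np.random.choice(len(vocab), p=topic_word[z])  # pick a word from that topic
    doc.append(vocab[w])
print(doc)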

Sample script in full

from nltk.tokenize import RegexpTokenizer
from stop_words import get_stop_words
from nltk.stem.porter import PorterStemmer
from gensim import corpora, models
import gensim

tokenizer = RegexpTokenizer(r'\w+')

# create English stop words list
en_stop = get_stop_words('en')

# Create p_stemmer of class PorterStemmer
p_stemmer = PorterStemmer()
    
# create sample documents
doc_a = "Brocolli is good to eat. My brother likes to eat good brocolli, but not my mother."
doc_b = "My mother spends a lot of time driving my brother around to baseball practice."
doc_c = "Some health experts suggest that driving may cause increased tension and blood pressure."
doc_d = "I often feel pressure to perform well at school, but my mother never seems to drive my brother to do better."
doc_e = "Health professionals say that brocolli is good for your health." 

# compile sample documents into a list
doc_set = [doc_a, doc_b, doc_c, doc_d, doc_e]

# list for tokenized documents in loop
texts = []

# loop through document list
for i in doc_set:
    
    # clean and tokenize document string
    raw = i.lower()
    tokens = tokenizer.tokenize(raw)

    # remove stop words from tokens
    stopped_tokens = [i for i in tokens if not i in en_stop]
    
    # stem tokens
    stemmed_tokens = [p_stemmer.stem(i) for i in stopped_tokens]
    
    # add tokens to list
    texts.append(stemmed_tokens)

# turn our tokenized documents into a id <-> term dictionary
dictionary = corpora.Dictionary(texts)
    
# convert tokenized documents into a document-term matrix
corpus = [dictionary.doc2bow(text) for text in texts]

# generate LDA model
ldamodel = gensim.models.ldamodel.LdaModel(corpus, num_topics=2, id2word = dictionary, passes=20)

 

Matrix Factorization: A Simple Tutorial and Implementation in Python

Source: http://www.quuxlabs.com/blog/2010/09/matrix-factorization-a-simple-tutorial-and-implementation-in-python/

There is probably no need to say that there is too much information on the Web nowadays. Search engines help us a little bit. What is better is to have something interesting recommended to us automatically without asking. Indeed, from as simple as a list of the most popular bookmarks on Delicious, to some more personalized recommendations we received on Amazon, we are usually offered recommendations on the Web.

Recommendations can be generated by a wide range of algorithms. While user-based or item-based collaborative filtering methods are simple and intuitive, matrix factorization techniques are usually more effective because they allow us to discover the latent features underlying the interactions between users and items. Of course, matrix factorization is simply a mathematical tool for playing around with matrices, and is therefore applicable in many scenarios where one would like to find out something hidden under the data.

In this tutorial, we will go through the basic ideas and the mathematics of matrix factorization, and then we will present a simple implementation in Python. We will proceed with the assumption that we are dealing with user ratings (e.g. an integer score from the range of 1 to 5) of items in a recommendation system.

Basic Ideas

Just as its name suggests, matrix factorization is to, obviously, factorize a matrix, i.e. to find out two (or more) matrices such that when you multiply them you will get back the original matrix.

As I have mentioned above, from an application point of view, matrix factorization can be used to discover latent features underlying the interactions between two different kinds of entities. (Of course, you can consider more than two kinds of entities and you will be dealing with tensor factorization, which would be more complicated.) And one obvious application is to predict ratings in collaborative filtering.

In a recommendation system such as Netflix or MovieLens, there is a group of users and a set of items (movies for the above two systems). Given that each user has rated some items in the system, we would like to predict how the users would rate the items that they have not yet rated, so that we can make recommendations to the users. In this case, all the information we have about the existing ratings can be represented in a matrix. Assume now that we have 5 users and 4 items, and ratings are integers ranging from 1 to 5; the matrix may look something like this (a hyphen means that the user has not yet rated the movie):

     D1   D2   D3   D4
U1    5    3    -    1
U2    4    -    -    1
U3    1    1    -    5
U4    1    -    -    4
U5    -    1    5    4

Hence, the task of predicting the missing ratings can be considered as filling in the blanks (the hyphens in the matrix) such that the values would be consistent with the existing ratings in the matrix.

The intuition behind using matrix factorization to solve this problem is that there should be some latent features that determine how a user rates an item. For example, two users would give high ratings to a certain movie if they both like the actors/actresses of the movie, or if the movie is an action movie, which is a genre preferred by both users. Hence, if we can discover these latent features, we should be able to predict a rating with respect to a certain user and a certain item, because the features associated with the user should match with the features associated with the item.

In trying to discover the different features, we also make the assumption that the number of features would be smaller than the number of users and the number of items. It should not be difficult to understand this assumption because clearly it would not be reasonable to assume that each user is associated with a unique feature (although this is not impossible). And anyway if this is the case there would be no point in making recommendations, because each of these users would not be interested in the items rated by other users. Similarly, the same argument applies to the items.

The mathematics of matrix factorization

Having discussed the intuition behind matrix factorization, we can now go on to work on the mathematics. Firstly, we have a set U of users, and a set D of items. Let \mathbf{R} of size |U| \times |D| be the matrix that contains all the ratings that the users have assigned to the items. Also, we assume that we would like to discover K latent features. Our task, then, is to find two matrices \mathbf{P} (a |U| \times K matrix) and \mathbf{Q} (a |D| \times K matrix) such that their product approximates \mathbf{R}:

\mathbf{R} \approx \mathbf{P} \times \mathbf{Q}^T = \hat{\mathbf{R}} 

In this way, each row of \mathbf{P} would represent the strength of the associations between a user and the features. Similarly, each row of \mathbf{Q} would represent the strength of the associations between an item and the features. To get the prediction of a rating of an item d_j by u_i, we can calculate the dot product of the two vectors corresponding to u_i and d_j:

\hat{r}_{ij} = p_i^T q_j = \sum_{k=1}^K{p_{ik}q_{kj}} 

Now, we have to find a way to obtain \mathbf{P} and \mathbf{Q}. One way to approach this problem is to first initialize the two matrices with some values, calculate how 'different' their product is from \mathbf{R}, and then try to minimize this difference iteratively. Such a method is called gradient descent, aiming at finding a local minimum of the difference.

The difference here, usually called the error between the estimated rating and the real rating, can be calculated by the following equation for each user-item pair:

e_{ij}^2 = (r_{ij} - \hat{r}_{ij})^2 = (r_{ij} - \sum_{k=1}^K{p_{ik}q_{kj}})^2 

Here we consider the squared error because the estimated rating can be either higher or lower than the real rating.

To minimize the error, we have to know in which direction we have to modify the values of p_{ik} and q_{kj}. In other words, we need to know the gradient at the current values, and therefore we differentiate the above equation with respect to these two variables separately:

\frac{\partial}{\partial p_{ik}}e_{ij}^2 = -2(r_{ij} - \hat{r}_{ij})(q_{kj}) = -2 e_{ij} q_{kj}
\frac{\partial}{\partial q_{kj}}e_{ij}^2 = -2(r_{ij} - \hat{r}_{ij})(p_{ik}) = -2 e_{ij} p_{ik} 

Having obtained the gradient, we can now formulate the update rules for both p_{ik} and q_{kj}:

p'_{ik} = p_{ik} - \alpha \frac{\partial}{\partial p_{ik}}e_{ij}^2 = p_{ik} + 2\alpha e_{ij} q_{kj}
q'_{kj} = q_{kj} - \alpha \frac{\partial}{\partial q_{kj}}e_{ij}^2 = q_{kj} + 2\alpha e_{ij} p_{ik}  

Here, \alpha is a constant whose value determines the rate of approaching the minimum. Usually we choose a small value for \alpha, say 0.0002. This is because if we take too large a step towards the minimum, we run the risk of overshooting it and ending up oscillating around the minimum.

A question might have come to your mind by now: if we find two matrices \mathbf{P} and \mathbf{Q} such that \mathbf{P} \times \mathbf{Q}^T approximates \mathbf{R}, won't the predictions of all the unseen ratings simply be zero? In fact, we are not really trying to come up with \mathbf{P} and \mathbf{Q} such that we can reproduce \mathbf{R} exactly. Instead, we will only try to minimise the errors of the observed user-item pairs. In other words, if we let T be a set of tuples, each of which is in the form of (u_i, d_j, r_{ij}), such that T contains all the observed user-item pairs together with the associated ratings, we are only trying to minimise every e_{ij} for (u_i, d_j, r_{ij}) \in T. (In other words, T is our set of training data.) As for the rest of the unknowns, we will be able to determine their values once the associations between the users, items and features have been learnt.

Using the above update rules, we can then iteratively perform the operation until the error converges to its minimum. We can check the overall error as calculated using the following equation and determine when we should stop the process.

E = \sum_{(u_i,d_j,r_{ij}) \in T}{e_{ij}^2} = \sum_{(u_i,d_j,r_{ij}) \in T}{(r_{ij} - \sum_{k=1}^K{p_{ik}q_{kj}})^2}  

Regularization

The above algorithm is a very basic algorithm for factorizing a matrix. There are a lot of methods to make things look more complicated. A common extension to this basic algorithm is to introduce regularization to avoid overfitting. This is done by adding a parameter \beta and modifying the squared error as follows:

e_{ij}^2 = (r_{ij} - \sum_{k=1}^K{p_{ik}q_{kj}})^2 + \frac{\beta}{2} \sum_{k=1}^K{(p_{ik}^2 + q_{kj}^2)} 

In other words, the new parameter \beta is used to control the magnitudes of the user-feature and item-feature vectors, so that P and Q give a good approximation of R without having to contain large numbers. In practice, \beta is set to a small value such as 0.02. The new update rules for this squared error can be obtained by a procedure similar to the one described above. The new update rules are as follows.

p'_{ik} = p_{ik} - \alpha \frac{\partial}{\partial p_{ik}}e_{ij}^2 = p_{ik} + \alpha(2 e_{ij} q_{kj} - \beta p_{ik})
q'_{kj} = q_{kj} - \alpha \frac{\partial}{\partial q_{kj}}e_{ij}^2 = q_{kj} + \alpha(2 e_{ij} p_{ik} - \beta q_{kj}) 

Implementation in Python

Once we have derived the update rules as described above, it actually becomes very straightforward to implement the algorithm. The following is a function that implements the algorithm in Python (note that this implementation requires the numpy module).

import numpy

def matrix_factorization(R, P, Q, K, steps=5000, alpha=0.0002, beta=0.02):
    Q = Q.T
    for step in xrange(steps):
        for i in xrange(len(R)):
            for j in xrange(len(R[i])):
                if R[i][j] > 0:
                    eij = R[i][j] - numpy.dot(P[i,:],Q[:,j])
                    for k in xrange(K):
                        P[i][k] = P[i][k] + alpha * (2 * eij * Q[k][j] - beta * P[i][k])
                        Q[k][j] = Q[k][j] + alpha * (2 * eij * P[i][k] - beta * Q[k][j])
        eR = numpy.dot(P,Q)
        e = 0
        for i in xrange(len(R)):
            for j in xrange(len(R[i])):
                if R[i][j] > 0:
                    e = e + pow(R[i][j] - numpy.dot(P[i,:],Q[:,j]), 2)
                    for k in xrange(K):
                        e = e + (beta/2) * (pow(P[i][k],2) + pow(Q[k][j],2))
        if e < 0.001:
            break
    return P, Q.T

We can try to apply it to our example mentioned above and see what we would get. Below is a code snippet in Python for running the example.

R = [
     [5,3,0,1],
     [4,0,0,1],
     [1,1,0,5],
     [1,0,0,4],
     [0,1,5,4],
    ]

R = numpy.array(R)

N = len(R)
M = len(R[0])
K = 2

P = numpy.random.rand(N,K)
Q = numpy.random.rand(M,K)

nP, nQ = matrix_factorization(R, P, Q, K)
nR = numpy.dot(nP, nQ.T)

And the matrix obtained from the above process would look something like this:

       D1    D2    D3    D4
U1   4.97  2.98  2.18  0.98
U2   3.97  2.40  1.97  0.99
U3   1.02  0.93  5.32  4.93
U4   1.00  0.85  4.59  3.93
U5   1.36  1.07  4.89  4.12

We can see that for the existing ratings we have approximations very close to the true values, and we also get some 'predictions' of the unknown values. In this simple example, we can easily see that U1 and U2 have similar taste and they both rated D1 and D2 high, while the rest of the users preferred D3 and D4. When the number of features (K in the Python code) is 2, the algorithm is able to associate the users and items with two different features, and the predictions also follow these associations. For example, we can see that the predicted rating of U4 on D3 is 4.59, because U4 and U5 both rated D4 high and U5 also rated D3 high.

Further Information

We have discussed the intuitive meaning of the technique of matrix factorization and its use in collaborative filtering. In fact, there are many different extensions to the above technique. An important extension is the requirement that all the elements of the factor matrices (\mathbf{P} and \mathbf{Q} in the above example) should be non-negative. In this case it is called non-negative matrix factorization (NMF). One advantage of NMF is that it results in intuitive meanings of the resultant matrices. Since no elements are negative, the process of multiplying the resultant matrices to get back the original matrix would not involve subtraction, and can be considered as a process of generating the original data by linear combinations of the latent features.
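
If you just want to experiment with NMF on the small ratings matrix above, scikit-learn ships an implementation. The following is a rough sketch of my own, not part of the original article; the parameters are arbitrary, and note that unlike the code above it treats the zeros as real ratings rather than missing values:

import numpy as np
from sklearn.decomposition import NMF

R = np.array([[5,3,0,1],
              [4,0,0,1],
              [1,1,0,5],
              [1,0,0,4],
              [0,1,5,4]], dtype=float)

model = NMF(n_components=2, init='random', random_state=0, max_iter=500)
P = model.fit_transform(R)      # non-negative user-feature matrix
Q = model.components_           # non-negative feature-item matrix
print(np.round(P @ Q, 2))       # reconstruction of R from the two factors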

Factorization Machines: A Theoretical Introduction

Source: http://jxieeducation.com/2016-06-26/Factorization-Machines-A-Theoretical-Introduction/

Factorization Machines

This post is going to focus on explaining what factorization machines are and why they are important. Future posts will provide practical modeling examples and a numpy clone implementation of factorization machines.

What problem do Factorization Machines solve?

TL;DR: FMs are a combination of linear regression and matrix factorization that models sparse feature interactions in linear time.

Normally when we think of linear regression, we think of this formula.

\hat{y}(x) = w_0 + \sum_{i=1}^n{w_i x_i}

In the formula above, the run time is O(n), where n is the number of features. When we consider quadratic feature interactions, the complexity increases to O(n^2), as in the formula below.

\hat{y}(x) = w_0 + \sum_{i=1}^n{w_i x_i} + \sum_{i=1}^n{\sum_{j=i+1}^n{w_{ij} x_i x_j}}

Now consider a very sparse set of features: the runtime blows up. In most cases, we instead model only a very limited set of feature interactions to manage the complexity.

How do Factorization Machines solve the problem?

In the recommendation problem space, we have historically dealt with the sparsity problem with a well documented technique called (non-negative) matrix factorization.

matrix factorization diagram

We factorize the sparse user-item matrix r \in R^{U \times I} into a user matrix u \in R^{U \times K} and an item matrix i \in R^{I \times K}, where K << U and K << I.

User u_i's preference for item i_j can then be approximated by the dot product u_i \cdot i_j.

Factorization Machines take inspiration from matrix factorization and model the feature interactions using latent vectors of size K. As a result, every sparse feature f_i has a corresponding latent vector v_i, and the interaction between two features is modelled as the dot product v_i \cdot v_j.

Factorization Machines Math

\hat{y}(x) = w_0 + \sum_{i=1}^n{w_i x_i} + \sum_{i=1}^n{\sum_{j=i+1}^n{\langle v_i, v_j \rangle x_i x_j}}

Instead of learning an explicit weight for every feature interaction, a factorization machine models each interaction as the dot product of the two features' latent vectors. The model learns the v_i implicitly during training via techniques like gradient descent.

The intuition is that each feature learns a dense encoding, with the property that two features with a high positive correlation end up with a high dot product value, and vice versa.

Of course, the latent vectors can only encode so much. There is an expected hit on accuracy compared to using conventional quadratic interactions. The benefit is that the model runs in linear time.

FM complexity

For the full proof, see lemma 3.1 in Rendle’s paper.

\sum_{i=1}^n{\sum_{j=i+1}^n{\langle v_i, v_j \rangle x_i x_j}} = \frac{1}{2} \sum_{f=1}^K{\left( \left( \sum_{i=1}^n{v_{i,f} x_i} \right)^2 - \sum_{i=1}^n{v_{i,f}^2 x_i^2} \right)}

Since \sum_{i=1}^n{v_{i,f} x_i} can be precomputed, the complexity is reduced to O(KN).
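
As a rough numpy sketch of that linear-time prediction (my own illustration, not from the post; it uses the lemma 3.1 identity above):

import numpy as np

def fm_predict(x, w0, w, V):
    # x: feature vector (n,), w0: bias, w: linear weights (n,), V: latent vectors (n, K)
    linear = w0 + x @ w
    xv = x @ V                   # for each factor f: sum_i v_{i,f} * x_i
    x2v2 = (x ** 2) @ (V ** 2)   # for each factor f: sum_i v_{i,f}^2 * x_i^2
    pairwise = 0.5 * np.sum(xv ** 2 - x2v2)
    return linear + pairwise

# tiny example with made-up numbers
x = np.array([1.0, 0.0, 2.0])
w0, w = 0.1, np.array([0.2, 0.0, -0.1])
V = np.random.rand(3, 2)
print(fm_predict(x, w0, w, V))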

Installing Apache Spark on Ubuntu 16.04

Source: https://www.santoshsrinivas.com/installing-apache-spark-on-ubuntu-16-04/

The following installation steps worked for me on Ubuntu 16.04.

  • Download Apache Spark
  • Unzip and move Spark
cd ~/Downloads/  
tar xzvf spark-2.0.1-bin-hadoop2.7.tgz  
mv spark-2.0.1-bin-hadoop2.7/ spark  
sudo mv spark/ /usr/lib/  
  • Install SBT

As mentioned at sbt – Download

echo "deb https://dl.bintray.com/sbt/debian /" | sudo tee -a /etc/apt/sources.list.d/sbt.list  
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 2EE0EA64E40A89B84B2DF73499E82A75642AC823  
sudo apt-get update  
sudo apt-get install sbt  
  • Make sure Java is installed

If not, install java

sudo apt-add-repository ppa:webupd8team/java  
sudo apt-get update  
sudo apt-get install oracle-java8-installer  
  • Configure Spark
cd /usr/lib/spark/conf/  
cp spark-env.sh.template spark-env.sh  
vi spark-env.sh  

Add the following lines

JAVA_HOME=/usr/lib/jvm/java-8-oracle  
SPARK_WORKER_MEMORY=4g  
  • Configure IPv6

Basically, disable IPv6 using sudo vi /etc/sysctl.conf and add the lines below

net.ipv6.conf.all.disable_ipv6 = 1  
net.ipv6.conf.default.disable_ipv6 = 1  
net.ipv6.conf.lo.disable_ipv6 = 1  
  • Configure .bashrc

I modified .bashrc in Sublime Text using subl ~/.bashrc and added the following lines

export JAVA_HOME=/usr/lib/jvm/java-8-oracle  
export SBT_HOME=/usr/share/sbt-launcher-packaging/bin/sbt-launch.jar  
export SPARK_HOME=/usr/lib/spark  
export PATH=$PATH:$JAVA_HOME/bin  
export PATH=$PATH:$SBT_HOME/bin:$SPARK_HOME/bin:$SPARK_HOME/sbin  
  • Configure fish (Optional – But I love the fish shell)

Modify config.fish using subl ~/.config/fish/config.fish and add the following lines

#Credit: http://fishshell.com/docs/current/tutorial.html#tut_startup

set -x PATH $PATH /usr/lib/spark  
set -x PATH $PATH /usr/lib/spark/bin  
set -x PATH $PATH /usr/lib/spark/sbin  
  • Test Spark (Should work both in fish and bash)

Run pyspark (this is available in /usr/lib/spark/bin/) and test out.

For example:

>>> a = 5
>>> b = 3
>>> a+b
8  
>>> print("Welcome to Spark")
Welcome to Spark  
## type Ctrl-d to exit
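
To check that the SparkContext itself works (sc is created automatically by the pyspark shell), a small job such as the following should run:

>>> sc.parallelize(range(1000)).map(lambda x: x * 2).sum()
999000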

Also try the built-in examples using run-example org.apache.spark.examples.SparkPi

That’s it! You are ready to rock on using Apache Spark!

IPython notebook and Spark setup for Windows 10

From: https://nerdsrule.co/2016/06/15/ipython-notebook-and-spark-setup-for-windows-10/

Install Java jdk

You can download and install it from Oracle. Once installed, I created a new folder called Java in Program Files and moved the JDK folder into it. Copy the path and set JAVA_HOME within your system environment variables, then add %JAVA_HOME%\bin to the Path variable. To get there, click Start, then Settings, and search for environment variables. This should bring up 'Edit your system environment variables'.

Installing and setting up Python and IPython

For simplicity I downloaded and installed Anaconda with the Python 2.7 version from Continuum Analytics (free) using the built-in install wizard. Once installed, you need to do the same thing you did with Java: name the variable PYTHONPATH (my path was C:\User\cconnell\Anaconda2). You will also need to add %PYTHONPATH% to the Path variable.

Installing Spark

I am using Spark 1.6.0 (I like to go back at least one version) from the Apache Spark download page. Once unzipped, do the same thing as before: set a SPARK_HOME variable to where your spark folder is, and then add %SPARK_HOME%\bin to the Path variable.

Installing Hadoop binaries

Download and unzip the Hadoop common binaries and set a HADOOP_HOME variable to that location.

Getting everything to work together

  1. Go into your spark/conf folder and rename log4j.properties.template to log4j.properties
  2. Open log4j.properties in a text editor and change log4j.rootCategory to WARN from INFO
  3. Add two new environment variables like before: set PYSPARK_DRIVER_PYTHON to jupyter and PYSPARK_DRIVER_PYTHON_OPTS to notebook

Now to launch your PySpark notebook just type pyspark from the console and it will automatically launch in your browser.
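
For reference, the same variables can be set from a command prompt with setx (the paths below are only examples; use the locations where you actually unpacked Spark and the Hadoop binaries):

setx SPARK_HOME "C:\spark\spark-1.6.0-bin-hadoop2.6"
setx HADOOP_HOME "C:\hadoop"
setx PYSPARK_DRIVER_PYTHON "jupyter"
setx PYSPARK_DRIVER_PYTHON_OPTS "notebook"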

How to make Bubble Charts with matplotlib

Source: http://glowingpython.blogspot.de/2011/11/how-to-make-bubble-charts-with.html

from pylab import *
from scipy import *

# reading the data from a csv file
durl = 'http://datasets.flowingdata.com/crimeRatesByState2005.csv'
rdata = genfromtxt(durl,dtype='S8,f,f,f,f,f,f,f,i',delimiter=',')

rdata[0] = zeros(8) # cutting the label's titles
rdata[1] = zeros(8) # cutting the global statistics

x = []
y = []
color = []
area = []

for data in rdata:
 x.append(data[1]) # murder
 y.append(data[5]) # burglary
 color.append(data[6]) # larceny_theft 
 area.append(sqrt(data[8])) # population
 # plotting the first eight letters of the state's name
 text(data[1], data[5], 
      data[0],size=11,horizontalalignment='center')

# making the scatter plot
sct = scatter(x, y, c=color, s=area, linewidths=2, edgecolor='w')
sct.set_alpha(0.75)

axis([0,11,200,1280])
xlabel('Murders per 100,000 population')
ylabel('Burglaries per 100,000 population')
show()

The resulting bubble chart shows the number of burglaries versus the number of murders per 100,000 population. Every bubble is a US state; the size of the bubble represents the population of the state and the color represents the number of larcenies.

#####

dataset

state,murder,forcible_rape,robbery,aggravated_assault,burglary,larceny_theft,motor_vehicle_theft,population
United States,5.6,31.7,140.7,291.1,726.7,2286.3,416.7,295753151
Alabama,8.2,34.3,141.4,247.8,953.8,2650,288.3,4545049
Alaska,4.8,81.1,80.9,465.1,622.5,2599.1,391,669488
Arizona,7.5,33.8,144.4,327.4,948.4,2965.2,924.4,5974834
Arkansas,6.7,42.9,91.1,386.8,1084.6,2711.2,262.1,2776221
California,6.9,26,176.1,317.3,693.3,1916.5,712.8,35795255
Colorado,3.7,43.4,84.6,264.7,744.8,2735.2,559.5,4660780
Connecticut,2.9,20,113,138.6,437.1,1824.1,296.8,3477416
Delaware,4.4,44.7,154.8,428.2,688.9,2144,278.5,839906
District of Columbia,35.4,30.2,672.1,721.3,649.7,2694.9,1402.3,582049
Florida,5,37.1,169.4,496.6,926.3,2658.3,423.3,17783868
Georgia,6.2,23.6,154.8,264.3,931,2751.1,490.2,9097428
Hawaii,1.9,26.9,78.5,147.8,767.9,3308.4,716.4,1266117
Idaho,2.4,40.4,18.6,195.4,564.4,1931.7,201.8,1425862
Illinois,6,33.7,181.7,330.2,606.9,2164.8,308.6,12674452
Indiana,5.7,29.6,108.6,179.9,697.6,2412,346.7,6253120
Iowa,1.3,27.9,38.9,223.3,606.4,2042.7,184.6,2949450
Kansas,3.7,38.4,65.3,280,689.2,2758.1,339.6,2741771
Kentucky,4.6,34,88.4,139.8,634,1685.8,210.8,4182293
Louisiana,9.9,31.4,118,435.1,870.6,2494.5,318.1,4497691
Maine,1.4,24.7,24.4,61.7,478.5,1832.6,102,1311631
Maryland,9.9,22.6,256.7,413.8,641.4,2294.3,608.4,5582520
Massachusetts,2.7,27.1,119,308.1,541.1,1527.4,295.1,6453031
Michigan,6.1,51.3,131.8,362.9,696.8,1917.8,476.5,10090554
Minnesota,2.2,44,92,158.7,578.9,2226.9,278.2,5106560
Mississippi,7.3,39.3,82.3,149.4,919.7,2083.9,256.5,2900116
Missouri,6.9,28,124.1,366.4,738.3,2746.2,443.1,5806639
Montana,1.9,32.2,18.9,228.5,389.2,2543,210.7,934801
Nebraska,2.5,32.9,59.1,192.5,532.4,2574.3,316.5,1751721
Nevada,8.5,42.1,194.7,361.5,972.4,2153.9,1115.2,2408804
New Hampshire,1.4,30.9,27.4,72.3,317,1377.3,102.1,1301415
New Jersey,4.8,13.9,151.6,184.4,447.1,1568.4,317.5,8621837
New Mexico,7.4,54.1,98.7,541.9,1093.9,2639.9,414.5,1916538
New York,4.5,18.9,182.7,239.7,353.3,1569.6,185.6,19330891
North Carolina,6.7,26.5,145.5,289.4,1201.1,2546.2,327.8,8669452
North Dakota,1.1,24.2,7.4,65.5,311.9,1500.3,166,635365
Ohio,5.1,39.8,163.1,143.4,872.8,2429,360.9,11475262
Oklahoma,5.3,41.7,91,370.5,1006,2644.2,391.8,3532769
Oregon,2.2,34.8,68.1,181.8,758.6,3112.2,529,3617869
Pennsylvania,6.1,28.9,154.6,235,451.6,1729.1,236.5,12418161
Rhode Island,3.2,29.8,72.1,146.1,494.2,1816,408.7,1064989
South Carolina,7.4,42.5,132.1,579,1000.9,2954.1,384.4,4256199
South Dakota,2.3,46.7,18.6,108.1,324.4,1343.7,108.4,780084
Tennessee,7.2,36.4,167.3,541.9,1026.9,2828.1,420.6,5995748
Texas,6.2,37.2,156.6,329.8,961.6,2961.7,408.7,22801920
Utah,2.3,37.3,44.3,143.4,606.2,2918.8,343.9,2499637
Vermont,1.3,23.3,11.7,83.5,491.8,1686.1,102.9,618814
Virginia,6.1,22.7,99.2,154.8,392.1,2035,211.1,7563887
Washington,3.3,44.7,92.1,205.8,959.7,3149.5,783.9,6261282
West Virginia,4.4,17.7,44.6,206.1,621.2,1794,210,1803920
Wisconsin,3.5,20.6,82.2,135.2,440.8,1992.8,226.6,5541443
Wyoming,2.7,24,15.3,188.1,476.3,2533.9,145.1,506242

Tf-Idf and Cosine similarity

Source: Seeking Wisdom (blog)

In 1998, Google handled 9,800 search queries per day on average. By 2012 this number had shot up to 5.13 billion searches per day. The table below shows this astronomical growth.

Year    Average searches per day
1998    9,800
2000    60,000,000
2007    1,200,000,000
2008    1,745,000,000
2009    2,610,000,000
2010    3,627,000,000
2011    4,717,000,000
2012    5,134,000,000

The major reason for Google's success is its PageRank algorithm. PageRank determines how trustworthy and reputable a given website is. But there is also another part: the query entered by the user should be used to match the relevant documents and score them. In this post I will focus on that second part. I have created 3 documents to show how this works. Take some time to go through them.

Document 1: The game of life is a game of everlasting learning
Document 2: The unexamined life is not worth living
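
As a quick illustration of the idea (my own sketch, not part of the original post), the tf-idf vectors and their cosine similarity for the two documents above can be computed with scikit-learn:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["The game of life is a game of everlasting learning",
        "The unexamined life is not worth living"]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)          # one tf-idf vector per document
print(cosine_similarity(X[0], X[1]))   # cosine similarity between the two documents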
