HELLO Posted by OLGA-SHULMAN-LEDNICHENKO

[Attached image: home.html.png]


GEOMETRY

https://lednichenkoolgashulman.files.wordpress.com/2015/07/olga-katrina-look-alike-diagrams.jpg?w=1000

 

https://lednichenkoolgashulman.files.wordpress.com/2016/05/d4fd7-324368521-zero-light-infinity-olga-farzana-and-ajay-sanjay-dutt-mishra-and-ajay-mishra-bill-clinton-samurai253d72bplus2b52bslots2b-2bif2bteam2b253d122bthen2bgame2bequal2bto.jpg

https://lednichenkoolgashulman.files.wordpress.com/2016/03/olga-katrina-maps.jpg

this is YONI NETANYAHU – THE BROTHER OF THE PRIME MINISTER OF ISRAEL

NOW SEE THE LOOKS AND THE LIKENESS, EVEN WITH THE COLOR PALETTE – COLOR PALETTE ONE = WHITE, COLOR PALETTE TWO = BROWN.

BUT WHO IS SIMILAR AND ALIKE – AJAY WITH YONI NETANYAHU, OR WHITE-SKINNED YONI NETANYAHU WITH SOME OTHER WHITE-SKINNED PERSON, SAY BILL CLINTON OR BILL GATES?

WHY? – BECAUSE OF RATIOS AND GEOMETRY.

WHO LOOKS MORE LIKE YONI NETANYAHU – AJAY MISHRA OR IDDO NETANYAHU? – WHO? – WHY, AJAY OF COURSE – HERE IS WHY -> GEOMETRY

WHO LOOKS MORE LIKE KATRINA – OLGA OR ZARINE KHAN? – WELL, WHY –

 

HERE

[A] KATRINA'S SKIN = WHITER THAN ZARINE KHAN'S – AND OLGA IS WHITER THAN KATRINA, AND KATRINA IS WHITER THAN ZARINE KHAN

[B] THE OLGA–KATRINA GEOMETRY IS CLOSER THAN THE KATRINA–ZARINE KHAN GEOMETRY

 


 

 

https://lednichenkoolgashulman.files.wordpress.com/2016/04/agneepath-song.jpg

On Tue, Sep 26, 2017 at 5:11 PM, Myraah IO <myraahio.us> wrote:

http://in.reuters.com/video/2017/09/25/computer-says-no-beauty-not-in-the-eye-o?videoId=372602467&videoChannel=101


Hello Bong Bhai – Jhabru here, I need help AGAIN :)

Bhai,

there is a problem on which I need your advice, help, and direction.

Let me try to articulate it.

It's a SEARCH COST OR CHOICE PROBLEM.

Now, here is the problem description:

[1] Now, we are able to create millions of options (literally millions in the truest mathematical sense, without taking any liberties with the numbers) – basically different styles of websites for any given category. Say you want a lawyer, professor, real estate, yoga, restaurant, or any other category of site.

OK. So, while the competition – the DIY website builders of the likes of wix.com, squarespace.com, or any of the many sites.com – provides a FEW PRE-BUILT, ONE-SIZE-FITS-ALL, COOKIE-CUTTER TEMPLATES TO CHOOSE from – which they ask the end user to customize based on his or her preferences or needs –

we give MANY CHOICES – OPTIONS – STYLES ETC. – TO CHOOSE FROM.

– NOW, by doing this – by having an AI engine that runs through various permutations and combinations of HTML elements, colors, fonts, placements, JavaScript, and so on – we provide CHOICE –

 

freedom from the limiting constraints of pre-built templates.

 

In short, we solve the problem of limited choice by generating millions of choices.
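
To make the scale concrete, here is a minimal sketch – with made-up style dimensions that merely stand in for whatever the engine actually varies – of how a few independent choices multiply into a huge option space:

```python
# A minimal, illustrative sketch: a handful of independent style dimensions
# multiply into a very large number of distinct website variants.
# All dimension names and values below are hypothetical.
import itertools

layouts = ["hero-top", "split", "grid", "sidebar", "full-bleed"]            # 5
palettes = ["white/brown", "dark", "pastel", "mono", "vivid", "earth"]      # 6
fonts = ["serif", "sans", "slab", "display", "mono"]                        # 5
nav_styles = ["top-bar", "hamburger", "side-rail", "sticky"]                # 4
section_orders = list(itertools.permutations(
    ["about", "services", "gallery", "contact"]))                           # 24 orderings

total = (len(layouts) * len(palettes) * len(fonts)
         * len(nav_styles) * len(section_orders))
print(total)  # 14,400 from just five dimensions; a few more dimensions reach millions
```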

 

[2] But WHILE WE SOLVE ONE PROBLEM – THAT OF CHOICE – we create another problem BECAUSE of choice.

 

It's like Confucius said – when there is no choice, there is no freedom, no liberation; but when there are too many choices, there is confusion.

 

right?

 

[3] OK. So, how do we structure this?

 

In my view, in the theory of choice there are 3 issues/problems:

 

 

[a] what is desirable

 

[b] what is feasible 

 

[c] what are the constraints, given the notion of feasibility against the backdrop of what is desirable.

 

 

SO, NOW –

 

it all starts with NOT KNOWING PRECISELY WHAT IS DESIRABLE ..

 

RIGHT?

 

It's kind of like SEARCH – SCAN – CLICK IN GOOGLE – imagine that gets replaced by

 

SEARCH -> CLICK -> ACT –

 

that would reduce the cost.

 

now,… to the nuts and bolts

 

Let's imagine

 

we ask the end user 

 

[A] ENTER THE URL – ONE OR MORE – OF THE SITES, i.e. the websites you like – the ones you would like your site to look like -> SO, THIS GIVES US – VISUALLY – WHAT IS DESIRABLE, albeit visually only – because there is animation also – but for now, let's say it's static – like, visually, this style with that placement, etc.

 

[B] STEP 2 – WE TAKE A SNAPSHOT – AN IMAGE OF THE SITE – and store it in some place.

 

[C] STEP 3 – We already have the ability to store the images – the screenshots of the websites – those millions of options we create in our repository (see the sketch after this list).
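
A minimal sketch of steps [A]–[C], assuming Selenium with a headless Chrome driver is available; the paths and names here are hypothetical:

```python
# Capture a screenshot of a site the user likes (steps [A]-[B]) and file it
# next to our own generated options (step [C]). Illustrative only.
from pathlib import Path
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

def snapshot(url: str, out_dir: str = "screenshots") -> Path:
    opts = Options()
    opts.add_argument("--headless")            # no visible browser window
    opts.add_argument("--window-size=1280,2000")
    driver = webdriver.Chrome(options=opts)
    try:
        driver.get(url)                        # step [A]: the site the user likes
        Path(out_dir).mkdir(exist_ok=True)
        name = url.replace("://", "_").replace("/", "_") + ".png"
        path = Path(out_dir) / name
        driver.save_screenshot(str(path))      # step [B]: store the snapshot
        return path
    finally:
        driver.quit()

# Step [C] is the same call applied to our own generated option pages,
# e.g. snapshot("http://localhost:8000/option/12345")  # hypothetical URL
```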

 

now – my question and the problem

 

Imagine that what you like is, say, what you want to find.

 

Say it's like a Hollywood movie – where what you want is the sketch of the fugitive murderer, and

 

 

then the image is SCANNED

 

and entered into the FBI OR CIA computer system

 

Now, haven't you seen movies where the witness describes the fugitive and the artist draws a sketch, and then that sketch is scanned

 

and entered into the FBI database 

 

and then ?

 

well, then – many images are flipped through and so on

 

and then the most likely fit is somehow found 

 

OK. So that – somehow, in my unintelligent way of putting it – could be some form of computer vision,

 

and then – the image databank is searched through for LIKE images

 

like, THIS IMAGE IS VERY SIMILAR TO, OR LOOKS LIKE, THIS IMAGE.

 

Now, so, in my thinking there are two components:

 

[a] a component that reads an image and somehow recognizes it,

 

and

 

[b] a component that finds SIMILAR images from a databank of images 

?
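
One possible way to make components [a] and [b] concrete is to turn each screenshot into a feature vector with a pretrained network and then rank the stored options by similarity. The sketch below assumes PyTorch/torchvision (0.13 or later) is installed; the file names and the brute-force search are illustrative only, not a finished design:

```python
# [a] read screenshots into feature vectors with a pretrained CNN;
# [b] rank a databank of screenshots by cosine similarity to a query image.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# ResNet trunk as a generic feature extractor (classifier head removed).
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()        # keep the 2048-d pooled features
backbone.eval()

prep = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                  T.Normalize(mean=[0.485, 0.456, 0.406],
                              std=[0.229, 0.224, 0.225])])

def embed(path: str) -> torch.Tensor:
    with torch.no_grad():
        x = prep(Image.open(path).convert("RGB")).unsqueeze(0)
        return torch.nn.functional.normalize(backbone(x), dim=1).squeeze(0)

def most_similar(query_png: str, databank: dict, k: int = 5):
    q = embed(query_png)
    scores = {name: float(q @ vec) for name, vec in databank.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

# Hypothetical usage:
# databank = {"option_00001.png": embed("option_00001.png"), ...}
# print(most_similar("user_liked_site.png", databank))
```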

 

makes sense?

 

ok.. if i am able to explain my problem statement

 

then please guide me – HOW – we can solve this problem

 

 

PS: Please see attached a screenshot of a sample database fetch of a website option – captured, or rather created, using our algo at run time.

 

 

please advise on how we can accomplish this and what all is involved 

hmm..

this?

Latent Dirichlet allocation

From Wikipedia, the free encyclopedia

Not to be confused with linear discriminant analysis.


In natural language processing, latent Dirichlet allocation (LDA) is a generative statistical model that allows sets of observations to be explained by unobserved groups that explain why some parts of the data are similar. For example, if observations are words collected into documents, it posits that each document is a mixture of a small number of topics and that each word’s creation is attributable to one of the document’s topics. LDA is an example of a topic model and was first presented as a graphical model for topic discovery by David Blei, Andrew Ng, and Michael I. Jordan in 2003.[1] Essentially the same model was also proposed independently by J. K. Pritchard, M. Stephens, and P. Donnelly in the study of population genetics in 2000.[2] Both papers have been highly influential, with 19858 and 20416 citations respectively by August 2017.[3][4]


Topics

In LDA, each document may be viewed as a mixture of various topics where each document is considered to have a set of topics that are assigned to it via LDA. This is identical to probabilistic latent semantic analysis (pLSA), except that in LDA the topic distribution is assumed to have a sparse Dirichlet prior. The sparse Dirichlet priors encode the intuition that documents cover only a small set of topics and that topics use only a small set of words frequently. In practice, this results in a better disambiguation of words and a more precise assignment of documents to topics. LDA is a generalisation of the pLSA model, which is equivalent to LDA under a uniform Dirichlet prior distribution.[5]

For example, an LDA model might have topics that can be classified as CAT_related and DOG_related. A topic has probabilities of generating various words, such as milk, meow, and kitten, which can be classified and interpreted by the viewer as “CAT_related”. Naturally, the word cat itself will have high probability given this topic. The DOG_related topic likewise has probabilities of generating each word: puppy, bark, and bone might have high probability. Words without special relevance, such as the (see function word), will have roughly even probability between classes (or can be placed into a separate category). A topic is not strongly defined, neither semantically nor epistemologically. It is identified on the basis of automatic detection of the likelihood of term co-occurrence. A lexical word may occur in several topics with a different probability, however, with a different typical set of neighboring words in each topic.

Each document is assumed to be characterized by a particular set of topics. This is akin to the standard bag of words model assumption, and makes the individual words exchangeable.
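
As a concrete toy illustration of the CAT_related/DOG_related example above, a few lines of scikit-learn can fit an LDA model on made-up documents; the corpus, topic count, and printed output below are illustrative only:

```python
# Fit a two-topic LDA on a tiny, made-up "cat vs dog" corpus and print the
# top words per topic. With such a small corpus the split is only indicative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["milk meow kitten cat purr", "cat kitten milk whiskers",
        "puppy bark bone dog fetch", "dog puppy bark leash"]

vec = CountVectorizer().fit(docs)
counts = vec.transform(docs)                       # document-word count matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

vocab = vec.get_feature_names_out()
for t, weights in enumerate(lda.components_):      # per-topic word weights
    top = [vocab[i] for i in weights.argsort()[::-1][:3]]
    print(f"topic {t}: {top}")  # one topic tends to gather cat words, the other dog words
```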

Model


[Figure: Plate notation representing the LDA model.]

With plate notation, the dependencies among the many variables can be captured concisely. The boxes are “plates” representing replicates. The outer plate represents documents, while the inner plate represents the repeated choice of topics and words within a document. M denotes the number of documents, N the number of words in a document. Thus:

$\alpha$ is the parameter of the Dirichlet prior on the per-document topic distributions,
$\beta$ is the parameter of the Dirichlet prior on the per-topic word distribution,
$\theta_m$ is the topic distribution for document $m$,
$\varphi_k$ is the word distribution for topic $k$,
$z_{mn}$ is the topic for the $n$-th word in document $m$, and
$w_{mn}$ is the specific word.
[Figure: Plate notation for LDA with Dirichlet-distributed topic-word distributions.]

The words $w_{ij}$ are the only observable variables, and the other variables are latent variables. As proposed in the original paper, a sparse Dirichlet prior can be put over the topic-word distribution. This encodes the intuition that the probability mass of a topic is concentrated on a small set of words. The resulting model is today the most widely applied variant of LDA. The plate notation for this model is shown in the figure captioned above, where $K$ denotes the number of topics and $\varphi_1,\dots,\varphi_K$ are $V$-dimensional vectors storing the parameters of the Dirichlet-distributed topic-word distributions ($V$ is the number of words in the vocabulary).

Generative process

The generative process is as follows. Documents are represented as random mixtures over latent topics, where each topic is characterized by a distribution over words. LDA assumes the following generative process for a corpus $D$ consisting of $M$ documents each of length $N_i$:

1. Choose $\theta_i \sim \operatorname{Dir}(\alpha)$, where $i \in \{1,\dots,M\}$ and $\operatorname{Dir}(\alpha)$ is a Dirichlet distribution with a symmetric parameter $\alpha$ which typically is sparse ($\alpha < 1$)

2. Choose $\varphi_k \sim \operatorname{Dir}(\beta)$, where $k \in \{1,\dots,K\}$ and $\beta$ typically is sparse

3. For each of the word positions $i,j$, where $j \in \{1,\dots,N_i\}$ and $i \in \{1,\dots,M\}$:

(a) Choose a topic $z_{i,j} \sim \operatorname{Multinomial}(\theta_i)$.
(b) Choose a word $w_{i,j} \sim \operatorname{Multinomial}(\varphi_{z_{i,j}})$.

(Note that multinomial distribution here refers to the multinomial with only one trial, which is also known as the categorical distribution.)

The lengths $N_i$ are treated as independent of all the other data generating variables ($w$ and $z$). The subscript is often dropped, as in the plate diagrams shown here.
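
A minimal numpy sketch of this generative process (with illustrative values of $K$, $V$, $M$, $N$, $\alpha$, $\beta$ that are not from the article) might look like this:

```python
# Sample a toy corpus from the LDA generative process with numpy.
import numpy as np

rng = np.random.default_rng(0)
K, V, M, N = 3, 20, 5, 30            # topics, vocabulary size, documents, words/doc
alpha, beta = 0.5, 0.1               # sparse symmetric Dirichlet parameters

phi = rng.dirichlet(np.full(V, beta), size=K)      # step 2: topic-word distributions
theta = rng.dirichlet(np.full(K, alpha), size=M)   # step 1: per-document topic mixtures

docs = []
for d in range(M):                                  # step 3: draw each word position
    z = rng.choice(K, size=N, p=theta[d])           # (a) a topic per position
    w = np.array([rng.choice(V, p=phi[k]) for k in z])  # (b) a word given its topic
    docs.append(w)
```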

Definition

A formal description of LDA is as follows:

$K$ (integer): number of topics (e.g. 50)

$V$ (integer): number of words in the vocabulary (e.g. 50,000 or 1,000,000)

$M$ (integer): number of documents

$N_{d=1\dots M}$ (integer): number of words in document $d$

$N$ (integer): total number of words in all documents; sum of all $N_d$ values, i.e. $N=\sum_{d=1}^{M} N_d$

$\alpha_{k=1\dots K}$ (positive real): prior weight of topic $k$ in a document; usually the same for all topics; normally a number less than 1, e.g. 0.1, to prefer sparse topic distributions, i.e. few topics per document

$\boldsymbol{\alpha}$ ($K$-dimensional vector of positive reals): collection of all $\alpha_k$ values, viewed as a single vector

$\beta_{w=1\dots V}$ (positive real): prior weight of word $w$ in a topic; usually the same for all words; normally a number much less than 1, e.g. 0.001, to strongly prefer sparse word distributions, i.e. few words per topic

$\boldsymbol{\beta}$ ($V$-dimensional vector of positive reals): collection of all $\beta_w$ values, viewed as a single vector

$\varphi_{k=1\dots K,\,w=1\dots V}$ (probability, a real number between 0 and 1): probability of word $w$ occurring in topic $k$

$\boldsymbol{\varphi}_{k=1\dots K}$ ($V$-dimensional vector of probabilities, which must sum to 1): distribution of words in topic $k$

$\theta_{d=1\dots M,\,k=1\dots K}$ (probability, a real number between 0 and 1): probability of topic $k$ occurring in document $d$ for any given word

$\boldsymbol{\theta}_{d=1\dots M}$ ($K$-dimensional vector of probabilities, which must sum to 1): distribution of topics in document $d$

$z_{d=1\dots M,\,w=1\dots N_d}$ (integer between 1 and $K$): identity of the topic of word $w$ in document $d$

$\mathbf{Z}$ ($N$-dimensional vector of integers between 1 and $K$): identity of the topic of all words in all documents

$w_{d=1\dots M,\,w=1\dots N_d}$ (integer between 1 and $V$): identity of word $w$ in document $d$

$\mathbf{W}$ ($N$-dimensional vector of integers between 1 and $V$): identity of all words in all documents

We can then mathematically describe the random variables as follows:

$$
\begin{aligned}
\boldsymbol{\varphi}_{k=1\dots K} &\sim \operatorname{Dirichlet}_V(\boldsymbol{\beta}) \\
\boldsymbol{\theta}_{d=1\dots M} &\sim \operatorname{Dirichlet}_K(\boldsymbol{\alpha}) \\
z_{d=1\dots M,\,w=1\dots N_d} &\sim \operatorname{Categorical}_K(\boldsymbol{\theta}_d) \\
w_{d=1\dots M,\,w=1\dots N_d} &\sim \operatorname{Categorical}_V(\boldsymbol{\varphi}_{z_{dw}})
\end{aligned}
$$

Inference

See also: Dirichlet-multinomial distribution

Learning the various distributions (the set of topics, their associated word probabilities, the topic of each word, and the particular topic mixture of each document) is a problem of Bayesian inference. The original paper used a variational Bayes approximation of the posterior distribution;[1] alternative inference techniques use Gibbs sampling[6] and expectation propagation.[7]

Following is the derivation of the equations for collapsed Gibbs sampling, which means the $\varphi$'s and $\theta$'s will be integrated out. For simplicity, in this derivation the documents are all assumed to have the same length $N$. The derivation is equally valid if the document lengths vary.

According to the model, the total probability of the model is:

$$
P(\boldsymbol{W},\boldsymbol{Z},\boldsymbol{\theta},\boldsymbol{\varphi};\alpha,\beta)
= \prod_{i=1}^{K} P(\varphi_i;\beta)\,
\prod_{j=1}^{M} P(\theta_j;\alpha)
\prod_{t=1}^{N} P(Z_{j,t}\mid\theta_j)\,P(W_{j,t}\mid\varphi_{Z_{j,t}}),
$$

where the bold-font variables denote the vector version of the variables. First, $\boldsymbol{\varphi}$ and $\boldsymbol{\theta}$ need to be integrated out.

$$
\begin{aligned}
P(\boldsymbol{Z},\boldsymbol{W};\alpha,\beta)
&= \int_{\boldsymbol{\theta}}\int_{\boldsymbol{\varphi}}
P(\boldsymbol{W},\boldsymbol{Z},\boldsymbol{\theta},\boldsymbol{\varphi};\alpha,\beta)\,
d\boldsymbol{\varphi}\,d\boldsymbol{\theta} \\
&= \int_{\boldsymbol{\varphi}}\prod_{i=1}^{K} P(\varphi_i;\beta)
\prod_{j=1}^{M}\prod_{t=1}^{N} P(W_{j,t}\mid\varphi_{Z_{j,t}})\,d\boldsymbol{\varphi}
\int_{\boldsymbol{\theta}}\prod_{j=1}^{M} P(\theta_j;\alpha)
\prod_{t=1}^{N} P(Z_{j,t}\mid\theta_j)\,d\boldsymbol{\theta}.
\end{aligned}
$$

All the $\theta_j$ are independent of each other, and the same holds for all the $\varphi_k$. So we can treat each $\theta$ and each $\varphi$ separately. We now focus only on the $\theta$ part.

$$
\int_{\boldsymbol{\theta}}\prod_{j=1}^{M} P(\theta_j;\alpha)
\prod_{t=1}^{N} P(Z_{j,t}\mid\theta_j)\,d\boldsymbol{\theta}
= \prod_{j=1}^{M}\int_{\theta_j} P(\theta_j;\alpha)
\prod_{t=1}^{N} P(Z_{j,t}\mid\theta_j)\,d\theta_j.
$$

We can further focus on only a single $\theta_j$, as follows:

$$
\int_{\theta_j} P(\theta_j;\alpha)\prod_{t=1}^{N} P(Z_{j,t}\mid\theta_j)\,d\theta_j.
$$

Actually, it is the hidden part of the model for the $j$-th document. Now we replace the probabilities in the above equation by the true distribution expression to write out the explicit equation.

$$
\int_{\theta_j} P(\theta_j;\alpha)\prod_{t=1}^{N} P(Z_{j,t}\mid\theta_j)\,d\theta_j
= \int_{\theta_j}
\frac{\Gamma\!\left(\sum_{i=1}^{K}\alpha_i\right)}{\prod_{i=1}^{K}\Gamma(\alpha_i)}
\prod_{i=1}^{K}\theta_{j,i}^{\alpha_i-1}
\prod_{t=1}^{N} P(Z_{j,t}\mid\theta_j)\,d\theta_j.
$$

Let $n_{j,r}^{i}$ be the number of word tokens in the $j$-th document with the same word symbol (the $r$-th word in the vocabulary) assigned to the $i$-th topic. So $n_{j,r}^{i}$ is three-dimensional. If any of the three dimensions is not restricted to a specific value, we use a parenthesized point $(\cdot)$ to denote it. For example, $n_{j,(\cdot)}^{i}$ denotes the number of word tokens in the $j$-th document assigned to the $i$-th topic. Thus, the rightmost part of the above equation can be rewritten as:

$$
\prod_{t=1}^{N} P(Z_{j,t}\mid\theta_j)
= \prod_{i=1}^{K}\theta_{j,i}^{\,n_{j,(\cdot)}^{i}}.
$$

So the $\theta_j$ integration formula can be changed to:

$$
\int_{\theta_j}
\frac{\Gamma\!\left(\sum_{i=1}^{K}\alpha_i\right)}{\prod_{i=1}^{K}\Gamma(\alpha_i)}
\prod_{i=1}^{K}\theta_{j,i}^{\alpha_i-1}
\prod_{i=1}^{K}\theta_{j,i}^{\,n_{j,(\cdot)}^{i}}\,d\theta_j
= \int_{\theta_j}
\frac{\Gamma\!\left(\sum_{i=1}^{K}\alpha_i\right)}{\prod_{i=1}^{K}\Gamma(\alpha_i)}
\prod_{i=1}^{K}\theta_{j,i}^{\,n_{j,(\cdot)}^{i}+\alpha_i-1}\,d\theta_j.
$$

Clearly, the equation inside the integration has the same form as the Dirichlet distribution. According to the Dirichlet distribution,

$$
\int_{\theta_j}
\frac{\Gamma\!\left(\sum_{i=1}^{K} n_{j,(\cdot)}^{i}+\alpha_i\right)}
{\prod_{i=1}^{K}\Gamma\!\left(n_{j,(\cdot)}^{i}+\alpha_i\right)}
\prod_{i=1}^{K}\theta_{j,i}^{\,n_{j,(\cdot)}^{i}+\alpha_i-1}\,d\theta_j = 1.
$$

Thus,

$$
\begin{aligned}
&\int_{\theta_j} P(\theta_j;\alpha)\prod_{t=1}^{N} P(Z_{j,t}\mid\theta_j)\,d\theta_j
= \int_{\theta_j}
\frac{\Gamma\!\left(\sum_{i=1}^{K}\alpha_i\right)}{\prod_{i=1}^{K}\Gamma(\alpha_i)}
\prod_{i=1}^{K}\theta_{j,i}^{\,n_{j,(\cdot)}^{i}+\alpha_i-1}\,d\theta_j \\[8pt]
={}& \frac{\Gamma\!\left(\sum_{i=1}^{K}\alpha_i\right)}{\prod_{i=1}^{K}\Gamma(\alpha_i)}
\frac{\prod_{i=1}^{K}\Gamma\!\left(n_{j,(\cdot)}^{i}+\alpha_i\right)}
{\Gamma\!\left(\sum_{i=1}^{K} n_{j,(\cdot)}^{i}+\alpha_i\right)}
\int_{\theta_j}
\frac{\Gamma\!\left(\sum_{i=1}^{K} n_{j,(\cdot)}^{i}+\alpha_i\right)}
{\prod_{i=1}^{K}\Gamma\!\left(n_{j,(\cdot)}^{i}+\alpha_i\right)}
\prod_{i=1}^{K}\theta_{j,i}^{\,n_{j,(\cdot)}^{i}+\alpha_i-1}\,d\theta_j \\[8pt]
={}& \frac{\Gamma\!\left(\sum_{i=1}^{K}\alpha_i\right)}{\prod_{i=1}^{K}\Gamma(\alpha_i)}
\frac{\prod_{i=1}^{K}\Gamma\!\left(n_{j,(\cdot)}^{i}+\alpha_i\right)}
{\Gamma\!\left(\sum_{i=1}^{K} n_{j,(\cdot)}^{i}+\alpha_i\right)}.
\end{aligned}
$$

Now we turn our attention to the $\boldsymbol{\varphi}$ part. Actually, the derivation of the $\boldsymbol{\varphi}$ part is very similar to the $\boldsymbol{\theta}$ part. Here we only list the steps of the derivation:

$$
\begin{aligned}
&\int_{\boldsymbol{\varphi}}\prod_{i=1}^{K} P(\varphi_i;\beta)
\prod_{j=1}^{M}\prod_{t=1}^{N} P(W_{j,t}\mid\varphi_{Z_{j,t}})\,d\boldsymbol{\varphi} \\[8pt]
={}& \prod_{i=1}^{K}\int_{\varphi_i} P(\varphi_i;\beta)
\prod_{j=1}^{M}\prod_{t=1}^{N} P(W_{j,t}\mid\varphi_{Z_{j,t}})\,d\varphi_i \\[8pt]
={}& \prod_{i=1}^{K}\int_{\varphi_i}
\frac{\Gamma\!\left(\sum_{r=1}^{V}\beta_r\right)}{\prod_{r=1}^{V}\Gamma(\beta_r)}
\prod_{r=1}^{V}\varphi_{i,r}^{\beta_r-1}
\prod_{r=1}^{V}\varphi_{i,r}^{\,n_{(\cdot),r}^{i}}\,d\varphi_i \\[8pt]
={}& \prod_{i=1}^{K}\int_{\varphi_i}
\frac{\Gamma\!\left(\sum_{r=1}^{V}\beta_r\right)}{\prod_{r=1}^{V}\Gamma(\beta_r)}
\prod_{r=1}^{V}\varphi_{i,r}^{\,n_{(\cdot),r}^{i}+\beta_r-1}\,d\varphi_i \\[8pt]
={}& \prod_{i=1}^{K}
\frac{\Gamma\!\left(\sum_{r=1}^{V}\beta_r\right)}{\prod_{r=1}^{V}\Gamma(\beta_r)}
\frac{\prod_{r=1}^{V}\Gamma\!\left(n_{(\cdot),r}^{i}+\beta_r\right)}
{\Gamma\!\left(\sum_{r=1}^{V} n_{(\cdot),r}^{i}+\beta_r\right)}.
\end{aligned}
$$

For clarity, here we write down the final equation with both $\boldsymbol{\varphi}$ and $\boldsymbol{\theta}$ integrated out:

$$
P(\boldsymbol{Z},\boldsymbol{W};\alpha,\beta)
= \prod_{j=1}^{M}
\frac{\Gamma\!\left(\sum_{i=1}^{K}\alpha_i\right)}{\prod_{i=1}^{K}\Gamma(\alpha_i)}
\frac{\prod_{i=1}^{K}\Gamma\!\left(n_{j,(\cdot)}^{i}+\alpha_i\right)}
{\Gamma\!\left(\sum_{i=1}^{K} n_{j,(\cdot)}^{i}+\alpha_i\right)}
\times \prod_{i=1}^{K}
\frac{\Gamma\!\left(\sum_{r=1}^{V}\beta_r\right)}{\prod_{r=1}^{V}\Gamma(\beta_r)}
\frac{\prod_{r=1}^{V}\Gamma\!\left(n_{(\cdot),r}^{i}+\beta_r\right)}
{\Gamma\!\left(\sum_{r=1}^{V} n_{(\cdot),r}^{i}+\beta_r\right)}.
$$

The goal of Gibbs sampling here is to approximate the distribution of $P(\boldsymbol{Z}\mid\boldsymbol{W};\alpha,\beta)$. Since $P(\boldsymbol{W};\alpha,\beta)$ is invariant with respect to $\boldsymbol{Z}$, the Gibbs sampling equations can be derived from $P(\boldsymbol{Z},\boldsymbol{W};\alpha,\beta)$ directly. The key point is to derive the following conditional probability:

$$
P\!\left(Z_{(m,n)}\mid\boldsymbol{Z}_{-(m,n)},\boldsymbol{W};\alpha,\beta\right)
= \frac{P\!\left(Z_{(m,n)},\boldsymbol{Z}_{-(m,n)},\boldsymbol{W};\alpha,\beta\right)}
{P\!\left(\boldsymbol{Z}_{-(m,n)},\boldsymbol{W};\alpha,\beta\right)},
$$

where $Z_{(m,n)}$ denotes the hidden topic variable of the $n$-th word token in the $m$-th document. We further assume that its word symbol is the $v$-th word in the vocabulary, and $\boldsymbol{Z}_{-(m,n)}$ denotes all the $Z$'s except $Z_{(m,n)}$. Note that Gibbs sampling needs only to sample a value for $Z_{(m,n)}$; according to the above probability, we do not need the exact value of

$$
P\!\left(Z_{m,n}\mid\boldsymbol{Z}_{-(m,n)},\boldsymbol{W};\alpha,\beta\right)
$$

but only the ratios among the probabilities of the values that $Z_{(m,n)}$ can take. So the above equation can be simplified as:

$$
\begin{aligned}
P(Z_{(m,n)}=k\mid\boldsymbol{Z}_{-(m,n)},\boldsymbol{W};\alpha,\beta)
&\propto P(Z_{(m,n)}=k,\boldsymbol{Z}_{-(m,n)},\boldsymbol{W};\alpha,\beta) \\[8pt]
&= \left(\frac{\Gamma\!\left(\sum_{i=1}^{K}\alpha_i\right)}{\prod_{i=1}^{K}\Gamma(\alpha_i)}\right)^{\!M}
\prod_{j\neq m}\frac{\prod_{i=1}^{K}\Gamma\!\left(n_{j,(\cdot)}^{i}+\alpha_i\right)}
{\Gamma\!\left(\sum_{i=1}^{K} n_{j,(\cdot)}^{i}+\alpha_i\right)}
\left(\frac{\Gamma\!\left(\sum_{r=1}^{V}\beta_r\right)}{\prod_{r=1}^{V}\Gamma(\beta_r)}\right)^{\!K}
\prod_{i=1}^{K}\prod_{r\neq v}\Gamma\!\left(n_{(\cdot),r}^{i}+\beta_r\right)
\frac{\prod_{i=1}^{K}\Gamma\!\left(n_{m,(\cdot)}^{i}+\alpha_i\right)}
{\Gamma\!\left(\sum_{i=1}^{K} n_{m,(\cdot)}^{i}+\alpha_i\right)}
\prod_{i=1}^{K}\frac{\Gamma\!\left(n_{(\cdot),v}^{i}+\beta_v\right)}
{\Gamma\!\left(\sum_{r=1}^{V} n_{(\cdot),r}^{i}+\beta_r\right)} \\[8pt]
&\propto \frac{\prod_{i=1}^{K}\Gamma\!\left(n_{m,(\cdot)}^{i}+\alpha_i\right)}
{\Gamma\!\left(\sum_{i=1}^{K} n_{m,(\cdot)}^{i}+\alpha_i\right)}
\prod_{i=1}^{K}\frac{\Gamma\!\left(n_{(\cdot),v}^{i}+\beta_v\right)}
{\Gamma\!\left(\sum_{r=1}^{V} n_{(\cdot),r}^{i}+\beta_r\right)} \\[8pt]
&\propto \prod_{i=1}^{K}\Gamma\!\left(n_{m,(\cdot)}^{i}+\alpha_i\right)
\prod_{i=1}^{K}\frac{\Gamma\!\left(n_{(\cdot),v}^{i}+\beta_v\right)}
{\Gamma\!\left(\sum_{r=1}^{V} n_{(\cdot),r}^{i}+\beta_r\right)}.
\end{aligned}
$$

Finally, let $n_{j,r}^{i,-(m,n)}$ have the same meaning as $n_{j,r}^{i}$ but with the token at $Z_{(m,n)}$ excluded. The above equation can be further simplified by leveraging the property of the gamma function. We first split the summation and then merge it back to obtain a $k$-independent summation, which can be dropped:

$$
\begin{aligned}
&\propto \prod_{i\neq k}\Gamma\!\left(n_{m,(\cdot)}^{i,-(m,n)}+\alpha_i\right)
\prod_{i\neq k}\frac{\Gamma\!\left(n_{(\cdot),v}^{i,-(m,n)}+\beta_v\right)}
{\Gamma\!\left(\sum_{r=1}^{V} n_{(\cdot),r}^{i,-(m,n)}+\beta_r\right)}
\Gamma\!\left(n_{m,(\cdot)}^{k,-(m,n)}+\alpha_k+1\right)
\frac{\Gamma\!\left(n_{(\cdot),v}^{k,-(m,n)}+\beta_v+1\right)}
{\Gamma\!\left(\sum_{r=1}^{V} n_{(\cdot),r}^{k,-(m,n)}+\beta_r+1\right)} \\[8pt]
={}& \prod_{i\neq k}\Gamma\!\left(n_{m,(\cdot)}^{i,-(m,n)}+\alpha_i\right)
\prod_{i\neq k}\frac{\Gamma\!\left(n_{(\cdot),v}^{i,-(m,n)}+\beta_v\right)}
{\Gamma\!\left(\sum_{r=1}^{V} n_{(\cdot),r}^{i,-(m,n)}+\beta_r\right)}
\Gamma\!\left(n_{m,(\cdot)}^{k,-(m,n)}+\alpha_k\right)
\frac{\Gamma\!\left(n_{(\cdot),v}^{k,-(m,n)}+\beta_v\right)}
{\Gamma\!\left(\sum_{r=1}^{V} n_{(\cdot),r}^{k,-(m,n)}+\beta_r\right)}
\left(n_{m,(\cdot)}^{k,-(m,n)}+\alpha_k\right)
\frac{n_{(\cdot),v}^{k,-(m,n)}+\beta_v}
{\sum_{r=1}^{V} n_{(\cdot),r}^{k,-(m,n)}+\beta_r} \\[8pt]
={}& \prod_{i}\Gamma\!\left(n_{m,(\cdot)}^{i,-(m,n)}+\alpha_i\right)
\prod_{i}\frac{\Gamma\!\left(n_{(\cdot),v}^{i,-(m,n)}+\beta_v\right)}
{\Gamma\!\left(\sum_{r=1}^{V} n_{(\cdot),r}^{i,-(m,n)}+\beta_r\right)}
\left(n_{m,(\cdot)}^{k,-(m,n)}+\alpha_k\right)
\frac{n_{(\cdot),v}^{k,-(m,n)}+\beta_v}
{\sum_{r=1}^{V} n_{(\cdot),r}^{k,-(m,n)}+\beta_r} \\[8pt]
&\propto \left(n_{m,(\cdot)}^{k,-(m,n)}+\alpha_k\right)
\frac{n_{(\cdot),v}^{k,-(m,n)}+\beta_v}
{\sum_{r=1}^{V} n_{(\cdot),r}^{k,-(m,n)}+\beta_r}
\end{aligned}
$$

Note that the same formula is derived in the article on the Dirichlet-multinomial distribution, as part of a more general discussion of integrating Dirichlet distribution priors out of a Bayesian network.
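
As a rough illustration, the final proportionality above translates almost line for line into a small collapsed Gibbs sampler. The sketch below assumes symmetric priors, integer-encoded documents, and illustrative hyperparameters, and makes no attempt at the speed-ups discussed in the next section:

```python
# A minimal collapsed Gibbs sampler implementing
# p(z=k | ...) ∝ (n_{m,·}^{k,-} + α) · (n_{·,v}^{k,-} + β) / (Σ_r n_{·,r}^{k,-} + Vβ).
# docs is a list of arrays of word ids in [0, V); K, alpha, beta are illustrative.
import numpy as np

def collapsed_gibbs(docs, K, V, alpha=0.1, beta=0.01, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), K))     # doc-topic counts  n_{m,(·)}^{k}
    nkv = np.zeros((K, V))             # topic-word counts n_{(·),v}^{k}
    nk = np.zeros(K)                   # per-topic totals  Σ_r n_{(·),r}^{k}
    z = [rng.integers(K, size=len(d)) for d in docs]   # random initial assignments
    for d, doc in enumerate(docs):                     # seed the count tables
        for w, k in zip(doc, z[d]):
            ndk[d, k] += 1; nkv[k, w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for n, w in enumerate(doc):
                k = z[d][n]                            # remove current assignment
                ndk[d, k] -= 1; nkv[k, w] -= 1; nk[k] -= 1
                p = (ndk[d] + alpha) * (nkv[:, w] + beta) / (nk + V * beta)
                k = rng.choice(K, p=p / p.sum())       # resample the topic
                z[d][n] = k                            # restore the counts
                ndk[d, k] += 1; nkv[k, w] += 1; nk[k] += 1
    return ndk, nkv, z
```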

Faster sampling

Recent research has focused on speeding up the inference of latent Dirichlet allocation to support the capture of a massive number of topics in a large number of documents. The update equation of the collapsed Gibbs sampler mentioned in the earlier section has a natural sparsity within it that can be taken advantage of. Intuitively, since each document only contains a subset of topics $K_d$, and a word also only appears in a subset of topics $K_w$, the above update equation can be rewritten to take advantage of this sparsity.[8]

$$
p(Z_{d,n}=k)\propto
\frac{\alpha\beta}{C_{k}^{\neg n}+V\beta}
+\frac{C_{k}^{d}\beta}{C_{k}^{\neg n}+V\beta}
+\frac{C_{k}^{w}\left(\alpha+C_{k}^{d}\right)}{C_{k}^{\neg n}+V\beta}
$$

In this equation we have three terms, two of which are sparse and one of which is small. We call these terms $a$, $b$ and $c$ respectively. Now, if we normalize each term by summing over all the topics, we get:

$$
A=\sum_{k=1}^{K}\frac{\alpha\beta}{C_{k}^{\neg n}+V\beta},\qquad
B=\sum_{k=1}^{K}\frac{C_{k}^{d}\beta}{C_{k}^{\neg n}+V\beta},\qquad
C=\sum_{k=1}^{K}\frac{C_{k}^{w}\left(\alpha+C_{k}^{d}\right)}{C_{k}^{\neg n}+V\beta}
$$

Here we can see that $B$ is a summation over the topics that appear in document $d$, and $C$ is likewise a sparse summation over the topics that word $w$ is assigned to across the whole corpus. $A$, on the other hand, is dense, but because of the small values of $\alpha$ and $\beta$, its value is very small compared to the other two terms.

Now, while sampling a topic, if we sample a random variable $s$ uniformly from $[0, A+B+C]$, we can check which bucket our sample lands in. Since $A$ is small, we are very unlikely to fall into this bucket; if we do, sampling a topic takes $O(K)$ time (the same as the original collapsed Gibbs sampler). However, if we fall into the other two buckets, we only need to check a subset of topics, provided we keep a record of the sparse topics. A topic can be sampled from the $B$ bucket in $O(K_d)$ time, and from the $C$ bucket in $O(K_w)$ time, where $K_d$ and $K_w$ denote the number of topics assigned to the current document and to the current word type, respectively.

Notice that after sampling each topic, updating these buckets requires only basic $O(1)$ arithmetic operations.
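
A minimal sketch of the bucket trick, assuming the per-topic terms $a$, $b$, $c$ from the decomposition above have already been computed as numpy arrays (with $b$ and $c$ nonzero only on their small index sets):

```python
# Pick a bucket in proportion to its total mass, then sample a topic within it.
# The B and C buckets only require scanning their nonzero (sparse) entries.
import numpy as np

def sample_topic(a, b, c, rng=np.random.default_rng()):
    A, B, C = a.sum(), b.sum(), c.sum()
    u = rng.uniform(0.0, A + B + C)
    if u < A:                             # dense but tiny bucket: O(K) scan
        return int(np.searchsorted(np.cumsum(a), u))
    u -= A
    if u < B:                             # sparse: topics present in the document
        nz = np.nonzero(b)[0]
        return int(nz[np.searchsorted(np.cumsum(b[nz]), u)])
    u -= B                                # sparse: topics used by this word type
    nz = np.nonzero(c)[0]
    return int(nz[np.searchsorted(np.cumsum(c[nz]), u)])
```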

Applications, extensions and similar techniques

Topic modeling is a classic problem in information retrieval. Related models and techniques are, among others, latent semantic indexing, independent component analysis, probabilistic latent semantic indexing, non-negative matrix factorization, and Gamma-Poisson distribution.

The LDA model is highly modular and can therefore be easily extended. The main field of interest is modeling relations between topics. This is achieved by using another distribution on the simplex instead of the Dirichlet. The Correlated Topic Model[9] follows this approach, inducing a correlation structure between topics by using the logistic normal distribution instead of the Dirichlet. Another extension is the hierarchical LDA (hLDA),[10] where topics are joined together in a hierarchy by using the nested Chinese restaurant process. LDA can also be extended to a corpus in which a document includes two types of information (e.g., words and names), as in the LDA-dual model.[11] Nonparametric extensions of LDA include the hierarchical Dirichlet process mixture model, which allows the number of topics to be unbounded and learnt from data, and the nested Chinese restaurant process, which allows topics to be arranged in a hierarchy whose structure is learnt from data.

As noted earlier, pLSA is similar to LDA. The LDA model is essentially the Bayesian version of the pLSA model. The Bayesian formulation tends to perform better on small datasets because Bayesian methods can avoid overfitting the data. For very large datasets, the results of the two models tend to converge. One difference is that pLSA uses a variable $d$ to represent a document in the training set. So in pLSA, when presented with a document the model hasn't seen before, we fix $\Pr(w\mid z)$ – the probability of words under topics – to be that learned from the training set, and use the same EM algorithm to infer $\Pr(z\mid d)$ – the topic distribution under $d$. Blei argues that this step is cheating because you are essentially refitting the model to the new data.

Variations on LDA have been used to automatically put natural images into categories, such as “bedroom” or “forest”, by treating an image as a document, and small patches of the image as words;[12] one of the variations is called Spatial Latent Dirichlet Allocation.[13]


On Tue, Sep 26, 2017 at 6:43 AM, Banerjee,Arunava <arunava> wrote:

Jhabru Saab:

What you are looking for is an answer to a terribly difficult problem, since it involves semantics. When I say I like a particular image, is it the shades of color that I like in it? Is it the fact that the image displays vibrant nature, that makes my heart go thump? Is it that the image has a picture of someone selling gol gappas that tugs at my nostalgic heart strings? We really don't know what we mean when we say two images are alike (semantically).

The best that the field of computer vision has managed (without tags on images) is something called “bag of words” that was originally introduced in text processing. The way bag of words is applied in vision is to take an image, break it up into patches of a certain size and then cluster the image based on a generative model.
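
For what it's worth, a minimal sketch of that bag-of-visual-words pipeline might look like the following; k-means here stands in for the generative model, and the patch size, image size, and cluster count are made up:

```python
# Cut each screenshot into patches, build a codebook by clustering the patches,
# and describe each image as a histogram over codebook "visual words".
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def patches(path, size=32):
    img = np.asarray(Image.open(path).convert("RGB").resize((256, 256)),
                     dtype=np.float32)
    return np.array([img[y:y+size, x:x+size].ravel()
                     for y in range(0, 256, size)
                     for x in range(0, 256, size)])   # 64 patches per image

def codebook(image_paths, k=50):
    all_patches = np.vstack([patches(p) for p in image_paths])
    return KMeans(n_clusters=k, n_init=10).fit(all_patches)

def bow_histogram(path, km):
    words = km.predict(patches(path))
    return np.bincount(words, minlength=km.n_clusters) / len(words)

# Two screenshots are "alike" when their histograms are close, e.g. a small
# Euclidean or cosine distance between their bow_histogram vectors.
```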

Check out “latent Dirichlet allocation”, which took it to the next step from what was then prevalent (called “latent semantic indexing”).

You should probably try a vision-based version of latent semantic indexing first.

-bong