Ideas from Learning the Structure of Dynamic Probabilistic Networks


From Learning the Structure of Dynamic Probabilistic Networks by Friedman, Murphy, and Russell:

scores combine the likelihood of the data according to the network … with some penalty relating to the complexity of the network. When learning the structure of PNs, a complexity penalty is essential since the maximum-likelihood network is usually the completely connected network.
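For concreteness, here is a minimal sketch of a BIC-style penalized score (the function and argument names are my own illustration, not the paper's code). A denser network raises the likelihood but also the parameter count, so the score trades the two off:

```python
import math

def bic_score(log_likelihood: float, num_params: int, num_samples: int) -> float:
    """BIC-style network score: data likelihood minus a complexity penalty.

    The penalty grows with the number of free parameters, so the
    fully connected network no longer wins by default.
    """
    return log_likelihood - 0.5 * num_params * math.log(num_samples)
```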

Well, one nice thing about my post on Estimating the order of an infinite state MDP with Gaussian Processes is that, because of the regression, variables that don't help predict the desired value increase the prediction error. That error itself acts as a regularizer, one that metrics for scoring learned DBNs have to add artificially.
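As a toy illustration of that effect (my own sketch, not code from that post; it uses scikit-learn's GP regressor with an isotropic RBF kernel, and all data and names are made up), adding inputs that carry no information about the target typically raises the held-out error:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
n, n_train = 200, 150
x_rel = rng.uniform(-3, 3, size=(n, 1))        # the one informative input
x_junk = rng.uniform(-3, 3, size=(n, 3))       # inputs unrelated to the target
y = np.sin(x_rel[:, 0]) + 0.1 * rng.standard_normal(n)

def heldout_mse(X):
    """Fit a GP on a training split and report test mean-squared error."""
    gp = GaussianProcessRegressor(kernel=RBF(), alpha=1e-2)
    gp.fit(X[:n_train], y[:n_train])
    return np.mean((gp.predict(X[n_train:]) - y[n_train:]) ** 2)

print(heldout_mse(x_rel))                      # informative input only
print(heldout_mse(np.hstack([x_rel, x_junk])))  # junk inputs typically hurt
```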

The paper also discusses Friedman's Structural EM (SEM) algorithm (from Learning Belief Networks in the Presence of Missing Values and Hidden Variables, 1997), which can modify the structure of the DBN being learned during the M-step.
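In outline, the E-step completes the missing values in expectation under the current model, and the M-step re-scores candidate structures against those expected counts, so the structure itself can change between iterations. Here is a toy sketch of that idea on a two-variable problem (the data, variable names, and candidate structures are mine, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500
a = rng.integers(0, 2, N)                                       # parent variable A
b = (rng.random(N) < np.where(a == 1, 0.9, 0.2)).astype(float)  # B depends on A
b[rng.random(N) < 0.3] = np.nan                                 # hide 30% of B's values

def expected_counts(p_b_given_a):
    """E-step: expected joint counts n[a, b] under the current parameters."""
    counts = np.zeros((2, 2))
    for ai, bi in zip(a, b):
        if np.isnan(bi):
            q1 = p_b_given_a[int(ai)]                           # P(B=1 | A=ai)
            counts[int(ai), 1] += q1
            counts[int(ai), 0] += 1.0 - q1
        else:
            counts[int(ai), int(bi)] += 1.0
    return counts

def score_and_params(structure, counts):
    """M-step helper: expected log-likelihood of B minus a BIC penalty.

    P(A) is identical under both candidate structures, so only
    the B part of the score matters when comparing them.
    """
    if structure == "A->B":
        p = (counts[:, 1] + 1.0) / (counts.sum(axis=1) + 2.0)   # P(B=1|A), smoothed
        ll = (counts[:, 1] * np.log(p) + counts[:, 0] * np.log(1.0 - p)).sum()
        return ll - 0.5 * 2 * np.log(N), p                      # 2 free parameters
    p1 = (counts[:, 1].sum() + 1.0) / (counts.sum() + 2.0)      # marginal P(B=1)
    ll = counts[:, 1].sum() * np.log(p1) + counts[:, 0].sum() * np.log(1.0 - p1)
    return ll - 0.5 * 1 * np.log(N), np.array([p1, p1])         # 1 free parameter

# SEM loop: unlike ordinary EM, the M-step may change the structure.
p_b_given_a = np.array([0.5, 0.5])
for _ in range(10):
    counts = expected_counts(p_b_given_a)                       # E-step
    scored = {s: score_and_params(s, counts)                    # M-step over structures
              for s in ("independent", "A->B")}
    structure = max(scored, key=lambda s: scored[s][0])
    p_b_given_a = scored[structure][1]
print(structure)                                                # "A->B" should win here
```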
