| Abstract |
| --- |
| Some high-dimensional datasets can be modelled by assuming that there are many different linear constraints, each of which is Frequently Approximately Satisfied (FAS) by the data. The probability of a data vector under the model is then proportional to the product of the probabilities of its constraint violations. We describe three methods of learning products of constraints using a heavy-tailed probability distribution for the violations. |
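The abstract's model assigns a data vector a probability proportional to the product, over all linear constraints, of a heavy-tailed density evaluated at that constraint's violation. A minimal sketch of the unnormalized log-probability is below; the Student-t-style density `-alpha * log(1 + v^2)` and the parameter `alpha` are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def unnorm_log_prob(x, W, alpha=1.0):
    # Each row of W is one linear constraint; v holds the violations W @ x.
    v = W @ x
    # Heavy-tailed log-density per violation, up to an additive constant:
    # log p(v) ~ -alpha * log(1 + v^2). Taking the product of densities
    # corresponds to summing log-densities over constraints.
    return -alpha * np.sum(np.log1p(v ** 2))

# Two constraints on a 3-vector: x0 ~= x1 and x1 ~= x2.
W = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
x_good = np.array([1.0, 1.0, 1.0])   # satisfies both constraints
x_bad = np.array([1.0, -1.0, 3.0])   # violates both constraints
print(unnorm_log_prob(x_good, W))
print(unnorm_log_prob(x_bad, W))
```

A vector that approximately satisfies every constraint incurs near-zero penalty, while the heavy tail keeps the cost of an occasional large violation bounded, which is what lets most constraints be only *frequently* approximately satisfied.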
| Year | Venue | Keywords |
| --- | --- | --- |
| 2013 | UAI '01 Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence | heavy-tailed probability distribution, data vector, discovering multiple constraints, high-dimensional datasets, multiple constraint, different linear constraint, constraint violation, heavy tail, satisfiability, probability distribution |

| DocType | Volume | ISBN |
| --- | --- | --- |
| Journal | abs/1301.2278 | 1-55860-800-1 |

| Citations | PageRank | References |
| --- | --- | --- |
| 12 | 4.73 | 4 |
Authors (2):

| Name | Order | Citations | PageRank |
| --- | --- | --- | --- |
| Geoffrey E. Hinton | 1 | 40435 | 4751.69 |
| Yee Whye Teh | 2 | 6253 | 539.26 |