BayesiaLab Tech Talk: Measuring Bayesian Network Performance
BayesiaLab has made machine learning of Bayesian networks remarkably easy. Within seconds, you can discover complex structures consisting of hundreds of nodes and view them in 2D, 3D, and even VR.
Such a network may look compelling and even seem plausible, yet any discovered network is selected from an astronomically large number of possible networks, often exceeding the number of atoms in the universe. So, how do we know that the network BayesiaLab found is the best among the gazillions of other possibilities? Typically, we don't. In fact, with only a few exceptions, we can't know for sure. Instead, our objective is to ascertain that BayesiaLab found a very good model among all the possibilities.
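The size of that search space is easy to underestimate. The number of possible directed acyclic graphs on n labeled nodes can be computed with Robinson's recurrence; the snippet below is a minimal sketch (the function name `count_dags` is ours, not a BayesiaLab API), and it shows that well before 100 nodes the count dwarfs the roughly 10^80 atoms estimated to be in the observable universe.

```python
from math import comb

def count_dags(n: int) -> int:
    """Number of labeled DAGs on n nodes, via Robinson's recurrence:
    a(m) = sum_{k=1..m} (-1)^(k+1) * C(m, k) * 2^(k(m-k)) * a(m-k), a(0) = 1.
    """
    a = [1]  # a[0] = 1: the empty graph
    for m in range(1, n + 1):
        total = 0
        for k in range(1, m + 1):
            # k = number of nodes with no incoming edge from the rest
            total += (-1) ** (k + 1) * comb(m, k) * 2 ** (k * (m - k)) * a[m - k]
        a.append(total)
    return a[n]

print(count_dags(3))   # → 25
print(count_dags(4))   # → 543
```

Already at 25 nodes the count exceeds 10^80, so exhaustively scoring every candidate network is out of the question; heuristic search and subsequent validation are the only practical route.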
As with any model, we need to address questions such as: How well does the model fit the data? How complex is the model? Could it be overfitted? How much would the model change given slightly different or noisier data? These are fair questions, and any analyst working with BayesiaLab must be able to answer them.
BayesiaLab contains a plethora of functions that deal exclusively with validating the structure of models and assessing their inference performance. These include target performance analysis, compression analysis, multi-target analysis, structural coefficient analysis, jackknife, bootstrap, k-fold cross-validation, and data perturbation, to name just a few.
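To make one of these resampling methods concrete: k-fold cross-validation partitions the data into k disjoint folds, then repeatedly learns on k-1 folds and evaluates on the held-out one. The sketch below illustrates only the generic fold-splitting principle in plain Python; it is not BayesiaLab's implementation, and the helper name `k_fold_indices` is ours.

```python
import random

def k_fold_indices(n_samples: int, k: int = 5, seed: int = 0):
    """Yield (train, test) index lists for k-fold cross-validation.

    Each sample appears in exactly one test fold; the remaining
    samples form the corresponding training set.
    """
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)          # reproducible shuffle
    folds = [idx[i::k] for i in range(k)]     # k roughly equal folds
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test

for train, test in k_fold_indices(10, k=5):
    print(len(train), len(test))              # → 8 2 on each of 5 lines
```

The payoff of schemes like this (and of jackknife and bootstrap) is an estimate of how stable the learned structure and its predictions are under perturbations of the data, which speaks directly to the overfitting question above.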
In this webinar, Dr. Lionel Jouffe, Bayesia's CEO, will give you a tour of all validation-related features in BayesiaLab and explain which methods are appropriate for specific applications.