pangolin

Category: Mathematical computation
Development tool: Jupyter Notebook
File size: 0KB
Downloads: 0
Upload date: 2023-06-27 12:57:40
Uploader: sh-1993
Description: probabilistic programming focused on fun

File list:
demos/ (0, 2023-11-22)
demos/demo-8-schools.ipynb (196176, 2023-11-22)
demos/demo-GP-regression.ipynb (47629, 2023-11-22)
demos/demo-IRT.ipynb (662368, 2023-11-22)
demos/demo-logistic-regression.ipynb (76679, 2023-11-22)
exploratory notebooks/ (0, 2023-11-22)
exploratory notebooks/demo-Hierarchical logistic regression.ipynb (11276, 2023-11-22)
exploratory notebooks/demo-JAGS.ipynb (10795, 2023-11-22)
exploratory notebooks/demo-calculate.ipynm.ipynb (73192, 2023-11-22)
exploratory notebooks/demo-indexing.ipynb (7696, 2023-11-22)
exploratory notebooks/demo-infer.ipynb (341104, 2023-11-22)
exploratory notebooks/demo-interface.ipynb (116930, 2023-11-22)
exploratory notebooks/demo-lda.ipynb (136913, 2023-11-22)
exploratory notebooks/demo-printing.ipynb (7731, 2023-11-22)
exploratory notebooks/demo-repeated-transforms.ipynb (463990, 2023-11-22)
exploratory notebooks/demo-stan.ipynb (43372, 2023-11-22)
exploratory notebooks/demo-transforms.ipynb (1898891, 2023-11-22)
exploratory notebooks/understanding-STAN-types.ipynb (22460, 2023-11-22)
exploratory notebooks/understanding-missing-data.ipynb (3247, 2023-11-22)
exploratory notebooks/understanding-numpyro.ipynb (114772, 2023-11-22)
pangolin-jags/ (0, 2023-11-22)
pangolin-jags/Example-GLM.ipynb (97552, 2023-11-22)
pangolin-jags/Example-GP-regression.ipynb (28363, 2023-11-22)
pangolin-jags/Example-Gaussian-Gaussian.ipynb (96595, 2023-11-22)
pangolin-jags/Example-Hierarchical-Regression.ipynb (26378, 2023-11-22)
pangolin-jags/Example-Indexing.ipynb (7139, 2023-11-22)
pangolin-jags/Example-Matrix-multiplication-from-vmap.ipynb (6212, 2023-11-22)
pangolin-jags/Example-Polynomial-Regression.ipynb (122119, 2023-11-22)
pangolin-jags/Example-Simple-Regression.ipynb (53013, 2023-11-22)
pangolin-jags/Example-Simulation-Based-Calibration.ipynb (82162, 2023-11-22)
pangolin-jags/Example-Time-series.ipynb (40385, 2023-11-22)
pangolin-jags/Example-Tutorial.ipynb (380542, 2023-11-22)
pangolin-jags/Example-Using-scan.ipynb (355811, 2023-11-22)
pangolin-jags/Example-vmap.ipynb (7007, 2023-11-22)
pangolin-jags/pangolin.py (65416, 2023-11-22)
pangolin-jags/regression.png (39589, 2023-11-22)
pangolin-jags/test_vmap_scan.py (11833, 2023-11-22)
... ...

Pangolin is a probabilistic inference research project. To get a quick feel for how it works, see these examples:

* [8 schools](https://github.com/justindomke/pangolin/blob/master/demos/demo-8-schools.ipynb)
* [logistic regression](https://github.com/justindomke/pangolin/blob/master/demos/demo-logistic-regression.ipynb)
* [GP regression](https://github.com/justindomke/pangolin/blob/master/demos/demo-GP-regression.ipynb)
* [Item-response theory models](https://github.com/justindomke/pangolin/blob/master/demos/demo-IRT.ipynb)

It has the following goals:

* **Feel like numpy.** Provide an interface for interacting with probability distributions that feels natural to anyone who has played around with numpy. As much as possible, using Pangolin should feel like a natural extension of that ecosystem rather than a new language.
* **Look like math.** Where possible, calculations should resemble mathematical notation. Exceptions are allowed when algorithmic limitations make this impossible, or where accepted mathematical notation is bad.
* **Gradual enhancement.** There should be no barrier between using it as a "calculator" for simple one-liner probabilistic calculations and building full custom Bayesian models.
* **Multiple backends.** We have used lots of different PPLs. Some of our favorites are:
  * [BUGS](https://www.mrc-bsu.cam.ac.uk/software/bugs/openbugs)
  * [JAGS](https://mcmc-jags.sourceforge.io/)
  * [Stan](https://mc-stan.org/)
  * [NumPyro](https://num.pyro.ai/)
  * [PyMC](https://www.pymc.io/)
  * [TensorFlow Probability](https://www.tensorflow.org/probability)

  We want users to be able to write a model *once* and then seamlessly use any of these to actually do inference.
* **Support program transformations.** Often, users of different PPLs need to manually "transform" their model to get good results (e.g. manually integrating out discrete latent variables, using non-centered transformations, etc.). We want to provide an "intermediate representation" that makes such transformations as easy to define as possible.
* **Support inference algorithm developers.** Existing probabilistic programming languages are often quite "opinionated" about how inference should proceed. This can make it difficult to apply certain kinds of inference algorithms. We want to provide a representation that makes it as easy as possible for people to create new backends with new inference algorithms.
* **No passing strings as variable names.** We love [NumPyro](https://num.pyro.ai/) and [PyMC](https://www.pymc.io/). But we don't love writing `mean = sample('mean', Normal(0, 1))` or `mean = Normal('mean', 0, 1)`. And we *really* don't love programmatically generating variable-name strings inside a for loop. We appreciate that TensorFlow Probability made this mostly optional, but we feel this idea was a mistake and we aren't going to be so compromising. (A runnable contrast follows after this list.)

It remains to be seen to what degree all these goals can be accomplished at the same time. (That's what makes this a research project!)
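To make the last goal concrete, here is a minimal, runnable NumPyro snippet showing the string-naming pattern described above. NumPyro is used only as the contrast; this is not Pangolin code.

```python
# A runnable NumPyro example of the string-name pattern: every latent
# variable is registered under a string that duplicates the Python variable
# name, and samples are later retrieved by that same string.
import jax
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def model(y=None):
    mean = numpyro.sample("mean", dist.Normal(0.0, 1.0))  # name passed as a string
    numpyro.sample("y", dist.Normal(mean, 1.0), obs=y)    # observed node, also named

mcmc = MCMC(NUTS(model), num_warmup=500, num_samples=500)
mcmc.run(jax.random.PRNGKey(0), y=2.0)
mean_samples = mcmc.get_samples()["mean"]  # keyed by the string once again
```

In the style Pangolin is aiming for, the ordinary Python variable is the only handle a random variable needs.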
Here are some design principles which we think serve the above goals:

* **Unopinionated.** Probabilistic inference is an open research question. We don't know the best way to do it. So, where possible, we should avoid making assumptions about how inference will be done. It should be easy to write a new program transformation or create a new inference backend without having to conform to some API we invented.
* **Immutability.** Pangolin enforces that all random variables and distributions are frozen after creation. This is crucial to how much of the code works. We also think it makes it easier to reason about what's happening under the hood and makes the system easier to extend.
  * All the *current* inference methods and transformations are also immutable. (You do `xs = sample(x)` rather than something like `mcmc.run(); xs = mcmc.get_samples()`.) However, in keeping with being *unopinionated*, there's nothing stopping you from writing an inference method that works in a different way, including being mutable.
* **Follow existing conventions.** Wherever possible, we borrow syntax from our favorite libraries (illustrated in the sketch after this list):
  * Distribution names and arguments are borrowed from Stan.
  * Random variables are (with great effort) indexed exactly as in NumPy, including full support for advanced indexing with slices, multi-dimensional indices, etc.
  * For array-valued random variables, the `@` matrix-multiplication operator behaves exactly as in NumPy.
  * Array operations also behave as in NumPy. For example, if `x` is a random variable with `x.shape=(7,2)`, you can do `y=pangolin.sum(x,axis=1)` to get a new random variable with `y.shape=(7,)`. (Currently only a few operations are supported.)
  * Random variables can be placed in PyTrees and manipulated as in JAX.
  * `pangolin.vmap` has exactly the same syntax as `jax.vmap`.

At the moment, we **do not advise** trying to use this code. However, an earlier version of Pangolin, based on much the same ideas but supporting only JAGS as a backend, is available. It can be found, with documentation, in the [`pangolin-jags`](https://github.com/justindomke/pangolin/blob/master/pangolin-jags) directory.
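As a grounding illustration of the conventions just listed, the sketch below runs the corresponding operations on plain NumPy/JAX arrays. The `pangolin.*` forms in the comments are the ones quoted in the text above; only the NumPy/JAX calls are actually executed here.

```python
# The array conventions Pangolin borrows, demonstrated on plain NumPy/JAX
# arrays.  Per the principles above, Pangolin random variables are meant to
# support the same operations.
import numpy as np
import jax
import jax.numpy as jnp

x = np.random.randn(7, 2)    # stands in for a random variable with x.shape == (7, 2)
w = np.random.randn(2, 3)

y = np.sum(x, axis=1)        # Pangolin: y = pangolin.sum(x, axis=1)  ->  y.shape == (7,)
z = x @ w                    # the @ operator behaves exactly as in NumPy
first_row = x[0, :]          # NumPy-style indexing, including advanced indexing

# pangolin.vmap is stated to share jax.vmap's syntax exactly:
row_sums = jax.vmap(jnp.sum)(jnp.asarray(x))   # maps jnp.sum over the leading axis

assert y.shape == (7,) and z.shape == (7, 3) and first_row.shape == (2,)
```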
