geyang/grammar_variational_autoencoder

pytorch implementation of grammar variational autoencoder

So far, this repo implements the grammar variational autoencoder:

encoder: grammar_variational_encoder (figure)

decoder: grammar_variational_decoder (figure)
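
For orientation, here is a minimal PyTorch sketch of the two modules, assuming the conv-and-dense encoder and stacked-GRU decoder described in the GVAE paper; all layer sizes and names are illustrative, not the repo's exact configuration.

    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        def __init__(self, n_rules=15, max_len=15, z_dim=56):
            super().__init__()
            # 1-D convolutions over the one-hot sequence of production rules
            self.conv = nn.Sequential(
                nn.Conv1d(n_rules, 9, kernel_size=2), nn.ReLU(),
                nn.Conv1d(9, 9, kernel_size=3), nn.ReLU(),
                nn.Conv1d(9, 10, kernel_size=4), nn.ReLU(),
            )
            conv_out = 10 * (max_len - 6)          # channels * remaining sequence length
            self.mu = nn.Linear(conv_out, z_dim)
            self.log_var = nn.Linear(conv_out, z_dim)

        def forward(self, x):                      # x: (batch, n_rules, max_len), one-hot
            h = self.conv(x).flatten(1)
            return self.mu(h), self.log_var(h)

    class Decoder(nn.Module):
        def __init__(self, n_rules=15, max_len=15, z_dim=56, hidden=100):
            super().__init__()
            self.max_len = max_len
            self.fc = nn.Linear(z_dim, hidden)
            self.gru = nn.GRU(hidden, hidden, num_layers=3, batch_first=True)
            self.out = nn.Linear(hidden, n_rules)

        def forward(self, z):                      # z: (batch, z_dim)
            h = self.fc(z).unsqueeze(1).repeat(1, self.max_len, 1)
            h, _ = self.gru(h)
            return self.out(h)                     # logits over production rules per step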

training performance

  • add grammar masking (see the masking sketch below)
  • add MSE metric

training_loss (figure)
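
Grammar masking constrains the decoder so it can only pick production rules whose left-hand side matches the non-terminal currently on top of the parse stack. A minimal sketch of that masking step, assuming a precomputed 0/1 matrix masks of shape (n_nonterminals, n_rules), could look like this:

    import torch

    def mask_logits(logits, stack_tops, masks):
        """logits: (batch, n_rules); stack_tops: (batch,) indices of the
        non-terminal on top of each sample's parse stack."""
        valid = masks[stack_tops]                            # (batch, n_rules) 0/1 mask
        return logits.masked_fill(valid == 0, float('-inf'))

    # usage: invalid rules get exactly zero probability mass
    # probs = torch.softmax(mask_logits(logits, stack_tops, masks), dim=-1)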

Todo

  • what type of accuracy metric do we use?
  • train
    • pin down the exact encoder convolution configuration
    • read the dynamic convolutional network (DCNN) paper
      • what evaluation metrics does the DCNN paper use?
        • sentiment analysis
  • think of a demo
  • take a closer look at the paper

Done

  • data
  • model

Usage (To Run)

All of the scripts below are included in the ./Makefile. To install and run training, you can just run make. For more details, take a look at the ./Makefile.

  1. Install the dependencies via
    pip install -r requirement.txt
  2. Fire up a visdom server instance to show the visualizations. Run it in a dedicated prompt to keep it alive (see the logging sketch below).
    python -m visdom.server
  3. In a new prompt, run
    python grammar_vae.py
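
Step 2 only starts the dashboard; the training script is what pushes curves to it. A generic sketch of logging a loss curve to visdom (the repo's own logging in grammar_vae.py may differ) looks like this:

    import numpy as np
    import visdom

    viz = visdom.Visdom()                          # connects to localhost:8097 by default
    for step, loss in enumerate([1.0, 0.7, 0.5]):  # stand-ins for real training losses
        viz.line(X=np.array([step]), Y=np.array([loss]),
                 win='training_loss',
                 update='append' if step > 0 else None,
                 opts=dict(title='training loss'))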

Program Induction Project Proposal

  1. specify typical program induction problems
  2. make model for each specific problem
  3. get baseline performance for each problem

Todo

  • read more papers, get ideas for problems
  • add grammar mask
  • add text MSE for measuring the training result.

A list of the problems each paper tackles with its algorithms:

Grammar Variational Autoencoder: https://arxiv.org/abs/1703.01925

  • section 4.1, figure: arithmetic expressions limited to 15 rules. Test MSE; exponential functions produce large errors, so they use $$\log(1 + MSE)$$ instead (a quick numeric illustration follows the table below). <= this seems like a pretty dumb way to measure.
  • the chemical metric is dicier; they use a domain-specific chemical metric.
  • Why don't they use the math expression's result? (not fine-grained enough?)
  • Visualization: the result is smoother (color is logP). <= trivial result
  • accuracy, table 2, row 1: math expressions

    method   | frac. valid   | avg. score
    GVAE     | 0.990 ± 0.001 | 3.47 ± 0.24
    My score | 0.16 ± 0.001  | todo: need to measure MSE
    CVAE     | 0.31 ± 0.001  | 4.75 ± 0.25
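
To illustrate the point about $$\log(1 + MSE)$$ above: a single badly reconstructed exponential can blow up the raw MSE by many orders of magnitude, while the log transform keeps the reported scores on a comparable scale. The numbers below are made up.

    import numpy as np

    errors = np.array([0.5, 0.8, 1e6])     # hypothetical per-expression MSEs
    print(np.log1p(errors))                # -> [ 0.405  0.588 13.816]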

Automatic Chemical Design: https://arxiv.org/abs/1610.02415

The architecture above in fact comes from this paper. There are a few concerns with how the network was implemented:

  • there is a dense layer in front of the GRU; the activation is ReLU.
  • the last GRU layer uses teacher forcing (sketched below). In my implementation, $$\beta$$ is set to $$0.3$$.
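
A minimal sketch of that teacher-forcing step, assuming $$\beta$$ is the probability of feeding the ground-truth rule back into the decoder at each step (the repo's exact use of $$\beta$$ may differ):

    import random

    def choose_next_input(ground_truth_rule, predicted_rule, beta=0.3):
        # With probability beta feed the target rule, otherwise the model's
        # own previous prediction.
        return ground_truth_rule if random.random() < beta else predicted_rule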

Synthesizing Program Input Grammars: https://arxiv.org/abs/1608.01723

Percy Liang; learns a CFG from small examples.

A Syntactic Neural Model for General-Purpose Code Generation: https://arxiv.org/abs/1704.01696

Needs a close reading of the model and its performance.

A Hybrid Convolutional Variational Autoencoder for Text Generation: https://arxiv.org/abs/1702.02390

Tons of characterization in the paper; a very worthwhile read for understanding the methodologies.

Reed, Scott and de Freitas, Nando. Neural programmer-interpreters (ICLR), 2015.

see note in another repo.

Mou, Lili, Men, Rui, Li, Ge, Zhang, Lu, and Jin, Zhi. On end-to-end program generation from user intention by deep neural networks. arXiv preprint arXiv:1510.07211, 2015.

  • inductive programming
  • deductive programming
  • the model (an RNN) is simple and crude and does not offer much insight.

Jojic, Vladimir, Gulwani, Sumit, and Jojic, Nebojsa. Probabilistic inference of programs from input/output examples. 2006.

Gaunt, Alexander L, Brockschmidt, Marc, Singh, Rishabh, Kushman, Nate, Kohli, Pushmeet, Taylor, Jonathan, and Tarlow, Daniel. TerpreT: A probabilistic programming language for program induction. arXiv preprint arXiv:1608.04428, 2016.

Ellis, Kevin, Solar-Lezama, Armando, and Tenenbaum, Josh. Unsupervised learning by program synthesis. In Advances in Neural Information Processing Systems, pp. 973–981, 2015.

Bunel, Rudy, Desmaison, Alban, Kohli, Pushmeet, Torr, Philip HS, and Kumar, M Pawan. Adaptive neural compilation. arXiv preprint arXiv:1605.07969, 2016.

Riedel, Sebastian, Bošnjak, Matko, and Rocktäschel, Tim. Programming with a differentiable Forth interpreter. arXiv preprint arXiv:1605.06640, 2016.

