
Keras Implementation of "Look, Listen and Learn" Model


Kajiyu/LLLNet


About

This is a Keras implementation of the "Look, Listen and Learn" model from the paper by R. Arandjelović and A. Zisserman at DeepMind. The model learns cross-modal features between audio and images.

Core Concept

Audio-visual correspondence (AVC) task: given a video frame and a short audio clip, predict whether they come from the same video.
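As a rough sketch of how AVC training data can be set up (this is illustrative NumPy code, not the repository's actual pipeline; the array shapes and the `make_avc_pairs` helper are assumptions): each frame is paired either with audio from the same video (label 1) or with audio from a different video (label 0).

```python
import numpy as np

rng = np.random.default_rng(0)

def make_avc_pairs(frames, audio, rng):
    """Build audio-visual correspondence (AVC) training pairs.

    frames: (N, H, W, 3) video frames; audio: (N, T, F) audio
    spectrograms, where frames[i] and audio[i] come from the same
    video. Shapes and names here are illustrative assumptions.
    """
    n = len(frames)
    # Positive pairs: a frame with the audio from the same video.
    pos = list(zip(frames, audio, np.ones(n, dtype=int)))
    # Negative pairs: a frame with audio from a *different* video,
    # obtained by a nonzero cyclic shift of the audio array.
    shift = int(rng.integers(1, n))
    neg = list(zip(frames, np.roll(audio, shift, axis=0),
                   np.zeros(n, dtype=int)))
    return pos + neg

frames = rng.standard_normal((8, 32, 32, 3))   # toy frames
audio = rng.standard_normal((8, 257, 199))     # toy spectrograms
pairs = make_avc_pairs(frames, audio, rng)
```

Sampling negatives from other videos is what makes correspondence a self-supervised task: the labels come for free from the pairing, not from human annotation.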

Differences from the Original Model

  • SqueezeNet is used as the visual CNN.

Model Figure
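For orientation, here is a minimal NumPy sketch of the fusion step that sits on top of the two subnetworks: the visual and audio embeddings are concatenated and passed through a small classifier that outputs correspondence probabilities. The embedding size (512), hidden width (128), and weight names are assumptions for illustration, not the repository's actual Keras code; swapping SqueezeNet in only changes how the visual embedding is produced.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fusion_head(v_emb, a_emb, params):
    """Correspondence classifier over the two subnetwork outputs.

    v_emb, a_emb: (batch, 512) visual and audio embeddings
    (dimensions assumed). Concatenate, apply a 128-unit ReLU
    layer, then a 2-way softmax: [P(mismatch), P(correspond)].
    """
    x = np.concatenate([v_emb, a_emb], axis=-1)         # (batch, 1024)
    h = np.maximum(x @ params["W1"] + params["b1"], 0)  # (batch, 128)
    return softmax(h @ params["W2"] + params["b2"])     # (batch, 2)

params = {
    "W1": rng.standard_normal((1024, 128)) * 0.05,
    "b1": np.zeros(128),
    "W2": rng.standard_normal((128, 2)) * 0.05,
    "b2": np.zeros(2),
}
probs = fusion_head(rng.standard_normal((4, 512)),
                    rng.standard_normal((4, 512)), params)
```

Training the whole network end-to-end on this binary objective is what forces the two subnetworks to produce embeddings that are comparable across modalities.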
