Collections including paper arxiv:2405.01434

- Contrastive Learning for Many-to-many Multilingual Neural Machine Translation
  Paper • 2105.09501 • Published
- Cross-modal Contrastive Learning for Speech Translation
  Paper • 2205.02444 • Published
- ByteTransformer: A High-Performance Transformer Boosted for Variable-Length Inputs
  Paper • 2210.03052 • Published
- Diffusion Glancing Transformer for Parallel Sequence to Sequence Learning
  Paper • 2212.10240 • Published • 1

- Be-Your-Outpainter: Mastering Video Outpainting through Input-Specific Adaptation
  Paper • 2403.13745 • Published • 11
- StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation
  Paper • 2405.01434 • Published • 56
- KAN: Kolmogorov-Arnold Networks
  Paper • 2404.19756 • Published • 115

- Mora: Enabling Generalist Video Generation via A Multi-Agent Framework
  Paper • 2403.13248 • Published • 78
- Efficient Video Diffusion Models via Content-Frame Motion-Latent Decomposition
  Paper • 2403.14148 • Published • 21
- StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text
  Paper • 2403.14773 • Published • 11
- StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation
  Paper • 2405.01434 • Published • 56

- Is Sora a World Simulator? A Comprehensive Survey on General World Models and Beyond
  Paper • 2405.03520 • Published • 1
- StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation
  Paper • 2405.01434 • Published • 56
- Data-centric Artificial Intelligence: A Survey
  Paper • 2303.10158 • Published • 1

- StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation
  Paper • 2405.01434 • Published • 56
- TransPixar: Advancing Text-to-Video Generation with Transparency
  Paper • 2501.03006 • Published • 26
- CPA: Camera-pose-awareness Diffusion Transformer for Video Generation
  Paper • 2412.01429 • Published
- Ingredients: Blending Custom Photos with Video Diffusion Transformers
  Paper • 2501.01790 • Published • 8