Average Ratings: 0 Ratings
Description (AudioLM)
AudioLM is an audio language model that generates high-quality, coherent speech and piano music by learning from raw audio alone, without text transcripts or symbolic representations. It represents audio hierarchically using two types of discrete tokens: semantic tokens, derived from a self-supervised model, which capture phonetic and melodic structure along with long-range context; and acoustic tokens, produced by a neural codec, which preserve speaker characteristics and fine waveform detail. Generation runs in three Transformer stages: the first predicts semantic tokens to establish the overall structure, the second generates coarse acoustic tokens, and the third produces the fine acoustic tokens used for detailed synthesis. From just a few seconds of input audio, AudioLM generates seamless continuations that preserve voice identity and prosody in speech, and melody, harmony, and rhythm in music. In human evaluations, these synthetic continuations were nearly indistinguishable from real recordings, pointing toward applications in entertainment and communication where realistic sound reproduction is paramount.
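The three-stage pipeline described above can be sketched as a toy data flow. Everything here is illustrative: the function names and vocabulary sizes are invented, and random draws stand in for the actual Transformer stages (AudioLM has no public reference implementation). Only the flow, semantic tokens to coarse acoustic tokens to fine acoustic tokens, mirrors the description.

```python
import random

random.seed(0)  # reproducibility for the toy example

# Illustrative vocabulary sizes, not the real model's
SEMANTIC_VOCAB = 512
ACOUSTIC_VOCAB = 1024

def stage1_semantic(prompt, n_new):
    """Stage 1: extend the semantic token sequence (overall structure)."""
    return prompt + [random.randrange(SEMANTIC_VOCAB) for _ in range(n_new)]

def stage2_coarse(semantic):
    """Stage 2: coarse acoustic tokens conditioned on the semantic tokens."""
    return [random.randrange(ACOUSTIC_VOCAB) for _ in semantic]

def stage3_fine(coarse):
    """Stage 3: pair each coarse token with a fine acoustic token."""
    return [(c, random.randrange(ACOUSTIC_VOCAB)) for c in coarse]

def continue_audio(prompt_semantic, n_new=20):
    """Run the three stages; a codec decoder would then render a waveform."""
    semantic = stage1_semantic(prompt_semantic, n_new)
    coarse = stage2_coarse(semantic)
    return stage3_fine(coarse)

tokens = continue_audio([7, 42, 99])  # a short token prompt
print(len(tokens))  # 23: 3 prompt steps + 20 continuation steps
```

In the real system, the hierarchy matters because the semantic stage can plan long-range structure cheaply, while the later stages fill in acoustic detail conditioned on that plan.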
Description (MusicGen)
Meta's MusicGen is an open-source deep-learning model that generates short musical clips from text descriptions. Trained on 20,000 hours of music, including full tracks and single-instrument samples, it produces 12 seconds of audio per prompt. Users can also supply reference audio from which a broad melody is extracted and combined with the text description; all generated samples use the melody model for consistency. The model can be run on a user's own GPU or in Google Colab by following the instructions in the repository. Architecturally, MusicGen is a single-stage transformer paired with an efficient token-interleaving scheme, which removes the need for a cascade of several models. This lets it generate high-quality audio conditioned on both text and melodic input, giving users finer control over the final output and making MusicGen a versatile tool for music creation and exploration.
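The token interleaving mentioned above can be illustrated with a delay pattern: a neural codec emits several parallel codebook streams per audio frame, and shifting codebook k right by k steps lets a single autoregressive transformer predict all codebooks column by column, with no cascaded models. The function name and token values below are made up for illustration; this is a sketch of the idea, not the audiocraft library's API.

```python
# Placeholder for grid positions that hold no token yet
PAD = -1

def delay_interleave(codes):
    """codes: list of K codebook streams, each of length T.
    Returns a (K, T + K - 1) grid in which codebook k is delayed k steps."""
    k_books, t_frames = len(codes), len(codes[0])
    width = t_frames + k_books - 1
    grid = [[PAD] * width for _ in range(k_books)]
    for k, stream in enumerate(codes):
        for t, tok in enumerate(stream):
            grid[k][t + k] = tok  # shift codebook k right by k positions
    return grid

codes = [[10, 11, 12],   # codebook 0 (coarsest)
         [20, 21, 22],   # codebook 1
         [30, 31, 32]]   # codebook 2 (finest)
for row in delay_interleave(codes):
    print(row)
# [10, 11, 12, -1, -1]
# [-1, 20, 21, 22, -1]
# [-1, -1, 30, 31, 32]
```

Because each column mixes tokens from different frames, one left-to-right pass over columns predicts every codebook while still letting finer codebooks condition on coarser ones from earlier steps.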
API Access (AudioLM)
Has API
API Access (MusicGen)
Has API
Pricing Details (AudioLM)
No price information available.
Free Trial
Free Version
Pricing Details (MusicGen)
Free
Free Trial
Free Version
Deployment (AudioLM)
Web-Based
On-Premises
iPhone App
iPad App
Android App
Windows
Mac
Linux
Chromebook
Deployment (MusicGen)
Web-Based
On-Premises
iPhone App
iPad App
Android App
Windows
Mac
Linux
Chromebook
Customer Support (AudioLM)
Business Hours
Live Rep (24/7)
Online Support
Customer Support (MusicGen)
Business Hours
Live Rep (24/7)
Online Support
Types of Training (AudioLM)
Training Docs
Webinars
Live Training (Online)
In Person
Types of Training (MusicGen)
Training Docs
Webinars
Live Training (Online)
In Person
Vendor Details (AudioLM)
Company Name: Google
Country: United States
Website: research.google/blog/audiolm-a-language-modeling-approach-to-audio-generation/
Vendor Details (MusicGen)
Company Name: Meta
Website: huggingface.co/spaces/facebook/MusicGen