If you still think engineers don’t have a gift for hitting on girls, you’re wrong.

Not long ago, a couple of Facebook programmers came up with a music tool that can play the same song on six different instruments. Then, not to be outdone, Google’s tech gurus used AI to build a brand-new music synthesizer and officially joined the ranks of professional charmers.

A Crash Course for Code Musicians

CNN, short for convolutional neural network, is the technology behind the style-transfer filters that can make your selfies look like a Van Gogh painting.

Inspired by this, engineers have been trying to apply CNNs to music, hoping to pull off something big, like getting an AI to whistle like a human.



(The guys coding away as earnestly as Lang Lang at the piano)

In practice, the AI cannot read musical notes directly. The notes first have to be converted into a representation the machine can recognize, which the CNN then decodes and reconstructs to generate a new audio file.
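
The article doesn’t show the engineers’ actual preprocessing, but a common way to turn sound into something a CNN can digest is to slice the waveform into overlapping frames and take each frame’s magnitude spectrum. Here is a minimal NumPy sketch of that idea (the function name and parameters are illustrative, not from the source):

```python
import numpy as np

def audio_to_frames(signal, frame_size=1024, hop=256):
    """Slice a 1-D audio signal into overlapping frames and take the
    magnitude spectrum of each frame, giving a 2-D grid a CNN can read."""
    frames = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        window = signal[start:start + frame_size] * np.hanning(frame_size)
        frames.append(np.abs(np.fft.rfft(window)))
    return np.stack(frames)  # shape: (time_steps, frame_size // 2 + 1)
```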

With constant tuning, they have been able to render the same piece in six different instrument styles, including an imitation of human whistling.

How does Google make AI music?

An app that only restyles existing music didn’t sound romantic enough, so Google’s engineers had a bigger idea:

“Create a unique voice for the girl.”

Magenta is Google’s in-house AI project for music, exploring how machine learning can be applied to music creation. Under it, the team released NSynth, a project led by Yotam Mann that relies on deep neural networks to learn the features of existing sounds and generate unprecedented new sounds from those features.

Yotam believes that classical instruments, whether piano, guitar, or erhu, have endured precisely because each has its own unique timbre and range. NSynth, he says, is not a simple mix of sounds, nor a reconstruction of musical styles.

Instead, it uses the acoustic features of the original sounds to synthesize an entirely new one, for example blending the features of a flute and a piano in proportion to create a sound that has never existed before.
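
That “in proportion” can be pictured as a weighted average in the model’s learned feature space. A toy sketch, assuming each sound has already been encoded into a feature vector (the names z_flute and z_piano are my own, not NSynth’s actual API):

```python
import numpy as np

def blend(z_flute, z_piano, alpha=0.5):
    """Weighted mix of two learned sound embeddings; decoding the result
    yields audio that is neither purely flute nor purely piano."""
    return alpha * z_flute + (1.0 - alpha) * z_piano

# e.g. 30% flute character, 70% piano character
z_new = blend(np.random.randn(16), np.random.randn(16), alpha=0.3)
```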

In the NSynth algorithm, an encoder first compresses a sound into a compact representation (denoted Z). A decoder network then converts Z back into audio, and the whole system is trained so that the output sounds as close to the real thing as possible.
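
The real NSynth uses a WaveNet-style autoencoder, but the compress-then-reconstruct idea can be shown with a much smaller model. A hedged PyTorch sketch, with a made-up architecture and sizes, just to illustrate the training loop:

```python
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    """Toy stand-in for the NSynth idea: encode a frame to z, decode it back."""
    def __init__(self, frame_len=1024, z_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(frame_len, 256), nn.ReLU(),
                                     nn.Linear(256, z_dim))
        self.decoder = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                     nn.Linear(256, frame_len))

    def forward(self, x):
        z = self.encoder(x)        # the compressed representation "Z"
        return self.decoder(z)     # reconstructed audio frame

model = TinyAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(8, 1024)           # a batch of (random) audio frames
recon = model(x)
loss = loss_fn(recon, x)           # push the output toward the real sound
opt.zero_grad()
loss.backward()
opt.step()
```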



To lower the barrier to entry for NSynth, Magenta partnered with Google Creative Lab to create NSynth Super, an open-source music-making hardware device.

NSynth Super comes preloaded with more than 100,000 sounds generated by the algorithm, which you can browse and select with its dials. Of course, you can also blend them into new sounds through the touch interface.
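
On the device, each corner of the touch pad is assigned an instrument, and where your finger lands decides how the four are mixed. A rough sketch of that kind of bilinear blending, written as a guess at the behavior rather than the actual NSynth Super firmware:

```python
import numpy as np

def touchpad_blend(z_tl, z_tr, z_bl, z_br, x, y):
    """Bilinear mix of four corner-instrument embeddings, weighted by a
    touch position (x, y) in [0, 1] x [0, 1]."""
    top = (1 - x) * z_tl + x * z_tr
    bottom = (1 - x) * z_bl + x * z_br
    return (1 - y) * top + y * bottom

# Touch near the top-right corner: the mix is mostly the top-right instrument.
z = touchpad_blend(*(np.random.randn(4, 16)), x=0.9, y=0.1)
```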

Professional music producers are already using it. Artistic creation needs inspiration, and this new way of synthesizing sound offers creators something genuinely different in both playability and creativity.

If you’re interested in the NSynth Super source code, schematics, and design templates, you can find them, along with a demo, on GitHub.

A soulful guy like Yotam Mann has probably rarely been single.

Supernerve Encyclopedia

Words

saturation /ˌsætʃəˈreɪʃn/ n. saturation

exaggerated /ɪɡˈzædʒəreɪtɪd/ adj. exaggerated, overstated

Phrases

fundamental frequency: the base pitch frequency of a sound

intuitive parameters: parameters that are intuitive to adjust

single dog: (slang) a single person