To produce fresh data, VAEs learn the distribution of the training data and then sample from it. Some examples of VAEs are as follows (a short code sketch follows the list):
• It can generate new images similar to the given training set. For instance, a VAE trained on images of faces can generate new, realistic-looking faces.
• It can produce new text that follows the same style and structure as the training data, assisting writers with drafts and ideas.
• It can be used to compose new music pieces or create sound effects.
[Figure: VAE architecture, showing Input → Encoder → Latent Space (sample from the learned distribution) → Decoder → Output]
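The sample-from-latent-space loop in the figure can be written out in a few lines of code. Below is a minimal sketch in Python using PyTorch; the library choice, layer sizes, and 784-pixel flattened image input are illustrative assumptions, not details from this chapter.

import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=2):
        super().__init__()
        # Encoder: compress the input into the parameters (mean and
        # log-variance) of a distribution over the latent space.
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
        self.to_mean = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)
        # Decoder: map a latent sample back to something input-shaped.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mean, logvar = self.to_mean(h), self.to_logvar(h)
        # Sample from the learned distribution (reparameterization trick).
        z = mean + torch.exp(0.5 * logvar) * torch.randn_like(mean)
        return self.decoder(z), mean, logvar

# Producing fresh data: pick a random point in latent space and decode it.
vae = VAE()
z = torch.randn(1, 2)
new_sample = vae.decoder(z)   # untrained here, so the output is noise
print(new_sample.shape)       # torch.Size([1, 784])

After training on real images, decoding random latent points is exactly the step that yields new, realistic-looking samples.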
Recurrent Neural Networks (RNNs)
RNNs are a special class of neural networks that excel at handling sequential data, like music or text. They are well suited to tasks where the order of the data points is important, because they can remember previous inputs and use this information to influence the current output.
Some examples of RNNs are as follows (a short code sketch follows the list):
• It can generate novel text in the style of a specific author or genre, like creating new sentences that mimic the style of Shakespeare or generating dialogue for a chatbot.
• It can predict the next character or word in a sequence, like the autocomplete features in text editors and predictive text input on smartphones.
• It can be used to predict future values in a time series, such as stock prices or weather data, by learning patterns from historical data.
[Figure: RNN architecture, showing Input Layer → Hidden Layer (with a recurrence loop) → Output Layer]
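The "memory" described above is a hidden state that is carried forward one step at a time. Below is a minimal next-character prediction sketch in Python using PyTorch; the library choice, vocabulary, layer sizes, and one_hot helper are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

vocab = "abcdefghijklmnopqrstuvwxyz "
rnn = nn.RNN(input_size=len(vocab), hidden_size=16, batch_first=True)
to_scores = nn.Linear(16, len(vocab))   # scores for the next character

def one_hot(text):
    # Encode each character as a one-hot vector: shape (1, len, vocab).
    idx = torch.tensor([vocab.index(c) for c in text])
    return F.one_hot(idx, len(vocab)).float().unsqueeze(0)

# The hidden state accumulates information about every character seen
# so far, so the next-character prediction depends on the whole prefix.
x = one_hot("hello worl")
output, hidden = rnn(x)
next_scores = to_scores(output[:, -1])          # last time step only
predicted = vocab[next_scores.argmax().item()]  # untrained, so arbitrary
print("predicted next character:", repr(predicted))

With trained weights, this same loop powers autocomplete: feed in the characters typed so far and read off the most likely next character.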
Autoencoders (AEs)
These are neural networks that have been trained to learn a compressed representation of data. They work by compressing the data into a lower-dimensional form (encoding) and then decompressing it back to its original form (decoding). This process helps the network learn the most important features of the data.
[Figure: Autoencoder architecture, showing Input → Encoder → Latent Space → Decoder → Output]
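The compress-then-decompress cycle can be sketched directly. Below is a minimal Python/PyTorch sketch; the library choice and the 784-to-32 compression are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(784, 32), nn.ReLU())     # encode: 784 -> 32
decoder = nn.Sequential(nn.Linear(32, 784), nn.Sigmoid())  # decode: 32 -> 784

x = torch.rand(1, 784)          # stand-in for a flattened 28x28 image
code = encoder(x)               # the compressed, lower-dimensional form
reconstruction = decoder(code)  # decompressed back to the original shape

# Training minimizes the reconstruction error, which forces the
# 32-number code to keep only the most important features of the data.
loss = F.mse_loss(reconstruction, x)
print(code.shape, loss.item())

Because the code is far smaller than the input, the network cannot simply copy the data; it has to learn which features matter most.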