Generative AI #

Generative AI transforms natural language input into newly generated content.

Large Language Models #

Generative AI is powered by large language models (LLMs). An LLM is a deep learning model that can be used to perform natural language processing (NLP) tasks such as the following (see the sketch after the list):

  • Determining sentiment in text
  • Summarizing text
  • Comparing multiple texts for semantic similarity
  • Generating text
  • Generating images
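
A minimal sketch of a few of these tasks using the Hugging Face Transformers pipeline API; each pipeline downloads a default pre-trained model on first use, so the exact models and outputs shown here are illustrative assumptions:

```python
from transformers import pipeline

# Sentiment: classify the emotional tone of a piece of text
sentiment = pipeline("sentiment-analysis")
print(sentiment("I really enjoy experimenting with generative AI."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]

# Summarization: condense a longer text into a shorter one
summarizer = pipeline("summarization")
print(summarizer(
    "Large language models are deep learning models that perform natural "
    "language processing tasks such as sentiment analysis, summarization, "
    "semantic comparison and text generation."
))

# Generation: continue a prompt with newly generated text
generator = pipeline("text-generation")
print(generator("Generative AI transforms natural language input into"))
```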

A transformer model comprises two parts:

  • The encoder: creates a semantic representation of the input text
  • The decoder: generates the output from that representation

It is widely documented that BERT (Google) uses only the encoder part, while the GPT models behind ChatGPT (OpenAI) use only the decoder part. Although using one or the other results in a different experience, from a logical standpoint it seems odd NOT to use both parts.
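
To make the split concrete, here is a small sketch assuming the Hugging Face Transformers library; bert-base-uncased and gpt2 are simply small, publicly available examples of the encoder-only and decoder-only families:

```python
from transformers import AutoModel, AutoModelForCausalLM, AutoTokenizer

# Encoder-only (BERT): turns input text into a dense representation
enc_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
inputs = enc_tok("Generative AI is powered by LLMs.", return_tensors="pt")
hidden = encoder(**inputs).last_hidden_state  # shape: (1, seq_len, 768)

# Decoder-only (GPT-2): generates new tokens from a prompt
dec_tok = AutoTokenizer.from_pretrained("gpt2")
decoder = AutoModelForCausalLM.from_pretrained("gpt2")
prompt = dec_tok("Generative AI transforms", return_tensors="pt")
out = decoder.generate(**prompt, max_new_tokens=20)
print(dec_tok.decode(out[0], skip_special_tokens=True))
```

The encoder output is a tensor of contextual embeddings (useful for classification or similarity), while the decoder directly produces text.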

While this doesn’t seem relevant at this point in time, we’ll dive into the Hugging Face Transformers library later; keywords to look out for are #Tokenization, #Embeddings and #AttentionLayers.
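
As a small preview, tokenization is the step that splits text into sub-word units and maps them to the integer IDs a model actually consumes. A sketch, again assuming the Transformers library and the bert-base-uncased tokenizer:

```python
from transformers import AutoTokenizer

# Tokenization: split text into sub-word tokens, then map to integer IDs
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
tokens = tok.tokenize("Tokenization splits text into sub-word units.")
ids = tok.convert_tokens_to_ids(tokens)
print(tokens)  # e.g. ['token', '##ization', 'splits', 'text', ...]
print(ids)     # the integer IDs fed into the embedding layer
```

The embedding layer then maps each ID to a dense vector, and the attention layers relate those vectors to one another across the sequence.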

While training a generative AI model takes a lot of resources, the end result can be used as a foundation model on which to fine-tune customized NLP models. This makes it possible to train a custom (e.g. legal or medical) model on top of the generic pre-trained model.
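
A sketch of what that fine-tuning setup looks like; the label count and domain below are hypothetical, and the actual training loop (e.g. via transformers.Trainer) is omitted:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Start from a generic pre-trained foundation model and attach a fresh
# classification head; the domain comes entirely from the fine-tuning data.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=2,  # hypothetical: e.g. "relevant" vs "irrelevant" legal clauses
)
# The pre-trained weights are reused as-is; only further training on
# domain-specific labeled examples (legal, medical, ...) is still needed.
```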

Azure provides an environment to play with these AI models.
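
For example, models deployed in the Azure OpenAI Service can be called via the openai Python package (assuming version 1.x); the endpoint, key, API version and deployment name below are placeholders for your own resource:

```python
from openai import AzureOpenAI

# Placeholder values: substitute your own Azure OpenAI resource settings
client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2024-02-01",
)
response = client.chat.completions.create(
    model="my-gpt-deployment",  # the name you gave the model deployment
    messages=[{"role": "user", "content": "Explain what an LLM is."}],
)
print(response.choices[0].message.content)
```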