Last updated on January 18, 2024 · 1 min read
LLM Collection
The table below lists some of the most influential models, sorted by release date (newest first).
| Model | Release Date | Developer | License | Description |
|---|---|---|---|---|
| GPT-4 | March 2023 | OpenAI | Custom | Successor to GPT-3, built on a similar architecture but with improvements. |
| GPT-3 | June 2020 | OpenAI | Custom | 175 billion parameters, known for its versatility and capability. |
| Turing-NLG | February 2020 | Microsoft | Custom | 17 billion parameters, aimed at natural language understanding and generation. |
| GPT-2 | February 2019 | OpenAI | Modified MIT | Initially withheld from public release due to concerns over potential misuse. |
| Transformer XL | January 2019 | Google/CMU | Apache 2.0 | Extended the Transformer architecture to handle longer sequences of text. |
| BERT | October 2018 | Google | Apache 2.0 | Designed to understand the context of words in search queries. |
| GPT | June 2018 | OpenAI | Modified MIT | First Generative Pre-trained Transformer, with 117M parameters. |
| ELMo | March 2018 | Allen Institute | Apache 2.0 | Deep contextualized word representations, allowing for rich word meanings. |
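As a quick sanity check on the ordering, the release dates above can be parsed and sorted programmatically. The sketch below is illustrative only; the `(model, date)` pairs are copied from the table, and GPT-4's month (March 2023) is filled in from its public release date.

```python
from datetime import datetime

# (model, release date) pairs copied from the table above
models = [
    ("GPT-4", "March 2023"),
    ("GPT-3", "June 2020"),
    ("Turing-NLG", "February 2020"),
    ("GPT-2", "February 2019"),
    ("Transformer XL", "January 2019"),
    ("BERT", "October 2018"),
    ("GPT", "June 2018"),
    ("ELMo", "March 2018"),
]

# Sort newest-first by parsing the "Month Year" strings.
ordered = sorted(
    models,
    key=lambda m: datetime.strptime(m[1], "%B %Y"),
    reverse=True,
)
print([name for name, _ in ordered])
```

If the table is sorted correctly, `ordered` matches `models` exactly, with GPT-4 first and ELMo last.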