Key Terms (A-Z)
- Artificial Hallucinations: "Artificial hallucinations" refer to instances where a generative AI model produces outputs not based on real-world information. These outputs can be fabricated, inaccurate, or nonsensical.
- Chain of Thought Prompting: A technique where a complex question is broken down into simpler, step-by-step questions and explanations, guiding the AI to reach the final answer through logical reasoning.
- Example for a student researching the effects of climate change on polar bear populations: "Explain how rising global temperatures affect Arctic ice levels. Then, describe how changes in Arctic ice levels impact polar bear habitats. Finally, analyze how changes in polar bear habitats influence their population numbers."
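Because the model only ever sees text, chain-of-thought prompting in practice means assembling the intermediate steps into a single prompt string. A minimal sketch in Python, using the polar bear example above (`build_cot_prompt` is a hypothetical helper, not part of any library):

```python
def build_cot_prompt(question: str, steps: list[str]) -> str:
    """Assemble a chain-of-thought prompt: the overall question,
    followed by numbered intermediate steps for the model to
    reason through in order."""
    lines = [question, "Answer step by step:"]
    lines += [f"{i}. {step}" for i, step in enumerate(steps, start=1)]
    return "\n".join(lines)

prompt = build_cot_prompt(
    "How does climate change affect polar bear populations?",
    [
        "Explain how rising global temperatures affect Arctic ice levels.",
        "Describe how changes in Arctic ice levels impact polar bear habitats.",
        "Analyze how changes in polar bear habitats influence their population numbers.",
    ],
)
print(prompt)
```

The resulting string would then be sent to the model as an ordinary prompt; the numbered structure is what encourages the step-by-step reasoning.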
- Few-shot Prompting: Few-shot prompting involves providing a few examples of input-output pairs in the prompt to help the model understand how it should process similar inputs.
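A few-shot prompt is simply the example pairs formatted into the prompt text, with the new input left open for the model to complete. A minimal sketch, assuming a sentiment-labeling task (`build_few_shot_prompt` and the example pairs are illustrative, not from any library):

```python
def build_few_shot_prompt(examples: list[tuple[str, str]], new_input: str) -> str:
    """Format input-output example pairs, then append the new input
    with its output left blank for the model to fill in."""
    parts = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    parts.append(f"Input: {new_input}\nOutput:")
    return "\n\n".join(parts)

prompt = build_few_shot_prompt(
    [
        ("The movie was wonderful.", "positive"),
        ("I wasted two hours of my life.", "negative"),
    ],
    "The plot kept me guessing until the end.",
)
print(prompt)
```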
- Log Probabilities: Log probabilities refer to the logarithms of the probabilities assigned by the model to different outcomes or tokens. Log probabilities help in evaluating the likelihood of sequences (i.e. predicting what is most likely to come next) and guiding the generation process.
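Log probabilities are used because the probability of a token sequence is the product of per-token probabilities, and sums of logarithms are more numerically stable than long products. A short sketch (the per-token probabilities below are made-up illustrative values):

```python
import math

def sequence_log_prob(token_probs: list[float]) -> float:
    """Sum the log of each token's probability; this equals the log of
    the product of the probabilities, avoiding numerical underflow."""
    return sum(math.log(p) for p in token_probs)

# Hypothetical per-token probabilities for a three-token sequence.
lp = sequence_log_prob([0.5, 0.2, 0.1])
# A higher (less negative) log probability means a more likely sequence.
```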
- Naive Prompt: A naive prompt refers to a simple, straightforward input provided to an AI model without any additional context or complexity.
- Prompt: A prompt is the text we input to AI models when interfacing with them.
- Prompt Engineering: Prompt engineering is the process of discovering prompts which reliably yield useful or desired results.
- Prompt Hacking: Prompt hacking refers to the practice of carefully crafting input prompts to manipulate or exploit generative AI models into producing desired or specific outputs. This technique can be used to uncover biases, test the model's limits, or guide the AI in generating particular types of responses.
- Seed Words: Seed words are initial words or phrases used to guide and generate new content in natural language processing and generative AI tasks. They serve as the starting point or context for algorithms to produce relevant and coherent text based on the provided input.
- Tokens: The basic units of text a language model processes. Depending on the tokenizer, a token may be a word, a subword, or a single character.
- Tokenization: Tokenization is the process of converting a sequence of text into smaller, manageable units called tokens. Tokens can be words, subwords, or even individual characters, depending on the specific tokenization method. This process is a crucial step in natural language processing (NLP) tasks because it transforms raw text into a format that can be easily analyzed and processed by machine learning models. Effective tokenization helps preserve the text's semantic meaning while enabling efficient computational processing.
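As a concrete illustration, a simplified word-level tokenizer can be written in a few lines. This is only a sketch: production language models typically use subword schemes such as byte-pair encoding rather than this whitespace-and-punctuation split.

```python
import re

def tokenize(text: str) -> list[str]:
    """A simplified word-level tokenizer: lowercase the text, then
    split it into word tokens and individual punctuation tokens."""
    return re.findall(r"\w+|[^\w\s]", text.lower())

tokens = tokenize("Tokenization helps models read text!")
# ['tokenization', 'helps', 'models', 'read', 'text', '!']
```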
- Zero-shot Prompting: Zero-shot prompting involves presenting a generative AI model with a task without providing any examples of how to perform it, expecting the model to generate a relevant response based solely on its pre-existing knowledge and the context of the prompt.
- Example for a student researching the impact of deforestation on local ecosystems: "Analyze and explain the direct consequences of deforestation on local ecosystems, focusing on biodiversity, climate change, and soil erosion."
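In code, a zero-shot prompt is just the task statement itself, with no worked examples included. A minimal sketch using the deforestation task (`build_zero_shot_prompt` is a hypothetical helper):

```python
def build_zero_shot_prompt(task: str) -> str:
    """A zero-shot prompt states the task directly; unlike a few-shot
    prompt, it contains no input-output examples."""
    return f"{task}\nAnswer:"

prompt = build_zero_shot_prompt(
    "Analyze and explain the direct consequences of deforestation on "
    "local ecosystems, focusing on biodiversity, climate change, and "
    "soil erosion."
)
print(prompt)
```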
© 2024 New York Institute of Technology