New pages
From AI Wiki
15 December 2025
- 12:00, 15 December 2025 AI Talk [1,129 bytes] Whale (Created page with "== Seoul, the Republic of Korea == 2025-12-06 Sat - 12:50 PM AB Cafe at the Gangnam Station 2026-01-10 Sat - 12:50 PM AB Cafe at the Gangnam Station 2026-01-17 Sat - 12:50 PM AB Cafe at the Gangnam Station 2026-01-24 Sat - 12:50 PM AB Cafe at the Gangnam Station == Another City ==")
- 07:12, 15 December 2025 Large Language Models [2,584 bytes] Whale (Created page with "A '''large language model''' ('''LLM''') is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation. The largest and most capable LLMs are generative pre-trained transformers (GPTs) and provide the core capabilities of modern chatbots. LLMs can be fine-tuned for specific tasks or guided by prompt engineering. These models acquire predictive power regarding synt...") — a generation sketch for this entry appears after this list.
- 07:01, 15 December 2025 Alignment (AI) [3,734 bytes] Whale (Created page with "'''AI Alignment''' refers to the process of directing Artificial Intelligence (AI) systems, particularly Large Language Models (LLMs), to act in accordance with human intent and ethical values. While the core objective of a pre-trained model is simply to predict the next token in a sequence based on statistical patterns, the goal of alignment is to ensure the resulting behavior is helpful, honest...") — a preference-loss sketch for this entry appears after this list.
- 06:53, 15 December 2025 Instruction Tuning [4,410 bytes] Whale (Created page with "'''Instruction tuning''' is a technique used in the training of Large Language Models (LLMs) to improve their ability to follow natural language instructions. While pre-training enables a model to predict the next token in a sequence based on vast amounts of text data, it does not inherently teach the model to act as a helpful assistant or adhere to specific user commands. Instruction tuning bridges this gap by fine-tuning the pre-tra...") — a data-preparation sketch for this entry appears after this list.
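For the Large Language Models entry above: a minimal generation sketch, assuming the Hugging Face transformers library and using "gpt2" purely as a small, widely available example model (neither is named in the entry).

<syntaxhighlight lang="python">
# Minimal next-token generation with a pretrained causal LM; "gpt2" is
# illustrative only, not a model the wiki entry refers to.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Encode a prompt and let the model extend it token by token.
inputs = tokenizer("A large language model is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
</syntaxhighlight>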
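For the Alignment (AI) entry above: the entry contrasts raw next-token prediction with aligned behavior but names no method. One common ingredient in alignment pipelines, not named in the entry, is a reward model trained on human preference pairs; a minimal Bradley-Terry-style loss sketch, with all names and numbers illustrative:

<syntaxhighlight lang="python">
# Hypothetical reward-model preference loss: push the score of the
# human-preferred response above the rejected one.
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    # loss = -log(sigmoid(r_chosen - r_rejected)), averaged over the batch
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy scores for a batch of (chosen, rejected) response pairs.
loss = preference_loss(torch.tensor([1.2, 0.7]), torch.tensor([0.3, 0.9]))
print(loss.item())
</syntaxhighlight>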
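For the Instruction Tuning entry above: a minimal data-preparation sketch. Supervised instruction tuning typically computes the loss only on response tokens, masking prompt positions with -100 (the ignore index used by PyTorch's cross-entropy); the prompt template and helper names are assumptions, not from the entry.

<syntaxhighlight lang="python">
# Build one (instruction, response) training example with prompt masking.
IGNORE_INDEX = -100  # PyTorch cross-entropy skips positions with this label
TEMPLATE = "### Instruction:\n{instruction}\n\n### Response:\n"

def build_example(instruction: str, response: str, tokenize) -> dict:
    prompt_ids = tokenize(TEMPLATE.format(instruction=instruction))
    response_ids = tokenize(response)
    return {
        "input_ids": prompt_ids + response_ids,
        # Loss is computed only on the response, so fine-tuning teaches the
        # model to follow the instruction rather than to echo the prompt.
        "labels": [IGNORE_INDEX] * len(prompt_ids) + response_ids,
    }

# Toy whitespace "tokenizer" standing in for a real subword tokenizer.
toy_tokenize = lambda text: [hash(word) % 1000 for word in text.split()]
print(build_example("Summarize the article.", "It covers LLM training.", toy_tokenize))
</syntaxhighlight>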
14 December 2025
- 10:10, 14 December 2025 Pre-training [3,308 bytes] Whale (Created page with "Pretraining in AI is the initial phase of training a model on a large dataset to learn general patterns before fine-tuning it for specific tasks. === What is Pretraining? === Pretraining refers to the process of training a machine learning model on a large, diverse dataset before it is fine-tuned for a specific task. This phase is crucial as it equips the model with foundational knowledge, allowing it to learn general features and patterns that can be applied across var...")
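A minimal sketch of the self-supervised next-token objective behind the pretraining described in the entry above: the model predicts token t+1 from the tokens up to t, so inputs and targets are the same sequence shifted by one position. The tiny PyTorch model is illustrative only.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# Toy stand-in for a language model: embed tokens, project back to the vocab.
vocab_size, embed_dim = 100, 32
model = nn.Sequential(nn.Embedding(vocab_size, embed_dim), nn.Linear(embed_dim, vocab_size))

tokens = torch.randint(0, vocab_size, (1, 16))   # stand-in for real text ids
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # shift by one position

logits = model(inputs)                           # (batch, seq, vocab)
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()                                  # one self-supervised step
print(loss.item())
</syntaxhighlight>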