Alpha-Beta Pruning is an optimization of the Minimax algorithm that prunes branches which cannot affect the final decision, reducing computation from O(b^d) to roughly O(b^(d/2)) in the best case (with good move ordering).
α (Alpha) = best value MAX can guarantee (initialized to −∞)
β (Beta) = best value MIN can guarantee (initialized to +∞)
Pruning conditions: At MIN node: prune if value ≤ α | At MAX node: prune if value ≥ β
Result: Optimal value at root A = 3. Working: B = min(3,5) = 3, so α = 3 at the root; at C, the first branch F = min(1,2) = 1 ≤ α = 3, so C can never beat 3 and G is pruned. Final answer = 3.
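The procedure above can be sketched as a small recursive function. The tree below is a hypothetical nested-list version mirroring the worked example (root MAX over two MIN nodes; the second subtree's remaining leaf plays the role of the pruned G):

```python
import math

def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning; leaves are ints, internal nodes are lists."""
    if isinstance(node, int):            # leaf: return its static value
        return node
    if maximizing:
        best = -math.inf
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:            # MAX node: prune when value >= beta
                break
        return best
    best = math.inf
    for child in node:
        best = min(best, alphabeta(child, alpha, beta, True))
        beta = min(beta, best)
        if beta <= alpha:                # MIN node: prune when value <= alpha
            break
    return best

# Hypothetical tree: B = [3, 5], C = [1, 9]; the 9 is never examined
tree = [[3, 5], [1, 9]]
print(alphabeta(tree, -math.inf, math.inf, True))  # 3
```

After B returns 3, α = 3 at the root; inside C the first leaf gives β = 1 ≤ α, so the loop breaks before the second leaf, exactly as in the example.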
An Expert System is AI software that uses knowledge stored in a knowledge base to solve problems that would normally require a human expert.
| Forward Chaining | Backward Chaining |
|---|---|
| Data-driven | Goal-driven |
| Bottom-up reasoning | Top-down reasoning |
| Good for planning, design | Good for diagnosis |
| Exhaustive / wider search | More focused search |
| More output hypotheses | Must query for data |
| Manages sub-goals manually | Auto-manages sub-goals |
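The data-driven side of the table can be sketched in a few lines: fire every rule whose premises are already known facts, and repeat until nothing new is derived. The rule base here is hypothetical:

```python
def forward_chain(facts, rules):
    """Data-driven inference: repeatedly fire rules whose premises are all
    known facts until no new fact can be derived.
    `rules` is a list of (premises, conclusion) pairs."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and set(premises) <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rule base for illustration
rules = [({"has_fever", "has_rash"}, "measles_suspected"),
         ({"measles_suspected"}, "refer_to_doctor")]
print(forward_chain({"has_fever", "has_rash"}, rules))
```

Backward chaining would instead start from the goal (`refer_to_doctor`) and work down to the data, which is why it suits diagnosis.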
Resolution is a rule of inference in FOL that proves theorems by combining clauses with complementary literals to derive new clauses until an empty clause (NIL) is reached.
Unification finds a substitution θ that makes two logical expressions identical.
Solution for the given clauses: substitute a for z1, a for x, a for z2, a for z3:
C1 = {¬P(a,a), ¬P(a,a)} | C2 = {P(a,F(a)), P(a,a)} | C3 = {P(F(a),a), P(a,a)}
Substitution set θ = {a/z1, a/x, a/z2, a/z3}
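A minimal sketch of the unification step (no occurs-check), assuming a simple term encoding: variables are strings starting with `?`, compound terms are tuples:

```python
def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def unify(x, y, theta=None):
    """Return a substitution dict making x and y identical, or None on failure."""
    if theta is None:
        theta = {}
    if x == y:
        return theta
    if is_var(x):
        return unify_var(x, y, theta)
    if is_var(y):
        return unify_var(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):         # unify argument lists pairwise
            theta = unify(xi, yi, theta)
            if theta is None:
                return None
        return theta
    return None                          # clash of constants/functors

def unify_var(v, t, theta):
    if v in theta:
        return unify(theta[v], t, theta)
    if is_var(t) and t in theta:
        return unify(v, theta[t], theta)
    theta = dict(theta)
    theta[v] = t                         # bind the variable
    return theta

# Hypothetical query: unify P(?x, F(?x)) with P(a, ?y)
# binds ?x to a and ?y to F(?x)
print(unify(('P', '?x', ('F', '?x')), ('P', 'a', '?y')))
```

Resolution then applies the resulting θ to both clauses before cancelling the complementary literals.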
NLP = method of communicating with intelligent systems using natural human language.
Types of Ambiguity: Lexical (word-level: "board"=noun/verb?), Syntactic (sentence structure), Referential (pronoun reference)
Applications: Data analysis, Reputation monitoring, Customer service, Automated trading, Market intelligence.
Statistical Reasoning: Uses Bayesian statistics → P(H|E) = P(E|H)×P(H) / P(E). Stresses conditional probability. Limitation: acquiring all the required probabilities is too large a task.
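A worked instance of the rule above, with hypothetical numbers (the prevalence, sensitivity, and false-positive rate are made up for illustration):

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
p_h = 0.01                 # prior: P(H), e.g. disease prevalence (assumed)
p_e_given_h = 0.9          # likelihood: P(E|H), test sensitivity (assumed)
p_e_given_not_h = 0.05     # false-positive rate P(E|~H) (assumed)

# Total probability: P(E) = P(E|H)P(H) + P(E|~H)P(~H)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

p_h_given_e = p_e_given_h * p_h / p_e
print(round(p_h_given_e, 3))   # 0.154
```

Even with a 90%-sensitive test, the posterior is only about 15% because the prior P(H) is small, which is exactly the kind of conditional-probability effect the notes stress.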
ANN Advantages: Adaptive learning, Self-organization, Real-time parallel operation, Fault tolerance via redundancy.
Disadvantages: behaviour can be unpredictable, and training examples must be carefully selected.
MYCIN (1974) β one of the earliest expert systems, based on Backward Chaining. It identified bacteria causing severe infections and recommended drugs based on patient weight. It outperformed medical students in diagnosis.
| Inductive | Deductive |
|---|---|
| Specific β General | General β Specific |
| Discovery learning | Rule-based learning |
| Conclusion may be false | Conclusion always true if premises true |
| E.g.: All teachers are studious | E.g.: Shalini (65yr) is a grandmother |
A Decision Tree is a flowchart-like tree structure used for classification and regression in supervised learning.
Splitting: Partitioning dataset into subsets on a variable (e.g., split on Gender or Class).
Pruning: Reduce the tree by turning branch nodes into leaf nodes; avoids overfitting. Key factors for choosing splits: Entropy & Information Gain.
EBL uses a strong/flawless domain theory to generalise from training data. It can learn from just ONE training example.
CLT (Computational Learning Theory) is a subfield of AI dealing with the design and analysis of ML algorithms. Goal: understand the computational properties of algorithms, i.e. their ability to learn from data and generalise.
5 Types of Environments: Fully/Partially Observable · Episodic/Sequential · Static/Dynamic · Discrete/Continuous · Deterministic/Stochastic
Entropy H(S) = impurity/disorder in dataset = −Σ p(i)·log₂(p(i))
Information Gain IG(S,A) = H(S) − Σ (|Sv|/|S|)·H(Sv)
Best split = feature with highest Information Gain
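Both formulas can be computed directly. The toy labels and feature below are made up; the feature separates the classes perfectly, so its gain equals the full entropy H(S):

```python
import math
from collections import Counter

def entropy(labels):
    """H(S) = -sum p_i * log2(p_i) over the class proportions in `labels`."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, feature_values):
    """IG(S, A) = H(S) - sum_v (|S_v|/|S|) * H(S_v), splitting on feature A."""
    n = len(labels)
    subsets = {}
    for lbl, v in zip(labels, feature_values):
        subsets.setdefault(v, []).append(lbl)   # partition S by feature value
    remainder = sum(len(s) / n * entropy(s) for s in subsets.values())
    return entropy(labels) - remainder

# Hypothetical toy data: H(S) = 1 bit, and the split removes all impurity
labels  = ['yes', 'yes', 'no', 'no']
feature = ['a', 'a', 'b', 'b']
print(information_gain(labels, feature))   # 1.0
```

Choosing the best split then means evaluating `information_gain` for every candidate feature and picking the maximum.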