
Artificial Intelligence


Date | Content | Homework & Exams
Week of Oct 24 | Machine Learning | Assignment 3 due Oct 30
Week of Oct 31 | Hidden Markov models and Bayes filters | Assignment 4 due Nov 7
Week of Nov 7 | Markov Decision Processes and Reinforcement Learning | Assignment 5 due Nov 13
Week of Nov 14 | Adversarial planning (games) and belief space planning (POMDPs) | MIDTERM EXAM due Nov 20
Week of Nov 21 | Logic and Logical Problem Solving | Assignment 6 due Nov 27
Week of Nov 28 | Image Processing and Computer Vision | Assignment 7 due Dec 4
Week of Dec 5 | Robotics and robot motion planning | Assignment 8 due Dec 11
Week of Dec 12 | Natural Language Processing and Information Retrieval | FINAL EXAM due Dec 18

Introduction to A.I.

Intelligent agents and properties of their environments

  • Fully vs. Partially Observable: whether the agent can see the complete state (chess vs. poker)
  • Deterministic vs. Stochastic: whether an action's outcome is fully determined by the state (chess) or involves randomness (dice)
  • Discrete vs. Continuous: whether the space of states and actions has finitely or infinitely many possibilities (chess vs. darts)
  • Benign vs. Adversarial: whether other agents actively work against the agent's goals
Sources of uncertainty: Stochastic environments, sensor limits, adversaries, laziness, ignorance

Problem Solving

Graph search maintains a frontier (nodes discovered but not yet expanded) and an explored set (nodes already expanded)
Breadth First Search: expands all nodes at the current depth before moving deeper
Depth First Search: follows one path as deep as possible before backtracking
Uniform Cost Search: expands the node with the lowest path cost g first
A* Search: expands the node with the lowest f = g + h (distance travelled so far plus estimated distance to the destination), optimal on condition that h never overestimates the true cost (admissible)
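The frontier/explored-set bookkeeping and the f = g + h ordering above can be sketched as follows; the graph and heuristic are made-up toy values, with h chosen to be admissible:

```python
# A* search sketch: expand the frontier node with the lowest f = g + h.
# h must never overestimate the true remaining cost (admissible) for
# the returned path to be optimal.
import heapq

def a_star(graph, h, start, goal):
    # frontier holds (f, g, node, path); explored holds expanded nodes
    frontier = [(h[start], 0, start, [start])]
    explored = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in explored:
            continue
        explored.add(node)
        for nxt, cost in graph[node]:
            if nxt not in explored:
                heapq.heappush(frontier, (g + cost + h[nxt], g + cost, nxt, path + [nxt]))
    return None

# Hypothetical graph (edge lists with costs) and admissible heuristic.
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)], "B": [("G", 1)], "G": []}
h = {"S": 3, "A": 3, "B": 1, "G": 0}
print(a_star(graph, h, "S", "G"))  # (4, ['S', 'A', 'B', 'G'])
```

With an inadmissible h (e.g. h["A"] = 10), the same code could return the more expensive S→B→G path.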

State Spaces: the set of all configurations, i.e. the product of each state variable's possible values
  • admissible: describing a heuristic that never overestimates the cost of reaching a goal
  • search is guaranteed to work when the environment is fully observable, known, deterministic, discrete, and static

Statistics, Uncertainty, and Bayes networks

Bayes Rule: P(A|B) = P(B|A) * P(A)/P(B)
Bayes network: the joint probability factors as the product of each node's conditional probability given its parents; queries are answered by combining these factors with the provided distributions
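A worked instance of Bayes Rule, with P(B) expanded by total probability; the numbers are a hypothetical disease-test example, not from the course:

```python
# Bayes Rule sketch: P(A|B) = P(B|A) * P(A) / P(B).
# A = "has disease", B = "test positive"; all numbers are made up.
p_disease = 0.01            # prior P(A)
p_pos_given_disease = 0.9   # likelihood P(B|A)
p_pos_given_healthy = 0.05  # false positive rate P(B|not A)

# Total probability: P(B) = P(B|A)P(A) + P(B|not A)P(not A)
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 4))  # 0.1538
```

Even with a fairly accurate test, the small prior keeps the posterior low, which is the usual moral of this example.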

Conditional independence: no connecting path, linked through a known (observed) common cause, or linked through an unknown (unobserved) common effect
Conditional dependence: a direct causal link, linked through an unknown common cause, or linked through a known common effect or an observed descendant of it ("explaining away")

Minimum number of parameters necessary to specify the joint probability (binary variables) = ∑ over nodes of 2^(number of parents of that node)
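The parameter-count formula for a small hypothetical network A → C ← B (A and B are roots, C has two parents) works out as:

```python
# Parameter count for a Bayes net over binary variables:
# each node needs 2^(number of parents) independent parameters.
# Hypothetical network: A -> C <- B.
parents = {"A": [], "B": [], "C": ["A", "B"]}

n_params = sum(2 ** len(p) for p in parents.values())
print(n_params)  # 2^0 + 2^0 + 2^2 = 6
```

Compare this to the 2^3 - 1 = 7 parameters of the full joint table over three binary variables; the savings grow quickly with more nodes.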

Machine Learning

Maximum likelihood: estimate each probability by its relative frequency (count / total)
Laplace smoothing: P = (count + k) / (total + k · #classes)
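A sketch of the smoothing formula above; the class counts are made up for illustration:

```python
# Laplace (add-k) smoothing sketch: P(c) = (count(c) + k) / (N + k * #classes).
# Unseen classes get a small nonzero probability instead of zero.
from collections import Counter

def laplace_prob(counts, cls, k=1):
    n = sum(counts.values())
    num_classes = len(counts)
    return (counts[cls] + k) / (n + k * num_classes)

# Hypothetical 3-class counts; "other" was never observed.
counts = Counter({"spam": 3, "ham": 1, "other": 0})
print(laplace_prob(counts, "other"))  # (0 + 1) / (4 + 3) = 1/7
```

With k = 0 this reduces to the maximum-likelihood estimate, which would assign "other" probability zero.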
Linear Regression: minimize the sum of squared errors between the observed values and the predicted values
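For one input variable, the least-squares fit has a closed form; the data points here are invented so the line y = 2x + 1 is recovered exactly:

```python
# Simple linear regression sketch: closed-form least-squares fit of y = w*x + b,
# minimizing the sum of squared errors.
def fit_line(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # w = covariance(x, y) / variance(x); b centers the line on the means
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - w * mx
    return w, b

w, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])  # data lies exactly on y = 2x + 1
print(w, b)  # 2.0 1.0
```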
Perceptron Algorithm: learns a linear separator by adjusting the weights whenever an example is misclassified
k-Nearest Neighbors: classifies by the majority class label of the k (a regularizer) nearest neighbors
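The perceptron update rule can be sketched as below; the data is a made-up AND-like toy set, assumed linearly separable (otherwise the loop never converges and simply stops after the epoch limit):

```python
# Perceptron sketch: nudge the weights toward each misclassified example
# until the linearly separable data is classified correctly.
def perceptron(samples, epochs=20):
    # samples: list of ((x1, x2), label) with label in {-1, +1}
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in samples:
            if y * (w[0] * x1 + w[1] * x2 + b) <= 0:  # misclassified
                w[0] += y * x1
                w[1] += y * x2
                b += y
    return w, b

# Toy AND-like data, invented for illustration.
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
w, b = perceptron(data)
predictions = [1 if w[0] * x1 + w[1] * x2 + b > 0 else -1 for (x1, x2), _ in data]
print(predictions)  # [-1, -1, -1, 1], matching the labels
```

Unlike k-nearest neighbors, which stores the data and decides at query time, the perceptron compresses the training set into a single weight vector.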
Unsupervised Learning

Hidden Markov models and Bayes filters

Propositional Logic

Markov Decision Processes and Reinforcement Learning

Adversarial planning (games) and belief space planning (POMDPs) 

Logic and Logical Problem Solving

Image Processing and Computer Vision

Robotics and robot motion planning

Natural Language Processing and Information Retrieval