Uninformed search
Reflex Agents
- Action decided by current percept, and possibly memory
- Can have memory or world model
- Doesn’t “think” about or consider the future state of the world after an action (i.e. doesn’t consider the consequences of decisions; it jumps blindly)
- Can be rational in certain scenarios (example was simple Pacman layout, reflex is to choose the closest point)
- Considers how world IS
Planning Agents
- Makes decisions based on hypothesized consequences of actions
- Needs a model of how the world changes in response to actions
- Considers how world WILL BE
Optimal vs. complete planning
- Complete: guaranteed to find solution if one exists
- Optimal: guaranteed to find least cost path
Planning vs. replanning
- Planning: plan entire path ahead of time
- Replanning: allow incomplete starting plan and recompute path along the way
Search Problems
Search problem consists of
- State space: space of all possible states that could be encountered
- Successor function: mapping from the current state (and a chosen action) to the next state
- Start state
- Goal test
A solution is a sequence of actions (called a plan) that transforms the start state into a goal state
- World state includes all details of environment
- Search state keeps only details for planning
For example, a pathing problem in Pacman might include
- States: (x,y) locations
- Actions: north, east, south, west
- Successor: update location only
- Goal test: is (x,y) = END
Notice that only the details necessary for executing a search problem are included in this list. Here the state and successor information has been stripped down from the full world state to include only what’s needed for finding paths.
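As a rough Python sketch of how such a search problem could be encoded (the class and method names here are illustrative assumptions, not any particular codebase’s API):

class PacmanPathProblem:
    """Pathing problem: states are (x, y) locations, actions are moves."""
    def __init__(self, start, goal, walls):
        self.start = start      # start state, an (x, y) tuple
        self.goal = goal        # the END location
        self.walls = walls      # set of blocked (x, y) cells

    def is_goal(self, state):
        # Goal test: is (x, y) = END?
        return state == self.goal

    def successors(self, state):
        # Successor function: update the location only
        x, y = state
        moves = {"North": (x, y + 1), "South": (x, y - 1),
                 "East": (x + 1, y), "West": (x - 1, y)}
        for action, nxt in moves.items():
            if nxt not in self.walls:
                yield action, nxt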
State Space Graphs and Search Trees
- State space graph: mathematical representation of a search problem
- Each state occurs only once i.e. has a unique node
- Nodes are world configurations
- Connections show successors/transitions between states
- Goal test is set of goal nodes
- Conceptually nice but difficult to actually build in practice
- Search trees: tree of states branching on actions
- Root node of the tree is the start state
- Children of a node are its successors, reached by taking the action on the connecting edge
- Nodes ultimately represent plans for reaching the states they correspond to
- Can think of a node in the search tree as corresponding to an entire path in the search graph
- Search tree is a way of providing context for the current node
- A state space graph may be finite, but the search tree can be infinite (due to state loops)
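As a toy illustration of that last point, take a made-up three-node graph where A and B point at each other; enumerating the paths out of A (the nodes of the search tree) never terminates, so the sketch below caps the depth:

graph = {"A": ["B"], "B": ["A", "Goal"], "Goal": []}   # A and B form a loop

def tree_paths(state, path=(), max_depth=5):
    # Every search tree node corresponds to a full path through the graph
    path = path + (state,)
    yield path
    if len(path) < max_depth:       # cap the depth; the full tree is infinite
        for nxt in graph[state]:
            yield from tree_paths(nxt, path, max_depth)

for p in tree_paths("A"):
    print(" -> ".join(p))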
Algorithm: Tree Search
Abstract search procedure for processing nodes in a graph; it encompasses search methods like depth-first and breadth-first search. The general tree search procedure is given a strategy for choosing which node to explore next. The overall idea is to
- Expand out potential plans, indicated by nodes in the search tree
- Maintain fringe of plans currently under consideration
- Expanding fewer nodes is better
Formalized Procedure
function TREE-SEARCH(problem, strategy) -> solution or failure
    initialize search tree using initial state of problem
    loop do
        if no candidates for expansion, return failure
        choose leaf node for expansion according to given strategy
        if node contains goal state, return corresponding solution
        else expand node and add resulting nodes to search tree
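As a runnable paraphrase of that procedure (a sketch only, assuming the search problem interface from the earlier Pacman example: start, is_goal, successors), with the strategy passed in as a function that pops one node from the fringe:

def tree_search(problem, pop):
    # `pop` encodes the strategy: which fringe node gets expanded next
    fringe = [(problem.start, [])]          # node = (state, plan so far)
    while fringe:                           # no candidates left means failure
        state, plan = pop(fringe)           # choose a leaf node per strategy
        if problem.is_goal(state):
            return plan                     # the corresponding solution
        for action, nxt in problem.successors(state):
            fringe.append((nxt, plan + [action]))   # expand node, add children
    return None                             # failure

# Example strategies: DFS pops the newest node, BFS pops the oldest
dfs_strategy = lambda fringe: fringe.pop()
bfs_strategy = lambda fringe: fringe.pop(0)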
Depth-first search
- DFS explores paths deeply first i.e. the leftmost prefix of the tree
- DFS is complete (so long as there is cycle detection and the state space is finite)
- Not optimal: the solution seen first will be chosen regardless of its depth or cost
- Fringe is a LIFO stack; the most recently added nodes are the first ones to get explored
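A minimal DFS sketch under the same assumed problem interface, with the fringe as an explicit LIFO stack (a plain Python list) and a visited set providing the cycle detection mentioned above:

def depth_first_search(problem):
    fringe = [(problem.start, [])]          # LIFO stack of (state, plan)
    visited = set()
    while fringe:
        state, plan = fringe.pop()          # pop the most recently added node
        if problem.is_goal(state):
            return plan
        if state in visited:                # cycle detection
            continue
        visited.add(state)
        for action, nxt in problem.successors(state):
            fringe.append((nxt, plan + [action]))
    return None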
Breadth-first search
- BFS expands the shallowest nodes first
- Complete (no cycle detection needed)
- Optimal if all costs are 1; going to get to the shortest path solution first
- Fringe is a FIFO queue; nodes pushed first at shallower levels get explored before those pushed later at deeper levels
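The matching BFS sketch; only the fringe changes, from a stack to a FIFO queue (the visited set here is just an efficiency tweak, not needed for completeness):

from collections import deque

def breadth_first_search(problem):
    fringe = deque([(problem.start, [])])   # FIFO queue of (state, plan)
    visited = {problem.start}
    while fringe:
        state, plan = fringe.popleft()      # pop the oldest (shallowest) node
        if problem.is_goal(state):
            return plan
        for action, nxt in problem.successors(state):
            if nxt not in visited:          # avoid re-adding explored states
                visited.add(nxt)
                fringe.append((nxt, plan + [action]))
    return None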
Iterative Deepening
- Trades off between breadth-first and depth-first search: perform depth-first search with a depth limit, and iteratively deepen that maximum depth
- This keeps the low memory footprint of DFS while still covering the full breadth of each level before moving down, instead of spending all of the time down the leftmost branch of the search tree
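A sketch of iterative deepening under the same assumptions: run depth-limited DFS with limits 0, 1, 2, ..., keeping DFS-like memory use while covering the tree level by level:

def iterative_deepening_search(problem, max_limit=50):
    def depth_limited(state, plan, limit):
        if problem.is_goal(state):
            return plan
        if limit == 0:                      # hit the current depth cap
            return None
        for action, nxt in problem.successors(state):
            result = depth_limited(nxt, plan + [action], limit - 1)
            if result is not None:
                return result
        return None

    for limit in range(max_limit + 1):      # deepen the cap one level at a time
        result = depth_limited(problem.start, [], limit)
        if result is not None:
            return result
    return None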
Uniform Cost Search
- Previous algorithms do not account for the cost of actions along a path. For example, BFS only finds the shortest path in terms of the number of actions, not in terms of the smallest total cost.
- UCS expands the cheapest node first, and thus the first solution found is the one with the least cost
- Will reach an effective depth of roughly C*/ε, where C* is the cost of the optimal solution and every arc costs at least ε. So while the actual depth might be larger, the levels of depth will be disproportionately explored in favor of the least cost.
- Implements fringe as a priority queue, where the priority is the cumulative cost of the current path. So nodes and their costs are maintained in a min-heap, and the current min cost node is explored first
- Is complete and optimal as long as all action costs are positive; it will find a solution if one exists, and that solution will be optimal in terms of least total cost
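A UCS sketch with the fringe as a min-heap keyed on cumulative path cost; unlike the earlier sketches, it assumes successors() also yields a per-action step cost:

import heapq
from itertools import count

def uniform_cost_search(problem):
    tie = count()                           # tie-breaker so the heap never compares states
    fringe = [(0, next(tie), problem.start, [])]    # (cost so far, tie, state, plan)
    best = {}                               # cheapest cost seen per state
    while fringe:
        cost, _, state, plan = heapq.heappop(fringe)    # cheapest node first
        if problem.is_goal(state):
            return plan                     # first goal popped has least cost
        if state in best and best[state] <= cost:
            continue                        # already reached this state more cheaply
        best[state] = cost
        for action, nxt, step in problem.successors(state):
            heapq.heappush(fringe, (cost + step, next(tie), nxt, plan + [action]))
    return None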