Problem solving in Artificial Intelligence.pptx

The document explores problem-solving strategies within artificial intelligence, detailing methods for defining problem states, search techniques (uninformed and informed), and specific algorithms like A*, greedy best-first search, and local search methods. It emphasizes the importance of heuristic functions in guiding search algorithms and outlines types of problems and scenarios suited for these techniques. Additionally, it discusses the relevance of online search agents and the implications of continuous environments on problem-solving in AI.

Problem-Solving Strategies in Artificial Intelligence
1. Problem Solving Methods
2. Search Strategies
3. Uninformed – Informed
4. Heuristics
5. Local Search Algorithms and Optimization Problems
Dr. J. Senthilkumar, Assistant Professor, Department of Computer Science and Engineering, KIT - Kalaignarkarunanidhi Institute of Technology
Problem Solving Methods
• Solving a problem through AI involves defining the search space, deciding the start and goal states, and then finding a path from the start state to the goal state through the search space.
• The movement from the start state to the goal state is guided by a set of rules specifically designed for that particular problem.
• Problem
• It is the question which is to be solved. For solving a problem, it needs to be precisely defined.
• The definition means defining the start state, the goal state, the other valid states, and the transitions.
• Finding the Solution
• After the problem and related knowledge are represented in a suitable format, an appropriate methodology is chosen which uses that knowledge and transforms the start state into the goal state.
• The techniques for finding the solution are called search techniques.
• Various search techniques have been developed for this purpose.
Representation of AI Problem
• An AI problem can be covered in the following four parts:
• A Lexical part: determines which symbols are allowed in the representation of the problem. Like the normal meaning of the lexicon, this part abstracts all fundamental features of the problem.
• A Structural part: describes constraints on how the symbols can be arranged. This corresponds to finding out the possibilities for joining these symbols and generating higher structural units.
• A Procedural part: specifies access procedures that enable us to create descriptions, to modify them, and to answer questions using them.
• A Semantic part: establishes a way of associating meaning with the descriptions.
WELL-DEFINED PROBLEMS AND SOLUTIONS
• A problem can be defined formally by five components:
• INITIAL STATE: The initial state that the agent starts in.
• ACTIONS: A description of the possible actions available to the agent.
• TRANSITION MODEL: A description of what each action does, specified by a function RESULT(s, a) that returns the state that results from doing action a in state s. We also use the term SUCCESSOR to refer to any state reachable from a given state by a single action.
• Together, the initial state, actions, and transition model implicitly define the STATE SPACE of the problem—the set of all states reachable from the initial state by any sequence of actions. The state space forms a directed network or GRAPH in which the nodes are states and the links between nodes are actions. A PATH in the state space is a sequence of states connected by a sequence of actions.
• GOAL TEST: Determines whether a given state is a goal state. Sometimes there is an explicit set of possible goal states, and the test simply checks whether the given state is one of them.
• PATH COST: A function that assigns a numeric cost to each path. The problem-solving agent chooses a cost function that reflects its own performance measure.
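The five components above can be sketched as a minimal Python interface. This is an illustrative sketch, not from the slides; the tiny route graph and names are made up for demonstration.

```python
# A minimal sketch of the five-component problem definition.
# The graph encoding {state: {action: (next_state, cost)}} is an assumption
# chosen for illustration.

class Problem:
    """A problem defined by its five formal components."""

    def __init__(self, initial, goal, graph):
        self.initial = initial          # INITIAL STATE
        self.goal = goal
        self.graph = graph              # {state: {action: (next_state, cost)}}

    def actions(self, s):               # ACTIONS available in state s
        return list(self.graph.get(s, {}))

    def result(self, s, a):             # TRANSITION MODEL: RESULT(s, a)
        return self.graph[s][a][0]

    def goal_test(self, s):             # GOAL TEST
        return s == self.goal

    def step_cost(self, s, a):          # one step of the PATH COST function
        return self.graph[s][a][1]

    def path_cost(self, states, actions):
        # Sum the step costs along a path of states and actions.
        return sum(self.step_cost(s, a) for s, a in zip(states, actions))

# Tiny example: A --go--> B --go--> C
graph = {"A": {"go": ("B", 1)}, "B": {"go": ("C", 2)}}
p = Problem("A", "C", graph)
```

Any of the concrete problems on the following slides (vacuum world, 8-puzzle) can be expressed through this same five-part interface.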
Formulating problems
• This formulation seems reasonable, but it is still a model—an abstract mathematical description—and not the real thing.
• Agents can differ in the amount of knowledge they have about their actions and about the state that they are in.
• This depends on how the agent is connected to its environment through its percepts and actions.
• We find that there are four essentially different types of problems:
• Single-state problems
• Multiple-state problems
• Contingency problems
• Exploration problems
• Toy problems
• The vacuum world can be formulated as a problem as follows:
• States: The state is determined by both the agent location and the dirt locations. The agent is in one of two locations, each of which might or might not contain dirt. Thus, there are 2 × 2^2 = 8 possible world states. A larger environment with n locations has n · 2^n states.
• Initial state: Any state can be designated as the initial state.
• Actions: In this simple environment, each state has just three actions: Left, Right, and Suck. Larger environments might also include Up and Down.
• Transition model: The actions have their expected effects, except that moving Left in the leftmost square, moving Right in the rightmost square, and Sucking in a clean square have no effect.
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
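The two-location formulation above can be sketched directly in code. The state encoding (agent location plus one dirt flag per square) is an illustrative choice; the transition model follows the slide's rules, including the no-effect cases.

```python
from itertools import product

# State = (agent_location, dirt_on_left, dirt_on_right); "L"/"R" are the two squares.
STATES = list(product(("L", "R"), (True, False), (True, False)))

def result(state, action):
    """Transition model: Left/Right/Suck, with no effect when moving past a
    boundary or when sucking a clean square."""
    loc, dl, dr = state
    if action == "Left":
        return ("L", dl, dr)            # moving Left in the leftmost square: no effect
    if action == "Right":
        return ("R", dl, dr)            # moving Right in the rightmost square: no effect
    if action == "Suck":
        if loc == "L":
            return (loc, False, dr)     # sucking a clean square leaves it clean
        return (loc, dl, False)
    raise ValueError(f"unknown action: {action}")

def goal_test(state):
    """Goal: all squares are clean."""
    _, dl, dr = state
    return not dl and not dr
```

Enumerating `STATES` confirms the slide's count of 2 × 2^2 = 8 world states.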
The 8-puzzle:
• States: A state description specifies the location of each of the eight tiles and the blank in one of the nine squares.
• Initial state: Any state can be designated as the initial state. Note that any given goal can be reached from exactly half of the possible initial states.
• Actions: The simplest formulation defines the actions as movements of the blank space Left, Right, Up, or Down. Different subsets of these are possible depending on where the blank is.
• Transition model: Given a state and action, this returns the resulting state; for example, if we apply Left to the start state in Figure 3.4, the resulting state has the 5 and the blank switched.
• Goal test: This checks whether the state matches the goal configuration shown in Figure 3.4. (Other goal configurations are possible.)
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
Some more examples of AI problems:
• Tic-Tac-Toe
• 8-Queens Problem
• Chess Problem
• Tower of Hanoi
• Traveling Salesperson Problem
• Monkey and Banana Problem
• Cryptarithmetic Problem
• Blocks World Problem
Search:
• Search is the process of finding a path from the current state to the desired goal state through the space of possible future states. The major work in the field of search is to find the right search strategy for a particular problem.
• There are two kinds of search, based on whether they use information about the goal:
• Uninformed search algorithms—algorithms that are given no information about the problem other than its definition. Although some of these algorithms can solve any solvable problem, none of them can do so efficiently.
• Informed search algorithms, on the other hand, can do quite well given some guidance on where to look for solutions.
• Search algorithms form the core of such Artificial Intelligence programs. While we may be inclined to think that they have limited applicability only in areas of gaming and puzzle-solving, such algorithms are in fact used in many more AI areas like route and cost optimization, action planning, knowledge mining, robotics, autonomous driving, computational biology, software and hardware verification, theorem proving, etc.
• In a way, many AI problems can be modelled as a search problem where the task is to reach the goal from the initial state via state-transformation rules. So the search space is defined as a graph (or a tree), and the aim is to reach the goal from the initial state via the shortest path—in terms of cost, length, a combination of both, etc.
• All search methods can be broadly classified into two categories:
1. Uninformed (or Exhaustive or Blind) methods, where the search is carried out without any additional information beyond what is provided in the problem statement. Examples include Breadth-First Search, Depth-First Search, etc.
2. Informed (or Heuristic) methods, where the search is carried out using additional information to determine the next step towards finding the solution. Best-First Search is an example of such an algorithm.
• Informed search methods are more efficient, lower in cost, and higher in performance compared to uninformed search methods.
UNINFORMED SEARCH STRATEGIES:
• This is also called Blind Search. The term means that the strategies have no additional information about states beyond that provided in the problem definition.
• All they can do is generate successors and distinguish a goal state from a non-goal state.
• All search strategies are distinguished by the order in which nodes are expanded.
• Strategies that know whether one non-goal state is "more promising" than another are called informed search or heuristic search strategies.
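An uninformed strategy like breadth-first search only generates successors and applies a goal test, expanding nodes in FIFO order. The sketch below and its tiny graph are illustrative, not from the slides.

```python
from collections import deque

def breadth_first_search(start, goal_test, successors):
    """Uninformed search: expand nodes in FIFO order, remember visited
    states, and return the path of states to the first goal found."""
    frontier = deque([[start]])         # queue of paths, shallowest first
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if goal_test(state):
            return path
        for nxt in successors(state):
            if nxt not in visited:      # distinguish goal from non-goal later;
                visited.add(nxt)        # here we only avoid revisiting states
                frontier.append(path + [nxt])
    return None                         # no goal reachable

# Illustrative graph: two routes from A to D
g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
```

Because nodes are expanded shallowest-first, BFS returns a shortest path in number of steps, e.g. `breadth_first_search("A", lambda s: s == "D", lambda s: g[s])` yields `["A", "B", "D"]`.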
• Informed (Heuristic) Search Strategies
• An informed search strategy is one that uses problem-specific knowledge beyond the definition of the problem itself. It can find solutions more efficiently than an uninformed strategy.
• Best-first search
• Greedy best-first search
• A* search
• AO* search
Best-First Search:
• It always selects the path which appears best at that moment.
• It is a combination of DFS and BFS.
• It uses a heuristic function satisfying h(n) ≤ h*(n), where h(n) is the heuristic (estimated) cost and h*(n) is the actual cost to the goal.
• The greedy best-first algorithm is implemented with a priority queue.
• The node with the lowest evaluation is selected for expansion, because the evaluation measures the distance to the goal.
Advantages:
• Best-first search can switch between BFS and DFS, gaining the advantages of both algorithms.
• This algorithm is more efficient than the BFS and DFS algorithms.
Disadvantages:
• It can behave as an unguided depth-first search in the worst-case scenario.
• It can get stuck in a loop, like DFS.
• This algorithm is not optimal.
• Heuristic functions
• A heuristic function, or simply a heuristic, is a function that ranks alternatives in various search algorithms at each branching step, based on the available information, in order to decide which branch to follow during the search.
• The key component of the best-first search algorithm is a heuristic function, denoted by h(n): h(n) = estimated cost of the cheapest path from node n to a goal node.
• Heuristic functions are the most common form in which additional knowledge is imparted to the search algorithm.
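As a concrete example of h(n), the Manhattan-distance heuristic for the 8-puzzle estimates the cost to the goal by summing each tile's horizontal and vertical distance from its goal square. This is a standard admissible heuristic; the flat 9-tuple board encoding below is an illustrative assumption.

```python
def manhattan(state, goal):
    """h(n) for the 8-puzzle: sum over tiles of |row - goal_row| + |col - goal_col|.
    States are 9-tuples read row by row; 0 is the blank and is not counted."""
    goal_pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue                    # the blank contributes no cost
        r, c = divmod(i, 3)
        gr, gc = goal_pos[tile]
        total += abs(r - gr) + abs(c - gc)
    return total

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)
```

Since each move slides one tile one square, this never overestimates the true number of moves, i.e. h(n) ≤ h*(n).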
• Greedy Best-First Search
• Greedy best-first search tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly.
• It evaluates nodes using the heuristic function alone: f(n) = h(n).
• Using the straight-line distance heuristic hSLD, the goal state can be reached faster.
• Properties of greedy search:
o Complete? No – it can get stuck in loops; it is complete in a finite space with repeated-state checking.
o Time? O(b^m), but a good heuristic can give dramatic improvement.
o Space? O(b^m) – it keeps all nodes in memory.
o Optimal? No.
• Greedy best-first search is not optimal, and it is incomplete.
• The worst-case time and space complexity is O(b^m), where m is the maximum depth of the search space.
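The behaviour above can be sketched with a priority queue ordered by h(n) alone. The graph and heuristic values below are made-up illustrations; repeated-state checking is included, as the slide notes it is needed for completeness in finite spaces.

```python
import heapq

def greedy_best_first(start, goal, successors, h):
    """Expand the node with the lowest h(n) first; f(n) = h(n).
    Repeated-state checking avoids loops in finite spaces."""
    frontier = [(h(start), start, [start])]     # (h-value, state, path)
    visited = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt in successors(state):
            if nxt not in visited:
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None

# Illustrative graph with straight-line-style heuristic estimates
g = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
h = {"S": 5, "A": 2, "B": 4, "G": 0}
```

From S the search expands A (h = 2) before B (h = 4), reaching the goal via the node that merely *looks* closest, which is why greedy search is fast but not optimal in general.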
• A* Search:
• The A* search algorithm finds the shortest path through the search space using a heuristic function.
• It uses h(n) together with g(n), the cost to reach node n from the start state.
• This algorithm expands a smaller search tree and provides an optimal result faster.
• It is similar to Uniform Cost Search (UCS) except that it uses g(n) + h(n) instead of g(n) alone.
• A* uses the search heuristic as well as the cost to reach the node, so we combine both costs as:
• f(n) = g(n) + h(n) {fitness number}
• f(n) – estimated cost of the cheapest solution through n
• g(n) – cost to reach node n from the start state
• h(n) – estimated cost to reach the goal node from node n
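A minimal A* sketch, ordering the frontier by f(n) = g(n) + h(n). The weighted graph and heuristic values are illustrative; the heuristic is admissible, so the cheaper two-step route is preferred over the direct but costlier edge.

```python
import heapq

def a_star(start, goal, successors, h):
    """A*: expand the node with the lowest f(n) = g(n) + h(n), where g(n)
    is the cost so far and h(n) the estimated cost to the goal."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}                          # cheapest known cost to each state
    while frontier:
        f, g_cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g_cost
        for nxt, step in successors(state):
            g2 = g_cost + step
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Illustrative weighted graph: S->A->G costs 1+1=2; the direct edge S->G costs 5.
graph = {"S": [("A", 1), ("G", 5)], "A": [("G", 1)], "G": []}
hv = {"S": 2, "A": 1, "G": 0}                    # admissible: never overestimates
```

Here A* returns the path S, A, G with total cost 2, whereas greedy best-first search, looking only at h(n), has no such optimality guarantee.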
• Algorithm:
• Advantages:
• It performs better than other search algorithms.
• It is optimal and complete.
• It can solve very complex problems.
• Disadvantages:
• It does not always produce the shortest path, e.g., when the heuristic is not admissible.
• It is not practical for various large-scale problems.
• AO* Algorithm:
• AO* is the best algorithm for solving a cyclic AND-OR graph.
• The problem is divided into a set of sub-problems, where each sub-problem can be solved separately.
• State Space Landscape:
• A landscape has both a "location" (defined by the state) and an "elevation" (defined by the value of the heuristic cost function or objective function).
• If elevation corresponds to cost, then the aim is to find the lowest valley – a global minimum; if elevation corresponds to an objective function, then the aim is to find the highest peak – a global maximum.
• Local search algorithms explore this landscape. A complete local search algorithm always finds a goal if one exists; an optimal algorithm always finds a global minimum/maximum.
• Simple Hill Climbing:
• Algorithm:
• Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state.
• Loop until a solution is found or until there are no new operators left to be applied in the current state:
• Select an operator that has not yet been applied to the current state and apply it to produce a new state.
• Evaluate the new state.
• If it is a goal state, then return it and quit.
• If it is not a goal state but it is better than the current state, then make it the current state.
• If it is not better than the current state, then continue in the loop.
• Steepest-Ascent Hill Climbing:
• A variation of simple hill climbing that considers all the moves from the current state and selects the best one as the next state. This method is called steepest-ascent hill climbing.
• Algorithm:
• Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state.
• Loop until a solution is found or until a complete iteration produces no change to the current state:
• Let SUCC be a state such that any possible successor of the current state will be better than SUCC.
• For each operator that applies to the current state:
• Apply the operator and generate a new state.
• Evaluate the new state. If it is a goal state, then return it and quit. If not, compare it to SUCC. If it is better, then set SUCC to this state. If it is not better, leave SUCC alone.
• If SUCC is better than the current state, then set the current state to SUCC.
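The steepest-ascent loop above can be sketched compactly: examine every successor, move to the best one, and stop when no successor improves on the current state. The 1-D landscape f(x) = -(x - 3)^2 is a made-up illustration whose single peak is at x = 3.

```python
def steepest_ascent(state, value, neighbors):
    """Steepest-ascent hill climbing: consider all successors of the current
    state, move to the best, and stop when no successor is an improvement."""
    while True:
        succ = max(neighbors(state), key=value, default=None)   # best successor (SUCC)
        if succ is None or value(succ) <= value(state):
            return state        # local (here also global) maximum reached
        state = succ

# Illustrative landscape: maximize f(x) = -(x - 3)^2 over the integers.
f = lambda x: -(x - 3) ** 2
nbrs = lambda x: [x - 1, x + 1]
```

Starting from 0 or from 10, the climb converges to x = 3; on landscapes with several peaks the same loop would stop at whichever local maximum it reaches first, which is the method's known weakness.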
LOCAL SEARCH IN CONTINUOUS SPACES
• We have considered algorithms that work only in discrete environments, but real-world environments are continuous.
• Local search amounts to maximizing a continuous objective function in a multi-dimensional vector space. This is hard to do in general.
• We can immediately retreat:
o Discretize the space near each state
o Apply a discrete local search strategy (e.g., stochastic hill climbing, simulated annealing)
• The objective often resists a closed-form solution:
o Fake up an empirical gradient
o This amounts to greedy hill climbing in a discretized state space
• We can employ the Newton-Raphson method to find maxima.
• Continuous problems have similar difficulties: plateaus, ridges, local maxima, etc.
• Online Search Agents and Unknown Environments
• Online search problems:
o Offline search (all algorithms so far): compute a complete solution, ignoring the environment, then carry out the action sequence.
o Online search: interleave computation and action – Compute, Act, Observe, Compute, ...
• Online search is good for dynamic, semi-dynamic, and stochastic domains, and whenever offline search would yield exponentially many contingencies.
• Online search is necessary for exploration problems: states and actions are unknown to the agent, and the agent uses actions as experiments to determine what to do.
Examples: a robot exploring an unknown building; a classical hero escaping a labyrinth.
• Assume the agent knows:
o The actions available in state s
o The step-cost function c(s, a, s′)
o Whether it has visited a state s previously
o An admissible heuristic function h(s)
• Note that the agent doesn't know the outcome state (s′) for a given action (a) until it tries the action (and all actions from a state s).
• The competitive ratio compares the actual cost with the cost the agent would incur if it knew the search space.
• No agent can avoid dead ends in all state spaces. Robotics examples: staircase, ramp, cliff, terrain.
• Assume the state space is safely explorable – some goal state is always reachable.
Online Search Agents
• Interleaving planning and acting hamstrings offline search: A* expands arbitrary nodes without waiting for the outcome of an action, whereas an online algorithm can expand only the node it physically occupies.
• It is best to explore nodes in physically local order, which suggests depth-first search: the next node is always a child of the current one.
• When all actions have been tried, the agent can't just drop the state – it must physically backtrack.
• Online Depth-First Search: may have an arbitrarily bad competitive ratio (wandering past the goal); okay for exploration, bad for minimizing path cost.
• Online Iterative-Deepening Search: the competitive ratio stays small for a state space that is a uniform tree.
Online Local Search
• Hill climbing search also has physical locality in its node expansions and is, in fact, already an online search algorithm.
• Local maxima are problematic: we can't randomly transport the agent to a new state in an effort to escape a local maximum.
• Random walk as an alternative: select an action at random from the current state. It will eventually find a goal node in a finite space, but can be very slow, especially if "backward" steps are as common as "forward" ones.
• Hill climbing with memory instead of randomness: store the "current best estimate" of the cost to the goal at each visited state. The starting estimate is just h(s), and the estimate is augmented based on experience in the state space. This tends to "flatten out" local minima, allowing progress.
• Employ optimism under uncertainty: untried actions are assumed to have the least possible cost, which encourages exploration of untried paths.
Learning in Online Search
o Rampant ignorance is a ripe opportunity for learning: the agent learns a "map" of the environment – the outcome of each action in each state.
o Local search agents improve the accuracy of their evaluation function by updating the estimate of value at each visited state.
o We would like to infer a higher-level domain model; for example, "Up" in maze search increases the y-coordinate.
o This requires:
o A formal way to represent and manipulate such general rules (so far, the rules have been hidden within the successor function)
o Algorithms that can construct general rules based on observations of the effects of actions

Recommended

PPTX
search strategies in artificial intelligence
PPTX
Informed and Uninformed search Strategies
PDF
Unit3:Informed and Uninformed search
PPT
Solving problems by searching
PPT
AI Lecture 3 (solving problems by searching)
PPTX
Problem solving agents
PPT
AI Lecture 4 (informed search and exploration)
PPT
Problems, Problem spaces and Search
PPTX
Constraint satisfaction problems (csp)
PDF
I.BEST FIRST SEARCH IN AI
PPT
Heuristic Search Techniques Unit -II.ppt
PPTX
AI_Session 11: searching with Non-Deterministic Actions and partial observati...
PPTX
State space search
PPTX
A* Algorithm
PDF
Logic programming (1)
PDF
Heuristic Search in Artificial Intelligence | Heuristic Function in AI | Admi...
PPTX
Water jug problem ai part 6
PPTX
Knowledge representation and Predicate logic
PPTX
Lecture 06 production system
PPTX
Resolution method in AI.pptx
PPTX
Lecture 16 memory bounded search
PPTX
Minmax Algorithm In Artificial Intelligence slides
PPTX
Predicate logic
PPTX
Example of iterative deepening search &amp; bidirectional search
PPTX
AI: Logic in AI
PPTX
knowledge representation using rules
PPTX
Planning
PDF
I. AO* SEARCH ALGORITHM
PPTX
Problem solving method in Artificial intelligence.pptx
PDF
Chapter 3 - Searching and prPlanning.pdf

More Related Content

PPTX
search strategies in artificial intelligence
PPTX
Informed and Uninformed search Strategies
PDF
Unit3:Informed and Uninformed search
PPT
Solving problems by searching
PPT
AI Lecture 3 (solving problems by searching)
PPTX
Problem solving agents
PPT
AI Lecture 4 (informed search and exploration)
PPT
Problems, Problem spaces and Search
search strategies in artificial intelligence
Informed and Uninformed search Strategies
Unit3:Informed and Uninformed search
Solving problems by searching
AI Lecture 3 (solving problems by searching)
Problem solving agents
AI Lecture 4 (informed search and exploration)
Problems, Problem spaces and Search

What's hot

PPTX
Constraint satisfaction problems (csp)
PDF
I.BEST FIRST SEARCH IN AI
PPT
Heuristic Search Techniques Unit -II.ppt
PPTX
AI_Session 11: searching with Non-Deterministic Actions and partial observati...
PPTX
State space search
PPTX
A* Algorithm
PDF
Logic programming (1)
PDF
Heuristic Search in Artificial Intelligence | Heuristic Function in AI | Admi...
PPTX
Water jug problem ai part 6
PPTX
Knowledge representation and Predicate logic
PPTX
Lecture 06 production system
PPTX
Resolution method in AI.pptx
PPTX
Lecture 16 memory bounded search
PPTX
Minmax Algorithm In Artificial Intelligence slides
PPTX
Predicate logic
PPTX
Example of iterative deepening search &amp; bidirectional search
PPTX
AI: Logic in AI
PPTX
knowledge representation using rules
PPTX
Planning
PDF
I. AO* SEARCH ALGORITHM
Constraint satisfaction problems (csp)
I.BEST FIRST SEARCH IN AI
Heuristic Search Techniques Unit -II.ppt
AI_Session 11: searching with Non-Deterministic Actions and partial observati...
State space search
A* Algorithm
Logic programming (1)
Heuristic Search in Artificial Intelligence | Heuristic Function in AI | Admi...
Water jug problem ai part 6
Knowledge representation and Predicate logic
Lecture 06 production system
Resolution method in AI.pptx
Lecture 16 memory bounded search
Minmax Algorithm In Artificial Intelligence slides
Predicate logic
Example of iterative deepening search &amp; bidirectional search
AI: Logic in AI
knowledge representation using rules
Planning
I. AO* SEARCH ALGORITHM

Similar to Problem solving in Artificial Intelligence.pptx

PPTX
Problem solving method in Artificial intelligence.pptx
PDF
Chapter 3 - Searching and prPlanning.pdf
PPTX
State space search and Problem Solving techniques
PPTX
Unit-2-search techniques in artificial intelligence
PPTX
Popular search algorithms
PPT
CH2_AI_Lecture1.ppt
PPTX
UNIT 2-FULL.pptxLearning (e.g., machine learning) Reasoning (solving problem...
PPTX
UNIT 2-FULL.pptxLearning (e.g., machine learning) Reasoning (solving problem...
PPTX
Learning (e.g., machine learning) Reasoning (solving problems, making decisi...
PPTX
PROBLEM SOLVING AGENTS - SEARCH STRATEGIES
PPTX
Moduleanaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaad-II.pptx
PDF
AI Chapter III for Computer Science Students
PPTX
Artificial intelligence(04)
PPTX
AI_Lecture2.pptx
PPT
22sch AI Module 2.ppt 22sch AI Module 2.ppt
PPT
22sch AI Module 2.ppt 22sch AI Module 2.ppt
PPTX
3. ArtificialSolving problems by searching.pptx
PPT
02-solving-problems-by-searching-(us).ppt
PPT
3.AILec5nkjnkjnkjnkjnkjnjhuhgvkjhbkhj-6.ppt
PDF
Lecture 3 problem solving
Problem solving method in Artificial intelligence.pptx
Chapter 3 - Searching and prPlanning.pdf
State space search and Problem Solving techniques
Unit-2-search techniques in artificial intelligence
Popular search algorithms
CH2_AI_Lecture1.ppt
UNIT 2-FULL.pptxLearning (e.g., machine learning) Reasoning (solving problem...
UNIT 2-FULL.pptxLearning (e.g., machine learning) Reasoning (solving problem...
Learning (e.g., machine learning) Reasoning (solving problems, making decisi...
PROBLEM SOLVING AGENTS - SEARCH STRATEGIES
Moduleanaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaad-II.pptx
AI Chapter III for Computer Science Students
Artificial intelligence(04)
AI_Lecture2.pptx
22sch AI Module 2.ppt 22sch AI Module 2.ppt
22sch AI Module 2.ppt 22sch AI Module 2.ppt
3. ArtificialSolving problems by searching.pptx
02-solving-problems-by-searching-(us).ppt
3.AILec5nkjnkjnkjnkjnkjnjhuhgvkjhbkhj-6.ppt
Lecture 3 problem solving

Recently uploaded

PDF
Introduction to MySQL Spatial Features and Real-World Use Cases
PPTX
Waste to Energy - G2 Ethanol.pptx to process
PDF
Small Space Big Design - Amar DeXign Scape
PPTX
Computer engineering for collage studen. pptx
PPTX
CEC369 IoT P CEC369 IoT P CEC369 IoT PCEC369 IoT PCEC369 IoT P
PPTX
Presentation 1.pptx WHAT IS ARTIFICIAL INTELLIGENCE?
PDF
OOPCodesjavapracticalkabirpawarpptinparacticalexamination
PPTX
Ship Repair and fault diagnosis and restoration of system back to normal .pptx
PDF
Why Buildings Crumble Before Their Time And How We Can Build a Legacy
PDF
Advancements in Telecommunication for Disaster Management (www.kiu.ac.ug)
PDF
ANPARA THERMAL POWER STATION[1] sangam.pdf
PPT
399-Cathodic-Protection-Presentation.ppt
 
PPTX
AI at the Crossroads_ Transforming the Future of Green Technology.pptx
PPTX
Washing-Machine-Simulation-using-PICSimLab.pptx
PPTX
DevFest Seattle 2025 - AI Native Design Patterns.pptx
PPTX
Lead-acid battery.pptx.........................
PDF
PRIZ Academy - Thinking The Skill Everyone Forgot
PDF
Reinforced Earth Walls Notes .pdf
PDF
Welcome to ISPR 2026 - 12th International Conference on Image and Signal Pro...
PPTX
Mc25104 - data structures and algorithms using PYTHON OOP_Python_Lecture_Note...
Introduction to MySQL Spatial Features and Real-World Use Cases
Waste to Energy - G2 Ethanol.pptx to process
Small Space Big Design - Amar DeXign Scape
Computer engineering for collage studen. pptx
CEC369 IoT P CEC369 IoT P CEC369 IoT PCEC369 IoT PCEC369 IoT P
Presentation 1.pptx WHAT IS ARTIFICIAL INTELLIGENCE?
OOPCodesjavapracticalkabirpawarpptinparacticalexamination
Ship Repair and fault diagnosis and restoration of system back to normal .pptx
Why Buildings Crumble Before Their Time And How We Can Build a Legacy
Advancements in Telecommunication for Disaster Management (www.kiu.ac.ug)
ANPARA THERMAL POWER STATION[1] sangam.pdf
399-Cathodic-Protection-Presentation.ppt
 
AI at the Crossroads_ Transforming the Future of Green Technology.pptx
Washing-Machine-Simulation-using-PICSimLab.pptx
DevFest Seattle 2025 - AI Native Design Patterns.pptx
Lead-acid battery.pptx.........................
PRIZ Academy - Thinking The Skill Everyone Forgot
Reinforced Earth Walls Notes .pdf
Welcome to ISPR 2026 - 12th International Conference on Image and Signal Pro...
Mc25104 - data structures and algorithms using PYTHON OOP_Python_Lecture_Note...

Problem solving in Artificial Intelligence.pptx

  • 1.
    Problem-Solving Strategies inArtificialIntelligence1. Problem solving Methods2. Search Strategies3. Uninformed – Informed4. Heuristics5. Local Search Algorithms and Optimization ProblemsDr.J.SENTHILKUMARAssistant ProfessorDepartment of Computer Science and EngineeringKIT-KALAIGNARKARUNANIDHI INSTITUTE OF TECHNOLOGY
  • 2.
    Problem Solving Methods•The method of solving problem through AI involves theprocess of defining the search space, deciding start and goalstates then finding the path from state to goal state throughsearch space.• The movement from start state to goal state is guided by set ofrules specifically designed for that particular problem.
  • 3.
    • Problem• Itis the question which is to solved. For solving a problem it needs to beprecisely defined.• The definition means, defining the start state, goal state, other valid states andtransitions.• Finding the Solution• After representation of the problem and related knowledge in the suitable format,the appropriate methodology is chosen which uses the knowledge andtransforms the start state to goal state.• The Techniques of finding the solution are called search techniques.• Various search techniques are developed for this purpose.
  • 4.
    Representation of AIProblem• AI problem can be covered in following four parts:• A Lexical part: that determines which symbols are allowed in the representation of theproblem. Like the normal meaning of the lexicon, this part abstracts all fundamentalfeatures of the problem.• A Structural part: that describes constraints on how the symbols can be arranged. Thiscorresponds to finding out possibilities required for joining these symbols andgenerating higher structural unit.• A Procedural Part: that specifies access procedure that enables to create descriptions,to modify them and to answer questions using them.• A Semantic Part: That establishes a way of associating meaning with the descriptions.
  • 5.
    WELL-DEFINED PROBLEMS ANDSOLUTIONS• A problem can be defined formally by five components:• INITIAL STATE • The initial state that the agent starts in.• ACTION : A description of the possible actions available to the agent.• A description of what each action does; the formal name for this is the TRANSITION MODEL, specified by a functionRESULT(s, a) that returns the state that results from SUCCESSOR doing action a in state s. We also use the term successor torefer to any state reachable from a given state by a single action.• Together, the initial state, actions, and transition model implicitly define the STATE SPACE of the problem—the set of all states reachable from the initial state byany sequence of actions. The state space forms a directed network or GRAPH in which the nodes are states and the links between nodes are actions. A PATH in thestate space is a sequence of states connected by a sequence of actions.• The GOAL TEST, which determines whether a given state is a goal state. Sometimes there is an explicit set of possible goalstates, and the test simply checks whether the given state is one of them.• A PATH COST function that assigns a numeric cost to each path. The problem-solving agent chooses a cost function that reflectsits own performance measure.
  • 7.
    Formulating problems• Thisformulation seems reasonable, but it is still a model—an abstract mathematicaldescription—and not the real thing.• The different amount of knowledge that an agent can have concerning its action and thestate that it is in.• This depends on how the agent is connected to its environment through its precepts andactions.• We find that there are four essentially different types of problem• Single state problems• Multiple-state problems• Contingency problems• Exploration problems.
  • 8.
    • Toy problems•This can be formulated as a problem as follows:States: The state is determined by both the agent location and the dirt locations. The agent is in one oftwo locations, each of which might or might not contain dirt. Thus, there are 2 × 2^2= 8 possible worldstates. A larger environment with n locations has n · 2n states.• Initial state: Any state can be designated as the initial state.• Actions: In this simple environment, each state has just three actions: Left, Right, and Suck. Largerenvironments might also include Up and Down.• Transition model: The actions have their expected effects, except that moving Left in the leftmostsquare, moving Right in the rightmost square, and Sucking in a clean square have no effect.• Goal test: This checks whether all the squares are clean.• Path cost: Each step costs 1, so the path cost is the number of steps in the path
  • 9.
    The 8-puzzle:• States:A state description specifies the location of each of the eight tiles and the blankin one of the nine squares.• Initial state: Any state can be designated as the initial state. Note that any given goalcan be reached from exactly half of the possible initial states.• Actions: The simplest formulation defines the actions as movements of the blank spaceLeft, Right, Up, or Down. Different subsets of these are possible depending on wherethe blank is.• Transition model: Given a state and action, this returns the resulting state; for example,if we apply Left to the start state in Figure 3.4, the resulting state has the 5 and the blankswitched.• Goal test: This checks whether the state matches the goal configuration shown in Figure3.4. (Other goal configurations are possible.)• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
  • 10.
    Some more Exampleas AI Problems:• Tic-Tac-Toe• 8-Queen Problem• Chess Problem• Tower of Hanoi• Traveling salesperson problem• Monkey and Banana Problem• Cryptarithmetic problem• Block World Problem
Search:
• The process of finding a path from the start state to the desired goal state through the space of possible states. The major work in the field of search is to find the right search strategy for a particular problem.
• There are two kinds of search, based on whether they use information about the goal:
• Uninformed search algorithms are given no information about the problem other than its definition. Although some of these algorithms can solve any solvable problem, none of them can do so efficiently.
• Informed search algorithms, on the other hand, can do quite well given some guidance on where to look for solutions.
• Search algorithms form the core of such Artificial Intelligence programs. And while we may be inclined to think that this has limited applicability only in areas of gaming and puzzle-solving, such algorithms are in fact used in many more AI areas like route and cost optimization, action planning, knowledge mining, robotics, autonomous driving, computational biology, software and hardware verification, theorem proving, etc.
• In a way, many AI problems can be modelled as search problems where the task is to reach the goal from the initial state via state transformation rules. So the search space is defined as a graph (or a tree) and the aim is to reach the goal from the initial state via the shortest path, in terms of cost, length, a combination of both, etc.
• All search methods can be broadly classified into two categories:
1. Uninformed (or Exhaustive or Blind) methods, where the search is carried out without any additional information beyond that provided in the problem statement. Some examples include Breadth-First Search, Depth-First Search, etc.
2. Informed (or Heuristic) methods, where the search is carried out by using additional information to determine the next step towards finding the solution. Best-First Search is an example of such algorithms.
• Informed search methods are more efficient, lower in cost and higher in performance as compared to uninformed search methods.
UNINFORMED SEARCH STRATEGIES:
• Uninformed search is also called Blind Search. The term means that the strategies have no additional information about states beyond that provided in the problem definition.
• All they can do is generate successors and distinguish a goal state from a non-goal state.
• All search strategies are distinguished by the order in which nodes are expanded.
• Strategies that know whether one non-goal state is "more promising" than another are called informed search or heuristic search strategies.
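As a concrete example of a blind strategy, the following is a minimal breadth-first search sketch: it generates successors in FIFO order and only tests whether a node is the goal. The graph data and function names are illustrative assumptions.

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Breadth-first search: expand nodes in FIFO order.

    Returns a path with the fewest steps from start to goal, or None.
    The only problem knowledge used is the successor map and the goal test.
    """
    frontier = deque([[start]])   # queue of partial paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:          # goal test: all BFS can do is recognise the goal
            return path
        for n in neighbors.get(node, []):
            if n not in explored:
                explored.add(n)
                frontier.append(path + [n])
    return None

# Illustrative state graph (not from the slides).
graph = {"S": ["A", "B"], "A": ["G"], "B": ["A", "G"]}
```

Swapping the queue for a stack (LIFO) would turn this into depth-first search, changing only the order of node expansion.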
• Informed (Heuristic) Search Strategies
• An informed search strategy is one that uses problem-specific knowledge beyond the definition of the problem itself. It can find solutions more efficiently than an uninformed strategy.
• Best-first search
• Greedy best-first search
• A* search
• AO* search
Best-First Search:
• It always selects the path which appears best at that moment.
• It is a combination of DFS and BFS.
• It uses a heuristic function with h(n) ≤ h*(n), where h(n) is the estimated heuristic cost from node n to the goal and h*(n) is the actual cheapest cost from n to the goal.
• The best-first algorithm is implemented with a priority queue.
• The node with the lowest evaluation is selected for expansion, because the evaluation measures the distance to the goal.
Advantages:
• Best-first search can switch between BFS and DFS, gaining the advantages of both algorithms.
• This algorithm is more efficient than the BFS and DFS algorithms.
Disadvantages:
• It can behave as an unguided depth-first search in the worst-case scenario.
• It can get stuck in a loop, like DFS.
• This algorithm is not optimal.
• Heuristic functions
• A heuristic function, or simply a heuristic, is a function that ranks alternatives in various search algorithms at each branching step, based on the available information, in order to decide which branch to follow during a search.
• The key component of the best-first search algorithm is a heuristic function, denoted by h(n):
h(n) = estimated cost of the cheapest path from node n to a goal node.
• Heuristic functions are the most common form in which additional knowledge is imparted to the search algorithm.
• Greedy Best-First Search
• Greedy best-first search tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly.
• It evaluates the nodes by using the heuristic function f(n) = h(n).
• Using the straight-line distance heuristic hSLD, the goal state can be reached faster.
• Properties of greedy search:
o Complete? No: it can get stuck in loops. It is complete in a finite space with repeated-state checking.
o Time? O(b^m), but a good heuristic can give dramatic improvement.
o Space? O(b^m): it keeps all nodes in memory.
o Optimal? No.
• Greedy best-first search is not optimal, and it is incomplete.
• The worst-case time and space complexity is O(b^m), where m is the maximum depth of the search space.
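Greedy best-first search can be sketched with a priority queue ordered by h(n) alone. The graph and heuristic values below are illustrative assumptions, standing in for straight-line distances.

```python
import heapq

def greedy_best_first(start, goal, neighbors, h):
    """Expand the frontier node with the lowest h(n); f(n) = h(n)."""
    frontier = [(h(start), start, [start])]   # priority queue keyed on h(n)
    explored = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)               # repeated-state checking
        for n in neighbors.get(node, []):
            if n not in explored:
                heapq.heappush(frontier, (h(n), n, path + [n]))
    return None

# Illustrative graph and heuristic values (not from the slides).
graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"]}
h_values = {"S": 5, "A": 1, "B": 3, "G": 0}
```

Note that the path cost so far plays no role here: the queue is ordered purely by the estimate h(n), which is why the result can be fast but need not be optimal.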
• A* Search:
• The A* search algorithm finds the shortest path through the search space using a heuristic function.
• It uses h(n) together with g(n), the cost to reach node n from the start state.
• This algorithm expands a smaller search tree and provides an optimal result faster.
• It is similar to Uniform Cost Search (UCS) except that it uses g(n) + h(n) instead of g(n).
• A* uses the search heuristic as well as the cost to reach the node, so we combine both costs as:
• f(n) = g(n) + h(n) (the fitness number)
• f(n): estimated cost of the cheapest solution through n
• g(n): cost to reach node n from the start state
• h(n): estimated cost to reach the goal node from node n
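A minimal A* sketch, ordering the frontier by f(n) = g(n) + h(n). The weighted graph and heuristic values are illustrative assumptions; the h values chosen here never overestimate the true remaining cost, as admissibility requires.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search. `neighbors` maps a node to (successor, step_cost) pairs.

    Returns (cost, path) for a cheapest path, or None if the goal is unreachable.
    """
    frontier = [(h(start), 0, start, [start])]   # entries: (f, g, node, path)
    best_g = {start: 0}                          # cheapest known g(n) per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for n, cost in neighbors.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(n, float("inf")):   # found a cheaper route to n
                best_g[n] = g2
                heapq.heappush(frontier, (g2 + h(n), g2, n, path + [n]))
    return None

# Illustrative weighted graph and admissible heuristic (not from the slides).
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 12)], "B": [("G", 3)]}
h_values = {"S": 6, "A": 5, "B": 2, "G": 0}
```

With h(n) = 0 everywhere this degenerates into Uniform Cost Search, which is exactly the relationship described above.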
• Advantages:
• It performs better than other search algorithms.
• It is optimal and complete.
• It can solve very complex problems.
• Disadvantages:
• It does not always produce the shortest path, as it relies on heuristic estimates.
• It is not practical for various large-scale problems.
• AO* Algorithm:
• AO* is the best algorithm for solving problems that can be represented as an AND-OR graph.
• The problem is divided into a set of sub-problems, where each sub-problem can be solved separately.
• State Space Landscape:
• A landscape has both "location" (defined by the state) and "elevation" (defined by the value of the heuristic cost function or objective function).
• If elevation corresponds to cost, then the aim is to find the lowest valley, a global minimum; if elevation corresponds to an objective function, then the aim is to find the highest peak, a global maximum.
• Local search algorithms explore this landscape. A complete local search algorithm always finds a goal if one exists; an optimal algorithm always finds a global minimum/maximum.
• Simple Hill Climbing:
• Algorithm:
• Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state.
• Loop until a solution is found or until there are no new operators left to be applied in the current state:
• Select an operator that has not yet been applied to the current state and apply it to produce a new state.
• Evaluate the new state.
• If it is a goal state, then return it and quit.
• If it is not a goal state but it is better than the current state, then make it the current state.
• If it is not better than the current state, then continue in the loop.
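The steps above can be sketched as follows, maximising an objective function. The successor and value functions are illustrative assumptions; the key property of the simple variant is that the first better successor found is taken immediately.

```python
def simple_hill_climb(state, successors, value):
    """Simple hill climbing: move to the FIRST successor better than the
    current state; stop when no operator yields an improvement."""
    while True:
        improved = False
        for s in successors(state):
            if value(s) > value(state):
                state = s          # take the first improving move immediately
                improved = True
                break
        if not improved:
            return state           # local maximum (or goal)
```

For example, maximising the illustrative objective value(x) = -(x - 3)^2 over the integers with successors x + 1 and x - 1 climbs from 0 up to the peak at 3 and stops there.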
• Steepest-Ascent Hill Climbing:
• A variation on simple hill climbing considers all the moves from the current state and selects the best one as the next state. This method is called steepest-ascent hill climbing.
• Algorithm:
• Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state.
• Loop until a solution is found or until a complete iteration produces no change to the current state:
• Let SUCC be a state such that any possible successor of the current state will be better than SUCC.
• For each operator that applies to the current state:
• Apply the operator and generate a new state.
• Evaluate the new state. If it is a goal state, then return it and quit. If not, compare it to SUCC. If it is better, then set SUCC to this state. If it is not better, leave SUCC alone.
• If SUCC is better than the current state, then set the current state to SUCC.
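The steepest-ascent variant can be sketched the same way, but examining all successors before moving; SUCC is simply the best successor found in the iteration. The successor and value functions are again illustrative.

```python
def steepest_ascent(state, successors, value):
    """Steepest-ascent hill climbing: examine ALL successors, move to the
    best one (SUCC), and stop when no successor beats the current state."""
    while True:
        # SUCC = best of all successors; `default` covers a state with none.
        succ = max(successors(state), key=value, default=state)
        if value(succ) <= value(state):
            return state           # complete iteration produced no change
        state = succ
```

On the same illustrative objective value(x) = -(x - 3)^2 this reaches the same peak at 3, but each step compares both neighbours rather than taking the first improvement.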
LOCAL SEARCH IN CONTINUOUS SPACES
 We have considered algorithms that work only in discrete environments, but real-world environments are continuous.
 Local search amounts to maximizing a continuous objective function in a multi-dimensional vector space. This is hard to do in general.
 Can immediately retreat:
o Discretize the space near each state
o Apply a discrete local search strategy (e.g., stochastic hill climbing, simulated annealing)
 Often resists a closed-form solution:
o Fake up an empirical gradient
o Amounts to greedy hill climbing in discretized state space
 Can employ the Newton-Raphson method to find maxima.
 Continuous problems have similar problems: plateaus, ridges, local maxima, etc.
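The "fake up an empirical gradient" idea can be sketched with finite differences: estimate each partial derivative by perturbing one coordinate, then take a small greedy step uphill. The objective, step size, and iteration budget below are illustrative assumptions.

```python
def empirical_gradient_ascent(f, x, step=0.1, eps=1e-6, iters=1000):
    """Greedy ascent on a continuous objective f using an empirical gradient.

    The gradient is approximated with one-sided finite differences, so no
    closed-form derivative of f is needed.
    """
    x = list(x)
    for _ in range(iters):
        fx = f(x)
        grad = []
        for i in range(len(x)):
            x_hi = x[:]
            x_hi[i] += eps                     # perturb one coordinate
            grad.append((f(x_hi) - fx) / eps)  # empirical partial derivative
        x = [xi + step * g for xi, g in zip(x, grad)]
    return x
```

On an illustrative concave objective f(x, y) = -((x - 1)^2 + (y + 2)^2) this climbs toward the global maximum at (1, -2); on a multi-modal objective it would exhibit exactly the plateau and local-maximum problems noted above.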
• Online Search Agents and Unknown Environments
• Online search problems
 Offline search (all algorithms so far): compute a complete solution, ignoring the environment, then carry out the action sequence.
 Online search: interleave computation and action (Compute, Act, Observe, Compute, ...).
 Online search is good for dynamic, semi-dynamic, and stochastic domains, and whenever offline search would yield exponentially many contingencies.
 Online search is necessary for exploration problems: states and actions are unknown to the agent, and the agent uses actions as experiments to determine what to do.
Examples: a robot exploring an unknown building; a classical hero escaping a labyrinth.
 Assume the agent knows:
o The actions available in state s
o The step-cost function c(s, a, s′)
o Whether state s is a goal state
o Whether it has visited state s previously
o An admissible heuristic function h(s)
 Note that the agent doesn't know the outcome state s′ for a given action a until it tries the action (and it must try all actions from a state s).
 The competitive ratio compares the actual cost with the cost the agent would incur if it knew the search space.
 No agent can avoid dead ends in all state spaces. Robotics examples: staircase, ramp, cliff, terrain.
 Assume the state space is safely explorable: some goal state is always reachable.
Online Search Agents
 Interleaving planning and acting hamstrings offline search: A* expands arbitrary nodes without waiting for the outcome of an action, while an online algorithm can expand only the node it physically occupies.
 It is best to explore nodes in physically local order, which suggests using depth-first search: the next node expanded is always a child of the current one.
 When all actions in a state have been tried, the agent can't just drop the state; it must physically backtrack.
 Online depth-first search may have an arbitrarily bad competitive ratio (wandering past the goal); this is okay for exploration but bad for minimizing path cost.
 Online iterative-deepening search: the competitive ratio stays small for a state space that is a uniform tree.
Online Local Search
 Hill climbing search also has physical locality in its node expansions; it is, in fact, already an online search algorithm.
 Local maxima are problematic: we can't randomly transport the agent to a new state to escape a local maximum.
 Random walk as an alternative:
o Select an action at random from the current state.
o It will eventually find a goal node in a finite space.
o It can be very slow, especially if "backward" steps are as common as "forward" ones.
 Hill climbing with memory instead of randomness:
o Store a "current best estimate" of the cost to the goal at each visited state; the starting estimate is just h(s).
o Augment the estimate based on experience in the state space.
o This tends to "flatten out" local minima, allowing progress.
 Employ optimism under uncertainty:
o Untried actions are assumed to have the least possible cost.
o This encourages exploration of untried paths.
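The random-walk alternative can be sketched over a tiny illustrative state space: select an action at random in the current state until a goal is reached. In a finite, safely explorable space this terminates, though possibly very slowly; the corridor graph and seeded generator below are assumptions for reproducibility.

```python
import random

def random_walk(start, is_goal, neighbors, rng=None):
    """Random-walk search: repeatedly pick a random available action."""
    rng = rng or random.Random(0)   # fixed seed so the walk is reproducible
    state, steps = start, 0
    while not is_goal(state):
        state = rng.choice(neighbors[state])   # random action selection
        steps += 1
    return state, steps

# Illustrative corridor: S - A - G. "Backward" steps from A are as likely
# as "forward" ones, which is what makes the walk potentially slow.
corridor = {"S": ["A"], "A": ["S", "G"], "G": ["A"]}
```

The hill-climbing-with-memory idea mentioned above (storing and updating a cost estimate per visited state) is what removes this slowness while keeping the online, physically local character of the search.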
Learning in Online Search
o Rampant ignorance is a ripe opportunity for learning. The agent learns a "map" of the environment: the outcome of each action in each state.
o Local search agents improve the accuracy of their evaluation function by updating the estimate of the value at each visited state.
o We would like to infer a higher-level domain model; for example, "Up" in maze search increases the y-coordinate. This requires:
o A formal way to represent and manipulate such general rules (so far, the rules have been hidden within the successor function)
o Algorithms that can construct general rules based on observations of the effects of actions
