
Name : M Zain ul Hassan

Roll No: Fall-2022-BSCS-120

Section: E

Comparative Case Study: Classic vs Modern AI Systems


Part 1: Case Analysis of Classic AI Systems

1. General Problem Solver (GPS)


Architecture and How It Worked:

 Developed by Newell and Simon (1957), GPS used means-ends analysis to break down
goals into subgoals.

 It operated with a problem space, rules, and strategies, representing problems as formal
symbol structures.

 Search-based strategy, simulating human-like reasoning.

Problem Domain:

 Generic problem-solving, logic puzzles, theorem proving.

Use Case:

 Solving logic problems like the Tower of Hanoi or geometric proof derivation.

Limitations:

 Domain dependence: needed problems in a specific formal format.


 Brittleness: couldn’t handle ambiguity or real-world complexity.

 No learning capability—static rule sets.
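
To make the means-ends idea concrete, here is a minimal Python sketch in the spirit of GPS: it picks an operator whose effects cover a missing goal, recursively achieves that operator's preconditions, applies it, and re-checks the goals. The facts, operators, and the achieve() function are illustrative inventions, not Newell and Simon's original IPL implementation.

# Minimal means-ends analysis in the spirit of GPS (illustrative only).
# A state is a set of facts; an operator has preconditions, additions and deletions.
OPERATORS = [
    {"name": "take-bus",   "pre": {"have-ticket"}, "add": {"at-work"},     "del": {"at-home"}},
    {"name": "buy-ticket", "pre": {"have-money"},  "add": {"have-ticket"}, "del": {"have-money"}},
]

def achieve(state, goals, depth=0):
    """Reduce the difference between the current state and the goals; return (state, plan)."""
    missing = goals - state
    if not missing:
        return state, []                       # every goal already holds
    if depth > 10:
        return None                            # static rules, fixed cut-off: the brittleness noted above
    goal = next(iter(missing))
    for op in OPERATORS:
        if goal in op["add"]:                  # an operator whose effects include the missing goal
            sub = achieve(state, op["pre"], depth + 1)        # first achieve its preconditions
            if sub is None:
                continue
            mid_state, plan = sub
            new_state = (mid_state - op["del"]) | op["add"]   # apply the operator
            rest = achieve(new_state, goals, depth + 1)       # then re-check the remaining goals
            if rest is not None:
                final_state, more = rest
                return final_state, plan + [op["name"]] + more
    return None

print(achieve({"at-home", "have-money"}, {"at-work"}))
# plan ['buy-ticket', 'take-bus'], final state {'have-ticket', 'at-work'}

The hard-coded operator table and fixed recursion limit mirror the domain dependence and brittleness noted above.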

2. Eliza
Architecture and How It Worked:

 Created by Joseph Weizenbaum (1966), based on simple pattern-matching and keyword substitution.

 Used scripts like DOCTOR to simulate a Rogerian therapist.

Problem Domain:

 Natural Language Processing (basic conversational simulation).

Use Case:

 Simulated conversation with humans; e.g., a person discussing personal concerns.

Limitations:

 No real understanding of language or context.

 Lacked memory, reasoning, or actual empathy.

 Fragile to unexpected input.
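
A minimal Python sketch of Eliza-style processing follows: keyword patterns trigger canned response templates, and a small reflection table swaps pronouns before echoing the user's words back. The rules shown are illustrative stand-ins, not Weizenbaum's actual DOCTOR script.

import re

# Illustrative Eliza-style rules: keyword pattern -> response template.
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE),   "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.*)", re.IGNORECASE),     "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

# Swap first and second person so the echoed fragment reads naturally.
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you", "you": "I"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(reflect(match.group(1)))
    return DEFAULT   # no keyword matched: fall back to a generic prompt

print(respond("I am worried about my exams"))  # Why do you say you are worried about your exams?
print(respond("The weather is nice"))          # Please go on.

The generic fallback for unmatched input is exactly where the fragility shows: anything outside the script collapses to "Please go on."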

3. Student
Architecture and How It Worked:

 Built by Daniel Bobrow (1964) as a natural language understanding system.

 Translated algebra word problems into formal equations using parsing and symbolic
matching.

Problem Domain:

 Natural language to algebra translation.

Use Case:

 Could solve math word problems like: “If John has three times as many apples as Tom...”

Limitations:

 Limited vocabulary and problem types.

 No semantic understanding beyond fixed patterns.
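
A minimal sketch of the Student idea, assuming the open-source SymPy library as the symbolic back end: two hand-written sentence patterns are translated into equations and solved. The patterns and number words are illustrative; Bobrow's original Lisp system was more elaborate but similarly pattern-bound.

import re
from sympy import Eq, solve, symbols

j, t = symbols("j t")   # unknowns: John's and Tom's apples

def parse(sentences):
    """Translate a narrow class of word-problem sentences into SymPy equations."""
    numbers = {"two": 2, "three": 3, "four": 4}   # tiny vocabulary, like Student's fixed patterns
    equations = []
    for s in sentences:
        m = re.search(r"John has (\w+) times as many apples as Tom", s)
        if m:
            equations.append(Eq(j, numbers[m.group(1)] * t))
        m = re.search(r"together they have (\d+) apples", s)
        if m:
            equations.append(Eq(j + t, int(m.group(1))))
    return equations

problem = ["If John has three times as many apples as Tom", "together they have 12 apples"]
print(solve(parse(problem), [j, t]))   # {j: 9, t: 3}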

4. Macsyma
Architecture and How It Worked:

 Developed at MIT in the late 1960s as a symbolic math manipulation system.

 Used rules for algebraic simplification, integration, differentiation, equation solving.

Problem Domain:

 Symbolic mathematics.

Use Case:

 Performing symbolic calculus operations (e.g., ∫x² dx) or algebraic factorization.

Limitations:

 Required manual input of expressions in specific syntax.

 No natural language input or adaptive behavior.
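
The same kind of rule-driven symbolic manipulation lives on in open-source systems; the short SymPy sketch below reproduces the operations listed above (SymPy is a modern stand-in here, not Macsyma itself).

from sympy import diff, factor, integrate, sin, solve, symbols

x = symbols("x")

print(integrate(x**2, x))        # x**3/3               (the integral example above)
print(diff(x * sin(x), x))       # x*cos(x) + sin(x)    (symbolic differentiation)
print(factor(x**2 - 5*x + 6))    # (x - 2)*(x - 3)      (algebraic factorization)
print(solve(x**2 - 4, x))        # [-2, 2]              (equation solving)

As with Macsyma, expressions must be entered in exact syntax; there is no natural language front end.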

Classic AI Systems and Their Modern Equivalents

Classic AI: GPS
Modern Equivalent: GPT-4 with ReAct / LLaMA with Toolformer
Key Improvements: Can plan multi-step tasks using contextual awareness and external tools; handles more complex domains.
Persistent Challenges: Still struggles with long-term planning and hallucination in intermediate steps.

Classic AI: Eliza
Modern Equivalent: ChatGPT / Google Bard
Key Improvements: Deep understanding of context, memory, emotion simulation, vast training data.
Persistent Challenges: Sometimes lacks true understanding; can still produce misleading responses.

Classic AI: Student
Modern Equivalent: Wolfram Alpha / Symbolab
Key Improvements: Interprets and solves a wide variety of math problems; understands natural language queries.
Persistent Challenges: Struggles with ambiguous input or creative mathematical reasoning.

Classic AI: Macsyma
Modern Equivalent: Mathematica / Maple
Key Improvements: High-precision symbolic math, automated theorem proving, natural language interface.
Persistent Challenges: Complexity and interpretability of output can still be a hurdle for non-experts.

In-Depth Comparisons

GPS vs. GPT-4 with ReAct


 Architecture Shift: From symbolic search-based to deep learning with attention and
reasoning chains (ReAct).

 Problem Scope: GPT-4 handles code generation, creative writing, real-world planning vs.
GPS’s symbolic puzzles.

 Lesson Retained: Structured problem solving via decomposition is still used in LLM
planning tools.
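
A sketch of the ReAct pattern referred to above: the model alternates a reasoning step, a tool call, and an observation until it can answer. The scripted call_llm(), the "Action: tool[argument]" convention, and the toy calculator tool are hypothetical placeholders, not any specific vendor API.

import re

# Scripted "model" replies so the loop runs end to end; a real system would call an LLM here.
SCRIPTED_REPLIES = iter([
    "Thought: I need to compute 17 * 23.\nAction: calculator[17 * 23]",
    "Thought: The observation gives the product.\nFinal Answer: 391",
])

def call_llm(prompt):
    return next(SCRIPTED_REPLIES)   # hypothetical placeholder for a real model call

TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),   # toy tool, not safe for real input
}

def react(question, max_steps=5):
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        output = call_llm(transcript)                        # the model proposes a thought and maybe an action
        transcript += output + "\n"
        action = re.search(r"Action: (\w+)\[(.*?)\]", output)
        if action is None:                                   # no tool call, so treat this as the final answer
            return output
        name, argument = action.groups()
        observation = TOOLS[name](argument)                  # run the external tool
        transcript += f"Observation: {observation}\n"        # feed the result back into the context
    return "No answer within the step limit"

print(react("What is 17 * 23?"))   # ends with "Final Answer: 391"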

Eliza vs. ChatGPT


 Conversation Depth: Eliza used fixed rules; ChatGPT uses transformer-based models
trained on billions of tokens.

 Understanding: ChatGPT can maintain coherent multi-turn dialogue, remember prior context, and generate varied, human-like responses.

 Shared Limitation: No “true” understanding or consciousness.
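
Mechanically, the main difference from Eliza's stateless matching is that every new turn is answered with the whole conversation as input. Below is a minimal sketch of that message-list pattern; chat() is a hypothetical stand-in for a real model backend.

# Multi-turn dialogue as a growing list of role-tagged messages.
def chat(messages):
    # Hypothetical stand-in for a real model backend; it only reports how much context it sees.
    return f"(model reply given {len(messages)} prior messages)"

history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(user_text):
    history.append({"role": "user", "content": user_text})
    reply = chat(history)                                    # the model receives the entire conversation so far
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("Tell me about France."))
print(ask("What is its capital?"))   # "its" is resolvable only because the first turn is still in history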

Student vs. Wolfram Alpha


 Input Processing: Student parsed narrow problem types; Wolfram Alpha handles vast
types of math and science queries via NLP and symbolic engines.
 Capability Expansion: Alpha integrates factual data and supports multiple subjects.

 Remaining Gaps: Still limited in commonsense reasoning or out-of-syllabus math intuition.

Macsyma vs. Mathematica


 Tool Scope: From symbolic algebra to a full computational environment, data
visualization, AI integration.

 Advancements: Modern tools can automatically optimize, prove theorems, or simulate real-world models.

 Challenge: High complexity can create barriers for beginners.

Influence of Classic AI on Modern Systems


 Symbolic Reasoning: Still critical in hybrid systems (symbolic + neural), like in
mathematics and logic applications.

 Human-AI Interaction Models: Eliza inspired chatbot research, which matured into LLM-
based systems.

 Problem Decomposition: GPS’s method remains embedded in many AI planning and decision-making models.

 Formal Language Parsing: Student’s work laid groundwork for today’s math solvers and
compilers.

References (APA style)

1. Newell, A., & Simon, H. A. (1961). Computer simulation of human thinking. Science,
134(3495), 2011–2017.

2. Weizenbaum, J. (1966). ELIZA – A computer program for the study of natural language
communication between man and machine. Communications of the ACM, 9(1), 36–45.

3. Bobrow, D. G. (1964). Natural language input for a computer problem-solving system. MIT Project MAC.

4. Moses, J. (1971). The Macsyma system. Symbolic Mathematical Computation, MIT LCS.

5. OpenAI. (2023). GPT-4 technical report. https://openai.com/research/gpt-4
