Se4ai (Unit 5)
UNIT-5
AI into Practical Software
1. Support environments:
In traditional programming, the goal is to take a formal specification, or a set of detailed
requirements, and convert it into a formal, computable algorithm that runs correctly. This process is
often seen as a straightforward transformation between notations, though it can still be challenging.
Famous computer scientist Edsger Dijkstra emphasized that programming should involve proving that
the resulting program meets its formal specifications. In this view, programming is seen as a precise,
mathematical process of creating a solution that fits a fixed problem.
However, in the field of Artificial Intelligence (AI), programming has a different focus. AI programming
is more about exploring potential solutions to problems that are often vague, incomplete, or too
complex to fully specify. This process is more experimental, requiring flexibility and evolution of the
system over time. AI developers often build prototypes and adjust them based on how they behave,
making the process less rigid than conventional programming. Some experts argue that AI
programming is closer to real-world software engineering, where discovering and adapting to a
problem’s needs is more important than strictly adhering to a pre-defined specification.
Lastly, AI programming environments, such as those supporting the LISP language, have been
essential in helping programmers manage complexity and experiment effectively. These
environments provide tools and support that are critical in AI because the problems are often more
complex and less structured than in traditional programming. While languages like Pascal are good
for conventional programming tasks that prioritize correctness and efficiency, AI programming
demands flexibility and adaptability. Therefore, support environments have always been a key part of
AI development, helping programmers navigate the evolving and complex nature of AI problems.
Modern AI environments must support the entire software life cycle, including documenting design
decisions, managing requirements, and helping with system maintenance, which is expected to be harder when machine learning mechanisms are involved. These environments reduce complexity by using the
computer’s strengths in storing and retrieving information, making AI development more
manageable rather than just faster. Tools like the Eiffel system, which generate automated
documentation, offer early examples of such support. By offering "moderately stupid assistance,"
these environments help developers build and maintain complex AI systems more effectively.
Despite these ideas being decades old, they’re still difficult to fully implement because they require
the system to have a lot of intelligence. Tools like expert systems and knowledge-based editors have
made some of these functions possible, but there’s still room for growth. Some systems, like
Intellicorp's KEE and the Programmer’s Apprentice by Rich and Waters, have taken steps toward
providing a useful assistant. These systems offer predefined solutions for common problems (called
"clichés") that developers can customize to save time.
Rich and Waters’ Programmer’s Apprentice also introduced the idea of "plans," which are abstract
templates for common programming tasks, like search loops. These plans can be reused in different
contexts, making development faster and more reliable. This formal approach to supporting
programmers reflects a shift from simply bundling useful features together to creating more
structured and intelligent support environments.
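As a rough illustration, the sketch below shows what a specialised "search loop" plan might look like. The names and structure are purely illustrative and are not taken from the Programmer's Apprentice itself (which worked with Lisp); they only show the idea of an abstract template whose roles are filled in for a particular context.

    # A minimal sketch of the "search loop" cliché: a reusable plan that a
    # developer specialises by supplying the problem-specific pieces
    # (candidate generator, success test, failure value). The names here are
    # illustrative, not taken from the Programmer's Apprentice.

    def search_loop(candidates, is_goal, on_failure=None):
        """Generic search-loop plan: scan candidates until one satisfies the goal."""
        for candidate in candidates:
            if is_goal(candidate):
                return candidate      # plan role: "success exit"
        return on_failure             # plan role: "failure exit"

    # Specialising the plan for one context: find the first even number.
    first_even = search_loop([3, 7, 10, 15], lambda n: n % 2 == 0, on_failure=-1)
    print(first_even)  # 10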
4. An engineering toolbox:
This section discusses two common metaphors for how computers can help programmers: the
assistant and the toolbox. The assistant metaphor suggests that the computer can actively help
programmers by giving advice, warnings, or doing tasks for them. In contrast, the toolbox metaphor
represents a collection of tools that the programmer can choose from as needed, without the system
interrupting them. Although these ideas overlap, the toolbox is less controversial because it does not interfere with the programmer's process, but it also offers less potential to exploit the computer's full capabilities.
Sheil (1983) suggests that as software systems become more complex and specifications harder to
define, programming should be treated as a design process where the design and code develop
together through experimentation. Early programming environments were simple, mainly focused on
providing basic tools like text editors and file management systems. However, Barstow and Shrobe
(1984) describe how modern programming environments have evolved, focusing on the whole
lifecycle of software development, handling large projects with many programmers, or managing
incremental development processes that blend coding, debugging, and maintenance into a unified
workflow.
Finally, the text discusses the importance of programming environments in supporting exploratory
programming, allowing programmers to try out ideas and make quick changes without restarting
everything. But this flexibility can also encourage bad practices like "code-and-fix." Therefore,
environments should act as intelligent filters that allow good programming behaviors while
discouraging poor ones. Implementing this might be difficult, but even partial success could
significantly improve the quality of software development.
5. Self-reflective software:
This section explores the idea of self-reflective software—programs that can understand and reflect
on their own structure and functioning. The idea builds on the concept of complete life-cycle
environments, which store knowledge about how and why a system was designed, to assist with
future maintenance. Moving this knowledge into the software itself would allow the system to reflect
on its own design, offering advantages like easier maintenance and automated support. However,
this would also make the software more complex and larger in size.
The main benefit of self-reflective software is that it could manage and support itself without relying
on external tools. By understanding its own structure and decisions, the software would become
more self-contained and capable of adjusting or explaining its behavior. However, this approach
comes with trade-offs, as it would add complexity to every system, making development and
management more challenging.
An example of basic self-reflectivity can already be seen in expert systems, which often include
simple self-explanation features. These systems can explain their reasoning when asked "why" or
"how" they made a decision. Although this is mostly just a trace of their logical process, it is an
effective and easily implemented feature that shows how self-reflection can be useful, even in a basic
form.
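As a rough illustration, the sketch below shows how such a "how" explanation can be produced simply by recording which rules fired during reasoning. The rules, facts, and structure are hypothetical and not drawn from any particular expert-system shell.

    # A minimal sketch of a "how" explanation: the system records each rule it
    # fires and replays that trace on request. Rules and facts are illustrative.

    rules = [
        ("R1", lambda f: "fever" in f and "cough" in f, "possible_flu"),
        ("R2", lambda f: "possible_flu" in f and "fatigue" in f, "recommend_rest"),
    ]

    def run(facts):
        trace = []
        changed = True
        while changed:
            changed = False
            for name, condition, conclusion in rules:
                if conclusion not in facts and condition(facts):
                    facts.add(conclusion)
                    trace.append(f"{name}: concluded '{conclusion}'")
                    changed = True
        return facts, trace

    facts, trace = run({"fever", "cough", "fatigue"})
    print("HOW did you conclude recommend_rest?")
    for step in trace:
        print("  because", step)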
6. Overengineering software:
This passage explores the idea of overengineering AI software to improve its reliability.
Overengineering in traditional engineering, such as adding extra support to bridges or buildings, is
common to ensure safety. However, applying the same concept to software is challenging because
software systems are much more complex and sensitive to errors. The challenge is how to add
redundancy (extra checks or safety features) in software, especially when errors can come from
unpredictable combinations of events. While we cannot guarantee that software will be perfect, we
can still add defensive strategies such as extra checks and traps to catch potential errors, an approach known as defensive programming. An example of this would be inserting checks into a program to verify
that a value falls within expected limits.
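A minimal Python sketch of such a defensive check is given below; the temperature range used is an assumed, illustrative limit rather than one taken from the text.

    # Defensive programming sketch: before using a value, check that it falls
    # within the limits the rest of the program assumes, and trap the error
    # explicitly instead of letting it propagate silently.

    def set_temperature(celsius):
        # Assumed limit for illustration: the controller only handles -40..120 C.
        if not -40 <= celsius <= 120:
            raise ValueError(f"temperature {celsius} outside expected range -40..120")
        return celsius  # safe to pass on to the rest of the system

    set_temperature(25)     # fine
    # set_temperature(500)  # would raise ValueError instead of corrupting later logic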
A common method to achieve reliability through redundancy is ASSERT statements. These are added
to programs to ensure certain conditions are met. For instance, if you're calculating the average of a
set of numbers, an ASSERT statement could verify that the calculated average lies between the
smallest and largest numbers in the set. While this method doesn't guarantee correctness, it helps
identify certain errors that might occur. However, this strategy has its limitations, such as being ad
hoc and focusing too much on the fine details of implementation, making the system larger and
more complex without solving all the problems.
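The averaging example can be sketched in Python as follows; Python's built-in assert plays the role of the ASSERT statement described above.

    # Redundancy via assertions: the average of a set of numbers must lie
    # between the smallest and largest values, so an assertion restates that
    # fact and catches certain coding errors at run time.

    def average(numbers):
        assert len(numbers) > 0, "average of an empty set is undefined"
        result = sum(numbers) / len(numbers)
        # Redundant check: not a proof of correctness, but it traps many mistakes.
        assert min(numbers) <= result <= max(numbers), "average outside [min, max]"
        return result

    print(average([2, 4, 6, 8]))  # 5.0, and the assertions pass silently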
Another approach to redundancy is N-version programming, where the same task is coded by
multiple programmers in different ways, and the outputs are compared. By using different
programming languages (like Pascal and Prolog), we can reduce errors related to the coding style or
language-specific issues. The goal is to make sure that the software performs reliably, even if one
version contains mistakes. Though promising, this technique has its challenges, such as difficulty in
comparing intermediate steps of different programs, which makes it hard to verify the correctness of
the computations.
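A minimal sketch of the idea is shown below. In practice the versions would be written by independent teams, often in different languages, but both are given here in Python purely for brevity, with a simple voter comparing their outputs.

    # N-version programming sketch: two independently written implementations
    # of the same task are run on the same input and their outputs compared.

    def sort_v1(xs):
        return sorted(xs)                 # version 1: library sort

    def sort_v2(xs):
        result = []
        for x in xs:                      # version 2: insertion sort
            i = 0
            while i < len(result) and result[i] < x:
                i += 1
            result.insert(i, x)
        return result

    def voter(xs):
        a, b = sort_v1(xs), sort_v2(xs)
        if a == b:
            return a                      # versions agree: accept the result
        raise RuntimeError("versions disagree: " + repr((a, b)))

    print(voter([3, 1, 2]))  # [1, 2, 3]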
Lastly, the Eiffel programming language provides an example of how redundancy can be built into
software at the design stage through object-oriented programming. Eiffel allows developers to write
assertions (preconditions, postconditions, and invariants) at a higher level, which helps maintain
software reliability without getting bogged down in implementation details. This high-level approach
ensures that certain conditions are always met before and after a function runs, making it easier to
detect errors early in the design phase. This overengineering strategy shows that while adding
redundancy can be complicated, it’s essential for building reliable AI software.
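Eiffel expresses these contracts with require, ensure, and invariant clauses; the sketch below is only a rough Python analogue using assertions, built around an assumed bank-account example, not actual Eiffel code.

    # Rough Python analogue of Eiffel-style contracts: a precondition and
    # postcondition are checked around the routine, and a class invariant is
    # re-checked after each operation.

    class Account:
        def __init__(self, balance=0):
            self.balance = balance
            self._check_invariant()

        def _check_invariant(self):
            assert self.balance >= 0, "invariant violated: balance must be non-negative"

        def withdraw(self, amount):
            assert 0 < amount <= self.balance, "precondition: 0 < amount <= balance"
            old_balance = self.balance
            self.balance -= amount
            assert self.balance == old_balance - amount, "postcondition: balance reduced by amount"
            self._check_invariant()

    acct = Account(100)
    acct.withdraw(30)
    print(acct.balance)  # 70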
Software is highly flexible but also fragile. Unlike physical tools, software can be easily modified, but
those changes are tricky to get right. People tend to use software in creative ways that might cause
problems because developers cannot foresee every possible use. To address this, software
development needs more than just technical fixes—it requires attention to human behavior and
communication within teams, as well as between developers and users. Building systems that can
adapt and evolve, while managing user expectations, is crucial for future software reliability.