
Architecture and Design:

The project follows a client-server architecture, where the front-end interface is
developed using Python and the back-end database system handles data storage and
retrieval. The modules are designed to interact with the database to perform various tasks
such as registration, fee management, enquiry handling, and student details management.

1. Module 1: Registration:

The Registration module allows college administrators to enter and manage student
details. It includes fields such as name, contact information, academic qualifications, and
other relevant information. Data validation techniques are implemented to ensure the
accuracy and completeness of the entered data.
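
The validation step described above can be sketched in Python. This is a minimal illustration, not the project's actual code: the field names (`name`, `contact`, `qualification`) and the validation rules are assumptions for the example.

```python
import re

# Hypothetical sketch of the Registration module's data validation;
# field names and rules are illustrative assumptions.
def validate_registration(record):
    """Return a list of validation errors (an empty list means the record is valid)."""
    errors = []
    if not record.get("name", "").strip():
        errors.append("name is required")
    if not re.fullmatch(r"\d{10}", record.get("contact", "")):
        errors.append("contact must be a 10-digit number")
    if not record.get("qualification", "").strip():
        errors.append("academic qualification is required")
    return errors

record = {"name": "A. Student", "contact": "9876543210", "qualification": "HSC"}
print(validate_registration(record))  # an empty list: the record is complete
```

Returning a list of errors (rather than raising on the first problem) lets the interface report every missing field to the administrator at once.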

2. Module 2: Fees Details:

The Fees Details module enables administrators to manage and track student fee
payments. It allows the entry of fee details, due dates, and payment status. The module
provides functionalities to generate fee reports, track outstanding payments, and send
reminders to students.
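
The outstanding-payment tracking described above might look like the following sketch. The fee record layout and the reminder text are assumptions made for illustration.

```python
from datetime import date

# Illustrative fee records; the field layout is an assumption for this example.
fees = [
    {"student": "Anita", "amount": 5000, "due": date(2024, 6, 1), "paid": True},
    {"student": "Ravi",  "amount": 5000, "due": date(2024, 6, 1), "paid": False},
]

def outstanding(fees, today):
    """Return fee records that are unpaid and past their due date."""
    return [f for f in fees if not f["paid"] and f["due"] < today]

# Generate a reminder line for each overdue record.
for f in outstanding(fees, date(2024, 7, 1)):
    print(f"Reminder: {f['student']} owes {f['amount']} (due {f['due']})")
```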

3. Module 3: Enquiry:

The Enquiry module handles inquiries from prospective students or their parents.
It captures essential information such as the student's name, contact details, course
preferences, and any specific queries. The module facilitates efficient handling and tracking
of inquiries, ensuring timely responses.

4. Module 4: View Enquiry:

The View Enquiry module allows administrators to view and analyze the
collected inquiries. It provides options to filter and search for specific inquiries based on
various criteria such as course preference, date, or student name. The module offers features
to export inquiry data for further analysis and reporting.
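
The filter-and-export workflow above can be sketched as follows; the enquiry fields and helper names are assumed for illustration, using the standard-library `csv` module for the export step.

```python
import csv
import io

# Hypothetical enquiry rows; field names are assumptions for this example.
enquiries = [
    {"name": "Priya", "course": "BSc CS", "date": "2024-05-10"},
    {"name": "Kiran", "course": "BCom",   "date": "2024-05-12"},
]

def filter_enquiries(rows, **criteria):
    """Keep rows whose fields match every given criterion (course, date, name, ...)."""
    return [r for r in rows if all(r.get(k) == v for k, v in criteria.items())]

def export_csv(rows):
    """Serialize enquiry rows to CSV text for reporting."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["name", "course", "date"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(export_csv(filter_enquiries(enquiries, course="BSc CS")))
```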

5. Module 5: Student Details:

The Student Details module provides a comprehensive view of student


information. It enables administrators to search, view, and update student details, including
personal information, academic records, attendance, and disciplinary actions. The module
facilitates efficient management of student records throughout their academic journey.

6. Front end technology:

Python is an interpreted, object-oriented, high-level programming language with
dynamic semantics. Its high-level built-in data structures, combined with dynamic typing
and dynamic binding, make it very attractive for Rapid Application Development, as well as
for use as a scripting or glue language to connect existing components together. Python's
simple, easy-to-learn syntax emphasizes readability and therefore reduces the cost of
program maintenance. Python supports modules and packages, which encourages program
modularity and code reuse. The Python interpreter and the extensive standard library are
available in source or binary form without charge for all major platforms, and can be freely
distributed.
Often, programmers fall in love with Python because of the increased
productivity it provides. Since there is no compilation step, the edit-test-debug cycle is
incredibly fast. Debugging Python programs is easy: a bug or bad input will never cause a
segmentation fault. Instead, when the interpreter discovers an error, it raises an exception.
When the program doesn't catch the exception, the interpreter prints a stack trace.
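
The exception behavior described above can be shown in a few lines: bad input raises an exception the program can catch, rather than crashing the interpreter.

```python
def parse_age(text):
    # int() raises ValueError on non-numeric input instead of
    # causing a crash such as a segmentation fault.
    return int(text)

try:
    parse_age("twenty")
except ValueError as exc:
    print(f"caught: {exc}")  # handled here; uncaught, it would print a stack trace
```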

LibreOffice includes Python and intends to replace Java with Python. Its Python
Scripting Provider is a core feature since Version 4.0 from 7 February 2013.

 FEATURES AND BENEFITS OF PYTHON:


 Compatible with a variety of platforms including Windows, Mac, Linux,
Raspberry Pi, and others
 Uses a simple syntax comparable to the English language that lets developers
use fewer lines than other programming languages
 Operates on an interpreter system that allows code to be executed
immediately, fast-tracking prototyping
 Can be handled in a procedural, object-oriented, or functional way

 PYTHON FLEXIBILITY
Python, a dynamically typed language, is especially flexible, eliminating hard
rules for building features and offering more problem-solving flexibility with a variety
of methods. It also allows users to compile and run programs right up to a problematic
area because it uses run-time type checking rather than compile-time checking.
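
Run-time type checking, as described above, means a type error surfaces only when the faulty line actually executes, so the program runs right up to that point. A small sketch:

```python
def report(values):
    # This line runs normally; no type errors are detected ahead of time.
    print("processing", len(values), "items")
    # The mismatch (str + int) is only discovered here, at run time.
    return "total: " + sum(values)

try:
    report([1, 2, 3])
except TypeError:
    print("type error reached only at run time")
```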

 THE LESS GREAT PARTS OF PYTHON


On the downside, Python isn’t easy to maintain. One command can have
multiple meanings depending on context because Python is a dynamically typed
language. Maintaining a Python app as it grows in size and complexity can be
increasingly difficult, especially finding and fixing errors. Users will need experience
to design code or write unit tests that make maintenance easier.

Speed is another weakness in Python. Its flexibility, because it is dynamically
typed, requires a significant amount of referencing to land on a correct definition,
slowing performance. This can be mitigated by using an alternative implementation of
Python (e.g., PyPy).

 PYTHON AND AI
AI researchers are fans of Python. Google's TensorFlow, along with other libraries
such as scikit-learn and Keras, establishes a foundation for AI development because of
the usability and flexibility these tools offer Python users. These libraries, and their
availability, are critical because they enable developers to focus on growth and building.

 GOOD TO KNOW
The Python Package Index (PyPI) is a repository of software for the Python
programming language. PyPI helps users find and install software developed and
shared by the Python community.

7. Back end technology:

 ABOUT MICROSOFT SQL SERVER 2008:

Microsoft SQL Server is a Structured Query Language (SQL) based,
client/server relational database. Each of these terms describes a fundamental
part of the architecture of SQL Server.

 DATABASE:
A database is similar to a data file in that it is a storage place for data.
Like a data file, a database does not present information directly to a user; the
user runs an application that accesses data from the database and presents it to
the user in an understandable format.

A database typically has two components: the files holding the physical
database and the database management system (DBMS) software that
applications use to access data. The DBMS is responsible for enforcing the
database structure, including:

 Maintaining the relationships between data in the database.
 Ensuring that data is stored correctly and that the rules defining data
relationships are not violated.
 Recovering all data to a point of known consistency in case of
system failures.

 RELATIONAL DATABASE:

There are different ways to organize data in a database, but relational
databases are one of the most effective. Relational database systems are an
application of mathematical set theory to the problem of effectively organizing
data. In a relational database, data is collected into tables (called relations in
relational theory).

When organizing data into tables, you can usually find many different
ways to define tables. Relational database theory defines a process,
normalization, which ensures that the set of tables you define will organize
your data effectively.
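
A minimal sketch of such a normalized design, using Python's built-in sqlite3 as a lightweight stand-in for SQL Server (the project's actual back end); the table and column names are assumptions for illustration.

```python
import sqlite3

# In-memory database; student data lives in one table, fee rows reference it
# by key instead of repeating the student's details (a normalized design).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE students (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE fees (
        id         INTEGER PRIMARY KEY,
        student_id INTEGER NOT NULL REFERENCES students(id),
        amount     INTEGER NOT NULL
    );
""")
con.execute("INSERT INTO students (id, name) VALUES (1, 'Anita')")
con.execute("INSERT INTO fees (student_id, amount) VALUES (1, 5000)")

# A join recovers the combined view without duplicating student data per fee row.
row = con.execute(
    "SELECT s.name, f.amount FROM fees f JOIN students s ON s.id = f.student_id"
).fetchone()
print(row)  # ('Anita', 5000)
```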

 CLIENT/SERVER:
In a client/server system, the server is a relatively large computer in a
central location that manages a resource used by many people. When
individuals need to use the resource, they connect over the network from their
computers, or clients, to the server.

In a client/server database architecture, the database files and DBMS
software reside on a server. A communications component is provided so
applications can run on separate clients and communicate with the database
server over a network. The SQL Server communication component also allows
communication between an application running on the server and SQL Server.

Server applications are usually capable of working with several clients at
the same time. SQL Server can work with thousands of client applications
simultaneously. The server has features to prevent the logical problems that
occur if a user tries to read or modify data currently being used by others.

While SQL Server is designed to work as a server in a client/server
network, it is also capable of working as a stand-alone database directly on the
client. The scalability and ease-of-use features of SQL Server allow it to work
efficiently on a client without consuming too many resources.

 STRUCTURED QUERY LANGUAGE (SQL):

To work with data in a database, you must use a set of commands and
statements (a language) defined by the DBMS software. Several different
languages can be used with relational databases; the most common is SQL.
Both the American National Standards Institute (ANSI) and the International
Organization for Standardization (ISO) have defined standards for SQL. Most
modern DBMS products support at least the Entry Level of the SQL-92
standard (published in 1992).

 SQL SERVER FEATURES

Microsoft SQL Server supports a set of features that result in the following
benefits:

 Ease of installation, deployment, and use
SQL Server includes a set of administrative and development tools that
improve your ability to install, deploy, manage, and use SQL Server
across several sites.
 Scalability
The same database engine can be used across platforms ranging from
laptop computers running Microsoft Windows® 95/98 to large,
multiprocessor servers running Microsoft Windows NT®, Enterprise
Edition.
 Data warehousing
SQL Server includes tools for extracting and analyzing summary data
for online analytical processing (OLAP). SQL Server also includes
tools for visually designing databases and analyzing data using
English-based questions.
 System integration with other server software
SQL Server integrates with e-mail, the Internet, and Windows.
 Databases
A database in Microsoft SQL Server consists of a collection of tables
that contain data, and other objects, such as views, indexes, stored
procedures, and triggers, defined to support activities performed with
the data. The data stored in a database is usually related to a particular
subject or process, such as inventory information for a manufacturing
warehouse.

SQL Server can support many databases, and each database can store
either interrelated data or data unrelated to that in the other databases. For
example, a server can have one database that stores personnel data and another
that stores product-related data. Alternatively, one database can store current
customer order data, and another, related database can store historical
customer orders that are used for yearly reporting. Before you create a
database, it is important to understand the parts of a database and how to
design these parts to ensure that the database performs well after it is
implemented.

 NORMALIZATION THEORY:

Relations are normalized to avoid anomalies in insert, update, and
delete operations. Normalization theory is built around the concept of normal
forms. A relation is said to be in a particular normal form if it satisfies a
certain specified set of constraints. Deciding on a suitable logical structure for
a given database design relies on the concepts of normalization, which are
briefly described below.

8. Testing and Validation:


The College Management project undergoes comprehensive testing and
validation to ensure its functionality, performance, and security. Unit testing, integration
testing, and user acceptance testing are carried out to identify and fix any issues or bugs.
Data validation techniques are implemented to prevent data inconsistencies and errors.
Testing is the process of executing a program with the intent of finding an error. Testing is a
crucial element of software quality assurance and presents the ultimate review of
specification, design, and coding.

System Testing is an important phase. Testing represents an interesting anomaly
for the software. Thus a series of tests is performed on the proposed system before the
system is ready for user acceptance testing.

A good test case is one that has a high probability of finding an as-yet-undiscovered
error. A successful test is one that uncovers such an error.

 TESTING OBJECTIVES:

 Testing is a process of executing a program with the intent of finding an
error.
 A good test case is one that has a high probability of finding an as-yet-undiscovered
error.
 A successful test is one that uncovers an as-yet-undiscovered error.

 TESTING PRINCIPLES:

 All tests should be traceable to end-user requirements.
 Tests should be planned long before testing begins.
 Testing should begin on a small scale and progress toward testing in the
large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted by an independent third
party.

The primary objective of test case design is to derive a set of tests that has the highest
likelihood of uncovering defects in software. To accomplish this objective, two different
categories of test case design techniques are used: white-box testing and black-box testing.

 WHITE-BOX TESTING:
White-box testing focuses on the program control structure. Test cases
are derived to ensure that all statements in the program have been executed at
least once during testing and that all logical conditions have been exercised.
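
A minimal sketch of the white-box idea: the example function and test values below are assumptions chosen so that both branches of the condition execute during testing.

```python
# Illustrative function under test; the pass mark of 50 is an assumption.
def grade(mark):
    if mark >= 50:
        return "pass"   # true branch
    return "fail"       # false branch

# White-box test cases chosen so every branch runs at least once.
assert grade(75) == "pass"   # exercises the true branch
assert grade(30) == "fail"   # exercises the false branch
print("both branches executed")
```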

 BLACK-BOX TESTING:
Black-box testing is designed to validate functional requirements
without regard to the internal workings of a program. Black-box testing mainly
focuses on the information domain of the software, deriving test cases by
partitioning input and output in a manner that provides thorough test coverage.
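
The partitioning idea can be sketched as follows: the input domain of a (hypothetical) mark validator is split into partitions, and one representative case is drawn from each, without inspecting the function's internals.

```python
# Illustrative function under test; the 0..100 range is an assumption.
def validate_mark(mark):
    return 0 <= mark <= 100

# Black-box test cases: one representative per input partition.
assert validate_mark(-5) is False   # partition: below the valid range
assert validate_mark(50) is True    # partition: within the valid range
assert validate_mark(101) is False  # partition: above the valid range
print("one case per input partition")
```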

 TESTING STRATEGIES:

A strategy for software testing must accommodate low-level tests that
are necessary to verify that each small source code segment has been correctly
implemented, as well as high-level tests that validate major system functions
against customer requirements.

 TESTING FUNDAMENTALS:

Testing is the process of executing a program with the intent of finding
an error. A good test case is one that has a high probability of finding an
undiscovered error. If testing is conducted successfully, it uncovers errors in
the software. Testing cannot show the absence of defects; it can only show that
software defects are present.

 TESTING INFORMATION FLOW:

Information flow for testing follows a standard pattern. Two classes of input
are provided to the test process. The software configuration includes the
software requirements specification, the design specification, and the source code.

The test configuration includes the test plan, test cases, and test tools. Tests
are conducted and all results are evaluated; that is, test results are compared
with expected results. When erroneous data are uncovered, an error is implied
and debugging commences.

 UNIT TESTING:

Unit testing is essential for the verification of the code produced during
the coding phase; the goal is to test the internal logic of the modules.
Using the detailed design description as a guide, important paths are tested to
uncover errors within the boundary of the modules. These tests were carried
out during the programming stage itself. All units of ViennaSQL were
successfully tested.

 INTEGRATION TESTING :
Integration testing focuses on unit-tested modules and builds the program
structure that is dictated by the design phase.

 SYSTEM TESTING:
System testing tests the integration of each module in the system. It
also tests for discrepancies between the system and its original objective,
current specification, and system documentation. The primary concern is the
compatibility of individual modules. Whether the entire system works properly,
whether the specified ODBC connection paths are correct, and whether the
expected output is produced are all verified here; these verifications and
validations add value to the system by comparing actual results with expected
output.

Top-down testing is implemented here.

 ACCEPTANCE TESTING:

This testing is done to verify the readiness of the system for
implementation. Acceptance testing begins when the system is complete. Its
purpose is to provide the end user with confidence that the system is ready
for use. It involves the planning and execution of functional tests, performance
tests, and stress tests in order to demonstrate that the implemented system
satisfies its requirements.

Tools of special importance during acceptance testing include:

 Test coverage analyzer: records the control paths followed for
each test case.
 Timing analyzer (also called a profiler): reports the time spent in
various regions of the code, identifying areas to concentrate on to
improve system performance.
 Coding standards: static analyzers and standards checkers are used
to inspect code for deviations from standards and guidelines.

 TEST CASES:

Test cases are derived to ensure that all statements in the program
have been executed at least once during testing and that all logical conditions
have been exercised. Using white-box testing methods, the software engineer
can derive test cases that:
 Guarantee that all independent paths within a module have been
exercised at least once.
 Exercise all logical decisions on their true and false sides.
 Execute all loops at their boundaries and within their operational
bounds.
 Exercise internal data structures to assure their validity.
